Time: Dec 9 from 11am-12pm PST
Location: In person at Berkeley Way West, 8th floor rest area (8006)
Talk title: Emergence and reasoning in large language models
Abstract: This talk will cover two ideas in large language models: emergence and reasoning. Emergent abilities in large language models are abilities that are not present in small models but are present in large models. The existence of emergent abilities implies that further scaling may yield language models with additional new abilities. Reasoning is key to long-standing challenges in machine learning, such as learning from only a few examples or from abstract instructions. Large language models have shown impressive reasoning abilities simply via chain-of-thought prompting, which encourages the model to generate intermediate reasoning steps before giving the final answer.
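For readers unfamiliar with the technique, below is a minimal, illustrative sketch of what chain-of-thought prompting looks like compared with a standard few-shot prompt. It is not code from the talk: the worked arithmetic exemplar follows the style of the published chain-of-thought examples, and the ask_model function is a hypothetical placeholder for whatever LLM client you use.

```python
# Illustrative sketch of chain-of-thought prompting (not from the talk).
# A standard few-shot prompt shows only final answers; a chain-of-thought
# prompt also shows intermediate reasoning steps, nudging the model to
# reason step by step before answering.

STANDARD_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: The answer is 11.

Q: A cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have?
A:"""

CHAIN_OF_THOUGHT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each.
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls.
5 + 6 = 11. The answer is 11.

Q: A cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have?
A:"""


def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: plug in your preferred LLM client here."""
    raise NotImplementedError


if __name__ == "__main__":
    # With the standard prompt, a model tends to answer directly; with the
    # chain-of-thought prompt, it tends to first generate reasoning such as
    # "They started with 23, used 20, so 3 left; 3 + 6 = 9." before the answer.
    print(STANDARD_PROMPT)
    print()
    print(CHAIN_OF_THOUGHT_PROMPT)
```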
Bio: Jason Wei is a senior research scientist at Google Brain. His work centers on three aspects of large language models: instruction finetuning, chain-of-thought prompting, and emergent abilities. He was previously in the AI Residency program at Google, and before that he graduated from Dartmouth College.