Naomi Saphra will give a virtual talk on Friday, November 13th, from 11am to 12pm. Zoom information will be distributed via the Berkeley NLP Seminar listserv.
Title: Learning Dynamics of Language Models
Abstract: When people ask why a neural network is so effective at solving some task, some researchers mean, “How does training impose bias towards effective representations?” This approach can lead to inspecting loss landscapes, analyzing convergence, or identifying critical learning periods. Other researchers mean, “How does this model represent linguistic structure?” This approach can lead to model probing, testing on challenge sets, or inspecting attention distributions. The work in this talk instead considers the question, “How does training impose bias towards linguistic structure?”
Bio: Naomi Saphra is a PhD student at the University of Edinburgh, studying the emergence of linguistic structure during neural network training, at the intersection of linguistics, interpretability, formal grammars, and learning theory. They have a surprising number of awards and honors, but all are related to an accident of nature that has limited their ability to type and write. They have been called “the Bill Hicks of radical AI-critical comedy” exactly once.