Naomi Saphra will be giving a virtual talk on Friday, November 13th from 11am to 12pm. Zoom information will be distributed via the Berkeley NLP Seminar listserv.

Title: Learning Dynamics of Language Models

Abstract: When people ask why a neural network is so effective at solving some task, some researchers mean, “How does training impose bias towards effective representations?” This approach can lead to inspecting loss landscapes, analyzing convergence, or identifying critical learning periods. Other researchers mean, “How does this model represent linguistic structure?” This approach can lead to model probing, testing on challenge sets, or inspecting attention distributions. The work in this talk instead considers the question, “How does training impose bias towards linguistic structure?”

This question is of interest to NLP researchers as well as general machine learning researchers. Language has well-studied compositional behavior, offering a realistic but intuitive domain for studying the gradual development of structure over the course of training. Meanwhile, relating existing models and training procedures to linguistic structure helps explain why they are effective, offering insights into how to improve them. I will discuss how models that target different linguistic properties diverge over the course of training, and relate their behavior to current practices and theoretical proposals. I will also propose a new method for analyzing hierarchical behavior in LSTMs, and apply it in synthetic experiments to illustrate how LSTMs learn like classical parsers.

Bio: Naomi Saphra is a PhD student at the University of Edinburgh, studying the emergence of linguistic structure during neural network training: the intersection of linguistics, interpretability, formal grammars, and learning theory. They have a surprising number of awards and honors, but all are related to an accident of nature that has limited their ability to type and write. They have been called “the Bill Hicks of radical AI-critical comedy” exactly once.