Please join us for another NLP Seminar on Oct 21st at 4:00pm in 202 South Hall.

Speaker: Ian Tenney (Google)

Title: Probing for Structure in Sentence Representations

Abstract:

With the development of ELMo, BERT, and their successors, pre-trained sentence encoders have become nearly ubiquitous in NLP. But what makes these models so powerful? What are they learning? A flurry of recent work – cheekily dubbed “BERTology” – seeks to analyze and explain these models, treating the encoder as an object of scientific inquiry.

In this talk, I’ll discuss a few of these analyses, focusing on our own “edge probing” work, which examines how linguistic structure is represented in deep models. Using tasks like tagging, parsing, and coreference as analysis tools, we show that language models learn strong representations of syntax but are less adept at semantic phenomena. Moreover, we find evidence of sequential reasoning, reminiscent of traditional pipelined NLP systems.
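To give a flavor of the probing idea described above: a lightweight classifier (the “probe”) is trained on top of frozen representations, and its accuracy is read as a measure of how accessible a given linguistic property is in those representations. The sketch below is a minimal, self-contained illustration of that logic, not the authors’ actual setup: it uses synthetic vectors standing in for frozen encoder outputs and a simple softmax probe, with all names and dimensions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for frozen token representations from a pre-trained
# encoder (the real work probes layers of models like ELMo/BERT). Here each
# of 3 toy "tags" contributes a distinct direction in representation space,
# plus noise, so the tag is linearly recoverable.
DIM, N_CLASSES, N = 32, 3, 600
class_dirs = rng.normal(size=(N_CLASSES, DIM))
labels = rng.integers(0, N_CLASSES, size=N)
reps = class_dirs[labels] + 0.5 * rng.normal(size=(N, DIM))

# The probe: a single softmax layer trained on the frozen vectors with plain
# batch gradient descent. The encoder ("reps") is never updated -- only the
# probe's weights are, so good accuracy means the information was already
# present in the representations.
W = np.zeros((DIM, N_CLASSES))
b = np.zeros(N_CLASSES)
for _ in range(300):
    logits = reps @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    grad = probs.copy()
    grad[np.arange(N), labels] -= 1.0             # softmax cross-entropy grad
    W -= 0.1 * reps.T @ grad / N
    b -= 0.1 * grad.mean(axis=0)

acc = (np.argmax(reps @ W + b, axis=1) == labels).mean()
print(f"probe accuracy: {acc:.2f}")
```

In the edge probing setting the same principle is applied to spans rather than single tokens, and to real tasks such as POS tagging, parsing, and coreference.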

This work was jointly conducted with Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick.

Biography:

Ian Tenney is a software engineer on the Language team at Google Research in Mountain View. His research focuses on understanding and analyzing deep NLP models, particularly how they encode linguistic structure and how unsupervised or weakly supervised learning can give rise to complex representations and reasoning. He was a Senior Researcher on the sentence representation team at the 2018 JSALT workshop, and from 2016 to 2018 he taught in the MIDS program at the UC Berkeley School of Information. He holds an M.S. in Computer Science and a B.S. in Physics from Stanford.