Month: October 2019

Please join us for another NLP Seminar at 4:00 pm in 202 South Hall on Oct 21st.

Speaker: Ian Tenney (Google)

Title: Probing for Structure in Sentence Representations

Abstract:

With the development of ELMo, BERT, and successors, pre-trained sentence encoders have become nearly ubiquitous in NLP. But what makes these models so powerful? What are they learning? A flurry of recent work – cheekily dubbed “BERTology” – seeks to analyze and explain these models, treating the encoder as an object of scientific inquiry.

In this talk, I’ll discuss a few of these analyses, focusing on our own “edge probing” work which looks at how linguistic structure is represented in deep models. Using tasks like tagging, parsing, and coreference as analysis tools, we show that language models learn strong representations of syntax but are less adept at semantic phenomena. Moreover, we find evidence of sequential reasoning, reminiscent of traditional pipelined NLP systems.

This work was jointly conducted with Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, and Ellie Pavlick.
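
For readers curious what this style of analysis looks like in practice, below is a minimal, hypothetical sketch of the general probing idea: freeze a pre-trained encoder and train only a small classifier on its features, so that probe accuracy reflects what the encoder already represents. This is an illustration of the approach rather than the paper's exact edge-probing setup (which probes labeled spans with dedicated classifiers); the model name, toy data, and use of scikit-learn are assumptions for the example.

```python
# Minimal probing sketch (illustrative, not the authors' edge-probing code):
# train a linear probe on frozen BERT features to predict POS tags.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

# Toy labeled data; a real probe would use a treebank such as OntoNotes.
sentences = [("The cat sat".split(), ["DET", "NOUN", "VERB"]),
             ("A dog ran".split(), ["DET", "NOUN", "VERB"])]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # the encoder stays frozen; only the probe below is trained

features, labels = [], []
with torch.no_grad():
    for words, tags in sentences:
        enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
        hidden = enc_out = encoder(**enc).last_hidden_state[0]  # (num_wordpieces, hidden_dim)
        word_ids = enc.word_ids()  # maps wordpieces back to word positions
        for i, tag in enumerate(tags):
            first_piece = word_ids.index(i)  # first wordpiece of word i
            features.append(hidden[first_piece].numpy())
            labels.append(tag)

probe = LogisticRegression(max_iter=1000).fit(features, labels)
print(probe.score(features, labels))
```

If a simple probe like this recovers the labels well from frozen features alone, that is taken as evidence that the encoder already represents the corresponding linguistic structure.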

Biography:

Ian Tenney is a software engineer on the Language team at Google Research in Mountain View. His research focuses on the understanding and analysis of deep NLP models, particularly on how they encode linguistic structure and how unsupervised or weakly supervised learning can give rise to complex representations and reasoning. He was a Senior Researcher on the sentence representation team for the 2018 JSALT workshop, and from 2016 to 2018 he taught in the MIDS program at the UC Berkeley School of Information. He holds an M.S. in Computer Science and a B.S. in Physics from Stanford.

Please join us for another NLP Seminar at 11:00 am in Soda 380 on Tuesday, Oct 8.

Speaker: Alexander Rush (Cornell)

Title: Revisiting Grammar Induction

Abstract:

Deep learning for NLP has become synonymous with global models trained with unlimited data. These models are incredible; however, they seem unlikely to tell us much about the way they (or language) work. Less heralded have been the ways in which deep methods have helped with inference in classical factored models. In this talk, I revisit the problem of grammar induction, an important benchmark task in NLP, using a variety of variational methods. Recent work shows that these methods greatly increase the performance of unsupervised learning methods. I argue that these approaches can be used in conjunction with global models to provide control in modern systems.
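
As rough background on the variational framing (a standard formulation, not necessarily the exact objective used in the talk): treating the parse structure z of a sentence x as a latent variable, these methods maximize an evidence lower bound on the log marginal likelihood, with q_φ an inference distribution over trees. The notation here is illustrative.

```latex
% Standard ELBO with a latent parse tree z; notation is illustrative.
\log p_\theta(x) \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  \;-\; \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p_\theta(z)\big)
```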

Biography:

Alexander Sasha Rush is an Associate Professor at Cornell Tech. His group’s research is at the intersection of natural language processing, deep learning, and structured prediction, with applications in machine translation, summarization, and text generation. He also supports open-source development, including the OpenNMT project. His work has received several paper and demo awards at major NLP and visualization conferences, an NSF CAREER Award, and faculty awards. He is currently the general chair of ICLR.