Please join us for another NLP Seminar on November 18th at 4:00pm in 202 South Hall. We will have two speakers visiting from Stanford.
Speaker 1: Urvashi Khandelwal
Title: Generalization through Memorization: Nearest Neighbor Language Models
Abstract:
Neural language models (LMs) are typically trained on large amounts of data. However, generalizing to a larger corpus or to a different domain requires additional training, which is expensive. This raises an important question: how can LMs generalize better without additional training? In this talk, I will introduce kNN-LMs, which extend a pre-trained LM by linearly interpolating it with a k-nearest neighbors (kNN) model. Distances are computed in the pre-trained LM embedding space, and neighbors can be drawn from any text collection, including the original LM training set. Experiments show that using the original LM training data alone, without further training, can substantially improve performance. In addition, kNN-LM efficiently scales up to larger training sets and allows for effective domain adaptation by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search can help LMs effectively use data without having to train on it.
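To make the interpolation concrete, here is a minimal sketch of the kNN-LM idea in Python/NumPy. It is not the speaker's code; the function name, array shapes, and default hyperparameters (k, the interpolation weight lam, the softmax temperature) are illustrative assumptions, and a real system would use an approximate-nearest-neighbor index over a large datastore rather than brute-force distances.

# Illustrative sketch of kNN-LM interpolation (hypothetical names/shapes,
# not the authors' implementation).
import numpy as np

def knn_lm_probs(query, keys, values, lm_probs, vocab_size, k=8, lam=0.25, temp=1.0):
    """Interpolate a pre-trained LM's next-word distribution with a kNN distribution.

    query:    (d,)   LM hidden state at the current position
    keys:     (n, d) datastore keys (LM hidden states of stored training contexts)
    values:   (n,)   id of the word that followed each stored context
    lm_probs: (vocab_size,) the LM's own next-word distribution
    """
    # Squared L2 distance from the query to every stored context,
    # computed in the pre-trained LM's embedding space.
    dists = np.sum((keys - query) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]
    # Softmax over negative distances weights closer neighbors more heavily.
    w = np.exp(-dists[nearest] / temp)
    w /= w.sum()
    # Aggregate neighbor weights by the word each neighbor predicts.
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[nearest], w)
    # Linear interpolation: p = lam * p_kNN + (1 - lam) * p_LM.
    # Note: no parameter of the LM is updated; only the datastore changes.
    return lam * knn_probs + (1 - lam) * lm_probs

Because the datastore is the only thing that varies, swapping in text from a new domain adapts the model without any further training, which is the key point of the talk.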
Biography:
Urvashi is a fifth-year Computer Science PhD student at Stanford University, where she is advised by Prof. Dan Jurafsky in the Stanford Natural Language Processing group. She works at the intersection of machine learning and natural language processing; more specifically, she is interested in analyzing and improving neural language models as well as sequence generation models.
Speaker 2: John Hewitt
Title: Probing Neural NLP: Ideas and Problems
Abstract:
Recent work in NLP has sought to understand what basic linguistic skills neural NLP models acquire. Probing methods address such questions through supervised analyses of models’ representations of sentences. In this talk, I’ll cover a new way of thinking about how neural networks can implicitly encode discrete structures, and provide probing evidence that ELMo and BERT have internal representations of syntax. I’ll then introduce work challenging the premises of probing, demonstrating that the methodology can admit false positive results and showing how probes can be designed and interpreted to avoid this.
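For readers unfamiliar with probing, the sketch below shows the basic recipe under stated assumptions: the representations and labels are random stand-ins (not real BERT activations or treebank tags), and the probe is a plain logistic regression. The point is the structure of the method, a supervised classifier trained on top of frozen representations, not a real result.

# Illustrative probing setup (synthetic data; not the speaker's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for frozen contextual representations (e.g., one layer of
# ELMo or BERT) and per-token linguistic labels (e.g., POS tags).
train_reps = rng.normal(size=(1000, 768))
train_labels = rng.integers(0, 17, size=1000)
test_reps = rng.normal(size=(200, 768))
test_labels = rng.integers(0, 17, size=200)

# The probe is trained on top of the frozen representations; the model
# under study is never updated.
probe = LogisticRegression(max_iter=1000).fit(train_reps, train_labels)
print(f"probe accuracy: {probe.score(test_reps, test_labels):.3f}")

A caution the talk takes up: a sufficiently expressive probe can score well without the model genuinely encoding the property, so probe accuracy alone can be a false positive and must be interpreted against control tasks or baselines.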
Biography:
John is a second-year PhD student at Stanford University, co-advised by Chris Manning and Percy Liang. He works on understanding the basic properties, capabilities, and limitations of neural networks for processing human language. He aims to understand neural models for understanding’s sake, while also using the insights gained to develop models that learn and transfer more robustly from less data. He received the EMNLP 2019 Best Paper Runner-Up Award.