Feb 13: Stephan Meylan: Word Forms Are Optimized For Efficient Communication

Please join us for the NLP Seminar on Monday 2/13 at 3:30pm in 202 South Hall.  All are welcome!

Speaker:  Stephan Meylan (UC Berkeley)

Title: Word forms are optimized for efficient communication


The inverse relationship between word length and use frequency, first identified by G.K. Zipf in 1935, is a classic empirical law that holds across a wide range of human languages.  We demonstrate that length is one aspect of a much more general property of words: how distinctive they are with respect to other words in a language. Distinctiveness plays a critical role in recognizing words in fluent speech, in that it reflects the strength of potential competitors when selecting the best candidate for an ambiguous signal. Phonological information content, a measure of a word’s probability under a statistical model of a language’s sound or character sequences, concisely captures distinctiveness. Examining large-scale corpora from 13 languages, we find that distinctiveness significantly outperforms word length as a predictor of frequency. This finding provides evidence that listeners’ processing constraints shape fine-grained aspects of word forms across languages.
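The key quantity here, phonological information content, is just a word's negative log probability under a sequence model of the language's sound or character sequences. A minimal sketch using a character bigram model with add-one smoothing (the boundary markers, alphabet size, and tiny word list are illustrative assumptions, not the setup used in the talk):

```python
import math
from collections import defaultdict

def train_bigram(words):
    # character bigram counts, with '^' and '$' as word boundaries
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        chars = ['^'] + list(w) + ['$']
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def info_content(word, counts, alphabet_size=28):
    # -log2 probability of the word's character sequence (add-one smoothing)
    chars = ['^'] + list(word) + ['$']
    bits = 0.0
    for a, b in zip(chars, chars[1:]):
        total = sum(counts[a].values())
        p = (counts[a][b] + 1) / (total + alphabet_size)
        bits += -math.log2(p)
    return bits

words = ["the", "of", "and", "cat", "dog", "serendipity", "photosynthesis"]
model = train_bigram(words)
# short words built from common character sequences carry few bits;
# long, distinctive words carry many
print(info_content("the", model))
print(info_content("serendipity", model))
```

Under this measure a word is "distinctive" when its character sequence is improbable, which is what the talk argues predicts frequency better than raw length.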

NLP Seminar Schedule Spring 2017

The NLP Seminar is back for Spring 2017!  We will retain our meeting time of Mondays from 3:30-4:30pm, in the same location, room 202 South Hall.

Here is the speaker list so far; more will appear:

Feb 13: Stephan Meylan, UC Berkeley

Feb 27: Jayant Krishnamurthy, Allen Institute for AI

March 6: Joel Tetreault, Grammarly (Note special time: 4pm!)

April 10: Danqi Chen, Stanford

April 24: Marta Recasens, Google

For up-to-the-minute notifications, join the email list (UC Berkeley community only).

Nov 14: David Jurgens: Citation Classification for Behavioral Analysis of a Scientific Field

Please join us for the NLP Seminar on Monday 11/14 at 3:30pm in 202 South Hall.   All are welcome!

Speaker: David Jurgens (Stanford)

Title: Citation Classification for Behavioral Analysis of a Scientific Field


Citations are an important indicator of the state of a scientific field, reflecting how authors frame their work, and influencing uptake by future scholars. However, our understanding of citation behavior has been limited to small-scale manual citation analysis. We perform the largest behavioral study of citations to date, analyzing how citations are both framed and taken up by scholars in one entire field: natural language processing. We introduce a new dataset of nearly 2,000 citations annotated for function and centrality, and use it to develop a state-of-the-art classifier and label the entire ACL Reference Corpus. We then study how citations are framed by authors and use both papers and online traces to track how citations are followed by readers. We demonstrate that authors are sensitive to discourse structure and publication venue when citing, that online readers follow temporal links to previous and future work rather than methodological links, and that how a paper cites related work is predictive of its citation count. Finally, we use changes in citation roles to show that the field of NLP is undergoing a significant increase in consensus.


Oct 31: Jiwei Li: Teaching Machines to Converse

Please join us for the NLP Seminar this Monday, Oct 31 at 3:30pm in 202 South Hall.  All are welcome!

Speaker: Jiwei Li (Stanford)

Title: Teaching Machines to Converse


Recent neural network models present both new opportunities and new challenges for developing conversational agents. In this talk, I will describe how we have advanced this line of research by addressing four different issues in neural dialogue generation: (1) overcoming the overwhelming prevalence of dull responses (e.g., “I don’t know”) generated from neural models; (2) enforcing speaker consistency; (3) applying reinforcement learning to foster sustained dialogue interactions; and (4) teaching a bot to interact with users and ask questions about things it does not know.


Oct 17: Jacob Andreas: Reasoning about Pragmatics with Neural Listeners and Speakers

Please join us for the NLP Seminar this Monday (Oct 17) at 3:30pm in 202 South Hall.

Speaker: Jacob Andreas (Berkeley)

Title: Reasoning about pragmatics with neural listeners and speakers


We present a model for contrastively describing scenes, in which context-specific behavior results from a combination of inference-driven pragmatics and learned semantics. Like previous learned approaches to language generation, our model uses a simple feature-driven architecture (here a pair of neural “listener” and “speaker” models) to ground language in the world. Like inference-driven approaches to pragmatics, our model actively reasons about listener behavior when selecting utterances. For training, our approach requires only ordinary captions, annotated without demonstration of the pragmatic behavior the model ultimately exhibits. In human evaluations on a referring expression game, our approach succeeds 81% of the time, compared to 69% using existing techniques.
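The inference-driven step can be illustrated with a toy rational-speech-acts-style computation, substituting a hand-written truth table for the paper's learned neural listener and speaker (the objects, utterances, and semantics below are invented for illustration):

```python
# Toy referring-expression game: utterances map to the objects they
# truthfully describe (the literal semantics).
semantics = {
    "hat":   {"man_with_hat", "woman_with_hat"},
    "man":   {"man_with_hat"},
    "woman": {"woman_with_hat"},
}
objects = ["man_with_hat", "woman_with_hat"]

def literal_listener(utterance):
    # L0: a uniform guess over objects consistent with the utterance
    consistent = [o for o in objects if o in semantics[utterance]]
    return {o: (1 / len(consistent) if o in consistent else 0.0)
            for o in objects}

def pragmatic_speaker(target):
    # S1: choose the utterance under which L0 most likely recovers the target
    return max(semantics, key=lambda u: literal_listener(u)[target])

# "hat" is true of the target but ambiguous; "man" uniquely identifies it
print(pragmatic_speaker("man_with_hat"))  # -> "man"
```

The speaker avoids the literally true but ambiguous "hat" because it reasons about how a listener would resolve each candidate utterance, which is the contrastive behavior the abstract describes.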


Oct 3: Sida Wang: Interactive Language Learning

Please join us for the NLP Seminar this Monday (Oct 3) at 3:30pm in 202 South Hall.

Speaker: Sida Wang (Stanford U)

Title: Interactive Language Learning


We introduce two parts of the interactive language learning setting. The first is learning from scratch, and the second is learning from a community of goal-oriented language users, which is relevant to building adaptive natural language interfaces. The first part is inspired by Wittgenstein’s language games: a human wishes to accomplish some task (e.g., achieving a certain configuration of blocks), but can only communicate with a computer, which performs the actual actions (e.g., removing all red blocks). The computer initially knows nothing about language and therefore must learn it from scratch through interaction, while the human adapts to the computer’s capabilities. We created a game in a blocks world and collected interactions from 100 people playing it.
In the second part (about ongoing work), we explore the setting where a language is supported by a community of people (instead of private to each individual), and the computer has to learn from the aggregate knowledge of a community of goal-oriented language users. We explore how to use additional supervision such as definitions and demonstrations.
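The learning-from-scratch loop can be sketched in miniature: the learner starts with no mapping from utterances to actions and acquires one purely from interaction feedback (the learner, action names, and reward scheme below are invented to illustrate the loop, not the model from the talk):

```python
from collections import defaultdict

class Learner:
    """Maps utterances to actions using only interaction feedback."""
    def __init__(self, actions):
        self.actions = actions
        # scores[utterance][action], all zero before any interaction
        self.scores = defaultdict(lambda: defaultdict(float))

    def act(self, utterance):
        # pick the best-scoring action for this utterance so far
        return max(self.actions, key=lambda a: self.scores[utterance][a])

    def feedback(self, utterance, action, reward):
        self.scores[utterance][action] += reward

learner = Learner(["remove_red", "remove_blue", "add_red"])
# the human repeatedly says "clear red" and rewards correct behavior
for _ in range(3):
    a = learner.act("clear red")
    learner.feedback("clear red", a, 1.0 if a == "remove_red" else -1.0)
print(learner.act("clear red"))
```

The point of the sketch is only the interaction structure: language is never given as labeled data; meaning is induced from the human's reactions to the computer's actions.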


Sept 19: Percy Liang: Learning from Zero

Please join us for the NLP Seminar this Monday (Sept. 19) at 3:30pm in 202 South Hall.
(This is a rescheduling of a talk that was postponed from last semester.)
Speaker: Percy Liang (Stanford)

Title: Learning from Zero


Can we learn if we start with zero examples, either labeled or unlabeled?  This scenario arises in new user-facing systems (such as virtual assistants for new domains), where inputs should come from users, but no users exist until we have a working system, which depends on having training data.  I discuss recent work that circumvents this circular dependence by interleaving user interaction and learning.


Preparatory readings:

Liang, Percy. “Talking to computers in natural language.” XRDS: Crossroads, The ACM Magazine for Students 21.1 (2014): 18-21.
Liang, Percy. “Learning Executable Semantic Parsers for Natural Language Understanding.” arXiv preprint arXiv:1603.06677 (2016).

Sept 12: Ellie Pavlick: Natural Language Inference in the Real World

Please join us for the NLP Seminar on Monday Sept 12 at 3:30pm in 202 South Hall. All are welcome!  (This semester we are posting preparatory readings for those who are interested. Note also the room change to 202 South Hall.)


Speaker: Ellie Pavlick, U Penn


Title: Natural Language Inference in the Real World

It is impossible to reason about natural language by memorizing all possible sentences. Rather, we rely on models of composition which allow the meanings of individual words to be combined to produce meanings of longer phrases. In natural language processing, we often employ models of composition that work well for carefully-curated datasets or toy examples, but prove to be very brittle when applied to the type of language that humans actually use.


This talk will discuss our work on applying natural language inference in the “real world.” I will describe observations from experimental studies of humans’ linguistic inferences, and describe the challenges they present for existing methods of automated natural language inference (with particular focus to the case of adjective-noun composition). I will also outline our current work on extending models of compositional entailment in order to better handle the types of imprecise inferences we observe in human language.


NLP Seminar Fall 2016 Schedule (Mondays at 3:30pm)

Welcome back everyone to the Fall 2016 semester!  This fall we will have a new time for the NLP Seminar; we’ll be meeting on Mondays from 3:30-4:30pm.  The location is in room 202 South Hall.

Another exciting change is that UC Berkeley students can earn 1 unit of credit for attending the meeting on a weekly basis.  Grad students can sign up under I School 290 or CS 294.

Breaking news: undergraduates from any department can now sign up for INFO 190-001 (CCN 34812)

We’ll continue with invited speakers for alternating weeks, and in the intervening weeks we’ll discuss research papers and related activities.  We’ll continue to post speaker information on this web site.

The speaker lineup is:
9/12: Ellie Pavlick (U Penn)
9/19: Percy Liang (Stanford)
10/3: Sida Wang (Stanford)
10/17: Jacob Andreas (Berkeley)
10/31: Jiwei Li (Stanford)
11/14: David Jurgens (Stanford)
11/28: NLP professors (Berkeley)

April 25: Dan Gillick: Multilingual language processing from bytes

Please join us for the NLP Seminar this Thursday (April 21) at 4pm in 205 South Hall. All are welcome!

Speaker: Dan Gillick (Google)

Title: Multilingual language processing from bytes


I’ll describe my recent work on standard language processing tasks like part-of-speech tagging and named entity recognition where I replace the traditional pipeline of models with a recurrent neural network. In particular, the model reads one byte at a time (it doesn’t know anything about tokens or sentences) and produces output over byte spans. This allows for very compact, multilingual models that improve over models trained on a single language. I’ll show lots of results and we can discuss the merits and problems with this approach.
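The input/output convention described above — raw bytes in, labeled byte spans out — can be sketched directly (the example sentence and the span triple are illustrative, not from the talk):

```python
# The model sees only a byte sequence: no tokens, no sentences,
# and no language-specific preprocessing.
text = "Besuch in Zürich"
data = text.encode("utf-8")
print(list(data)[:8])  # the first few input bytes

# An output is a labeled span over byte positions: (start, length, label).
# "Zürich" begins at byte 10 and occupies 7 bytes ("ü" is 2 bytes in UTF-8).
span = (10, 7, "LOCATION")
start, length, label = span
print(data[start:start + length].decode("utf-8"), label)
```

Because the byte vocabulary is fixed at 256 symbols regardless of language, the same compact model can be trained on many languages at once, which is the multilingual advantage the abstract highlights.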
