Nov 14: David Jurgens: Citation Classification for Behavioral Analysis of a Scientific Field

Please join us for the NLP Seminar on Monday 11/14 at 3:30pm in 202 South Hall. All are welcome!

Speaker: David Jurgens (Stanford)

Title: Citation Classification for Behavioral Analysis of a Scientific Field

Abstract:

Citations are an important indicator of the state of a scientific field, reflecting how authors frame their work, and influencing uptake by future scholars. However, our understanding of citation behavior has been limited to small-scale manual citation analysis. We perform the largest behavioral study of citations to date, analyzing how citations are both framed and taken up by scholars in one entire field: natural language processing. We introduce a new dataset of nearly 2,000 citations annotated for function and centrality, and use it to develop a state-of-the-art classifier and label the entire ACL Reference Corpus. We then study how citations are framed by authors and use both papers and online traces to track how citations are followed by readers. We demonstrate that authors are sensitive to discourse structure and publication venue when citing, that online readers follow temporal links to previous and future work rather than methodological links, and that how a paper cites related work is predictive of its citation count. Finally, we use changes in citation roles to show that the field of NLP is undergoing a significant increase in consensus.
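
For readers who want a feel for the task, here is a minimal sketch of citation-function classification as text classification over the citing sentence, using scikit-learn. The label set, example contexts, and feature choice are illustrative assumptions, not the annotation scheme, data, or classifier from the talk.

```python
# Minimal sketch of a citation-function classifier: TF-IDF features over the
# citing sentence plus a linear model. Labels and examples are illustrative,
# not the actual annotation scheme or data from the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical citation contexts paired with function labels.
contexts = [
    "We follow the approach of Smith et al. (2010) to tokenize the corpus.",
    "Unlike Jones (2012), our model does not require labeled data.",
    "Parsing has a long history in NLP (Brown, 1990).",
    "We use the parser of Lee (2014) as a preprocessing step.",
]
labels = ["Uses", "Compares", "Background", "Uses"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(contexts, labels)

print(clf.predict(["Our results improve over Doe (2015) by 3 points."]))
```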

Oct 31: Jiwei Li: Teaching Machines to Converse

Please join us for the NLP Seminar this Monday, Oct 31 at 3:30pm in 202 South Hall.  All are welcome!

Speaker: Jiwei Li (Stanford)

Title: Teaching Machines to Converse

Abstract:

Recent neural network models present both new opportunities and new challenges for developing conversational agents. In this talk, I will describe how we have advanced this line of research by addressing four different issues in neural dialogue generation: (1) overcoming the overwhelming prevalence of dull responses (e.g., “I don’t know”) generated from neural models; (2) enforcing speaker consistency; (3) applying reinforcement learning to foster sustained dialogue interactions; and (4) teaching a bot to interact with users and ask questions about things it does not know.
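
For issue (1), one concrete idea from this line of work is to rerank candidate responses with a mutual-information-style objective, log p(T|S) − λ·log p(T), so that responses that are generically probable under a language model (like “I don’t know”) are penalized. A minimal sketch, with toy stub functions standing in for a trained seq2seq model and language model:

```python
# Sketch of MMI-style reranking to penalize dull responses. The two scoring
# functions are placeholders for a trained seq2seq model p(T|S) and a
# language model p(T); the numbers are made up for illustration.
def log_p_response_given_source(response, source):
    toy = {"i don't know": -2.0, "the train leaves at 6": -4.0}
    return toy.get(response, -10.0)

def log_p_response(response):
    # Generic responses have high marginal probability under a language model.
    toy = {"i don't know": -1.0, "the train leaves at 6": -8.0}
    return toy.get(response, -12.0)

def mmi_score(response, source, lam=0.5):
    # log p(T|S) - lambda * log p(T)
    return log_p_response_given_source(response, source) - lam * log_p_response(response)

candidates = ["i don't know", "the train leaves at 6"]
best = max(candidates, key=lambda r: mmi_score(r, "when does the train leave?"))
print(best)  # the specific response wins despite lower conditional probability
```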

(Slides)

Oct 17: Jacob Andreas: Reasoning about Pragmatics with Neural Listeners and Speakers

Please join us for the NLP Seminar this Monday (Oct 17) at 3:30pm in 202 South Hall.

Speaker: Jacob Andreas (Berkeley)

Title: Reasoning about Pragmatics with Neural Listeners and Speakers

Abstract:

We present a model for contrastively describing scenes, in which context-specific behavior results from a combination of inference-driven pragmatics and learned semantics. Like previous learned approaches to language generation, our model uses a simple feature-driven architecture (here a pair of neural “listener” and “speaker” models) to ground language in the world. Like inference-driven approaches to pragmatics, our model actively reasons about listener behavior when selecting utterances. For training, our approach requires only ordinary captions, annotated without demonstration of the pragmatic behavior the model ultimately exhibits. In human evaluations on a referring expression game, our approach succeeds 81% of the time, compared to 69% using existing techniques.
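
The central mechanism can be stated compactly: given candidate descriptions, the speaker chooses the one under which a listener model assigns the highest probability to the intended scene. A minimal sketch, with a toy lookup table standing in for the learned neural listener and a fixed candidate list standing in for the neural speaker:

```python
# Sketch of inference-driven pragmatics: choose the caption under which a
# listener model is most likely to pick out the target scene. The listener
# here is a toy probability table; in the talk it is a learned neural model.
LISTENER = {
    # P(scene | caption)
    "a bird":             {"target": 0.5, "distractor": 0.5},
    "a bird on a branch": {"target": 0.9, "distractor": 0.1},
}

def pragmatic_speaker(candidate_captions, target_scene):
    return max(candidate_captions, key=lambda c: LISTENER[c][target_scene])

print(pragmatic_speaker(["a bird", "a bird on a branch"], "target"))
```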

Oct 3: Sida Wang: Interactive Language Learning

Please join us for the NLP Seminar this Monday (Oct 3) at 3:30pm in 202 South Hall.

Speaker: Sida Wang (Stanford)

Title: Interactive Language Learning

Abstract:

We introduce two parts of the interactive language learning setting. The first is learning from scratch and the second is learning from a community of goal-oriented language users, which is relevant to building adaptive natural language interfaces. The first part is inspired by Wittgenstein’s language games: a human wishes to accomplish some task (e.g., achieving a certain configuration of blocks), but can only communicate with a computer, which performs the actual actions (e.g., removing all red blocks). The computer initially knows nothing about language and therefore must learn it from scratch through interaction, while the human adapts to the computer’s capabilities. We created a game in a blocks world and collected interactions from 100 people playing it.

In the second part (about ongoing work), we explore the setting where a language is supported by a community of people (instead of being private to each individual), and the computer has to learn from the aggregate knowledge of a community of goal-oriented language users. We explore how to use additional supervision such as definitions and demonstrations.
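
Abstractly, the first part's learning-from-scratch loop looks like this: the computer ranks candidate actions for each utterance, the human indicates which action achieves the goal, and the model updates its parameters. Here is a minimal sketch using a perceptron-style update over (word, action) features; the action names, feature set, and update rule are illustrative assumptions, not the system from the talk.

```python
from collections import defaultdict

# Sketch of interactive language learning from scratch: a linear model over
# (word, action) features, updated when the human picks the correct action.
weights = defaultdict(float)

def features(utterance, action):
    return [(word, action) for word in utterance.split()]

def score(utterance, action):
    return sum(weights[f] for f in features(utterance, action))

def rank(utterance, actions):
    return sorted(actions, key=lambda a: -score(utterance, a))

def update(utterance, actions, chosen, lr=1.0):
    # Perceptron-style update toward the action the human selected.
    predicted = rank(utterance, actions)[0]
    if predicted != chosen:
        for f in features(utterance, chosen):
            weights[f] += lr
        for f in features(utterance, predicted):
            weights[f] -= lr

actions = ["add_red", "remove_blue", "remove_red"]
update("remove all red blocks", actions, chosen="remove_red")
print(rank("remove all red blocks", actions)[0])  # now ranks remove_red first
```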

(Slides)

Sept 19: Percy Liang: Learning from Zero

Please join us for the NLP Seminar this Monday (Sept. 19) at 3:30pm in 202 South Hall.
(This is a rescheduling of a talk that was postponed from last semester.)
Speaker: Percy Liang (Stanford)

Title: Learning from Zero

Abstract:

Can we learn if we start with zero examples, either labeled or unlabeled? This scenario arises in new user-facing systems (such as virtual assistants for new domains), where inputs should come from users, but no users exist until we have a working system, which depends on having training data. I discuss recent work that circumvents this circular dependence by interleaving user interaction and learning.

(Slides)

Preparatory readings:

Liang, Percy. “Talking to computers in natural language.” XRDS: Crossroads, The ACM Magazine for Students 21.1 (2014): 18-21.
Liang, Percy. “Learning Executable Semantic Parsers for Natural Language Understanding.” arXiv preprint arXiv:1603.06677 (2016).

Sept 12: Ellie Pavlick: Natural Language Inference in the Real World

Please join us for the NLP Seminar on Monday Sept 12 at 3:30pm in 202 South Hall. All are welcome! (This semester we are posting preparatory readings, for those who are interested. Note also the room change to room 202 South Hall.)

Speaker: Ellie Pavlick (U Penn)

Title: Natural Language Inference in the Real World

Abstract:

It is impossible to reason about natural language by memorizing all possible sentences. Rather, we rely on models of composition which allow the meanings of individual words to be combined to produce the meanings of longer phrases. In natural language processing, we often employ models of composition that work well for carefully curated datasets or toy examples, but prove to be very brittle when applied to the type of language that humans actually use.

This talk will discuss our work on applying natural language inference in the “real world.” I will describe observations from experimental studies of humans’ linguistic inferences, and discuss the challenges they present for existing methods of automated natural language inference (with particular focus on adjective-noun composition). I will also outline our current work on extending models of compositional entailment in order to better handle the types of imprecise inferences we observe in human language.
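
To make the brittleness concrete: the naive compositional rule “ADJ N entails N” holds for subsective adjectives (a red car is a car) but fails for privative ones (a fake gun is not a gun). A minimal sketch, using a tiny hand-built adjective lexicon that is purely illustrative:

```python
# Sketch of why the naive rule "ADJ N entails N" breaks down. The adjective
# classes below are a tiny hand-built lexicon, for illustration only.
SUBSECTIVE = {"red", "wooden", "large"}        # a red car is a car
PRIVATIVE = {"fake", "counterfeit", "former"}  # a fake gun is not a gun

def naive_entails_noun(phrase):
    # Naive composition: assume every "ADJ N" entails "N".
    return True

def lexicon_aware_entails_noun(phrase):
    adjective = phrase.split()[0]
    if adjective in PRIVATIVE:
        return False
    if adjective in SUBSECTIVE:
        return True
    return None  # unknown adjective: an imprecise inference a model must learn

for phrase in ["red car", "fake gun"]:
    print(phrase, naive_entails_noun(phrase), lexicon_aware_entails_noun(phrase))
```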

(Slides)

NLP Seminar Fall 2016 Schedule (Mondays at 3:30pm)

Welcome back everyone to the Fall 2016 semester!  This fall we will have a new time for the NLP Seminar; we’ll be meeting on Mondays from 3:30-4:30pm.  The location is in room 202 South Hall.

Another exciting change is that UC Berkeley students can earn 1 unit of credit for attending the meeting on a weekly basis. Grad students can sign up under I School 290 or CS 294.

Breaking news: undergraduates from any department can now sign up for INFO 190-001 (CCN 34812).

We’ll continue with invited speakers for alternating weeks, and in the intervening weeks we’ll discuss research papers and related activities.  We’ll continue to post speaker information on this web site.

The speaker lineup is:
9/12: Ellie Pavlick (U Penn)
9/19: Percy Liang (Stanford)
10/3: Sida Wang (Stanford)
10/17: Jacob Andreas (Berkeley)
10/31: Jiwei Li (Stanford)
11/14: David Jurgens (Stanford)
11/28: NLP professors (Berkeley)

April 21: Dan Gillick: Multilingual language processing from bytes

Please join us for the NLP Seminar this Thursday (April 21) at 4pm in 205 South Hall. All are welcome!

Speaker: Dan Gillick (Google)

Title: Multilingual language processing from bytes

Abstract:

I’ll describe my recent work on standard language processing tasks like part-of-speech tagging and named entity recognition where I replace the traditional pipeline of models with a recurrent neural network. In particular, the model reads one byte at a time (it doesn’t know anything about tokens or sentences) and produces output over byte spans. This allows for very compact, multilingual models that improve over models trained on a single language. I’ll show lots of results and we can discuss the merits and problems with this approach.
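
A hedged sketch of the byte-level setup: input text is just its UTF-8 bytes (a vocabulary of at most 256 symbols, shared across all languages), which are embedded and fed to a recurrent network that scores a label for every byte; labeled byte spans can then be read off the per-byte predictions. This PyTorch skeleton demonstrates only the input representation and output shapes, and is not the model from the talk:

```python
import torch
import torch.nn as nn

# Sketch of a byte-level tagger: embed raw UTF-8 bytes, run a BiLSTM, and
# score a label for every byte (e.g., BIO tags over byte spans). Untrained.
class ByteTagger(nn.Module):
    def __init__(self, num_labels, dim=64):
        super().__init__()
        self.embed = nn.Embedding(256, dim)  # one embedding per possible byte value
        self.lstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * dim, num_labels)

    def forward(self, byte_ids):
        hidden, _ = self.lstm(self.embed(byte_ids))
        return self.out(hidden)  # (batch, num_bytes, num_labels)

text = "Берлин is a city"  # any language maps to the same 256-symbol vocabulary
byte_ids = torch.tensor([list(text.encode("utf-8"))])
logits = ByteTagger(num_labels=5)(byte_ids)
print(byte_ids.shape, logits.shape)
```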

April 7: Percy Liang: Learning from Zero

Please join us for the next NLP Seminar Thursday, April 7 at 4pm in 205 South Hall. All are welcome!

Speaker: Percy Liang (Stanford)
Title: Learning from Zero

Abstract:

Can we learn if we start with zero examples, either labeled or unlabeled? This scenario arises in new user-facing systems (such as virtual assistants for new domains), where inputs should come from users, but no users exist until we have a working system, which depends on having training data. I discuss recent work that circumvents this circular dependence by interleaving user interaction and learning.

Mar 10: Justine Kao: Computational approaches to creative and social language

Please join us for the next NLP Seminar Thursday, March 10 at 4pm in 205 South Hall. All are welcome!

Speaker: Justine Kao (Stanford)

Title: Computational approaches to creative and social language

Abstract:

Humans are highly creative users of language. From poetry, to puns, to political satire, much of human communication requires listeners to go beyond the literal meanings of words to infer an utterance’s subtext as well as the speaker’s intentions. Here I will present research on four types of creative language: hyperbole, irony, wordplay, and poetry. In the first part of the talk, I will describe an extension to Rational Speech Act (RSA) models, a family of computational models that view language understanding as recursive social reasoning. Using a series of behavioral experiments, I show that the RSA model predicts people’s interpretations of hyperbole and irony with high accuracy. In the second part of the talk, I will describe formal measures of humor and poetic beauty that incorporate insights from psychology and literary theory to produce quantitative predictions. Finally, I will discuss ways in which cognitive modeling and NLP can provide complementary approaches to understanding the social aspects of language use.
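
The RSA recursion itself is compact: a literal listener interprets utterances by their literal truth conditions, a pragmatic speaker chooses utterances to be informative to that listener, and a pragmatic listener reasons back about the speaker. A minimal numpy sketch on a toy reference game (the affect and price dimensions that the hyperbole and irony models add are omitted):

```python
import numpy as np

# Minimal Rational Speech Acts (RSA) sketch on a toy reference game.
# Rows are utterances, columns are referents; uniform priors, no costs.
# Referents:        A(glasses)  B(glasses+hat)  C(hat)
semantics = np.array([
    [1.0, 1.0, 0.0],  # "glasses" is literally true of A and B
    [0.0, 1.0, 1.0],  # "hat" is literally true of B and C
])

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

alpha = 1.0                            # speaker rationality
L0 = normalize(semantics, axis=1)      # literal listener:   P(referent | utterance)
S1 = normalize(L0.T ** alpha, axis=1)  # pragmatic speaker:  P(utterance | referent)
L1 = normalize(S1.T, axis=1)           # pragmatic listener: P(referent | utterance)

print(L1.round(2))  # "glasses" now points mostly to A, "hat" mostly to C
```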
