Fall 2018: No Seminar Talks

The NLP Seminar is taking a break in Fall 2018; we won’t be scheduling speakers this semester. See you in the Spring!

Apr 30: Marilyn Walker: Modeling Narrative Structure in Informal First-Person Narratives

Please join us for the last NLP Seminar of the semester on Monday, April 30, at 4:00pm in 202 South Hall. All are welcome!

Speaker:  Marilyn Walker (UCSC)

Title:  Modeling Narrative Structure in Informal First-Person Narratives

Abstract: 

Many genres of natural language text are narratively structured, reflecting the human bias towards organizing our experiences as narratives. Understanding narrative structure in full requires many discourse-level NLP components, including modeling the motivations, goals and desires of the protagonists, modeling the affect states of the protagonists and their transitions across story timepoints, and modeling the causal links between story events. This talk will focus on our recent work on modeling first-person participant goals and desires and their outcomes. I describe DesireDB, a collection of personal first-person stories from the Spinn3r corpus, which are annotated for statements of desire, textual evidence for desire fulfillment, and whether the stated desire is fulfilled given the evidence in the narrative context. I will describe experiments on tracking desire fulfillment using different methods, and show that an LSTM Skip-Thought model using the context both before and after the desire statement achieves an F-measure of 0.7 on the corpus. I will also briefly discuss our work on modeling affect states and causal links between story events on the same corpus of informal stories.

The presented work was jointly conducted with Elahe Rahimtoroghi, Jiaqi Wu, Pranav Anand, Ruimin Wang, Lena Reed and Shereen Oraby.
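The abstract doesn’t spell out the architecture, but for readers who want a concrete picture, here is a minimal sketch of the general shape such a model might take: the context before the desire statement, the statement itself, and the context after are encoded separately, and the three encodings feed a binary fulfilled/unfulfilled prediction. The published model uses Skip-Thought sentence encodings; this sketch substitutes plain LSTM encoders, and all sizes and field names are illustrative assumptions, not the authors’ implementation.

```python
# Sketch of a desire-fulfillment classifier: encode the prior context,
# the desire statement, and the posterior context with separate LSTMs,
# then classify fulfilled vs. not from the concatenated encodings.
import torch
import torch.nn as nn

class DesireFulfillmentModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One encoder per span: prior context, desire statement, posterior context.
        self.prior_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.desire_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.post_enc = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(3 * hidden_dim, 2)  # fulfilled vs. not

    def encode(self, lstm, token_ids):
        _, (h, _) = lstm(self.embed(token_ids))
        return h[-1]  # final hidden state, shape (batch, hidden_dim)

    def forward(self, prior, desire, post):
        features = torch.cat([self.encode(self.prior_enc, prior),
                              self.encode(self.desire_enc, desire),
                              self.encode(self.post_enc, post)], dim=-1)
        return self.classifier(features)

# Toy forward pass: batch of 2 examples, 5 token ids per span.
model = DesireFulfillmentModel(vocab_size=10000)
spans = [torch.randint(0, 10000, (2, 5)) for _ in range(3)]
logits = model(*spans)  # shape (2, 2)
```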

Biography:

Marilyn Walker is a Professor of Computer Science at UC Santa Cruz and a Fellow of the Association for Computational Linguistics (ACL), recognized for her fundamental contributions to statistical methods for dialog optimization, to centering theory, and to expressive generation for dialog. Her current research includes work on computational models of dialogue interaction and conversational agents, analysis of affect, sarcasm and other social phenomena in social media dialogue, acquiring causal knowledge from text, conversational summarization, interactive story and narrative generation, and statistical methods for training the dialogue manager and the language generation engine for dialogue systems.

Before coming to Santa Cruz in 2009, Walker was a professor of computer science at the University of Sheffield. From 1996 to 2003, she was a principal member of the research staff at AT&T Bell Labs and AT&T Research, where she worked on the AT&T Communicator project, developing a new architecture for spoken dialogue systems and statistical methods for dialogue management and generation. Walker has published more than 200 papers and has more than 10 U.S. patents granted. She earned an M.S. in computer science at Stanford University, and a Ph.D. in computer science at the University of Pennsylvania.

Apr 16: Amber Boydstun: How Surges in Dominant Media Narratives Move Public Opinion

Please join us for our NLP Seminar next Monday, April 16, at 4:00pm in 202 South Hall.

Speaker: Amber Boydstun (Associate Professor of Political Science, UC Davis)

Title: How Surges in Dominant Media Narratives Move Public Opinion

Abstract:

Studies examining the potential effects of media coverage on public attitudes toward policy issues (e.g., abortion, capital punishment) have identified three variables that, depending on the issue, can wield significant influence: the tone of the coverage (positive/negative/neutral), the frames used (e.g., discussing the issue from an economic vs. a moral perspective), and the overall level of media attention to the issue. Yet, to date, no study has examined all three variables in combination. We fill this gap by building a theoretical argument for why, despite the important variance across different issues, a single measure should in general be able to predict significant shifts in public opinion: surges in media attention to “dominant media narratives,” or stories that consistently frame the issue the same way (e.g., economic) using the same tone (e.g., anti-immigration) relative to other competing narratives. We test this hypothesis on U.S. newspaper coverage of three very different policy issues (immigration, same-sex marriage, and gun control) from 1992 to 2012. We use manual content analysis linked with computational modeling, tracking tone (pro/anti/neutral), emphasis frames (e.g., economic, morality), and overall levels of attention. Using time series analysis of public opinion data, we show that, for all three issues, previous surges in dominant media narratives significantly shape opinion. In short, when media coverage converges around a unified way of describing a policy issue, the public tends to follow. Our study adds to the fields of political communication and public opinion and marks an advance in computational text analysis methods. (Joint work with Dallas Card and Noah Smith.)
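To make the key measure concrete: one simple way to operationalize a “dominant media narrative” is the share of a period’s coverage captured by its most common (frame, tone) pairing, with a “surge” flagged when that share jumps relative to recent periods. The sketch below is a hypothetical illustration of that idea, not the authors’ actual measurement pipeline; the window and threshold are invented.

```python
# Hypothetical operationalization: per period, the dominant narrative is
# the most common (frame, tone) pair; a surge is a jump in its share of
# coverage relative to the recent past.
from collections import Counter

def dominant_narrative_share(stories):
    """stories: list of (frame, tone) pairs coded for one time period."""
    counts = Counter(stories)
    narrative, top = counts.most_common(1)[0]
    return narrative, top / len(stories)

def surges(periods, window=3, jump=0.15):
    """Flag periods whose dominant-narrative share exceeds the mean of
    the preceding `window` periods by more than `jump`."""
    shares = [dominant_narrative_share(p)[1] for p in periods]
    flagged = []
    for t in range(window, len(shares)):
        baseline = sum(shares[t - window:t]) / window
        if shares[t] - baseline > jump:
            flagged.append(t)
    return flagged

# Toy example: coverage in month 3 converges on an (economic, anti) narrative.
months = [
    [("economic", "anti"), ("morality", "pro"), ("economic", "pro")],
    [("economic", "anti"), ("morality", "anti"), ("morality", "pro")],
    [("economic", "pro"), ("morality", "pro"), ("economic", "anti")],
    [("economic", "anti")] * 5 + [("morality", "pro")],
]
print(surges(months))  # -> [3]
```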

Mar 12: Rob Voigt: Implicit Attitudes, NLP, and the “Real World”

Please join us for the NLP Seminar on Monday, March 12, at 4:00pm in 202 South Hall. All are welcome!

Speaker: Rob Voigt (Stanford)

Title: Implicit Attitudes, NLP, and the “Real World”

Abstract:

While some forms of bias in language are explicit, such as overt references to stereotypes, much linguistic bias is far more subtle, where implicit attitudes towards social groups pervasively affect how we talk to and about members of those groups. As a result, such variation is often identifiable only in aggregate accounting for the contexts of language use. In this talk, I will present two projects from my dissertation which aim to complement NLP techniques with on-the-ground facts about the world to understand the joint linguistic and extralinguistic factors that contribute to social biases.

First, I’ll present the results of a study using body camera footage from the Oakland Police Department as interactional data for analyzing racial disparities in officer language. Applying a computational linguistic model of respect across a month of everyday traffic stops, we found that officers were less respectful to black than to white community members, even after controlling for social factors like officer race and contextual factors like the location of the stop and the severity of the offense. Second, I’ll present ongoing work exploring representations of immigrants in the US news media over historical time. Our results thus far suggest cyclic patterns of linguistic “othering” that recur with each immigrant group as they arrive and are directly connected to economic and demographic circumstances of those groups.
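As a purely illustrative picture of what “controlling for” means here: the disparity estimate comes from regressing an utterance-level respect score on community-member race alongside the control variables. The sketch below uses invented column names and toy data; it is not the study’s actual model specification.

```python
# Hedged sketch of a "controlling for" analysis: regress respect scores on
# community-member race while holding officer race and offense severity
# fixed. Columns and data are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical coded data: one row per officer utterance.
df = pd.DataFrame({
    "respect":  [0.2, 0.8, 0.5, 0.9, 0.3, 0.7],
    "cm_race":  ["black", "white", "black", "white", "black", "white"],
    "off_race": ["white", "white", "black", "black", "white", "black"],
    "severity": [2, 1, 3, 1, 2, 2],
})
model = smf.ols("respect ~ C(cm_race) + C(off_race) + severity", data=df).fit()
print(model.params)  # coefficient on cm_race = adjusted disparity estimate
```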

(Slides)

Feb 26: Jonathan Kummerfeld: Representing Online Conversation Structure with Graphs

Please join us for the NLP Seminar on Monday, February 26, at 4:00pm in 202 South Hall. All are welcome!

Speaker: Jonathan Kummerfeld (U Michigan)

Title:  Representing Online Conversation Structure with Graphs: A New Corpus and Model

Abstract: 

When a group of people communicate online, their conversation is rarely linear, with each message responding only to the one immediately before it. To build systems that understand a group conversation, we need a way to identify the discourse structure: what each message is responding to. I’ll speak about a new corpus we constructed with reply structure annotations for 19,924 messages across 58 hours of IRC discussion. Using our annotations, we analyze the strengths and weaknesses of a recent heuristically extracted set of conversations that has formed the basis of extensive work on dialogue systems (Lowe et al., 2015). Finally, I’ll present statistical models for the task, which improve thread extraction performance from 25.7 F (heuristic) to 60.3 F (our approach). Using our model, we extract a new set of conversations that provides high-quality data for use in downstream dialogue system development.
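To make the graph representation concrete, here is a toy sketch: messages are nodes, predicted reply links are edges, and threads fall out as connected components. The reply “predictor” below is just the IRC addressing heuristic (“name: …”); the talk’s statistical models are what replace it.

```python
# Toy sketch: messages as nodes, predicted reply links as edges, and
# conversation threads as the connected components of the graph.
import networkx as nx

# (message id, author, text); ids double as list indices below.
messages = [
    (0, "alice", "anyone know why my build fails?"),
    (1, "bob",   "hey all"),
    (2, "carol", "alice: paste the error?"),
    (3, "alice", "carol: sure, one sec"),
    (4, "dave",  "bob: hi!"),
]

def predicted_reply_edges(msgs):
    """Link each message to the latest earlier message by the user it
    addresses with the "name:" convention, if any."""
    edges = []
    for i, _, text in msgs:
        for j, author, _ in reversed(msgs[:i]):
            if text.startswith(author + ":"):
                edges.append((i, j))
                break
    return edges

g = nx.Graph()
g.add_nodes_from(i for i, _, _ in messages)
g.add_edges_from(predicted_reply_edges(messages))
threads = [sorted(c) for c in nx.connected_components(g)]
print(threads)  # two threads: [0, 2, 3] and [1, 4]
```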

(Slides)

Jan 22: Jacob Andreas: Learning from Language

Please join us for the NLP Seminar on Monday, January 22, at 4:00pm in 202 South Hall. All are welcome!

Speaker:  Jacob Andreas (Berkeley)

Title:  Learning from Language

Abstract:

The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this information help us build better machine learning models? We’ll explore three different ways of using language to support learning: to provide structure to question answering models, fast training and improved generalization for reinforcement learners, and interpretability to general-purpose deep models.

(Slides)

NLP Seminar Schedule Spring 2018

The NLP Seminar continues in Spring 2018! We will continue meeting Mondays from 4:00-5:00pm in room 202 South Hall. We’ll be meeting approximately once a month this semester. We are still filling out the schedule; here is the calendar so far:

Jan 22: Jacob Andreas, UC Berkeley

Feb 26: Jonathan Kummerfeld, U Michigan

Mar 12: Rob Voigt, Stanford

Apr 16:  Amber Boydstun, UC Davis

Apr 30: Marilyn Walker, UC Santa Cruz

For up-to-the-minute notifications, join the email list (UC Berkeley community only).

Nov 13: He He: Learning Agents That Interact With Humans

Please join us for the NLP Seminar on Monday, November 13, at 4:00pm in 202 South Hall. All are welcome!

Speaker:  He He (Stanford)

Title: Learning Agents That Interact With Humans

Abstract:

The future of virtual assistants, self-driving cars, and smart homes requires intelligent agents that work intimately with users. Instead of passively following orders given by users, an interactive agent must actively collaborate with people through communication, coordination, and user-adaptation. In this talk, I will present our recent work towards building agents that interact with humans. First, we propose a symmetric collaborative dialogue setting in which two agents, each with some private knowledge, must communicate in natural language to achieve a common goal. We present a human-human dialogue dataset that poses new challenges to existing models, and propose a neural model with dynamic knowledge graph embedding. Second, we study the user-adaptation problem in quizbowl, a competitive, incremental question-answering game. We show that explicitly modeling different human behaviors leads to more effective policies that exploit sub-optimal players. I will conclude by discussing opportunities and open questions in learning interactive agents.
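For a concrete feel for the symmetric collaborative dialogue setting, here is a toy rendering: two agents each hold a private item list with a single guaranteed overlap, and must identify it by exchanging messages. The scripted exchange below only illustrates the information structure of the task; it is not the neural dialogue model from the talk, and all names and details are invented.

```python
# Toy symmetric collaborative dialogue: each agent holds a private list
# with exactly one shared item, and the pair must find it.
import random

def make_scenario(pool, k=5):
    """Give each agent k private items with exactly one guaranteed overlap."""
    shared = random.choice(pool)
    rest = [x for x in pool if x != shared]
    random.shuffle(rest)
    a = [shared] + rest[:k - 1]
    b = [shared] + rest[k - 1:2 * (k - 1)]
    random.shuffle(a)
    random.shuffle(b)
    return a, b, shared

pool = ["alice", "bob", "carol", "dave", "erin",
        "frank", "grace", "heidi", "ivan"]
a_kb, b_kb, answer = make_scenario(pool)

# Scripted dialogue: A names its items until B recognizes one.
for mention in a_kb:
    print("A: do you know {}?".format(mention))
    if mention in b_kb:
        print("B: yes! SELECT {}".format(mention))
        assert mention == answer  # the intersection is exactly one item
        break
    print("B: no.")
```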

(Slides)

Oct 30: Christopher Potts: Enriching distributional linguistic representations with structured resources

Please join us for the NLP Seminar on Monday, October 30, at 4:00pm in 202 South Hall.  All are welcome!

Speaker: Christopher Potts (Stanford Linguistics)

Title:  Enriching distributional linguistic representations with structured resources

Abstract:

One of the most powerful ideas in natural language processing is that we can represent words and phrases using dense vectors learned from co-occurrence patterns in text. Such representations have proven themselves in many settings, and one might even argue that they make good on a common intuition among linguists: that words tend to be incredibly complex and related to each other in all sorts of subtle ways. However, co-occurrence patterns alone tend to yield only a blurry picture of the rich relationships that exist between concepts, which raises the question of how best to incorporate additional information from more structured resources. This talk will explore methods for achieving this synthesis, with special emphasis on the retrofitting method pioneered by Faruqui et al. (2015), in which existing representations are updated based on their position in a knowledge graph. I’ll describe and motivate a generalization of Faruqui et al.’s framework that explicitly models graph relations as functions (Lengerich et al. 2017), and I’ll discuss some potential pitfalls of retrofitting (Cases et al. 2017). My overall goal is to stimulate discussion about how to obtain semantically nuanced distributed representations that are useful in diverse tasks.
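For readers unfamiliar with retrofitting, the core update from Faruqui et al. (2015) is compact enough to sketch: each vector is repeatedly moved toward the average of its neighbors in a semantic lexicon while staying anchored to its original distributional estimate. With the paper’s default weights (alpha = 1, beta = 1/degree), the update reduces to a simple average. The toy vectors and lexicon below are invented for illustration.

```python
# Retrofitting in miniature: nudge each word vector toward the mean of
# its lexicon neighbors while staying anchored to the original vector.
import numpy as np

def retrofit(vectors, lexicon, iterations=10):
    """vectors: {word: np.ndarray}; lexicon: {word: [neighbor words]}."""
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iterations):
        for word, neighbors in lexicon.items():
            nbrs = [n for n in neighbors if n in new]
            if not nbrs:
                continue
            neighbor_mean = sum(new[n] for n in nbrs) / len(nbrs)
            # Equal-weight combination of the original vector (alpha = 1)
            # and the neighbor mean (total neighbor weight = 1).
            new[word] = (vectors[word] + neighbor_mean) / 2
    return new

vecs = {"happy": np.array([1.0, 0.0]),
        "glad":  np.array([0.0, 1.0]),
        "sad":   np.array([-1.0, 0.0])}
synonyms = {"happy": ["glad"], "glad": ["happy"]}
retro = retrofit(vecs, synonyms)
print(retro["happy"], retro["glad"])  # pulled toward each other
```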

(Slides)

References:

Cases, Ignacio; Minh-Thang Luong; and Christopher Potts. 2017. On the effective use of pretraining for natural language inference. Ms., Stanford University. arxiv.org/abs/1710.02076

Faruqui, Manaal; Jesse Dodge; Sujay K. Jauhar; Chris Dyer; Eduard Hovy; and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. NAACL. www.aclweb.org/anthology/N15-1184

Lengerich, Benjamin J.; Andrew L. Maas; and Christopher Potts. 2017. Retrofitting distributional embeddings to knowledge graphs with functional relations. Ms., Carnegie Mellon University, Stanford University, and Roam Analytics. arxiv.org/abs/1708.00112

Oct 9: Siva Reddy: Linguists-defined vs. Machine-induced Natural Language Structures for Executable Semantic Parsing

Please join us for the next NLP Seminar on Monday, October 9, at 4:00pm in 202 South Hall.

Speaker: Siva Reddy (Stanford)

Title:  Linguists-defined vs. Machine-induced Natural Language Structures for Executable Semantic Parsing

Abstract:

Querying a database to retrieve an answer, telling a robot to perform an action, or teaching a computer to play a game are tasks requiring communication with machines in a language interpretable by them. Here we consider the task of converting human languages to a knowledge-base (KB) language for question-answering. While human languages have latent structures, machine-interpretable languages have explicit formal structures. The computational linguistics community has created several treebanks to understand the formal structures of human languages, e.g., Universal Dependencies. But are these useful for deriving machine-interpretable formal structures?

In the first part of the talk, I will discuss how to convert Universal Dependencies in multiple languages to both general-purpose and KB-executable logical forms. In the second part, I will present a neural model that induces task-specific natural language structures. I will discuss the similarities and differences between linguist-defined and machine-induced structures, and the pros and cons of each.
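As a toy illustration of the first idea (this is not the speaker’s system): one can read a neo-Davidsonian logical form off a Universal Dependencies parse by treating the root verb as an event predicate and its core arguments as slots. The sketch below handles only a single transitive clause; real systems cover far more phenomena.

```python
# Toy UD-to-logical-form conversion: root verb becomes an event predicate,
# nsubj/obj dependents fill its argument slots.
def parse_to_logical_form(tokens, deps):
    """tokens: list of words; deps: list of (head_idx, relation, dep_idx)."""
    root = next(d for _, rel, d in deps if rel == "root")
    args = {rel: tokens[d] for h, rel, d in deps
            if h == root and rel in ("nsubj", "obj")}
    return "exists e. {}(e) & arg1(e, {}) & arg2(e, {})".format(
        tokens[root], args.get("nsubj", "?"), args.get("obj", "?"))

tokens = ["Cambridge", "borders", "Oxford"]
deps = [(-1, "root", 1), (1, "nsubj", 0), (1, "obj", 2)]
print(parse_to_logical_form(tokens, deps))
# exists e. borders(e) & arg1(e, Cambridge) & arg2(e, Oxford)
```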

Bio:

Siva Reddy is a postdoc in the Stanford NLP Group working with Chris Manning. His research focuses on finding fundamental representations of language, mostly interpretable ones, which are useful for NLP applications, especially machine understanding. In this direction, he is currently exploring whether linguistic representations are necessary or whether end-to-end learning is all we need. His postdoc is partly funded by a Facebook AI Research grant. Prior to the postdoc, he was a Google PhD Fellow at the University of Edinburgh under the supervision of Mirella Lapata and Mark Steedman. He interned with the Google parsing team during his PhD, and worked full-time on Adam Kilgarriff’s Sketch Engine before his PhD. His team won first place in the SemEval 2011 Compositionality Detection task, and he received a best paper award at IJCNLP 2011. Apart from language, he loves nature and badminton.
