Category: seminars (Page 7 of 8)

Please join us for the NLP Seminar this Monday (Oct 3) at 3:30pm in 202 South Hall.

Speaker: Sida Wang (Stanford U)

Title: Interactive Language Learning

Abstract:

We introduce two parts of the interactive language learning setting. The first is learning from scratch, and the second is learning from a community of goal-oriented language users, which is relevant to building adaptive natural language interfaces. The first part is inspired by Wittgenstein’s language games: a human wishes to accomplish some task (e.g., achieving a certain configuration of blocks), but can only communicate with a computer, which performs the actual actions (e.g., removing all red blocks). The computer initially knows nothing about language and therefore must learn it from scratch through interaction, while the human adapts to the computer’s capabilities. We created a game in a blocks world and collected interactions from 100 people playing it.
In the second part (about ongoing work), we explore the setting where a language is supported by a community of people (instead of being private to each individual), and the computer has to learn from the aggregate knowledge of a community of goal-oriented language users. We explore how to use additional supervision such as definitions and demonstrations.
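
To make the learning-from-scratch loop concrete, here is a minimal sketch, assuming a toy feature set and candidate action list (none of this is the speaker's actual system): the computer ranks candidate actions with a simple linear model and updates its weights from the feedback implicit in the human's choice.

```python
from collections import defaultdict

def features(utterance, action):
    """Toy features: pair each word of the utterance with the action name."""
    return [(word, action) for word in utterance.split()]

def score(weights, utterance, action):
    return sum(weights[f] for f in features(utterance, action))

def rank(weights, utterance, actions):
    return sorted(actions, key=lambda a: score(weights, utterance, a), reverse=True)

def update(weights, utterance, chosen, guessed, lr=1.0):
    """Perceptron-style update: reward features of the action the human
    accepted and penalize features of the model's wrong guess."""
    for f in features(utterance, chosen):
        weights[f] += lr
    for f in features(utterance, guessed):
        weights[f] -= lr

weights = defaultdict(float)                  # the computer starts from scratch
actions = ["add_red", "remove_blue", "remove_red"]
utterance = "remove all red blocks"

guess = rank(weights, utterance, actions)[0]  # an uninformed first guess
human_choice = "remove_red"                   # feedback: the human indicates the right action
if guess != human_choice:
    update(weights, utterance, human_choice, guess)

print(rank(weights, utterance, actions)[0])   # now ranks remove_red first
```

In the real game the human never labels actions directly; the model only observes which candidate the player accepts, which is exactly the signal the update above consumes.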

(Slides)

Please join us for the NLP Seminar this Monday (Sept. 19) at 3:30pm in 202 South Hall.
(This is a rescheduling of a talk that was postponed from last semester.)
Speaker: Percy Liang (Stanford)

Title: Learning from Zero

Abstract:

Can we learn if we start with zero examples, either labeled or unlabeled? This scenario arises in new user-facing systems (such as virtual assistants for new domains), where inputs should come from users, but no users exist until we have a working system, which depends on having training data. I discuss recent work that circumvents this circular dependence by interleaving user interaction and learning.

(Slides)

Preparatory readings:

Liang, Percy. “Talking to computers in natural language.” XRDS: Crossroads, The ACM Magazine for Students 21.1 (2014): 18–21.
Liang, Percy. “Learning Executable Semantic Parsers for Natural Language Understanding.” arXiv preprint arXiv:1603.06677 (2016).

Please join us for the NLP Seminar on Monday, Sept 12 at 3:30pm in 202 South Hall. All are welcome! (This semester we are posting preparatory readings, for those who are interested. Note also the room change to 202 South Hall.)

Speaker: Ellie Pavlick (U Penn)

Title: Natural Language Inference in the Real World
 
Abstract:

It is impossible to reason about natural language by memorizing all possible sentences. Rather, we rely on models of composition, which allow the meanings of individual words to be combined to produce the meanings of longer phrases. In natural language processing, we often employ models of composition that work well for carefully curated datasets or toy examples, but prove to be very brittle when applied to the type of language that humans actually use.

This talk will discuss our work on applying natural language inference in the “real world.” I will describe observations from experimental studies of humans’ linguistic inferences, and the challenges they present for existing methods of automated natural language inference (with particular focus on the case of adjective-noun composition). I will also outline our current work on extending models of compositional entailment in order to better handle the types of imprecise inferences we observe in human language.

Slides: (pdf of slides)

Welcome back, everyone, to the Fall 2016 semester! This fall we will have a new time for the NLP Seminar; we’ll be meeting on Mondays from 3:30-4:30pm. The location is room 202 South Hall.

Another exciting change is that UC Berkeley students can earn 1 unit of credit for attending the meeting on a weekly basis. Grad students can sign up under I School 290 or CS 294.

Breaking news: undergraduates from any department can now sign up for INFO 190-001 (CCN 34812).

We’ll continue with invited speakers on alternating weeks, and in the intervening weeks we’ll discuss research papers and related activities. As before, we’ll post speaker information on this website.

The speaker lineup is:
9/12: Ellie Pavlick (U Penn)
9/19: Percy Liang (Stanford)
10/3: Sida Wang (Stanford)
10/17: Jacob Andreas (Berkeley)
10/31: Jiwei Li (Stanford)
11/14: David Jurgens (Stanford)
11/28: NLP professors (Berkeley)

Please join us for the NLP Seminar this Thursday (April 21) at 4pm in 205 South Hall. All are welcome!

Speaker: Dan Gillick (Google)

Title: Multilingual language processing from bytes

Abstract:

I’ll describe my recent work on standard language processing tasks like part-of-speech tagging and named entity recognition, where I replace the traditional pipeline of models with a recurrent neural network. In particular, the model reads one byte at a time (it doesn’t know anything about tokens or sentences) and produces output over byte spans. This allows for very compact, multilingual models that improve over models trained on a single language. I’ll show lots of results, and we can discuss the merits and problems of this approach.
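
As a concrete illustration of the input/output representation (a sketch of the data format only, not the actual model), the text is consumed as raw UTF-8 bytes and the annotations live on byte offsets, so nothing in the pipeline depends on tokenization or on any particular language:

```python
def to_byte_spans(text, char_spans):
    """Convert character-level annotations into (start, end, label) spans
    over the UTF-8 encoding of `text`. The model itself only ever sees the
    byte sequence and predicts labels over byte spans like these."""
    spans = []
    for start, end, label in char_spans:
        # Byte offset of a character position = length of the encoded prefix.
        b_start = len(text[:start].encode("utf-8"))
        b_end = len(text[:end].encode("utf-8"))
        spans.append((b_start, b_end, label))
    return spans

text = "Ångström visited München"
# Character-level named-entity annotations: (start, end, label).
char_spans = [(0, 8, "PER"), (17, 24, "LOC")]

print(list(text.encode("utf-8"))[:10])   # the model's actual input: raw bytes
print(to_byte_spans(text, char_spans))   # [(0, 10, 'PER'), (19, 27, 'LOC')]
```

Note how the multi-byte characters shift the offsets: operating on bytes sidesteps any language-specific notion of what a token is.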

Please join us for the next NLP Seminar Thursday, April 7 at 4pm in 205 South Hall. All are welcome!

Speaker: Percy Liang (Stanford)
Title: Learning from Zero

Abstract:
Can we learn if we start with zero examples, either labeled or unlabeled? This scenario arises in new user-facing systems (such as virtual assistants for new domains), where inputs should come from users, but no users exist until we have a working system, which depends on having training data. I discuss recent work that circumvents this circular dependence by interleaving user interaction and learning.

Please join us for the next NLP Seminar Thursday, March 10 at 4pm in 205 South Hall. All are welcome!

Speaker: Justine Kao (Stanford)

Title: Computational approaches to creative and social language

Abstract:

Humans are highly creative users of language. From poetry, to puns, to political satire, much of human communication requires listeners to go beyond the literal meanings of words to infer an utterance’s subtext as well as the speaker’s intentions. Here I will present research on four types of creative language: hyperbole, irony, wordplay, and poetry. In the first part of the talk, I will describe an extension to Rational Speech Act (RSA) models, a family of computational models that view language understanding as recursive social reasoning. Using a series of behavioral experiments, I show that the RSA model predicts people’s interpretations of hyperbole and irony with high accuracy. In the second part of the talk, I will describe formal measures of humor and poetic beauty that incorporate insights from psychology and literary theory to produce quantitative predictions. Finally, I will discuss ways in which cognitive modeling and NLP can provide complementary approaches to understanding the social aspects of language use.
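
For readers unfamiliar with RSA, here is a minimal generic implementation of the recursion (a textbook scalar-implicature example, not Kao's hyperbole or irony model): a literal listener conditions on literal truth, a speaker soft-maximizes informativity to that listener, and a pragmatic listener reasons back about the speaker.

```python
import numpy as np

# Rows: utterances; columns: world states. 1 = literally true.
# "some" is literally true in both worlds; "all" only in the 'all' world.
utterances = ["some", "all"]
states = ["some-not-all", "all"]
truth = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

def literal_listener(truth):
    # L0(s | u) ∝ [u is true of s] * P(s), with a uniform prior over states
    return truth / truth.sum(axis=1, keepdims=True)

def speaker(L0, alpha=4.0):
    # S1(u | s) ∝ exp(alpha * log L0(s | u)): soft-max informative utterances
    with np.errstate(divide="ignore"):
        util = alpha * np.log(L0)
    probs = np.exp(util)
    return probs / probs.sum(axis=0, keepdims=True)

def pragmatic_listener(S1):
    # L1(s | u) ∝ S1(u | s) * P(s): invert the speaker by Bayes' rule
    return S1 / S1.sum(axis=1, keepdims=True)

L1 = pragmatic_listener(speaker(literal_listener(truth)))
print(L1[0])  # hearing "some", L1 shifts belief toward "some-not-all"
```

Kao's extension enriches exactly this recursion with speaker goals (e.g., conveying affect rather than literal price), which is what lets the model interpret hyperbole and irony nonliterally.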

Please join us for the next NLP Seminar on Thursday February 25 at 4pm in 205 South Hall. All are welcome!

Speaker: David Mimno (Cornell)

Title: Topic models without the randomness: new perspectives on deterministic algorithms

Abstract:

Topic models provide a useful way to identify and measure constructs in large text collections, such as themes, genres, discourses, and topics. But running popular algorithms multiple times on the same documents can produce different results, raising questions about the reliability of any resulting conclusions. I will summarize an exciting new line of research in deterministic algorithms for topic inference that trade stronger model assumptions for provably optimal performance. This new approach leads not only to better models but also to better computational scalability and a richer understanding of the connections between topic models and related methods like LSI and word embeddings.
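
To make "deterministic" concrete, here is a simplified sketch in the spirit of anchor-word methods (the separability assumption and the greedy selection below are illustrative, not the exact algorithms the talk covers): each topic is assumed to have an "anchor" word that occurs only in that topic, and anchors are found by repeatedly taking the co-occurrence row farthest from the span of those already chosen.

```python
import numpy as np

def find_anchors(Q, k):
    """Greedy farthest-point selection of k anchor rows from a word
    co-occurrence matrix Q (vocab x vocab). Deterministic: the same input
    always yields the same anchors, unlike Gibbs sampling or randomly
    initialized EM."""
    Q = Q / Q.sum(axis=1, keepdims=True)       # rows as conditional distributions
    anchors = [int(np.argmax(np.linalg.norm(Q, axis=1)))]
    for _ in range(k - 1):
        # Distance from every row to the span of the anchors chosen so far.
        A = Q[anchors].T                       # anchor rows as columns
        proj = A @ np.linalg.pinv(A) @ Q.T     # project all rows onto that span
        residual = np.linalg.norm(Q.T - proj, axis=0)
        anchors.append(int(np.argmax(residual)))
    return anchors

# Tiny synthetic co-occurrence matrix for a 6-word vocabulary.
rng = np.random.default_rng(0)
Q = rng.random((6, 6)) + np.eye(6)
print(find_anchors(Q, k=2))
```

Once anchors are fixed, the remaining topic-word probabilities can be recovered by (convex) optimization, which is where the provable guarantees come from.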

Please join us for the next NLP Seminar on Thursday Feb 11 at 4pm in 205 South Hall. All are welcome!

Speaker: Angel Chang (Stanford)
Title: Interactive text to 3D scene generation

Abstract:
Designing 3D scenes is currently a creative task that requires significant expertise and effort in using complex 3D design interfaces. This stands in contrast to the ease with which people can use language to describe real and imaginary environments. We present an interactive text to 3D scene generation system that allows a user to design 3D scenes using natural language. A user provides input text from which we extract explicit constraints on the objects that should appear in the scene. Given these explicit constraints, the system then uses a spatial knowledge base learned from an existing database of 3D scenes and 3D object models to infer an arrangement of the objects forming a natural scene matching the input description. Using textual commands, the user can then iteratively refine the created scene by adding, removing, replacing, and manipulating objects.
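
To give a flavor of the constraint-extraction stage, here is a toy sketch using surface patterns (the real system extracts richer constraints and infers implicit ones from a learned spatial knowledge base; the patterns and relation names below are invented for illustration):

```python
import re

# Toy mapping from surface phrases to spatial relation names.
RELATIONS = {
    "on": "on_top_of",
    "next to": "adjacent_to",
    "under": "below",
}

def extract_constraints(text):
    """Extract explicit spatial constraints like ('lamp', 'on_top_of', 'desk')
    from a scene description. A real system would also infer implicit
    constraints (a desk implies a floor to stand on) from learned priors."""
    constraints = []
    pattern = r"a (\w+) (on|next to|under) (?:a|the) (\w+)"
    for obj, rel, ref in re.findall(pattern, text):
        constraints.append((obj, RELATIONS[rel], ref))
    return constraints

print(extract_constraints("There is a lamp on the desk and a chair next to the desk."))
# [('lamp', 'on_top_of', 'desk'), ('chair', 'adjacent_to', 'desk')]
```

The second stage then searches for object placements that satisfy these tuples while scoring arrangements against priors learned from existing 3D scenes.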

Please join us for the next NLP Seminar on Thursday January 28 at 4pm in 205 South Hall. All are welcome!

Speaker: Greg Durrett (UC Berkeley)

Title: Harnessing Big Data for Text Analysis with Joint Models

Abstract:
One reason that analyzing text is hard is that it involves dealing with deeply entangled linguistic variables: objects like syntactic structures, semantic types, and discourse relations depend on one another in complex ways. Our work uses joint modeling approaches to tackle several facets of text analysis, combining model components both across and within subtasks. This model structure allows us to pass information between these entangled subtasks and propagate high-confidence predictions rather than errors. Critically, our models have the capacity to learn key linguistic phenomena as well as other important patterns in the data; that is, linguistics tells us how to structure these models, and the data then injects knowledge into them. We describe state-of-the-art systems for a range of tasks, including syntactic parsing, entity analysis, and document summarization.
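
The flavor of joint decoding can be shown with a toy sketch (our illustration, not the actual systems from the talk): rather than decoding each subtask independently, we maximize a joint score with a coupling term that rewards cross-task agreement, so a confident entity prediction can correct a less confident parse.

```python
import itertools

def joint_decode(parses, entity_labelings, parse_score, entity_score, coupling):
    """Return the (parse, entities) pair maximizing the joint score:
    the two individual task scores plus a cross-task agreement term.
    Information flows between subtasks instead of down a pipeline."""
    return max(itertools.product(parses, entity_labelings),
               key=lambda pair: (parse_score(pair[0])
                                 + entity_score(pair[1])
                                 + coupling(pair[0], pair[1])))

# Toy candidates for an ambiguous mention of "Washington": two parses
# and two entity labels, with scores standing in for model confidences.
parses = ["NP-person", "NP-location"]
entities = ["PER", "LOC"]
parse_score = {"NP-person": 0.4, "NP-location": 0.6}.__getitem__
entity_score = {"PER": 0.9, "LOC": 0.2}.__getitem__
coupling = lambda p, e: 1.0 if e.lower()[:3] in p else 0.0  # reward agreement

# Independently, the best parse is NP-location; jointly, the confident
# PER entity prediction pulls the decision toward NP-person.
print(joint_decode(parses, entities, parse_score, entity_score, coupling))
```

In the actual work the coupling terms are learned feature weights in a structured model, and inference is over full structures rather than tiny candidate lists, but the propagation of confident predictions works the same way.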
