Author: Marti Hearst

Please join us for the next NLP Seminar Thursday, March 10 at 4pm in 205 South Hall. All are welcome!

Speaker: Justine Kao (Stanford)

Title: Computational approaches to creative and social language

Abstract:

Humans are highly creative users of language. From poetry, to puns, to political satire, much of human communication requires listeners to go beyond the literal meanings of words to infer an utterance’s subtext as well as the speaker’s intentions. Here I will present research on four types of creative language: hyperbole, irony, wordplay, and poetry. In the first part of the talk, I will describe an extension to Rational Speech Act (RSA) models, a family of computational models that view language understanding as recursive social reasoning. Using a series of behavioral experiments, I show that the RSA model predicts people’s interpretations of hyperbole and irony with high accuracy. In the second part of the talk, I will describe formal measures of humor and poetic beauty that incorporate insights from psychology and literary theory to produce quantitative predictions. Finally, I will discuss ways in which cognitive modeling and NLP can provide complementary approaches to understanding the social aspects of language use.
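For readers curious how the recursive social reasoning in RSA works, here is a minimal sketch of a basic RSA listener on the classic "some"/"all" scalar implicature example. This is textbook RSA, not the hyperbole or irony extensions from the talk; the lexicon, uniform prior, and rationality parameter are toy choices:

```python
import numpy as np

states = ["none", "some_not_all", "all"]
utts = ["some", "all", "none"]
prior = np.ones(3) / 3

# Literal truth table: lexicon[u, s] = 1 if utterance u is true in state s.
lexicon = np.array([
    [0, 1, 1],   # "some" is true if some-but-not-all or all
    [0, 0, 1],   # "all"  is true only if all
    [1, 0, 0],   # "none" is true only if none
], dtype=float)

def rownorm(m):
    return m / m.sum(axis=1, keepdims=True)

def literal_listener(lex):
    # L0(s | u) is proportional to lexicon[u, s] * prior[s]
    return rownorm(lex * prior)

def speaker(lex, alpha=4.0):
    # S1(u | s) is proportional to exp(alpha * log L0(s | u))
    with np.errstate(divide="ignore"):
        util = np.log(literal_listener(lex))
    return rownorm(np.exp(alpha * util).T)

def pragmatic_listener(lex, alpha=4.0):
    # L1(s | u) is proportional to S1(u | s) * prior[s]
    return rownorm(speaker(lex, alpha).T * prior)

L0 = literal_listener(lexicon)
L1 = pragmatic_listener(lexicon)
# On hearing "some", the pragmatic listener infers "some but not all",
# shifting probability mass away from "all" relative to the literal listener.
print(L1[0])
```

The hyperbole and irony models discussed in the talk add extra latent variables (such as speaker affect) on top of this recursion.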

Please join us for the next NLP Seminar on Thursday February 25 at 4pm in 205 South Hall. All are welcome!

Speaker: David Mimno (Cornell)

Title: Topic models without the randomness: new perspectives on deterministic algorithms

Abstract:

Topic models provide a useful way to identify and measure constructs in large text collections, such as themes, genres, discourses, and topics. But running popular algorithms multiple times on the same documents can produce different results, raising questions about the reliability of any resulting conclusions. I will summarize an exciting new line of research in deterministic algorithms for topic inference that trade stronger model assumptions for provably optimal performance. This new approach leads not only to better models but also to better computational scalability and a richer understanding of the connections between topic models and related methods such as LSI and word embeddings.
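The abstract does not name the specific algorithms, but one prominent deterministic approach in this line of work is the anchor-words method of Arora et al. A rough sketch of its greedy anchor-selection step, on a toy co-occurrence matrix and with many details simplified, might look like:

```python
import numpy as np

def find_anchors(Q, k):
    """Greedily pick k rows of the row-normalized co-occurrence matrix
    that are farthest from the span of the rows chosen so far."""
    Q = Q / Q.sum(axis=1, keepdims=True)   # rows approximate P(word2 | word1)
    residual = Q.copy()
    anchors = []
    for _ in range(k):
        norms = np.linalg.norm(residual, axis=1)
        i = int(np.argmax(norms))          # farthest remaining row
        anchors.append(i)
        # Gram-Schmidt: project the chosen direction out of every row.
        b = residual[i] / np.linalg.norm(residual[i])
        residual = residual - np.outer(residual @ b, b)
    return anchors

# Toy data: words 0 and 1 are "pure" topic words (anchors); 2 and 3 mix both.
topics = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.1, 0.7, 0.1, 0.1]])
mix = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.5, 0.5],
                [0.3, 0.7]])
Q = mix @ topics
anchors = find_anchors(Q, 2)
print(anchors)  # [0, 1]
```

Because there is no random initialization, running `find_anchors` twice on the same matrix returns the same anchors, which is exactly the reliability property the talk highlights.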

Please join us for the next NLP Seminar on Thursday Feb 11 at 4pm in 205 South Hall. All are welcome!

Speaker: Angel Chang (Stanford)
Title: Interactive text to 3D scene generation

Abstract:
Designing 3D scenes is currently a creative task that requires significant expertise and effort in using complex 3D design interfaces. This stands in contrast to the ease with which people can use language to describe real and imaginary environments. We present an interactive text to 3D scene generation system that allows a user to design 3D scenes using natural language. A user provides input text from which we extract explicit constraints on the objects that should appear in the scene. Given these explicit constraints, the system then uses a spatial knowledge base, learned from an existing database of 3D scenes and 3D object models, to infer an arrangement of the objects forming a natural scene matching the input description. Using textual commands, the user can then iteratively refine the created scene by adding, removing, replacing, and manipulating objects.
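The actual system extracts constraints with real linguistic analysis; as a purely hypothetical illustration of the kind of explicit constraints involved, a toy pattern-based extractor (with a made-up relation list) could look like:

```python
import re

# Hypothetical relation list and pattern, far simpler than the real system.
RELATIONS = ["next to", "on top of", "under", "near"]
PATTERN = re.compile(
    r"\b(?:a|an|the)\s+(\w+)\s+(" + "|".join(RELATIONS) + r")\s+(?:a|an|the)\s+(\w+)")

def extract_constraints(text):
    """Return (object1, relation, object2) triples found in the text."""
    return PATTERN.findall(text.lower())

print(extract_constraints("There is a lamp on top of the desk."))
# [('lamp', 'on top of', 'desk')]
```

The resulting triples are the sort of explicit spatial constraints the scene-arrangement step would then need to satisfy.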

Please join us for the next NLP Seminar on Thursday January 28 at 4pm in 205 South Hall. All are welcome!

Speaker: Greg Durrett (UC Berkeley)

Title: Harnessing Big Data for Text Analysis with Joint Models

Abstract:
One reason that analyzing text is hard is that it involves dealing with deeply entangled linguistic variables: objects like syntactic structures, semantic types, and discourse relations depend on one another in complex ways. Our work uses joint modeling approaches to tackle several facets of text analysis, combining model components both across and within subtasks. This model structure allows us to pass information between these entangled subtasks and propagate high-confidence predictions rather than errors. Critically, our models have the capacity to learn key linguistic phenomena as well as other important patterns in the data; that is, linguistics tells us how to structure these models, then the data injects knowledge into them. We describe state-of-the-art systems for a range of tasks, including syntactic parsing, entity analysis, and document summarization.
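As a toy illustration of why joint decoding can propagate high-confidence predictions (the subtasks, scores, and coupling term below are invented for illustration, not taken from the talk): a confident decision in one subtask can flip a marginal decision in another through a coupling factor.

```python
import itertools

# Two entangled subtasks: an entity-type decision and a coreference decision.
type_score = {"PERSON": 1.0, "ORG": 1.2}      # subtask 1 weakly prefers ORG
coref_score = {"link": 2.0, "no_link": 0.1}   # subtask 2 confidently links

def coupling(t, c):
    # Linking this mention to a person antecedent is only consistent
    # with the PERSON type, so that combination gets a bonus.
    return 1.5 if (t, c) == ("PERSON", "link") else 0.0

def decode_joint():
    # Maximize the sum of both subtask scores plus the coupling factor.
    return max(itertools.product(type_score, coref_score),
               key=lambda tc: type_score[tc[0]] + coref_score[tc[1]] + coupling(*tc))

independent = (max(type_score, key=type_score.get),
               max(coref_score, key=coref_score.get))
print(independent)      # ('ORG', 'link'): each subtask decoded alone
print(decode_joint())   # ('PERSON', 'link'): the confident link propagates
```

Decoded independently, the type classifier picks ORG; decoded jointly, the high-confidence coreference decision overrides the marginal type preference.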

We’ll be meeting Spring semester at the same time and same location: alternating Thursdays at 4pm, 205 South Hall.

Our schedule is:

Greg Durrett (Jan 28)
Angel Chang (Feb 11)
David Mimno (Feb 25)
Justine Kao (Mar 10)
Spring Break (Mar 24)
Percy Liang (Apr 7)
Dan Gillick (Apr 21)

Please join us for the next NLP Seminar this Thursday (Dec. 3) at 4pm in 205 South Hall.

Speaker: Vinodkumar Prabhakaran, Stanford (http://www.cs.stanford.edu/~vinod)

Title: Social Power in Interactions: Computational Analysis and Detection of Power

Abstract:

In this talk, I will present a study, done as part of my thesis research, on how social power relations affect the way people interact with one another, and on how we can use statistical machine learning techniques to detect these power relations automatically. This study is performed in the domain of organizational email using the Enron email corpus. I will first present the problem of predicting the superior-subordinate relationship between pairs of people, based solely on the language and structure of interactions within single email threads. We found many dialog behavior patterns that are salient to the direction of power. For example, superiors tend to send shorter messages and to use more overt displays of power than subordinates. I will then present the results of our investigation into how the gender of the participants affects the manifestations of power. For example, do male superiors and female superiors differ in how often they use overt displays of power?
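As a hypothetical sketch of the kind of dialog-behavior features described (the cue list, weights, and example messages are invented for illustration, not taken from the thesis):

```python
# Compare two participants' average message length and their rate of
# overt displays of power (ODPs), then guess who is the superior.
ODP_CUES = ("do this", "i need", "asap", "make sure", "you must")

def features(messages):
    n = len(messages)
    avg_len = sum(len(m.split()) for m in messages) / n
    odps = sum(any(cue in m.lower() for cue in ODP_CUES) for m in messages)
    return {"avg_len": avg_len, "odp_rate": odps / n}

def guess_superior(a_msgs, b_msgs):
    fa, fb = features(a_msgs), features(b_msgs)
    # Patterns from the talk: superiors send shorter messages and use
    # more overt displays of power. The weighting here is arbitrary.
    score_a = (fb["avg_len"] - fa["avg_len"]) + 10 * (fa["odp_rate"] - fb["odp_rate"])
    return "A" if score_a > 0 else "B"

alice = ["Make sure the report is out ASAP.", "I need the numbers today."]
bob = ["Sure, I will have the full quarterly report ready by this afternoon.",
       "Here are the detailed numbers you asked about, with notes attached."]
print(guess_superior(alice, bob))  # "A"
```

The actual work trains statistical classifiers over a much richer feature set, but the intuition, short directive messages versus longer deferential ones, is the same.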

For this week’s meeting, rather than focusing on completed work, those interested are invited to request feedback on current and future research efforts.

If you are working on something now and could benefit from feedback from the seminar, or if you have a half-baked idea for a new direction that you would like help shaping, plan to talk for about 10 minutes. Graduate students and undergraduate seminar participants are welcome to participate.

Nov 5: Radu Soricut: Unsupervised Morphology Induction Using Word Embeddings

Please join us for the next NLP Seminar this Thursday (Nov. 5) at 4pm in 205 South Hall.

Speaker: Radu Soricut (Google)

Title: Unsupervised Morphology Induction Using Word Embeddings

Abstract:

We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in high dimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages.
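A rough sketch of the core idea, with toy vectors standing in for real embeddings: propose candidate suffix rules from string pairs in the vocabulary, then keep only the rules whose word-pair offset vectors point in a consistent direction in embedding space.

```python
import numpy as np
from collections import defaultdict

def candidate_rules(vocab, max_suffix=4):
    """Propose suffix rules: if both a word and its stem are in the
    vocabulary, record the pair under that suffix."""
    words, rules = set(vocab), defaultdict(list)
    for w in vocab:
        for k in range(1, max_suffix + 1):
            if len(w) > k and w[:-k] in words:
                rules[w[-k:]].append((w[:-k], w))    # e.g. "ed": walk -> walked
    return rules

def consistent_rules(rules, emb, min_cos=0.7):
    """Keep rules whose pair offsets are roughly parallel: the mean of
    the unit offset vectors has a large norm."""
    good = {}
    for suffix, pairs in rules.items():
        if len(pairs) < 2:
            continue
        offsets = np.array([(emb[b] - emb[a]) / np.linalg.norm(emb[b] - emb[a])
                            for a, b in pairs])
        if np.linalg.norm(offsets.mean(axis=0)) >= min_cos:
            good[suffix] = pairs
    return good

# Toy vectors: the "ed" pairs share a direction; the "d" pairs do not.
emb = {w: np.array(v, float) for w, v in {
    "walk": [1, 0, 0], "walked": [1, 1, 0],
    "talk": [0, 0, 1], "talked": [0, 1, 1],
    "car":  [2, 0, 0], "card":   [0, 3, 0],
    "bar":  [0, 0, 2], "bard":   [1, 0, 0]}.items()}
good = consistent_rules(candidate_rules(list(emb)), emb)
print(sorted(good))  # ['ed']
```

The string rule "car" to "card" is rejected because its offset does not agree with "bar" to "bard", which is the kind of filtering the vector-space regularities make possible.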

Slides: (pdf)

Oct 22: Marcus Rohrbach: LRCN — An Architecture for Visual Recognition, Description, and Question Answering

Speaker: Marcus Rohrbach (UC Berkeley)

Title: LRCN — An Architecture for Visual Recognition, Description, and Question Answering

Abstract:

Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We developed the novel Long-term Recurrent Convolutional Network (LRCN), suitable for large-scale visual learning and end-to-end trainable. In this talk I will demonstrate the value of this model on video recognition tasks, image description and retrieval problems, video narration challenges, as well as visual question answering. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”. Our model consequently has advantages when target concepts are complex and/or training data are limited, as we show in several benchmarks. I will conclude with some ongoing projects on visual grounding and on describing novel visual concepts.
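As a structural sketch of the CNN-then-LSTM idea behind LRCN, with random toy weights and a random projection standing in for a trained convolutional network (none of the dimensions or details below are the actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, K = 16, 8, 3   # "CNN" feature dim, LSTM hidden dim, number of classes

# Toy stand-ins for trained weights.
W_cnn = rng.standard_normal((D, 64))
Wx = rng.standard_normal((4 * H, D)) * 0.1   # gates stacked: input, forget, output, cell
Wh = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
W_out = rng.standard_normal((K, H))

def lstm_step(x, h, c):
    z = Wx @ x + Wh @ h + b
    i, f, o = (1 / (1 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c = f * c + i * g          # update the memory cell
    h = o * np.tanh(c)         # gated hidden state
    return h, c

def classify_video(frames):
    h, c = np.zeros(H), np.zeros(H)
    for frame in frames:
        feat = np.tanh(W_cnn @ frame.ravel())   # per-frame "CNN" features
        h, c = lstm_step(feat, h, c)            # fed through time to the LSTM
    logits = W_out @ h                          # classify from the final state
    p = np.exp(logits - logits.max())
    return p / p.sum()

video = rng.standard_normal((10, 8, 8))  # ten toy 8x8 "frames"
probs = classify_video(video)
print(probs.shape, float(probs.sum()))
```

The “doubly deep” point is that the per-frame convolutional stack (spatial depth) feeds a recurrent stack unrolled over frames (temporal depth), and both are trained end to end.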

Oct 8: Taylor Berg-Kirkpatrick: Unsupervised Transcription of Language and Music

Speaker: Taylor Berg-Kirkpatrick (UC Berkeley)

Title: Unsupervised Transcription of Language and Music

Abstract:

A variety of transcription tasks (for example, both historical document transcription and polyphonic music transcription) can be viewed as linguistic decipherment problems. I’ll describe an approach to such problems that involves building a detailed generative model of the relationship between the input (e.g. an image of a historical document) and its transcription (the text the document contains). It turns out that these models can be learned in a completely unsupervised fashion, without ever seeing an example of an input annotated with its transcription, effectively deciphering the hidden correspondence. I’ll demo two state-of-the-art systems, one for historical document transcription and one for polyphonic piano music transcription, that outperform supervised methods.
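To make the decipherment framing concrete, here is a toy example in the same spirit, though far simpler than the talk's generative models: recover a substitution cipher by brute-force search for the mapping whose decoding is most fluent under a bigram language model.

```python
import itertools
from collections import Counter
from math import log

ALPHABET = "abc"
train = "aabcaabcaabc"                    # toy "language" for the bigram LM
bigrams = Counter(zip(train, train[1:]))
context = Counter(train[:-1])

def logprob(text):
    """Add-one-smoothed bigram log-probability under the toy LM."""
    return sum(log((bigrams[(p, q)] + 1) / (context[p] + len(ALPHABET)))
               for p, q in zip(text, text[1:]))

def decipher(ciphertext, cipher_symbols="xyz"):
    """Try every substitution mapping and keep the most fluent decoding."""
    def decode(perm):
        return ciphertext.translate(str.maketrans(cipher_symbols, "".join(perm)))
    best = max(itertools.permutations(ALPHABET), key=lambda p: logprob(decode(p)))
    return decode(best)

print(decipher("xxyz"))  # "aabc"
```

The real systems replace the brute-force search with learned generative models of glyph images or piano audio, but the principle is the same: the hidden correspondence is whichever one makes the transcription look most like language (or music).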

Slides: (pdf)
