
We’ll be meeting in the Spring semester at the same time and location: alternating Thursdays at 4pm in 205 South Hall.

Our schedule is:

Greg Durrett (Jan 28)
Angel Chang (Feb 11)
David Mimno (Feb 25)
Justine Kao (Mar 10)
Spring Break (Mar 24)
Percy Liang (Apr 7)
Dan Gillick (Apr 21)

Please join us for the next NLP Seminar this Thursday (Dec. 3) at 4pm in 205 South Hall.

Speaker: Vinodkumar Prabhakaran, Stanford (http://www.cs.stanford.edu/~vinod)

Title: Social Power in Interactions: Computational Analysis and Detection of Power

Abstract:

In this talk, I will present a study done as part of my thesis research on how social power relations affect the way people interact with one another, and how we can use statistical machine learning techniques to detect these power relations automatically. The study is performed in the domain of organizational email, using the Enron email corpus. I will first present the problem of predicting the superior-subordinate relationship between pairs of people, based solely on the language and structure of interactions within single email threads. We found many dialog behavior patterns that are salient to the direction of power. For example, superiors tend to send shorter messages and use more overt displays of power than subordinates. I will then present the results of our investigation into how the gender of the participants impacts the manifestations of power. For example, do male superiors and female superiors differ in how often they use overt displays of power?
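
The talk frames this as a prediction task over features of dialog behavior. As a minimal sketch of that setup (not the thesis models themselves), the toy classifier below scores a pair of thread participants using a few features inspired by the findings quoted above; the `extract_features` helper, the feature set, and the message format are all hypothetical.

```python
# Hedged sketch: predicting the direction of power between two participants
# in an email thread from simple dialog features. The feature set and the
# message dict format are hypothetical; the actual study uses much richer
# features over the Enron corpus.
from sklearn.linear_model import LogisticRegression

def extract_features(thread, person_a, person_b):
    """Toy thread-level features echoing the talk's findings: superiors tend
    to send shorter messages and use more overt displays of power (ODPs)."""
    a_msgs = [m for m in thread if m["sender"] == person_a]
    b_msgs = [m for m in thread if m["sender"] == person_b]

    def avg_len(msgs):
        return sum(len(m["text"].split()) for m in msgs) / len(msgs) if msgs else 0.0

    def odp_count(msgs):  # assume messages carry a precomputed ODP count
        return sum(m.get("odp_count", 0) for m in msgs)

    return [
        avg_len(a_msgs) - avg_len(b_msgs),      # message-length difference
        odp_count(a_msgs) - odp_count(b_msgs),  # overt-display difference
        len(a_msgs) - len(b_msgs),              # who sends more messages
    ]

# X: one feature vector per (thread, pair); y: 1 if person_a is the superior.
clf = LogisticRegression()
# clf.fit(X_train, y_train); clf.predict(X_test)
```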

For this week’s meeting, rather than focusing on completed work, those interested are invited to request feedback on current and future research efforts.

If you are working on something now that could benefit from the seminar’s feedback, or have a half-baked idea for a new direction that you would like help shaping, plan to talk for about 10 minutes. Graduate students and undergraduate seminar participants are welcome to participate.

Nov 5: Radu Soricut: Unsupervised Morphology Induction Using Word Embeddings

Please join us for the next NLP Seminar this Thursday (Nov. 5) at 4pm in 205 South Hall.

Speaker: Radu Soricut (Google)

Title: Unsupervised Morphology Induction Using Word Embeddings

Abstract:

We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in high dimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages.
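
The regularity the abstract alludes to is the familiar vector-offset behavior of word embeddings: a morphological transformation (say, adding the suffix "ed") corresponds to a roughly constant direction in the vector space. Below is a minimal sketch of that idea using gensim, assuming a pretrained embedding file; the path and the word pairs are placeholders, and the paper goes much further, extracting candidate rules automatically and building full morphological analyzers from them.

```python
# Hedged sketch of the core regularity the method exploits: morphological
# transformations correspond to roughly constant offsets in embedding space,
# e.g. vec("walked") - vec("walk") ~ vec("played") - vec("play").
import numpy as np
from gensim.models import KeyedVectors

# Placeholder path: any word2vec-format embedding file.
vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

def rule_offset(pairs):
    """Average embedding offset for a candidate rule, e.g. add suffix 'ed'."""
    return np.mean([vectors[w2] - vectors[w1] for w1, w2 in pairs], axis=0)

support = [("walk", "walked"), ("play", "played"), ("jump", "jumped")]
offset = rule_offset(support)

# Apply the induced rule to a new base form and check the nearest neighbors.
candidate = vectors["laugh"] + offset
print(vectors.similar_by_vector(candidate, topn=3))  # expect 'laughed' near the top
```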

Slides: (pdf)

Oct 22: Marcus Rohrbach: LRCN — An Architecture for Visual Recognition, Description, and Question Answering

Speaker: Marcus Rohrbach (UC Berkeley)

Title: LRCN – An Architecture for Visual Recognition, Description, and Question Answering

Abstract:

Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We developed the novel Long-term Recurrent Convolutional Network (LRCN) suitable for large-scale visual learning which is end-to-end trainable. In this talk I will demonstrate the value of this model on video recognition tasks, image description and retrieval problems, video narration challenges, as well as visual Question Answering. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they can be compositional in spatial and temporal “layers”. Our model consequently has advantages when target concepts are complex and/or training data are limited as we show in several benchmarks. I will conclude with some ongoing projects on visual grounding and how we want to describe novel visual concepts.
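
To make the "doubly deep" structure concrete, here is a minimal PyTorch sketch of the LRCN pattern for video classification: a per-frame CNN feeding an LSTM. The backbone, layer sizes, and classification head are illustrative stand-ins, not the paper's actual configuration.

```python
# Hedged sketch of the LRCN idea: per-frame CNN features fed to an LSTM,
# shown here for video classification. Dimensions are illustrative.
import torch
import torch.nn as nn
from torchvision import models

class LRCN(nn.Module):
    def __init__(self, num_classes, hidden_size=256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in visual encoder
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):                 # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)) # (B*T, 512, 1, 1)
        feats = feats.flatten(1).view(b, t, -1)  # (B, T, 512): one vector per frame
        out, _ = self.lstm(feats)              # temporal "layer" over CNN features
        return self.classifier(out[:, -1])     # classify from the last time step

model = LRCN(num_classes=10)
logits = model(torch.randn(2, 8, 3, 224, 224))  # two clips of eight frames
```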

Oct 8: Taylor Berg-Kirkpatrick: Unsupervised Transcription of Language and Music

Speaker: Taylor Berg-Kirkpatrick (UC Berkeley)

Title: Unsupervised Transcription of Language and Music

Abstract:

A variety of transcription tasks (for example, both historical document transcription and polyphonic music transcription) can be viewed as linguistic decipherment problems. I’ll describe an approach to such problems that involves building a detailed generative model of the relationship between the input (e.g. an image of a historical document) and its transcription (the text the document contains). It turns out that these models can be learned in a completely unsupervised fashion, without ever seeing an example of an input annotated with its transcription, effectively deciphering the hidden correspondence. I’ll demo two state-of-the-art systems, one for historical document transcription and one for polyphonic piano music transcription, that outperform supervised methods.
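
As a toy illustration of the decipherment framing (and only that), the sketch below decodes a substitution cipher by matching symbol frequencies to English letter frequencies. The actual systems instead learn detailed generative models of glyph shapes or note events without supervision; this crude frequency-matching stand-in just shows what "recovering a hidden correspondence without paired examples" means.

```python
# Crude stand-in for the decipherment view: recover a hidden symbol-to-letter
# correspondence with no paired training data, by matching frequency ranks.
# A real system learns a rich generative model instead; this toy also needs
# a long ciphertext before frequency statistics become reliable.
from collections import Counter

ENGLISH_BY_FREQ = "etaoinshrdlcumwfgypbvkjxqz"  # letters, most frequent first

def decipher(ciphertext):
    ranked = [c for c, _ in Counter(ciphertext).most_common() if c.isalpha()]
    mapping = dict(zip(ranked, ENGLISH_BY_FREQ))
    return "".join(mapping.get(c, c) for c in ciphertext)
```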

Slides: (pdf)

Sep 24: Jacob Andreas: Alignment-based compositional semantics for instruction following

Speaker: Jacob Andreas (UC Berkeley)

Title: Alignment-based compositional semantics for instruction following (EMNLP 2015)

Abstract:

This paper describes an alignment-based model for interpreting natural language instructions in context. We approach instruction following as a search over plans, scoring sequences of actions conditioned on structured observations of text and the environment. By explicitly modeling both the low-level compositional structure of individual actions and the high-level structure of full plans, we are able to learn both grounded representations of sentence meaning and pragmatic constraints on interpretation. To demonstrate the model’s flexibility, we apply it to a diverse set of benchmark tasks. On every task, we outperform strong task-specific baselines, and achieve several new state-of-the-art results.
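
Here is a minimal sketch of the "search over plans" framing, with everything hypothetical: a toy word-overlap score between an action and the instruction stands in for the learned alignment model, and an exhaustive search over short action sequences stands in for the paper's structured inference.

```python
# Hedged sketch: instruction following as search over plans, scoring action
# sequences against the text. score_action() and the action names are
# hypothetical placeholders, not the paper's model.
from itertools import permutations

def score_action(action, instruction, state):
    """Toy compatibility score: word overlap between the action's name and
    the instruction. A real model learns grounded, compositional alignments
    and also conditions on the environment state (unused here)."""
    words = set(instruction.lower().split())
    return sum(w in words for w in action.lower().split("_"))

def best_plan(actions, instruction, state, length=2):
    """Exhaustive search over candidate plans (no repeated actions)."""
    return max(
        permutations(actions, length),
        key=lambda plan: sum(score_action(a, instruction, state) for a in plan),
    )

plan = best_plan(["pick_up_cup", "move_left", "open_door"],
                 "pick up the cup and open the door", state=None)
print(plan)  # ('pick_up_cup', 'open_door')
```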

Slides: (pdf)  (keynote)

Sep 10: Long Duong: A Neural Network Model for Low-Resource Universal Dependency Parsing

Speaker: Long Duong (U Melbourne, visiting ICSI)

Title: A Neural Network Model for Low-Resource Universal Dependency Parsing (EMNLP 2015)

Abstract:

Accurate dependency parsing requires large treebanks, which are only available for a few languages. We propose a method that takes advantage of shared structure across languages to build a mature parser using less training data. We propose a model for learning a shared “universal” parser that operates over an interlingual continuous representation of language, along with language-specific mapping components. Compared with supervised learning, our methods give a consistent 8-10% improvement across several treebanks in low-resource simulations.
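
A minimal PyTorch sketch of the architectural idea, under stated assumptions: per-language linear mappings project word representations into a shared "interlingual" space, and a single shared network (standing in for the full parser) scores parser actions, e.g. the shift/left-arc/right-arc transitions of a transition-based parser. All dimensions and module names are illustrative.

```python
# Hedged sketch of the shared "universal" parser idea: language-specific
# input mappings into a common space, feeding parameters shared across
# languages. Sizes and the scoring head are illustrative only.
import torch
import torch.nn as nn

class UniversalParser(nn.Module):
    def __init__(self, langs, emb_dim=100, shared_dim=128, num_actions=3):
        super().__init__()
        # One mapping per language into the shared interlingual space.
        self.lang_maps = nn.ModuleDict(
            {lang: nn.Linear(emb_dim, shared_dim) for lang in langs}
        )
        # Parameters shared across all languages.
        self.shared = nn.Sequential(
            nn.Linear(shared_dim, shared_dim), nn.Tanh(),
            nn.Linear(shared_dim, num_actions),  # e.g. shift / left-arc / right-arc
        )

    def forward(self, lang, word_embs):        # word_embs: (B, emb_dim)
        return self.shared(self.lang_maps[lang](word_embs))

parser = UniversalParser(["en", "de"])
scores = parser("de", torch.randn(4, 100))     # action scores for 4 configurations
```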
