Please join us for the NLP Seminar on Monday, Sept 12 at 3:30pm in 202 South Hall. All are welcome! (This semester we are posting preparatory readings for those who are interested. Note also the room change to 202 South Hall.)
It is impossible to reason about natural language by memorizing all possible sentences. Rather, we rely on models of composition, which allow the meanings of individual words to be combined to produce the meanings of longer phrases. In natural language processing, we often employ models of composition that work well for carefully curated datasets or toy examples but prove to be very brittle when applied to the type of language that humans actually use.
This talk will discuss our work on applying natural language inference in the “real world.” I will describe observations from experimental studies of humans’ linguistic inferences, and the challenges they present for existing methods of automated natural language inference (with particular focus on the case of adjective-noun composition). I will also outline our current work on extending models of compositional entailment in order to better handle the types of imprecise inferences we observe in human language.
Slides: (pdf of slides)
- Introduces the task of RTE (recognizing textual entailment): The PASCAL Recognising Textual Entailment Challenge. Dagan et al. (2006). u.cs.biu.ac.il/~dagan/publications/RTEChallenge.pd
- Describes an approach based on natural logic: Natural Logic for Textual Inference. MacCartney and Manning (2007). nlp.stanford.edu/pubs/natlog-wtep07.pdf
- Gives a good flavor of what more modern datasets and neural approaches to RTE look like: A Large Annotated Corpus for Learning Natural Language Inference. Bowman et al. (2015). nlp.stanford.edu/pubs/snli_paper.pdf