Month: May 2021

Sam Bowman will be giving a virtual talk on Friday, June 4th, from 11am to 12pm (PDT). Zoom information will be distributed via the Berkeley NLP Seminar listserv.

Title: What Will it Take to Fix Benchmarking in Natural Language Understanding?

Abstract: Evaluation for many natural language understanding (NLU) tasks is broken: Unreliable and biased systems score so highly on standard benchmarks that there is little room for researchers who develop better systems to demonstrate their improvements. The recent trend to abandon IID benchmarks in favor of adversarially-constructed, out-of-distribution test sets ensures that current models will perform poorly, but ultimately only obscures the abilities that we want our benchmarks to measure. In this talk, based primarily on a recent position paper with George Dahl, I lay out four criteria that I argue NLU benchmarks should meet. I claim that most current benchmarks fail to meet these criteria, and that adversarial data collection does not meaningfully address the causes of these failures. Instead, restoring a healthy evaluation ecosystem will require significant progress in the design of benchmark datasets, the reliability with which they are annotated, their size, and the ways they handle social bias.

Bio: Sam Bowman has been on the faculty at NYU since 2016, when he completed his PhD with Chris Manning and Chris Potts at Stanford. At NYU, he is a member of the Center for Data Science, the Department of Linguistics, and the Courant Institute’s Department of Computer Science. His research focuses on data, evaluation techniques, and modeling techniques for sentence and paragraph understanding in natural language processing, and on applications of machine learning to scientific questions in linguistic syntax and semantics. He is the senior organizer behind the GLUE and SuperGLUE benchmark competitions; he organized a twenty-three-person research team at JSALT 2018; and he received a 2015 EMNLP Best Resource Paper Award, a 2019 *SEM Best Paper Award, and a 2017 Google Faculty Research Award.

The Berkeley NLP Seminar will continue to be held virtually on Zoom for Summer 2021. Currently, we have three talks scheduled:

  • Friday, June 4th: Sam Bowman, New York University. 11am – 12pm (PDT). “What Will it Take to Fix Benchmarking in Natural Language Understanding?”
  • Tuesday, June 22nd: Yanai Elazar, Bar-Ilan University. 10am – 11am (PDT).
  • Wednesday, July 14th: Clara Meister, ETH Zürich. 10am – 11am (PDT).

If you are interested in joining our mailing list, please contact nicholas_tomlin@berkeley.edu.