07 February 2016

Sam Bowman to give a talk at Computer Science

Sam Bowman of Stanford University will be giving the talk “Modeling Natural Language Semantics with Learned Representations” on Thursday, February 11, at 4 PM in CS room 151. An abstract of his talk follows:

The last few years have seen many striking successes from artificial neural network models on hard natural language processing tasks. These models replace complex hand-engineered systems for extracting and representing the meanings of sentences with learned functions that construct and use their own internal vector-based representations. Though these learned representations are effective in many domains, they aren't interpretable in familiar terms and their ability to capture the full range of meanings expressible in natural language is not yet well understood.
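For readers unfamiliar with the setup, here is a minimal sketch of what a "learned vector-based representation" of a sentence looks like in practice. This is an illustrative toy only, not the models discussed in the talk: the encoder is a simple recurrent network with random, untrained parameters, and the vocabulary and dimensions are assumptions made up for the example.

```python
import numpy as np

# Hypothetical toy vocabulary and embedding table (randomly initialized here;
# in a real system these parameters would be learned from data).
rng = np.random.default_rng(0)
vocab = {"cats": 0, "chase": 1, "mice": 2, "animals": 3, "move": 4}
dim = 8
embeddings = rng.normal(scale=0.1, size=(len(vocab), dim))

# Simple recurrent encoder: fold word vectors, one at a time, into a
# single fixed-size sentence vector.
W_h = rng.normal(scale=0.1, size=(dim, dim))
W_x = rng.normal(scale=0.1, size=(dim, dim))

def encode(sentence):
    """Map a whitespace-tokenized sentence to a fixed-size vector."""
    h = np.zeros(dim)
    for word in sentence.split():
        x = embeddings[vocab[word]]
        h = np.tanh(W_h @ h + W_x @ x)  # one recurrent step per word
    return h

print(encode("cats chase mice"))  # an 8-dimensional sentence representation
```

The point of the abstract's observation is that the entries of such a vector have no fixed, human-readable meaning; the representation is whatever the training procedure found useful.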

In this talk, I argue that neural network models are capable of learning to represent and reason with the meanings of sentences. First, I use entailment experiments over artificial languages to show that existing models can learn to reason logically over clean language-like data. I then introduce a large new corpus of entailments in English and use experiments on that corpus to show that these abilities extend to natural language as well. Finally, I briefly present ongoing work on a new model that uses the semantic principle of compositionality to more efficiently and effectively learn to understand natural language.
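As a rough illustration of the entailment task the abstract describes, the sketch below classifies a (premise, hypothesis) pair of sentence vectors into three standard labels. The architecture here (a softmax over the concatenated pair) and all parameters are assumptions for exposition, not the talk's model, and the weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
labels = ["entailment", "neutral", "contradiction"]

# Untrained classifier weights over the concatenated pair of sentence
# vectors; in practice these would be learned jointly with the sentence
# encoder on a corpus of labeled pairs.
W = rng.normal(scale=0.1, size=(len(labels), 2 * dim))
b = np.zeros(len(labels))

def classify(premise_vec, hypothesis_vec):
    """Return a probability for each entailment label for the pair."""
    pair = np.concatenate([premise_vec, hypothesis_vec])
    scores = W @ pair + b
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()  # softmax over the label set
    return dict(zip(labels, probs))

# With untrained weights the output is near-uniform; training on labeled
# sentence pairs is what makes the predictions meaningful.
premise, hypothesis = rng.normal(size=dim), rng.normal(size=dim)
print(classify(premise, hypothesis))
```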