Abstract
Neural language models, which can be pretrained on very large corpora, turn out to "know" a great deal about the world, in the sense that they can be trained to answer factual questions surprisingly reliably. However, using "language models as knowledge graphs" has many disadvantages: for example, they cannot easily be updated when information changes. I will describe recent work by my team and elsewhere on incorporating symbolic knowledge into language models and question-answering systems, and comment on some of the remaining challenges in integrating symbolic, KG-like reasoning with neural NLP.
To attend, follow this link: https://auckland.zoom.us/s/93286369395