Where and when: Thursday, Nov 18, 2-3pm in Google Meet (see the calendar invitation for the link)

Speaker: Tim Hartill, supervised by Michael Witbrock and Pat Riddle

Abstract: Sequence-to-sequence Transformer-based language models pretrained on large text corpora have shown an impressive ability to memorise and recall individual facts. However, the ability to reason over facts learned in disparate training instances to derive plausible answers to compositional questions remains an open challenge. We explore methods by which the capability to answer unseen compositional questions can be learned, and discuss future directions.