Where and when: Thursday, July 1 at 2-3pm in 303S-561
Speaker: Qianqian Qi (Beryl), PhD student, supervised by Michael Witbrock and Jiamou Liu
Abstract: Controlling text generation according to a given specification is an important research area. When a semantic representation is supplied to the generation process, the task can be treated as conditional language modelling, with the output drawn from a distribution conditioned on the input semantic attributes. The input can take many formats, such as images, speech, or text. We currently focus on table-to-text generation, and our research goal is long-term coherent and goal-directed text generation. Recent neural models for table-to-text generation rely on large numbers of parallel table-text pairs during training. A key challenge is that the information provided by the input data alone is often insufficient to produce the target output: for example, an input table may be missing a piece of information that we want reflected in the generated text. In this research, we therefore plan to improve the fidelity of generated text by incorporating an additional dataset, of a different type from the original input data, into our text generation system. Our hypothesis is that this additional input, which contains information otherwise unavailable in the original dataset, will help drive generation towards a form closer to the target output text.
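The core idea of the abstract, filling gaps in an input table from an auxiliary dataset before conditioning a language model on it, can be sketched as follows. This is a minimal illustration only: the field names, records, and helper functions are hypothetical, not from the talk.

```python
# Illustrative sketch: augment an input table with an auxiliary dataset,
# then linearise it into a conditioning string for a language model.
# All field names and records here are invented examples.

def augment_table(table, auxiliary):
    """Fill fields missing from `table` using records in `auxiliary`."""
    merged = dict(table)
    key = table.get("name")
    for field, value in auxiliary.get(key, {}).items():
        merged.setdefault(field, value)  # only fill gaps, never overwrite
    return merged

def linearise(table):
    """Flatten the table into a prefix a conditional LM could attend to."""
    return " | ".join(f"{k}: {v}" for k, v in table.items())

# The input table is missing the 'nationality' field we want reflected
# in the generated text; the auxiliary dataset supplies it.
table = {"name": "Ada Lovelace", "occupation": "mathematician"}
auxiliary = {"Ada Lovelace": {"nationality": "British", "born": "1815"}}

prefix = linearise(augment_table(table, auxiliary))
print(prefix)
# → name: Ada Lovelace | occupation: mathematician | nationality: British | born: 1815
```

In a full system, `prefix` would be fed to a conditional generation model in place of the linearised original table, so the generator can surface the otherwise-missing attributes.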