Fall 2019 Schedule

2019.08.28 Welcome Meeting, Plan fall schedule
2019.09.04 Jon presenting ACL best paper nominee: https://arxiv.org/abs/1907.06679

Xiaolei presenting ACL paper: https://www.aclweb.org/anthology/P19-1403

Lunch

2019.09.05 (Thurs 3:30 pm) CS Colloquium - Dan Jurafsky
2019.09.11 Yoshi presenting a NAACL paper: https://www.aclweb.org/anthology/N19-1162
2019.09.18 Akanksha presenting ACL paper https://www.aclweb.org/anthology/P19-1568
2019.09.25 AIDA virtual site visit
2019.10.02 Rakuten overview, Abhidip presenting BERT [1] and XLNet [2]
2019.10.09 Ghazaleh, Situated Open World Reference Resolution for Human-Robot Dialogue [3]
2019.10.16 (10 am - 1 pm) CLASIC/CLEAR Open House & Industry day (lunch at Fleming). 10 min talks and posters by PhD/MS students
2019.10.17 (Thurs 3:30 pm) CS Colloquium: Jinho Choi
2019.10.23 Ali Almelhem (Economics department) presenting their own research
2019.10.30 Rehan presenting his research. Slides [4]

Kristin EMNLP-LOUHI practice talk

Vivian and Jon EMNLP posters

2019.11.06 Michael Regan: Extraction of force-dynamic image schemas for event structure representation of procedural text

Abstract: Event structure decomposition is integral to machine reading comprehension, dialog state tracking, and other natural language understanding tasks where models of entity states and temporality as well as of event-event, participant-event, and participant-participant relations are desirable. Previous work in event representation has yet to show how organizing conceptual structures into well-defined, flexible units representing both background knowledge of the world and how that knowledge is expressed in a certain language might improve AI reasoning capabilities. In preparation for my dissertation research into this question, in this talk I will argue for a fine-grained approach to event representation based on theoretical work done in cognitive semantics: force dynamics (Talmy 1988; Croft 2012), an image-schematic model of causation that my research may show can support common-sense reasoning applications. Event representation is often done at the macro-level, with a focus on sequences of events in narratives, procedures, etc.; in contrast, at the micro-level we examine subevents of clausal events by employing the entity-centric, fine granularity of a force-dynamic approach to characterize participant interaction as a function of time, qualitative change, and transmission of force. As a proof of concept linking micro- and macro-level analyses, I propose designing, implementing, and evaluating a computational model for the extraction of participant histories as storylines from scientific, procedural text. One task will be to compare how well force-dynamic event structure can be extracted using a symbolic approach based on a model of argument structure and verb classes versus a neural approach using pre-trained language models. A second task will be to examine the use of dynamic knowledge graphs as a representation for evolving fine-grained storyline event structure and the tracking of entity states. A third task will be to examine the applicability of image schemas for effect prediction via a generative process with force-dynamic structures as priors. The primary hypotheses of the study will be examined along with a tentative timeline for how my research may progress.
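
To make the entity-centric, force-dynamic framing above a bit more concrete, here is a small illustrative Python sketch (my own framing for the wiki, not the speaker's model or implementation) of a subevent record and per-participant storyline tracking; all class and field names are hypothetical.

<pre>
# Illustrative sketch only: a toy, entity-centric record of a force-dynamic
# subevent, and how such records might accumulate into per-participant
# "storylines" over procedural text. Field names are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Subevent:
    time_step: int                  # position in the procedure
    agonist: str                    # entity whose tendency is at issue
    antagonist: Optional[str]       # entity exerting force, if any
    force_transmitted: bool         # was force transmitted to the agonist?
    state_change: Optional[str]     # qualitative change, e.g. "solid->liquid"

@dataclass
class ParticipantHistory:
    entity: str
    states: List[str] = field(default_factory=list)
    subevents: List[Subevent] = field(default_factory=list)

def update_histories(histories: Dict[str, ParticipantHistory], ev: Subevent) -> None:
    """Append a subevent to the agonist's storyline and track its state changes."""
    hist = histories.setdefault(ev.agonist, ParticipantHistory(entity=ev.agonist))
    hist.subevents.append(ev)
    if ev.state_change:
        hist.states.append(ev.state_change)

# Toy usage on a procedural sentence like "Heat the butter until it melts."
histories: Dict[str, ParticipantHistory] = {}
update_histories(histories, Subevent(0, "butter", "heat", True, "solid->liquid"))
print(histories["butter"].states)   # ['solid->liquid']
</pre>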


Bio: Michael Regan is a PhD student in Linguistics and MS student in Computer Science at the University of New Mexico, and a Professional Research Assistant in Computer Science at the University of Colorado Boulder. His research interests include cognitive semantics, event structure representation, multilingual NLP, and representation learning.

2019.11.13 Chelsea prelim.
2019.11.20 Parth Jawale presenting Tao Li and Vivek Srikumar, "Augmenting Neural Networks with First-Order Logic" [5] (ACL 2019), and the follow-on paper by Tao Li, Vivek Gupta, Maitrey Mehta, and Vivek Srikumar, "A Logic-Driven Framework for Consistency of Neural Models" [6] (EMNLP 2019).
2019.11.27 Fall break
2019.12.04 Sam: Extractive Adversarial Networks: High-Recall Explanations for Identifying Personal Attacks in Social Media Posts [7]

Abstract: We introduce an adversarial method for producing high-recall explanations of neural text classifier decisions. Building on an existing architecture for extractive explanations via hard attention, we add an adversarial layer which scans the residual of the attention for remaining predictive signal. Motivated by the important domain of detecting personal attacks in social media comments, we additionally demonstrate the importance of manually setting a semantically appropriate "default" behavior for the model by explicitly manipulating its bias term. We develop a validation set of human-annotated personal attacks to evaluate the impact of these changes.
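
For readers unfamiliar with the setup, the following is a minimal PyTorch-style sketch (not the paper's code) of the idea described above: a hard token mask feeds a classifier, an adversary scans the masked-out residual for remaining signal, and the classifier's bias term is set toward a "default" class. The toy dimensions, the straight-through threshold, and all names are assumptions for illustration.

<pre>
# A minimal sketch, assuming a PyTorch setup with toy dimensions; this is an
# illustration of the extractor/classifier/adversary split, not the authors' code.
import torch
import torch.nn as nn

TOY_VOCAB, EMB, HID, N_CLASSES = 1000, 64, 64, 2

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(TOY_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, tokens):
        out, _ = self.rnn(self.emb(tokens))   # (batch, seq, HID)
        return out

class ExtractiveAdversarial(nn.Module):
    def __init__(self, default_class=0):
        super().__init__()
        self.extractor = Encoder()
        self.mask_head = nn.Linear(HID, 1)    # per-token keep/drop score
        self.classifier_enc = Encoder()
        self.adversary_enc = Encoder()
        self.clf_head = nn.Linear(HID, N_CLASSES)
        self.adv_head = nn.Linear(HID, N_CLASSES)
        # Manually bias the classifier toward a "default" class, so an empty
        # rationale yields the semantically appropriate default behavior.
        with torch.no_grad():
            self.clf_head.bias.fill_(0.0)
            self.clf_head.bias[default_class] = 1.0

    def forward(self, tokens):
        probs = torch.sigmoid(self.mask_head(self.extractor(tokens)))  # (B, T, 1)
        hard = (probs > 0.5).float()
        # Straight-through estimator so the hard mask still passes gradients.
        mask = hard + probs - probs.detach()
        clf_in = self.classifier_enc(tokens) * mask          # rationale tokens
        adv_in = self.adversary_enc(tokens) * (1.0 - mask)   # residual tokens
        clf_logits = self.clf_head(clf_in.mean(dim=1))
        adv_logits = self.adv_head(adv_in.mean(dim=1))
        return clf_logits, adv_logits, mask

# Toy usage: the classifier learns to predict the label from the rationale,
# while the extractor is also trained adversarially so the adversary cannot
# recover the label from the residual.
model = ExtractiveAdversarial()
tokens = torch.randint(0, TOY_VOCAB, (4, 12))
clf_logits, adv_logits, mask = model(tokens)
print(clf_logits.shape, adv_logits.shape, mask.shape)
</pre>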

2019.12.11 Sarah Moeller - LORELEI denouement
2019.12.18 Final exams


Past Schedules