Fall 2022 Schedule
Location: Hybrid - Buchanan 126, and the zoom link below
Time: Wednesdays at 10:30am, Mountain Time
Zoom link: https://cuboulder.zoom.us/j/97014876908
Date | Title |
---|---|
24.08.22 | Planning, introductions, welcome! |
31.08.22 | PhD students present! Ongoing projects and opportunities |
07.09.22 | PhD students continue to present! 20 minutes per project |
14.09.22 | And more presentations from our fabulous students and colleagues! |
21.09.22 | Lunch at the Taj |
28.09.22 | James Pustejovsky. Title: Dense Paraphrasing for Textual Enrichment: Question Answering and Inference. Abstract: Much of the current computational work on inference in NLP can be associated with one of two techniques. The first focuses on a specific notion of text-based question answering (QA), using large pre-trained language models (LLMs). To examine specific linguistic properties present in the model, "probing tasks" (diagnostic classifiers) have been developed to test capabilities that the LLM demonstrates on interpretable semantic inferencing tasks, such as age and object comparisons, hypernym conjunction, antonym negation, and others. The second is Knowledge Graph-based inference and QA, where triples are mined from Wikipedia, ConceptNet, WikiData, and other non-corpus resources, and then used for answering questions involving multiple components of the KG (multi-hop QA). While quite impressive on benchmarked QA metrics, both techniques are completely confused by (a) syntactically missing semantic content, and (b) the semantics accompanying the consequences of events and actions in narratives. In this talk, I discuss a model we have developed to enrich the surface form of texts, using type-based semantic operations to "textually expose" the deeper meaning of the corpus that was used to make the original embeddings in the language model. This model, Dense Paraphrasing, is a linguistically motivated textual enrichment strategy that textualizes the compositional operations inherent in a semantic model, such as Generative Lexicon Theory or CCG. This involves broadly three kinds of interpretive processes: (i) recognizing the diverse variability in linguistic forms that can be associated with the same underlying semantic representation (paraphrases); (ii) identifying semantic factors or variables that accompany or are presupposed by the lexical semantics of the words present in the text, through dropped, hidden, or shadow arguments; and (iii) interpreting or computing the dynamic consequences of actions and events in the text. After performing these textual enrichment algorithms, we fine-tune the LLM, which allows for more robust inference and QA task performance. James Pustejovsky, Professor and TJX Feldberg Chair in Computer Science, Department of Computer Science; Chair of the CL MS Program; Chair of the Linguistics Program |
05.10.22 | Martha, COLING keynote // Daniel poster presentation dry run |
12.10.22 | COLING / paper review |
19.10.22 | CLASIC Open House, 11am-1pm. This is largely an informational event for students interested in the CLASIC (Computational Linguistics, Analytics, Search, and InformatiCs) Master's program and/or the new LING to CLASIC BAM program. The event will include short talks from graduates of the CLASIC program, followed by lunch. Please register if you're interested, by 5pm Monday, October 17th. |
21.10.22 | FRIDAY Carolyn Rose, Carnegie Mellon (ICS/iSAT event). Special time and place: 11am-12:15pm MT, Muenzinger D430 / zoom. Title: A Layered Model of Learning during Collaborative Software Development: Programs, Programming, and Programmers. Abstract: Collaborative software development, whether synchronous or asynchronous, is a creative, integrative process in which something new comes into being through the joint engagement, something new that did not fully exist in the mind of any one person prior to the engagement. One can view this engagement from a macro-level perspective, focusing on large-scale development efforts of 100 or more developers, organized into sub-teams, producing collections of complex software products like Mozilla. Past work in the area of software engineering has explored the symbiosis between the management structure of a software team and the module structure of the resulting software. In this talk, we focus instead on small-scale software teams of between 2 and 5 developers, working on smaller efforts of between one hour and nine months, through more fine-grained analysis of collaborative processes and collaborative products. In this more tightly coupled engagement within small groups, we see again a symbiosis between people, processes, and products. This talk bridges the field of Computer-Supported Collaborative Learning and the study of software teams in the field of Software Engineering by investigating the inner workings of small-scale collaborative software development. Building on over a decade of AI-enabled collaborative learning experiences in the classroom and online, in this talk we report our work in progress, beginning with classroom studies in large online software courses with substantial teamwork components. In our classroom work, we have adapted an industry-standard team practice referred to as Mob Programming into a paradigm called Online Mob Programming (OMP) for the purpose of encouraging teams to reflect on concepts and share work in the midst of their project experience. At the core of this work are process mining technologies that enable real-time monitoring and just-in-time support for learning during productive work. Recent work on deep-learning approaches to program understanding bridges investigations of processes and products. |
26.10.22 | No meeting -- go to Barbara's talk on Friday, and Nathan's on Monday! |
28.10.22 | FRIDAY Barbara Di Eugenio (ICS talk, noon). Special time and place: 12-1:30pm MT, Muenzinger D430 / zoom. Title: Knowledge Co-Construction and Initiative in Peer Learning for Introductory Computer Science. Abstract: Peer learning has often been shown to be an effective mode of learning for all participants, and knowledge co-construction (KCC), when participants work together to build knowledge, has been shown to correlate with learning in peer interactions. However, KCC is hard to identify and/or support computationally. We conducted an extensive analysis of a corpus of peer-learning interactions in introductory Computer Science: we found a strong relationship between KCC and the linguistic notion of initiative shift, and moderate correlations between initiative shifts and learning. The results of this analysis were incorporated into KSC-PaL, an artificial agent that can collaborate with a human student via natural-language dialog and actions within a graphical workspace. Evaluations of KSC-PaL showed that the agent was able to encourage shifts in initiative in order to promote learning, and that students learned using the agent. This work (joint with Cindy Howard, now at Lewis University) was part of two larger projects that studied tutoring dialogues and peer-learning interactions for introductory Computer Science, and that resulted in two Intelligent Tutoring Systems, iList and Chiqat-Tutor. Barbara Di Eugenio, PhD, Professor and Director of Graduate Studies, Department of Computer Science, University of Illinois, Chicago |
31.10.22 | MONDAY Nathan Schneider (Ling Circle Talk, 4pm). Special time and place: 4pm, UMC 247 / zoom (passcode: 795679). Title: The Ins and Outs of Preposition Semantics: Challenges in Comprehensive Corpus Annotation and Automatic Disambiguation. Abstract: In most linguistic meaning representations that are used in NLP, prepositions fly under the radar. I will argue that they should instead be put front and center given their crucial status as linkers of meaning, whether for spatial and temporal relations, for predicate-driven roles, or in special constructions. To that end, we have sought to characterize and disambiguate semantic functions expressed by prepositions and possessives in English (Schneider et al., ACL 2018), and similar markers in other languages (Mandarin Chinese, Korean, Hindi, and German). This approach can be broadened to other constructions and integrated into full-sentence lexical semantic tagging as well as graph-structured meaning representation parsing. Other investigations include crowdsourced annotation, contextualized preposition embeddings, and preposition use in fluent nonnative English. Nathan Schneider, Associate Professor, Depts. of Computer Science and Linguistics, Georgetown University |
02.11.22 | *** No meeting - UMRs team at Brandeis *** |
09.11.22 | Practice talks |
16.11.22 | Maggie Perkoff, prelim |
23.11.22 | *** No meeting - fall break *** |
30.11.22 | HuggingFace demo - Trevor Ward |
07.12.22 | Ananya Ganesh, prelim |
Past Schedules
- Spring 2022 Schedule
- Fall 2021 Schedule
- Spring 2021 Schedule
- Fall 2020 Schedule
- Spring 2020 Schedule
- Fall 2019 Schedule
- Spring 2019 Schedule
- Fall 2018 Schedule
- Summer 2018 Schedule
- Spring 2018 Schedule
- Fall 2017 Schedule
- Summer 2017 Schedule
- Spring 2017 Schedule
- Fall 2016 Schedule
- Spring 2016 Schedule
- Fall 2015 Schedule
- Spring 2015 Schedule
- Fall 2014 Schedule
- Spring 2014 Schedule
- Fall 2013 Schedule
- Summer 2013 Schedule
- Spring 2013 Schedule
- Fall 2012 Schedule
- Spring 2012 Schedule
- Fall 2011 Schedule
- Summer 2011 Schedule
- Spring 2011 Schedule
- Fall 2010 Schedule
- Summer 2010 Schedule
- Spring 2010 Schedule
- Fall 2009 Schedule
- Summer 2009 Schedule
- Spring 2009 Schedule
- Fall 2008 Schedule
- Summer 2008 Schedule