Meeting Schedule (CompSemWiki)
'''Location:'''
* Jan 8 - Feb 5: Lucile Berkeley Buchanan Building (LBB) 430
* Feb 12 onwards: Muenzinger D430, '''except''':
* '''CLASIC Open House (3/12)''' will be in LBB 124
* '''Adam's Defense (4/2)''' will be in LBB 430.
* '''Ali Marashian's Area Exam (4/9)''' will be in LBB 430.
* '''Elizabeth Spaulding's Defense (4/16)''' will be Zoom-only.
  
'''Time:''' Wednesdays at 11:30am, Mountain Time

'''Zoom link:''' https://cuboulder.zoom.us/j/97014876908
{|
! Date !! Title
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/08/2025 || Invited talk: '''Denis Peskoff''' https://denis.ai/

'''Title''': Perspectives on Prompting

'''Abstract''': Natural language processing is in a state of flux. I will talk about three recent papers appearing in ACL and EMNLP conferences that capture the zeitgeist of the field's current uncertainty of direction. First, I will talk about a paper that evaluated the responses of large language models to domain questions. Then, I will talk about a paper that used prompting to study the language of the Federal Reserve Board. Last, I will discuss a new paper on identifying generated content in Wikipedia. In addition, I will highlight a mega-paper I was involved in about prompting.

'''Bio''': Denis Peskoff just finished a postdoc at Princeton University working with Professor Brandon Stewart. He completed his PhD in computer science at the University of Maryland with Professor Jordan Boyd-Graber and a bachelor’s degree at the Georgetown School of Foreign Service. His research has incorporated domain experts—leading board game players, Federal Reserve Board members, doctors, scientists—to solve natural language processing challenges.
 
|- style="border-top: 2px solid DarkGray;"
| 01/15/2025 || '''Planning, introductions, welcome!'''
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/22/2025 || LSA Keynote -- Chris Potts
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/23/25 (Thu CS seminar) || '''Chenhao Tan''', CS Colloquium, 3:30pm, ECCR 265

'''Title''': Alignment Beyond Human Preferences: Use Human Goals to Guide AI towards Complementary AI

'''Abstract''': A lot of recent work has been dedicated to guiding pretrained AI with human preferences. In this talk, I argue that human preferences are often insufficient for complementing human intelligence and demonstrate the key role of human goals with two examples. First, hypothesis generation is critical for scientific discoveries. Instead of removing hallucinations, I will leverage data and labels as a guide to lead hallucination towards effective hypotheses. Second, I will use human perception as a guide for developing case-based explanations to support AI-assisted decision making. In both cases, faithfulness is "compromised" for achieving human goals. I will conclude with future directions towards complementary AI.

'''Bio''': Chenhao Tan is an associate professor of computer science and data science at the University of Chicago, and is also a visiting scientist at Abridge. He obtained his PhD degree in the Department of Computer Science at Cornell University and bachelor's degrees in computer science and in economics from Tsinghua University. Prior to joining the University of Chicago, he was an assistant professor at the University of Colorado Boulder and a postdoc at the University of Washington. His research interests include human-centered AI, natural language processing, and computational social science. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He has also won a Sloan research fellowship, an NSF CAREER award, an NSF CRII award, a Google research scholar award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook fellowship, and a Yahoo! Key Scientific Challenges award.
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/29/25 || '''Laurie Jones''' from Information Science

'''Abstract:''' Laurie is coming to seek feedback from the Boulder NLP community about two projects she's been working on.

'''Similarity through creation and consumption:''' Initial work looking at similarity between Wikipedia articles surrounding the Arab Spring presents diverging perspectives in English and Arabic. However, this was not identified through content analysis but rather through leveraging other digital trace data sources such as the blue links (outlinks) and inter-language links (ILLs). I am hoping to identify the Arab Spring article’s ecosystem to inform relationships between articles through the lens of creation and consumption. I am planning to leverage network analysis and graph theory to identify articles that are related along shared editors, outlinks, and clickstreams. Then, with the Pareto principle, I will identify densely correlated articles and present an ecosystem that isn't exclusively correlated through content. This, I hope, can then inform language models, providing additional language-agnostic contextualization. I would love feedback on the application and theoretical contextualization of this method.

'''Collective Memory expression in LLMs:''' As LLMs get integrated into search engines and other accessible methods of querying, they will be used more as historical documentation and referenced as fact. Because they are built upon sources that carry bias of not only political perspective but also linguistic and geographical perspectives, the narratives these LLMs present about the past are collectively informed, their own collective memory. However, what does that mean when you transcend some of these perspectives? Using prompt engineering, I am investigating two widely used large language models, ChatGPT and Gemini. I hope to cross-reference prompts, feigning user identification and cross-utilizing perspectives based on country of origin, language, and temporal framing. I will then use a similarity metric to contrast LLM responses, identifying discrepancies and similarities across these perspectives. This project is much more in its infancy, and I'd love possible perspectives on theoretical lineage and cross-language LLM assessment.

'''Bio''': Laurie Jones is a PhD student in Information Science. She has a BS in Computer Science and a minor in Arabic from Washington and Lee University in Virginia. Now advised by Brian Keegan in information science and Alexandra Siegel in political science, Laurie does cross-language, cross-platform analysis of English and Arabic content asymmetry. She uses computational social science methods like natural language processing and network analysis, as well as her knowledge of the Arabic language, to understand collective memory and conflict power processes across languages and platforms.
  
 
|- style="border-top: 2px solid DarkGray;"
| 02/05/25 || '''Bhargav Shandilya''''s Area Exam

'''Title''': From Relevance to Reasoning - Evaluation Paradigms for Retrieval Augmented Generation

'''Abstract''': Retrieval Augmented Generation (RAG) has emerged as a cost-effective alternative to fine-tuning Large Language Models (LLMs), enabling models to access external knowledge for improved performance on domain-specific tasks. While RAG architectures are well studied, developing robust evaluation frameworks remains challenging due to the complexity of assessing both retrieval and generation components. This survey examines the evolution of RAG evaluation methods, from early metrics like KILT scores to sophisticated frameworks such as RAGAS and ARES, which assess multiple dimensions including context relevance, answer faithfulness, and information integration. Through the lens of documentary linguistics, it analyzes how these evaluation paradigms can be adapted for low-resource language applications, where challenges like noisy data and inconsistent document structures necessitate specialized evaluation approaches. By synthesizing insights from foundational studies, this survey provides a systematic analysis of evaluation strategies and their implications for developing more robust, adaptable RAG systems across diverse linguistic contexts.
|- style="border-top: 2px solid DarkGray;"
| 02/12/25 || '''Michael Ginn''''s Area Exam

'''Title''': Extracting Automata from Modern Neural Networks

'''Abstract:''' It may be desirable to extract an approximation of a trained neural network as a finite-state automaton, for reasons including interpretability, efficiency, and predictability. Early research on recurrent neural networks (RNNs) proposed methods to convert trained RNNs into finite-state automata by quantizing the continuous hidden state space of the RNN into a discrete state space. However, these methods depend on the assumption of a rough equivalence between these state spaces, which is less straightforward for modern recurrent networks and transformers. In this survey, we review methods for automaton extraction, highlighting the challenges and proposed methods for extraction with modern neural networks.
  
|- style="border-top: 2px solid DarkGray;"
| 02/19/25 || '''Amy Burkhardt''', Cambium Assessment

'''Title''': AI and NLP in Education: Research, Implementation, and Lessons from Industry

'''Abstract''': This talk will provide a behind-the-scenes look at conducting research on AI in education within an industry setting. First, I’ll offer the broader context of working on a machine learning team, highlighting the diverse skill sets and projects involved. Then, through a case study of an NLP-based writing feedback tool, I’ll walk through how we built and evaluated the tool, sharing key lessons learned from its implementation.

'''Bio:''' Amy Burkhardt is a Senior Scientist at Cambium Assessment, specializing in AI applications for education. She holds a PhD in Research and Evaluation Methodology from the University of Colorado, as well as a certificate in Human Language Technology. Prior to joining Cambium Assessment, she served as the Director of Research and Partnerships for the Rapid Online Assessment of Reading (ROAR) at Stanford University.

|- style="border-top: 2px solid DarkGray;"
| 02/26/25 || No Meeting
|- style="border-top: 2px solid DarkGray;"
| 03/05/25 || '''Benet Post''''s Talk

'''Title''': Multi-Dialectical NLP Tools for Quechua

'''Abstract''': This preliminary study introduces a multi-dialectical NLP approach for Quechua dialects that combines neural architectures with symbolic linguistic knowledge, specifically leveraging lexical markers and polypersonal verbal agreement to tackle low-resource and morphologically complex data. By embedding rule-based morphological cues into a transformer-based classifier, this work significantly outperforms purely data-driven or statistical baselines. In addition to boosting classification accuracy across more than twenty Quechuan varieties, the method exposes previously undocumented phenomena of polypersonal verbal agreement. The findings highlight how neurosymbolic models can advance both language technology and linguistic research by respecting the dialectal diversity within an under-resourced language family, ultimately raising the bar for dialect-sensitive NLP tools designed to empower speakers of these languages digitally.

---

Anschutz Talk

'''Title''': Evaluating LLMs for Long Context Clinical Summarization with Temporal Reasoning

'''Abstract''': Recent advances in LLMs have shown potential in clinical text summarization, but their ability to handle long patient trajectories with multi-modal data spread across time remains underexplored. This study systematically evaluates several state-of-the-art open-source LLMs and their Retrieval Augmented Generation (RAG) variants on long-context clinical summarization. We examine their ability to synthesize structured and unstructured Electronic Health Record (EHR) data while reasoning over temporal coherence, by re-engineering existing tasks, including discharge summarization and diagnosis prediction, from two publicly available EHR datasets. Our results indicate that a long context window improves input integration but does not consistently enhance clinical reasoning, and LLMs still struggle with temporal progression and rare disease prediction. While RAG reduces hallucination in some cases, it does not fully address these limitations.
  
 
|- style="border-top: 2px solid DarkGray;"
| 03/12/25 || CLASIC Industry Day
  
 
|- style="border-top: 2px solid DarkGray;"
| 03/19/25 || '''Dananjay Srinivas'''' Area Exam (late start, 12-1)

'''Title''': Assessing Progress in Natural Language Inference in the Age of Neural Networks

'''Abstract''': Over the last decade, the space of natural language inference (NLI) has seen a lot of progress, primarily through novel constructions of inference tasks that benefit from neural approaches. This has led to claims about neural models’ abilities to understand and reason over natural language. Simultaneously, subsequent works have also empirically found limitations with NLI methods and tasks, challenging previous claims of neural networks’ ability to operate on logical semantics. In this talk, we synthesize NLI task formulations and relevant empirical findings from prior scholarship to qualitatively assess the soundness and limitations of neural approaches to NLI. We find from our synthesis that, though neural approaches to NLI are a well-explored space, certain foundational questions remain unanswered, affecting the fidelity of neural inference. We share key findings for future research on NLI, and discuss how we believe the space of NLI should be transformed in order to build language technology that can robustly operate on logical semantics.
 
|- style="border-top: 2px solid DarkGray;"
| 03/26/25 || '''No meeting - Spring Break'''
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/02/25 || '''Adam Wiemerslage''''s Defense

'''Title''': Generalizing Low-Resource Morphology: Cognitive and Neural Perspectives on Inflection

'''Abstract''': State-of-the-art NLP methods that leverage enormous amounts of digital text are transforming the experience of working with computers and accessing the internet for many people. However, for most of the world’s languages, there is insufficient digital data to make recently popular technology like large language models (LLMs) possible. New technologies like LLMs are typically not well suited for underrepresented languages—often referred to as low-resource languages in NLP—without sufficient digital data. In this case, simpler language technologies like dictionaries, morphological analyzers, and text normalizers are useful. This is especially apparent for language documentation life-cycles, building educational tools, and the development of language typology databases. With this in mind, we propose techniques for automatically expanding the coverage of morphological databases and develop methods for building morphological tools for the large set of languages with few available resources. We then study the generation capabilities of neural network models that learn from these resources. Finally, we propose methods for training neural networks when only small amounts of data are available, taking inspiration from the recent successes of self-supervised pretraining in high-resource NLP.
 
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/09/25 || '''Ali Marashian''''s Area Exam

'''Title''': Meditations on the Available Resources for Low-Resource NMT

'''Abstract''': In spite of the progress in NMT in the last decade, most languages in the world do not have sufficient digitized data to train neural models on. Different approaches to remedying the problems of low-resource languages utilize different resources. In this presentation, we will look into the available categories of resources through the lens of practicality: parallel data, monolingual data, pretrained multilingual models, grammar books and morphological information, and automatic evaluation metrics. We conclude by highlighting the importance of more focus on data collection, as well as on the interpretability of some of the available tools.
 
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/16/25 || '''Elizabeth Spaulding''''s Defense

'''Title''': The Meaning of Agency and Patiency to Machines and People

'''Abstract''': This thesis establishes the capabilities and limitations of various language modeling technologies on the task of semantic proto-role labeling (SPRL), which assigns relational properties such as volition, awareness, and change of state to event participants in sentences. First, we demonstrate the feasibility and best practices of SPRL learned and inferred jointly with other information extraction tasks. We also show that language model output categorizes entities in sentences consistently across verb-invariant and verb-specific linguistic theories of agency, adding to the growing body of evidence of language models’ event reasoning capabilities. Further, we introduce a method for adopting semantic proto-role labeling systems and proto-role theory as a tool for analyzing events and participants, using it to quantify implicit human perceptions of agency and experience in text. We discuss the implications of our findings as a whole and identify multiple paths for future work, including deeper annotator involvement in future annotation of SPRL, SPRL analysis of machine-generated text, and cross-lingual studies of SPRL. Pursuing these future directions could improve both the theoretical frameworks and the computational methods, and help uncover how both people and machines structure and process events.
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/23/25 || '''Maggie Perkoff''''s Defense

'''Title''': Bringing Everyone In: The Future of Collaboration with Conversational AI

'''Abstract''': Collaborative learning enables students to build rapport with their peers while building upon their own knowledge. Teachers can weave collaborative learning opportunities into the classroom by having students work together in small groups. However, these collaborations can break down when students are confused by the material, one person dominates the conversation, or some of the participants struggle to connect with their peers. Unfortunately, a single teacher cannot attend to the needs of all groups at the same time. In these cases, pedagogical conversational agents (PCAs) have the potential to support teachers and students alike by taking on a collaboration facilitator role. These agents engage students in productive dialog by providing appropriate interventions in a learning setting. With the rapid improvement of large language models (LLMs), these agents can easily be backed by a generative model that can adapt to new domains and variations in communication styles. Integrating LLMs into PCAs requires understanding the desired teacher behavior in different scenarios and constraining the outputs of the model to match it. This dissertation explores how to design, develop, and evaluate PCAs that incorporate LLMs to support students collaborating in small groups. One of the products of this research is the Jigsaw Interactive Agent (JIA), a multi-modal PCA that provides real-time support to students via a chat interface. In this work, we describe the multi-modal system JIA relies on to analyze students' discourse, test different methods for constraining JIA's outputs in a lab setting, and evaluate the use of a retrieval-augmented generation approach to enhance the outputs with curriculum materials. Furthermore, we propose a framework for expanding JIA's capabilities to support neurodivergent students. Ultimately, this dissertation aims to align advancements in LLM-based conversational agents with the perspectives and expertise of the teachers and students who can greatly benefit from their usage.
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/30/25 || '''NAACL -- no meeting'''
|- style="border-top: 2px solid DarkGray;"
| 05/07/25 || '''Finals Week -- no meeting'''
|}
 
  
 
=Past Schedules=

* [[Fall 2024 Schedule]]
* [[Spring 2024 Schedule]]
* [[Fall 2023 Schedule]]
* [[Spring 2023 Schedule]]
* [[Fall 2022 Schedule]]

Latest revision as of 08:32, 25 April 2025

Location:

  • Jan 8 - Feb 5: Lucile Berkeley Buchanan Building (LBB) 430
  • Feb 12 onwards: Muenzinger D430, except:
  • CLASIC Open House (3/12) will be in LBB 124
  • Adam's Defense (4/2) will be in LBB 430.
  • Ali Marashian's Area Exam (4/9) will be in LBB 430.
  • Elizabeth Spaulding's Defense (4/16) will be zoom-only


Time: Wednesdays at 11:30am, Mountain Time

Zoom link: https://cuboulder.zoom.us/j/97014876908

Date Title
01/08/2025 Invited talk: Denis Peskoff https://denis.ai/

Title: Perspectives on Prompting

Abstract: Natural language processing is in a state of flux. I will talk about three recent papers appearing in ACL and EMNLP conferences that are a zeitgeist of the current uncertainty of direction. First, I will talk about a paper that evaluated the responses of large language models to domain questions. Then, I will talk about a paper that used prompting to study the language of the Federal Reserve Board. Last, I will discuss a new paper on identifying generated content in Wikipedia. In addition, I will highlight a mega-paper I was involved in about prompting.

Bio: Denis Peskoff just finished a postdoc at Princeton University working with Professor Brandon Stewart. He completed his PhD in computer science at the University of Maryland with Professor Jordan Boyd-Graber and a bachelor’s degree at the Georgetown School of Foreign Service. His research has incorporated domain experts—leading board game players, Federal Reserve Board members, doctors, scientists—to solve natural language processing challenges.

01/15/2025 Planning, introductions, welcome!
01/22/2025 LSA Keynote -- Chris Potts
01/23/25 (Thu CS seminar) Chenhao Tan, CS Colloquium, 3:30pm, ECCR 265

Title: Alignment Beyond Human Preferences: Use Human Goals to Guide AI towards Complementary AI

Abstract: A lot of recent work has been dedicated to guide pretrained AI with human preferences. In this talk, I argue that human preferences are often insufficient for complementing human intelligence and demonstrate the key role of human goals with two examples. First, hypothesis generation is critical for scientific discoveries. Instead of removing hallucinations, I will leverage data and labels as a guide to lead hallucination towards effective hypotheses. Second, I will use human perception as a guide for developing case-based explanations to support AI-assisted decision making. In both cases, faithfulness is "compromised" for achieving human goals. I will conclude with future directions towards complementary AI.


Bio: Chenhao Tan is an associate professor of computer science and data science at the University of Chicago, and is also a visiting scientist at Abridge. He obtained his PhD degree in the Department of Computer Science at Cornell University and bachelor's degrees in computer science and in economics from Tsinghua University. Prior to joining the University of Chicago, he was an assistant professor at the University of Colorado Boulder and a postdoc at the University of Washington. His research interests include human-centered AI, natural language processing, and computational social science. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He also won a Sloan research fellowship, an NSF CAREER award, an NSF CRII award, a Google research scholar award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook fellowship, and a Yahoo! Key Scientific Challenges award.

01/29/25 Laurie Jones from Information Science

Abstract: Laure is coming to seek feedback about two projects she's been working on from the Boulder NLP community.

Similarity through creation and consumption: Initial work looking at similarity between Wikipedia articles surrounding the Arab Spring present diverging perspective in English and Arabic. However, this was not identified through content analysis but rather through leveraging other digital trace data sources such as the blue links (outlinks) and inter-language links (ILLs). I am hoping to identify the Arab Spring article’s ecosystem to inform relationships between articles through the lens of creation and consumption. I am planning to leverage network analysis and graph theory to identify articles that are related along shared editors, outlinks, and clickstreams. Then with the pareto principle, identify densely correlated articles and present an ecosystem that isn't exclusively correlated through content. This I hope can then inform language models, providing additional language-agnostic contextualization. I would love feedback on the application and theoretical contextualization of this method

Collective Memory expression in LLMs: As LLMs get integrated into search engines and other accessible methods of querying, they will get utilized more as a historical documentation and referenced as fact. Because they are built upon sources that include bias of not only political perspective but also linguistic and geographical perspectives, the narratives these LLMs will present about the past is collectively informed, its own collective memory. However, what does that mean when you transcend some of these perspectives? Utilizing prompt engineering, I am investigating the 2 widely used large language models, Chat-GPT and Gemini. I hope to cross reference prompts, feigning user identification and cross-utilizing perspectives based on country of origin, language, and temporal framing. I will then utilize a similarity metric to contrast LLM responses, identifying discrepancies and similarities across these perspectives. This much more in its infancy and I'd love possible perspectives on theoretical lineage and cross-language LLM assessment.

Bio: Laurie Jones is a PhD student in Information Science. She has a BS in Computer Science and a minor in Arabic from Washington and Lee University in Virginia. Now under Brian Keegan in information science and Alexandra Siegel in political science, Laurie does cross-language cross-platform analysis of English and Arabic content asymmetry. She uses computational social science methods like natural language processing and network analysis as well as her knowledge of the Arabic language to understand collective memory and conflict power processes across languages and platforms.

02/05/25 '''Bhargav Shandilya's Area Exam'''

'''Title''': From Relevance to Reasoning - Evaluation Paradigms for Retrieval Augmented Generation

'''Abstract''': Retrieval Augmented Generation (RAG) has emerged as a cost-effective alternative to fine-tuning Large Language Models (LLMs), enabling models to access external knowledge for improved performance on domain-specific tasks. While RAG architectures are well-studied, developing robust evaluation frameworks remains challenging due to the complexity of assessing both retrieval and generation components. This survey examines the evolution of RAG evaluation methods, from early metrics like KILT scores to sophisticated frameworks such as RAGAS and ARES, which assess multiple dimensions including context relevance, answer faithfulness, and information integration. Through the lens of documentary linguistics, this survey analyzes how these evaluation paradigms can be adapted for low-resource language applications, where challenges like noisy data and inconsistent document structures necessitate specialized evaluation approaches. By synthesizing insights from foundational studies, this study provides a systematic analysis of evaluation strategies and their implications for developing more robust, adaptable RAG systems across diverse linguistic contexts.

02/12/25 '''Michael Ginn's Area Exam'''

'''Title''': Extracting Automata from Modern Neural Networks

'''Abstract''': It may be desirable to extract an approximation of a trained neural network as a finite-state automaton, for reasons including interpretability, efficiency, and predictability. Early research on recurrent neural networks (RNNs) proposed methods to convert trained RNNs into finite-state automata by quantizing the continuous hidden state space of the RNN into a discrete state space. However, these methods depend on the assumption of a rough equivalence between these state spaces, which is less straightforward for modern recurrent networks and transformers. In this survey, we review methods for automaton extraction, specifically highlighting the challenges and proposed methods for extraction with modern neural networks.
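A toy sketch of the quantization idea the survey covers: run a recurrent update over input symbols, bucket the continuous hidden state (here simply by sign), and record the observed bucket-to-bucket transitions as automaton edges. The "RNN" below is a hand-built stand-in, not a trained network, and the bucketing is far cruder than published clustering approaches:

```python
import math

def rnn_step(h: float, symbol: str) -> float:
    """Stand-in recurrent update: flips the state's sign on '1', keeps it on '0'."""
    return math.tanh(-2.0 * h) if symbol == "1" else math.tanh(2.0 * h)

def quantize(h: float) -> int:
    """Bucket the continuous state; here, just its sign."""
    return 0 if h >= 0 else 1

def extract_automaton(strings):
    """Record (state, symbol) -> state transitions observed on sample strings."""
    transitions = {}
    for s in strings:
        h = 0.5  # initial hidden state
        q = quantize(h)
        for sym in s:
            h = rnn_step(h, sym)
            q_next = quantize(h)
            transitions[(q, sym)] = q_next
            q = q_next
    return transitions

# On this toy network, the extracted transitions form the parity automaton.
dfa = extract_automaton(["0101", "1100", "0011", "111"])
```

The challenge the survey highlights is that for transformers there is no single evolving hidden state to bucket this way, so the state-space-equivalence assumption breaks down.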

02/19/25 '''Amy Burkhardt''', Cambium Assessment

'''Title''': AI and NLP in Education: Research, Implementation, and Lessons from Industry

'''Abstract''': This talk will provide a behind-the-scenes look at conducting research on AI in education within an industry setting. First, I’ll offer a broader context of working on a machine learning team, highlighting the diverse skill sets and projects involved. Then, through a case study of an NLP-based writing feedback tool, I’ll walk through how we built and evaluated the tool, sharing key lessons learned from its implementation.

'''Bio''': Amy Burkhardt is a Senior Scientist at Cambium Assessment, specializing in AI applications for education. She holds a PhD in Research and Evaluation Methodology from the University of Colorado, as well as a certificate in Human Language Technology. Prior to joining Cambium Assessment, she served as the Director of Research and Partnerships for the Rapid Online Assessment of Reading (ROAR) at Stanford University.

02/26/25 No Meeting
03/05/25 '''Benet Post's Talk'''

'''Title''': Multi-Dialectical NLP Tools for Quechua

'''Abstract''': This preliminary study introduces a multi-dialectical NLP approach for Quechua dialects that combines neural architectures with symbolic linguistic knowledge, specifically leveraging lexical markers and polypersonal verbal agreement to tackle low-resource and morphologically complex data. By embedding rule-based morphological cues into a transformer-based classifier, this work significantly outperforms purely data-driven or statistical baselines. In addition to boosting classification accuracy across more than twenty Quechuan varieties, the method exposes previously undocumented linguistic phenomena in polypersonal verbal agreement. The findings highlight how neurosymbolic models can advance both language technology and linguistic research by respecting the dialectal diversity within an under-resourced language family, ultimately raising the bar for dialect-sensitive NLP tools designed to empower speakers of these languages digitally.
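A minimal sketch of the cue-injection idea: binary features from rule-based suffix checks that would be concatenated with a neural encoder's sentence embedding before classification. The cue list below is illustrative, not the talk's actual marker inventory:

```python
# Rule-based morphological cues as suffix tests (illustrative list only;
# a real system would use attested dialect markers curated by linguists).
CUES = {
    "suffix_-chka":  lambda w: w.endswith("chka"),   # progressive-style marker
    "suffix_-wanki": lambda w: w.endswith("wanki"),  # polypersonal agreement cue
    "suffix_-sunki": lambda w: w.endswith("sunki"),  # polypersonal agreement cue
}

def cue_vector(sentence: str) -> list:
    """One binary feature per cue: does any token in the sentence carry it?"""
    tokens = sentence.lower().split()
    return [int(any(test(t) for t in tokens)) for test in CUES.values()]

# In the neurosymbolic setup, this vector would be concatenated with the
# transformer's sentence embedding before the classification layer.
vec = cue_vector("qhawasunki wasichka")
```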

----

'''Anschutz Talk'''

'''Title''': Evaluating LLMs for Long Context Clinical Summarization with Temporal Reasoning

'''Abstract''': Recent advances in LLMs have shown potential in clinical text summarization, but their ability to handle long patient trajectories with multi-modal data spread across time remains underexplored. This study systematically evaluates several state-of-the-art open-source LLMs and their Retrieval Augmented Generation (RAG) variants on long-context clinical summarization. We examine their ability to synthesize structured and unstructured Electronic Health Record (EHR) data while reasoning over temporal coherence, by re-engineering existing tasks, including discharge summarization and diagnosis prediction, from two publicly available EHR datasets. Our results indicate that longer context windows improve input integration but do not consistently enhance clinical reasoning, and that LLMs still struggle with temporal progression and rare disease prediction. While RAG reduces hallucination in some cases, it does not fully address these limitations.

03/12/25 '''CLASIC Industry Day'''
03/19/25 '''Dananjay Srinivas' Area Exam''' (Late start, 12-1)

'''Title''': Assessing progress in Natural Language Inference in the age of Neural Networks

'''Abstract''': Over the last decade, the space of natural language inference (NLI) has seen a great deal of progress, primarily through novel constructions of inference tasks that benefit from neural approaches. This has led to claims about neural models’ ability to understand and reason over natural language. At the same time, subsequent work has empirically found limitations in NLI methods and tasks, challenging previous claims about neural networks’ ability to operate on logical semantics. In this talk, we synthesize NLI task formulations and relevant empirical findings from prior scholarship to qualitatively assess the soundness and limitations of neural approaches to NLI. We find from our synthesis that, although neural approaches to NLI are a well-explored space, certain foundational questions remain unanswered, affecting the fidelity of neural inference. We share key findings for future research on NLI and discuss how we believe the space of NLI should be transformed in order to build language technology that can robustly operate on logical semantics.


03/26/25 No meeting - Spring Break
04/02/25 '''Adam Wiemerslage's Defense'''

'''Title''': Generalizing Low-Resource Morphology: Cognitive and Neural Perspectives on Inflection

'''Abstract''': State-of-the-art NLP methods that leverage enormous amounts of digital text are transforming the experience of working with computers and accessing the internet for many people. However, for most of the world’s languages, there is insufficient digital data to make recently popular technologies like large language models (LLMs) possible. Without sufficient digital data, technologies like LLMs are typically not well-suited for underrepresented languages, often referred to as low-resource languages in NLP. In this case, simpler language technologies like dictionaries, morphological analyzers, and text normalizers are useful. This is especially apparent in language documentation life-cycles, the building of educational tools, and the development of language typology databases. With this in mind, we propose techniques for automatically expanding the coverage of morphological databases and develop methods for building morphological tools for the large set of languages with few available resources. We then study the generation capabilities of neural network models that learn from these resources. Finally, we propose methods for training neural networks when only small amounts of data are available, taking inspiration from the recent successes of self-supervised pretraining in high-resource NLP.


04/09/25 '''Ali Marashian's Area Exam'''

'''Title''': Meditations on the Available Resources for Low-Resource NMT

'''Abstract''': In spite of the progress in NMT over the last decade, most languages in the world do not have sufficient digitized data to train neural models on. Different approaches to remedying the problems of low-resource languages utilize different resources. In this presentation, we will look into the available categories of resources through the lens of practicality: parallel data, monolingual data, pretrained multilingual models, grammar books and morphological information, and automatic evaluation metrics. We conclude by highlighting the importance of greater focus on data collection, as well as on the interpretability of some of the available tools.


04/16/25 '''Elizabeth Spaulding's Defense'''

'''Title''': The Meaning of Agency and Patiency to Machines and People

'''Abstract''': This thesis establishes the capabilities and limitations of various language modeling technologies on the task of semantic proto-role labeling (SPRL), which assigns relational properties such as volition, awareness, and change of state to event participants in sentences. First, we demonstrate the feasibility and best practices of SPRL learned and inferred jointly with other information extraction tasks. We also show that language model output categorizes entities in sentences consistently across verb-invariant and verb-specific linguistic theories of agency, adding to the growing body of evidence of language model event reasoning capabilities. Further, we introduce a method for adopting semantic proto-role labeling systems and proto-role theory as a tool for analyzing events and participants by using it to quantify implicit human perceptions of agency and experience in text. We discuss the implications of our findings as a whole and identify multiple paths for future work, including deeper annotator involvement in future annotation of SPRL, SPRL analysis on machine-generated text, and cross-lingual studies of SPRL. Pursuing these future directions could improve both the theoretical frameworks and the computational methods, and help uncover how both people and machines structure and process events.

04/23/25 '''Maggie Perkoff's Defense'''

'''Title''': Bringing Everyone In: The Future of Collaboration with Conversational AI

'''Abstract''': Collaborative learning enables students to build rapport with their peers while building upon their own knowledge. Teachers can weave collaborative learning opportunities into the classroom by having students work together in small groups. However, these collaborations can break down when students are confused by the material, when one person dominates the conversation, or when some participants struggle to connect with their peers. Unfortunately, a single teacher cannot attend to the needs of all groups at the same time. In these cases, pedagogical conversational agents (PCAs) have the potential to support teachers and students alike by taking on a collaboration facilitator role. These agents engage students in productive dialog by providing appropriate interventions in a learning setting. With the rapid improvement of large language models (LLMs), these agents can easily be backed by a generative model that can adapt to new domains and variations in communication styles. Integrating LLMs into PCAs requires understanding the desired teacher behavior in different scenarios and constraining the outputs of the model to match it. This dissertation explores how to design, develop, and evaluate PCAs that incorporate LLMs to support students collaborating in small groups. One of the products of this research is the Jigsaw Interactive Agent (JIA), a multi-modal PCA that provides real-time support to students via a chat interface. In this work, we describe the multi-modal system for analyzing students' discourse that JIA relies on, test different methods for constraining JIA's outputs in a lab setting, and evaluate the use of a retrieval-augmented generation approach to enhance the outputs with curriculum materials. Furthermore, we propose a framework for expanding JIA's capabilities to support neurodivergent students. Ultimately, this dissertation aims to align advancements in LLM-based conversational agents with the perspectives and expertise of the teachers and students who can greatly benefit from their usage.

04/30/25 NAACL -- no meeting
05/07/25 Finals Week -- no meeting

Past Schedules