Meeting Schedule

'''Location:'''
* Jan 8 - Feb 5: Lucile Berkeley Buchanan Building (LBB) 430
* Feb 12 onwards: Muenzinger D430

'''Time:''' Wednesdays at 11:30am, Mountain Time
  
 
'''Zoom link:''' https://cuboulder.zoom.us/j/97014876908
  
 
{|
! Date !! Title
|- style="border-top: 2px solid DarkGray;"
| 01/08/2025 || Invited talk: Denis Peskoff https://denis.ai/

'''Title''': Perspectives on Prompting

'''Abstract''': Natural language processing is in a state of flux. I will discuss three recent papers, appearing at ACL and EMNLP conferences, that capture the zeitgeist of the field's current uncertainty of direction. First, a paper that evaluated the responses of large language models to domain questions; then, a paper that used prompting to study the language of the Federal Reserve Board; and last, a new paper on identifying generated content in Wikipedia. In addition, I will highlight a mega-paper on prompting that I was involved in.

'''Bio''': Denis Peskoff just finished a postdoc at Princeton University, working with Professor Brandon Stewart. He completed his PhD in computer science at the University of Maryland with Professor Jordan Boyd-Graber, and a bachelor's degree at the Georgetown School of Foreign Service. His research has incorporated domain experts—leading board game players, Federal Reserve Board members, doctors, scientists—to solve natural language processing challenges.
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/15/2025 || '''Planning, introductions, welcome!'''
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/22/2025 ||
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/23/2025 (Thu CS seminar) || Chenhao Tan, CS Colloquium, 3:30pm
  
 
|- style="border-top: 2px solid DarkGray;"
| 01/29/2025 ||
  
|- style="border-top: 2px solid DarkGray;"
| 02/05/2025 || Michael Ginn's Prelim
  
 
|- style="border-top: 2px solid DarkGray;"
| 02/12/2025 ||

|- style="border-top: 2px solid DarkGray;"
| 02/19/2025 ||

|- style="border-top: 2px solid DarkGray;"
| 02/26/2025 ||
 
|- style="border-top: 2px solid DarkGray;"
| 03/05/2025 ||
 
|- style="border-top: 2px solid DarkGray;"
| 03/12/2025 ||
 
|- style="border-top: 2px solid DarkGray;"
| 03/19/2025 ||
  
 
|- style="border-top: 2px solid DarkGray;"
| 03/26/2025 || '''No meeting - Spring Break'''
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/02/2025 ||
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/09/2025 ||
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/16/2025 ||
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/23/2025 ||
  
 
|- style="border-top: 2px solid DarkGray;"
| 04/30/2025 ||
  
 
|- style="border-top: 2px solid DarkGray;"
| 05/07/2025 ||
|}
=Past Schedules=

* [[Fall 2024 Schedule]]
* [[Spring 2024 Schedule]]
* [[Fall 2023 Schedule]]
* [[Spring 2023 Schedule]]
