Meeting Schedule
Location:
- Jan 8 - Feb 5: Lucile Berkeley Buchanan Building (LBB) 430
- Feb 12 onwards: Muenzinger D430
Time: Wednesdays at 11:30am, Mountain Time
Zoom link: https://cuboulder.zoom.us/j/97014876908
Date | Title
---|---
01/08/2025 | Invited talk: Denis Peskoff (https://denis.ai/)

Title: Perspectives on Prompting

Abstract: Natural language processing is in a state of flux. I will talk about three recent papers appearing in ACL and EMNLP conferences that capture the zeitgeist of the field's current uncertainty of direction. First, I will talk about a paper that evaluated the responses of large language models to domain questions. Then, I will talk about a paper that used prompting to study the language of the Federal Reserve Board. Last, I will discuss a new paper on identifying generated content in Wikipedia. In addition, I will highlight a mega-paper I was involved in about prompting.

Bio: Denis Peskoff just finished a postdoc at Princeton University working with Professor Brandon Stewart. He completed his PhD in computer science at the University of Maryland with Professor Jordan Boyd-Graber and a bachelor's degree at the Georgetown School of Foreign Service. His research has incorporated domain experts (leading board game players, Federal Reserve Board members, doctors, scientists) to solve natural language processing challenges.
01/15/2025 | Planning, introductions, welcome!
01/22/2025 | LSA Keynote -- Chris Potts
01/23/25 (Thu, CS seminar) | Chenhao Tan, CS Colloquium, 3:30pm, ECCR 265

Title: Alignment Beyond Human Preferences: Use Human Goals to Guide AI towards Complementary AI

Abstract: A lot of recent work has been dedicated to guiding pretrained AI with human preferences. In this talk, I argue that human preferences are often insufficient for complementing human intelligence and demonstrate the key role of human goals with two examples. First, hypothesis generation is critical for scientific discoveries. Instead of removing hallucinations, I will leverage data and labels as a guide to lead hallucination towards effective hypotheses. Second, I will use human perception as a guide for developing case-based explanations to support AI-assisted decision making. In both cases, faithfulness is "compromised" for achieving human goals. I will conclude with future directions towards complementary AI.
01/29/25 | Laurie Jones from Information Science

Abstract: Laurie is coming to seek feedback from the Boulder NLP community about two projects she's been working on.

Similarity through creation and consumption: Initial work looking at similarity between Wikipedia articles surrounding the Arab Spring found diverging perspectives in English and Arabic. However, this was identified not through content analysis but by leveraging other digital trace data sources, such as the blue links (outlinks) and inter-language links (ILLs). I am hoping to identify the Arab Spring article's ecosystem to inform relationships between articles through the lens of creation and consumption. I plan to leverage network analysis and graph theory to identify articles that are related along shared editors, outlinks, and clickstreams, and then, with the Pareto principle, identify densely correlated articles and present an ecosystem that isn't exclusively correlated through content. I hope this can then inform language models, providing additional language-agnostic contextualization. I would love feedback on the application and theoretical contextualization of this method. (A toy sketch of this graph-based pipeline appears after the schedule table.)

Collective memory expression in LLMs: As LLMs get integrated into search engines and other accessible methods of querying, they will increasingly be used as historical documentation and referenced as fact. Because they are built on sources that carry not only political bias but also linguistic and geographical bias, the narratives these LLMs present about the past are collectively informed, a collective memory of their own. However, what does that mean when you transcend some of these perspectives? Using prompt engineering, I am investigating two widely used large language models, ChatGPT and Gemini. I plan to cross-reference prompts, feigning user identities and crossing perspectives based on country of origin, language, and temporal framing. I will then use a similarity metric to contrast LLM responses, identifying discrepancies and similarities across these perspectives. This project is much more in its infancy, and I'd love perspectives on its theoretical lineage and on cross-language LLM assessment. (A toy sketch of this comparison loop also appears after the schedule table.)

Bio: Laurie Jones is a PhD student in Information Science. She has a BS in Computer Science and a minor in Arabic from Washington and Lee University in Virginia. Advised by Brian Keegan in information science and Alexandra Siegel in political science, Laurie does cross-language, cross-platform analysis of English and Arabic content asymmetry. She uses computational social science methods such as natural language processing and network analysis, as well as her knowledge of Arabic, to understand collective memory and conflict power processes across languages and platforms.
02/05/25 | Bhargav Shandilya's Area Exam
02/12/25 | Michael Ginn's Area Exam

Title: Extracting Automata from Modern Neural Networks

Abstract: It may be desirable to extract an approximation of a trained neural network as a finite-state automaton, for reasons including interpretability, efficiency, and predictability. Early research on recurrent neural networks (RNNs) proposed methods to convert trained RNNs into finite-state automata by quantizing the continuous hidden state space of the RNN into a discrete state space. However, these methods depend on the assumption of a rough equivalence between these state spaces, which is less straightforward for modern recurrent networks and transformers. In this survey, we review methods for automaton extraction, specifically highlighting the challenges and proposed methods for extraction with modern neural networks. (A toy sketch of the classic quantization-based extraction appears after the schedule table.)
02/19/25 | Amy Burkhardt's Talk
02/26/25 | Benet's Research Talk + Someone else?
03/05/25 |
03/12/25 | CLASIC Industry Day
03/19/25 | Dananjay Srinivas' Area Exam (Late start, 12-1)
03/26/25 | No meeting - Spring Break
04/02/25 | Adam Wiemerslage's Defense
04/09/25 | Ali Marashian's Area Exam
04/16/25 | Elizabeth Spaulding's Defense
04/23/25 | Maggie Perkoff's Defense
04/30/25 | NAACL, maybe no meeting?
05/07/25 | Jon Cai's Defense
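Below are a few illustrative sketches for the method-focused talks above; none of them is the speaker's actual code. First, for Laurie Jones's similarity-through-creation-and-consumption project (01/29): a minimal sketch of a Pareto-style cut over a weighted article graph. The article names, the edge weights (standing in for shared-editor, outlink, and clickstream counts), and the top-20% threshold are all assumptions made for illustration.

```python
# Hypothetical sketch: find a densely connected article "ecosystem" by
# keeping only the heaviest edges (Pareto-style 80/20 cut) in a graph
# whose edge weights stand in for shared editors/outlinks/clickstreams.
import networkx as nx

# Toy graph: nodes are Wikipedia articles, weights are made-up trace counts.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Arab Spring", "Egyptian revolution of 2011", 120),
    ("Arab Spring", "Tunisian revolution", 95),
    ("Arab Spring", "Social media", 15),
    ("Egyptian revolution of 2011", "Tahrir Square", 80),
    ("Tunisian revolution", "Mohamed Bouazizi", 60),
    ("Social media", "Facebook", 8),
])

# Pareto-style cut: keep roughly the top 20% of edges by weight.
edges = sorted(G.edges(data="weight"), key=lambda e: e[2], reverse=True)
top = edges[: max(1, len(edges) // 5)]

ecosystem = nx.Graph()
ecosystem.add_weighted_edges_from(top)
print("Densely linked core:", sorted(ecosystem.nodes))
```

The point of the sketch is that the retained subgraph is defined by trace density rather than textual similarity, which is the "not exclusively correlated through content" idea in the abstract.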
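Also for the 01/29 talk, a rough sketch of the collective-memory comparison loop: ask the same historical question under different personas and score how similar the answers are. The `query_model` stub, the persona framings, and the choice of sentence-embedding cosine similarity are assumptions; the real study would call ChatGPT and Gemini and may well use a different similarity metric.

```python
# Hypothetical sketch: compare LLM answers to the same historical question
# asked from different national/linguistic personas, then score similarity.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def query_model(prompt: str) -> str:
    """Stand-in for a real ChatGPT or Gemini API call."""
    return "placeholder response for: " + prompt  # replace with a real call

personas = {
    "US-English": "You are a reader in the United States. ",
    "Egypt-Arabic": "You are a reader in Egypt. ",
}
question = "What caused the Arab Spring?"

responses = {name: query_model(prefix + question)
             for name, prefix in personas.items()}

# Embed each response and compare them pairwise.
model = SentenceTransformer("all-MiniLM-L6-v2")
names = list(responses)
vecs = model.encode([responses[n] for n in names])
sims = cosine_similarity(vecs)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: {sims[i, j]:.2f}")
```

Low pairwise similarity across personas would be the signal of perspective-dependent narratives the abstract is after.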
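Finally, for Michael Ginn's area exam (02/12), a minimal sketch of the classic quantization-based extraction his abstract describes: run an RNN over sample strings, cluster the continuous hidden states into a few discrete states with k-means, and read transitions off the quantized trajectories. The untrained random Elman cell, the alphabet, and the cluster count are toy assumptions; real methods also resolve conflicting transitions by voting or state merging.

```python
# Hypothetical sketch of classic automaton extraction: quantize an RNN's
# continuous hidden states with k-means, then tabulate state transitions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
H, ALPHABET = 8, "ab"
Wx = rng.normal(size=(len(ALPHABET), H))  # input weights (one row per symbol)
Wh = rng.normal(size=(H, H))              # recurrent weights

def run(string):
    """Run the toy (untrained) Elman RNN; return hidden states incl. start state."""
    h = np.zeros(H)
    states = [h]
    for ch in string:
        h = np.tanh(Wx[ALPHABET.index(ch)] + Wh @ h)
        states.append(h)
    return states

# Collect hidden states from a sample of strings.
strings = ["ab", "ba", "aab", "abb", "bba", "aaa"]
trajs = [run(s) for s in strings]
all_states = np.vstack([st for traj in trajs for st in traj])

# Quantize the continuous state space into k discrete automaton states.
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_states)

# Read transitions off the quantized trajectories: (state, symbol) -> state.
# Last write wins here; real extraction methods vote or merge states instead.
transitions = {}
for s, traj in zip(strings, trajs):
    labels = km.predict(np.vstack(traj))
    for ch, q, q_next in zip(s, labels, labels[1:]):
        transitions[(int(q), ch)] = int(q_next)

for (q, ch), q_next in sorted(transitions.items()):
    print(f"state {q} --{ch}--> state {q_next}")
```

The survey's point is that this pipeline rests on the hidden state space being roughly partitionable into automaton states, an assumption that gets shakier for modern recurrent networks and transformers.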
Past Schedules
- Fall 2024 Schedule
- Spring 2024 Schedule
- Fall 2023 Schedule
- Spring 2023 Schedule
- Fall 2022 Schedule
- Spring 2022 Schedule
- Fall 2021 Schedule
- Spring 2021 Schedule
- Fall 2020 Schedule
- Spring 2020 Schedule
- Fall 2019 Schedule
- Spring 2019 Schedule
- Fall 2018 Schedule
- Summer 2018 Schedule
- Spring 2018 Schedule
- Fall 2017 Schedule
- Summer 2017 Schedule
- Spring 2017 Schedule
- Fall 2016 Schedule
- Spring 2016 Schedule
- Fall 2015 Schedule
- Spring 2015 Schedule
- Fall 2014 Schedule
- Spring 2014 Schedule
- Fall 2013 Schedule
- Summer 2013 Schedule
- Spring 2013 Schedule
- Fall 2012 Schedule
- Spring 2012 Schedule
- Fall 2011 Schedule
- Summer 2011 Schedule
- Spring 2011 Schedule
- Fall 2010 Schedule
- Summer 2010 Schedule
- Spring 2010 Schedule
- Fall 2009 Schedule
- Summer 2009 Schedule
- Spring 2009 Schedule
- Fall 2008 Schedule
- Summer 2008 Schedule