Spring 2022 Schedule

Location: Hybrid (starting Feb 23) - Fleming 279, and the Zoom link below

Time: Wednesdays at 10:30am, Mountain Time

Zoom link: https://cuboulder.zoom.us/j/97014876908

01.12.22 Planning, introductions, welcome!

CompSem meetings will be virtual until further notice (https://cuboulder.zoom.us/j/97014876908)

01.19.22 Kai Larsen, CU Boulder Leeds School of Business

Validity in Design Research

Research in design science has always recognized the importance of evaluating its knowledge outcomes, particularly of assessing the efficacy, utility, and attributes of the artifacts produced (e.g., A.I. systems, machine learning models, theories, frameworks). However, demonstrating the validity of design science research (DSR) is challenging and not well understood. This paper defines DSR validity and proposes a DSR Validity Framework. We evaluate the framework by assembling and analyzing an extensive data set of papers on research validity from various disciplines, including design science. We then analyze the use of validity concepts in DSR and validate the framework. The results demonstrate that the DSR Validity Framework can guide how validity can, and should, be addressed as an integral aspect of design science research. We further describe the steps for selecting appropriate validities for projects and formulate efficacy validity and characteristic validity claims suitable for inclusion in manuscripts.

Keywords: Design science research (DSR), research validity, validity framework, artifact, evaluation, efficacy validity, characteristic validity.

01.26.22 Elizabeth Spaulding, prelim

Prelim topic: Evaluation for Abstract Meaning Representations

Abstract Meaning Representation (AMR) is a semantic representation language that provides a way to represent the meaning of a sentence in the form of a graph. The task of AMR parsing—automatically extracting AMR graphs from natural language text—requires evaluation metrics in order to develop and compare parsers. My prelim is a review of AMR evaluation metrics and the strengths and weaknesses of each approach, as well as a discussion of gaps and unexplored questions in the current literature.
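As a concrete illustration of what such metrics measure: the standard Smatch metric scores a predicted graph against a gold graph by F1 over their relation triples. The sketch below is illustrative only; real Smatch must additionally search over variable alignments between the two graphs (e.g., by hill-climbing), whereas here the alignment is assumed fixed.

<syntaxhighlight lang="python">
# Illustrative sketch of Smatch-style AMR evaluation: score a predicted graph
# against a gold graph by F1 over their relation triples. Real Smatch must
# also search over variable alignments (e.g., by hill-climbing); here the
# alignment is assumed fixed, so this is a simplification, not the full metric.

def triple_f1(gold_triples, pred_triples):
    """Each argument is a set of (source, relation, target) tuples."""
    matched = len(gold_triples & pred_triples)
    precision = matched / len(pred_triples) if pred_triples else 0.0
    recall = matched / len(gold_triples) if gold_triples else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy graphs for "The boy wants to go."; the prediction misses (g, ARG0, b).
gold = {("w", "instance", "want-01"), ("g", "instance", "go-02"),
        ("b", "instance", "boy"), ("w", "ARG0", "b"),
        ("w", "ARG1", "g"), ("g", "ARG0", "b")}
pred = gold - {("g", "ARG0", "b")}
print(round(triple_f1(gold, pred), 2))  # 0.91
</syntaxhighlight>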

02.02.22 NO MEETING
02.09.22 SCiL live session!
02.16.22 NO MEETING
02.23.22 CompSem meetings go back to being hybrid! (Fleming 279 or https://cuboulder.zoom.us/j/97014876908)


Invited talk: Aniello De Santo, University of Utah

Bridging Typology and Learnability via Formal Language Theory

The complexity of linguistic patterns is the object of extensive debate in research programs focused on probing the inherent structure of human language abilities. But in what sense is a linguistic phenomenon more complex than another, and what can complexity tell us about the connection between linguistic typology and human cognition? In this talk, I give an overview of a line of research approaching these questions from the perspective of recent advances in formal language theory.

I will first broadly discuss how language theoretical characterizations allow us to focus on essential properties of linguistic patterns under study. I will emphasize how typological insights can help us refine existing mathematical characterizations, arguing for a two-way bridge between disciplines, and show how the theoretical predictions made by logic/algebraic formalization of typological generalizations can be used to test learning biases in humans (and machines).

In doing so, I aim to illustrate the relevance of mathematically grounded approaches to cognitive investigations into linguistic generalizations, and thus further fruitful cross-disciplinary collaborations.
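One concrete example of such a characterization (a toy sketch, not material from the talk): in the subregular hierarchy, a strictly 2-local (SL-2) pattern is fully specified by a finite set of forbidden bigrams over boundary-padded strings. Local phonotactic restrictions fit this class, while long-distance dependencies require provably richer classes, which gives precise content to claims that one pattern is more complex than another.

<syntaxhighlight lang="python">
# A strictly 2-local (SL-2) grammar is just a set of forbidden bigrams,
# checked over the string padded with boundary markers.

BOUNDARY = "#"

def sl2_accepts(string, forbidden_bigrams):
    """Accept iff no forbidden bigram occurs in #string#."""
    padded = BOUNDARY + string + BOUNDARY
    return not any(padded[i:i + 2] in forbidden_bigrams
                   for i in range(len(padded) - 1))

# Toy grammar over {a, b}: no doubled b, and no word-final b.
grammar = {"bb", "b" + BOUNDARY}
print(sl2_accepts("abab", grammar))  # False: ends in b
print(sl2_accepts("aba", grammar))   # True
print(sl2_accepts("abba", grammar))  # False: contains "bb"
</syntaxhighlight>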


Bio Sketch:

Aniello De Santo is an Assistant Professor in the Linguistics Department at the University of Utah.

Before joining Utah, he received a PhD in Linguistics from Stony Brook University. His research broadly lies at the intersection of computational, theoretical, and experimental linguistics. He is particularly interested in investigating how linguistic representations interact with general cognitive processes, with a focus on sentence processing and learnability. In his past work, he has mostly made use of symbolic approaches grounded in formal language theory and rich grammar formalisms (Minimalist Grammars, Tree Adjoining Grammars).

03.02.22 Kevin Cohen, Computational Bioscience Program, U. Colorado School of Medicine

Chalk talk: Studying the science in biomedical natural language processing

At this CompSem meeting, I will give a chalk talk on a grant proposal that I am preparing. "Chalk talks" are a kind of presentation that you will have to do when you hit the job market, and once you've found that wonderful job, they may be a regular part of your faculty responsibilities. I will begin with an introduction to the form and functions of this kind of talk, go over the review criteria for the kind of grant for which I am applying, and then give a chalk talk on my proposal. Please come ready to critique it harshly--my grandmother will tell me how great it is.

03.09.22 Ghazaleh Kazeminejad, proposal defense

NOTE: Special start time 10am

Topic: Neural-Symbolic NLP: exploiting computational lexical resources

Recent major advances in Natural Language Processing (NLP) have relied on a distributional approach, representing language numerically to enable complex mathematical operations and algorithms. These numeric representations have been based on the probabilistic distributions of linguistic units. The main recent breakthrough in NLP has been the result of feeding massive data to the machine and using neural network architectures, allowing the machine to learn a model that approximates a given language (grammar and lexicon). Following this paradigm shift, NLP researchers introduced transfer learning, enabling researchers with less powerful computational resources to use these pre-trained language models and transfer what the machine has learned to a new downstream NLP task. However, there are some NLP tasks, particularly in the realm of Natural Language Understanding (NLU), where surface-level representations and purely statistical models may benefit from symbolic knowledge and deeper-level representations. In this work, we explore contributions that symbolic computational lexical resources can still make to system performance on two different tasks. In particular, we propose to expose the model to symbolic knowledge, including external world knowledge (e.g., typical features of entities such as their typical functions or whereabouts) as well as linguistic knowledge (e.g., syntactic dependencies and semantic relationships among the constituents). One of our goals for this work is finding an appropriate numeric representation for this type of symbolic knowledge.

We propose to utilize the semantic predicates from VerbNet, semantic roles from VerbNet and PropBank, syntactic dependency labels, and world knowledge from ConceptNet as symbolic knowledge, going beyond the types of symbolic knowledge used so far in neural-symbolic approaches. We will expose a pre-trained language model to symbolic knowledge in two ways. First, we will embed these relations into a neural network architecture by modifying the input representations. Second, we will treat the knowledge as constraints on the output, penalizing the model at the end of each training step if the constraints are not met in the model predictions at that step.

To evaluate this approach, we propose to test it on two downstream NLP tasks: Event Extraction and Entity State Tracking. We propose a thorough investigation of the two tasks, particularly focusing on where they have benefitted from a neural-symbolic approach, and whether and how we could further improve the performance on these tasks by introducing both linguistic and world knowledge to the model.
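As a rough illustration of the second strategy above (a minimal sketch assuming PyTorch; the constraint mask input is hypothetical, standing in for constraints derived offline from VerbNet or ConceptNet), the loss below adds a penalty for probability mass that the model places on outputs the symbolic knowledge rules out:

<syntaxhighlight lang="python">
# Minimal sketch of symbolic knowledge as a soft output constraint: standard
# cross-entropy plus a penalty whenever the model puts probability on labels
# that the symbolic constraints rule out. (PyTorch assumed; the constraint
# mask is a hypothetical input derived offline from VerbNet/ConceptNet.)
import torch
import torch.nn.functional as F

def constrained_loss(logits, gold_labels, constraint_mask, penalty_weight=1.0):
    """
    logits:          (batch, num_labels) model scores
    gold_labels:     (batch,) gold label indices
    constraint_mask: (batch, num_labels), 1.0 where a label would violate a
                     symbolic constraint, 0.0 elsewhere
    """
    task_loss = F.cross_entropy(logits, gold_labels)
    probs = torch.softmax(logits, dim=-1)
    # Expected violation: probability assigned to constraint-violating labels.
    violation = (probs * constraint_mask).sum(dim=-1).mean()
    return task_loss + penalty_weight * violation
</syntaxhighlight>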

03.16.22 Chelsea Chandler, thesis defense

NOTE: Special start time 10am

Title: Methods for Multimodal Assessment of Cognitive and Mental State

Barriers to healthcare access such as time, affordability, and stigma are common in patients suffering from psychiatric and neurodegenerative disorders. Psychiatric patients often need to be monitored with frequent clinical interviews to avoid costly emergency care and preventable events. However, there simply are not enough clinicians to monitor these patients on a regular basis, and infrequent clinical evaluations may result in missing the subtle changes in patient state that occur over time. For those suffering from neurodegenerative disorders, the traditional approaches to detecting early onset lack the sensitivity needed to catch the subtle signs of cognitive decline. In order to move toward a more standardized, consistent, and reliable assessment and diagnosis process, machine learning and natural language processing methods can be harnessed to create accurate and accessible prognostic systems that could help to alleviate the burden of mental disorders in society. In this dissertation, a multidisciplinary set of methodologies for the automated assessment of psychiatric mental state was developed, designed to be sufficiently accurate and explainable to nurture trust from patients and clinicians, and also longitudinal and multimodal to model the dynamic and multifaceted nature of mental disorders. The viability of an automated assessment pipeline was examined: from the administration of neuropsychological tests and transcription of spoken responses, to the extraction of construct-relevant data features and prediction of psychiatric mental states. A similar approach was taken for the screening of the neurodegenerative disorders Alzheimer’s disease and Mild Cognitive Impairment. Implications for the real-world use of multimodal machine learning for mental disorders are discussed, providing a crucial step towards clinical translation and implementation.

Biographical Note

Chelsea Chandler is a joint PhD candidate in the Department of Computer Science and the Institute of Cognitive Science, advised by Peter Foltz and Jim Martin. Before her PhD, she received a BA in Mathematics and Computer Science from the University of Virginia, worked as a software engineer for Lockheed Martin, and received an MS in Computer Science from the University of Colorado Boulder.


03.23.22 ***Spring Break***
03.30.22 CLASIC Open House
04.06.22 Grad student appreciation lunch!!!
04.13.22 *** No meeting ***
04.20.22 Sagi Shaier, prelim

Prelim topic: Knowledge Incorporation In Dialogue Systems

Current dialogue systems are capable of communicating reasonably well using day-to-day language. However, the generated dialogue often appears uninformative. Additionally, depending on the use case of the system, one may want to make it more empathetic, enhance its medical or soccer knowledge, or even improve its common sense. One approach to tackling such challenges is to incorporate knowledge in various forms (e.g., emotions, facts, complete documents) into such systems. In this talk, I will present current and past research on such methods, and discuss them from the perspective of human memory.

04.27.22 / 05.04.22 Adam Wiemerslage, prelim

NOTE: Special date

Prelim topic: Why worry about words? A survey of computational morphology in downstream NLP tasks

Do language models acquire morphology? Do they need to? A great deal of recent progress in NLP has been driven by the development of large neural language models trained on increasing amounts of unlabeled text. Most research assumes that linguistic properties important for NLP tasks are learned implicitly during language model pretraining insofar as they are necessary for downstream tasks. However, a great deal of research has historically focused on encoding morphology into NLP models to solve problems related to data sparsity or to morphologically rich languages. This work surveys several NLP tasks with respect to systems that benefited from an explicit handling of morphology. We review typical problems that have motivated explicitly modeling morphology, and strategies for encoding morphological information into NLP models. We then reflect on the role of morphology in the most recent training paradigm for NLP systems, which relies on large pretrained language models.


07.13.22 Jan Hajič and Zdeňka Urešová, Charles University, Prague, Czech Republic

Natural Language Understanding: from syntax to knowledge representation

Abstract: We will present a project whose goal is to design a knowledge representation that can possibly be used for multiple (many) languages, with some novel features that generalize over existing formalisms such as the Prague Dependency Treebank semantic layer or AMR/UMR, but that still links the representation to the underlying language form (to enable supervised machine learning for various tasks, including text analysis and generation). The talk will be presented in three parts: first, we will introduce the basic ideas behind the knowledge representation (work in progress); then we will introduce the event-type ontology that is designed to serve for event/state grounding; and finally we will present the relations between the semantic features of the ontology and the morpho-syntactic resources (such as Czech and English valency lexicons) that provide the bridge between the semantics and knowledge representation and the concrete language(s).

Bios: Jan Hajič is a full professor of Computational Linguistics at the Institute of Formal and Applied Linguistics at the School of Computer Science, Charles University in Prague, and the deputy head of the Institute. His interests cover morphology and part-of-speech tagging of inflective languages, machine translation, deep language understanding, and the application of statistical methods in natural language processing in general. He also has extensive experience in building language resources for multiple languages with rich linguistic annotation, and is currently the director of a large, multi-institutional research infrastructure on language resources in the Czech Republic, LINDAT/CLARIAH-CZ, which aims at making datasets and corpora openly available for linguistic and Digital Humanities research. His work experience includes both industrial research (IBM Research, Yorktown Heights, NY, USA, 1991-1993) and academia (Charles University in Prague, Czech Republic, and Johns Hopkins University, Baltimore, MD, USA, 1999-2000); he currently holds an adjunct position at the University of Colorado Boulder. He has published more than 200 conference and journal papers, a book on computational morphology, and several other book chapters, encyclopedia and handbook entries. He regularly teaches basic and advanced courses on statistical NLP and has extensive experience giving tutorials and lectures at international training schools. He is or has been the PI or Co-PI of numerous international as well as large national grants and projects (including EU Framework Programme projects, such as H2020, and the NSF ITR program in the U.S.).

Zdeňka Urešová is a senior researcher at the Institute of Formal and Applied Linguistics at the School of Computer Science, Charles University in Prague. Her interests are syntactic and semantic annotation of texts (Czech, English) and computational lexicons for text annotation at both the syntactic and semantic levels. She received her Ph.D. in Computational Linguistics in 2012 and has since been the PI of, or a participant in, numerous national and international (EU) projects. She is the author of two books on valency and has published over 70 other papers, with over 2,000 citations.

07.20.22 Susan Brown

Title: Generative Lexicon-based semantic representations for VerbNet

Abstract: The need for deeper semantic processing of human language by our natural language processing systems is evidenced by their still-unreliable performance on inferencing tasks, even using deep learning techniques. These tasks require the detection of subtle interactions between participants in events, sequencing of subevents that are often not explicitly mentioned, and changes to various participants across an event. Human beings can perform this detection even when sparse lexical items are involved, suggesting that linguistic insights into these abilities could improve NLP performance. In this talk, I will describe new, hand-crafted semantic representations for the lexical resource VerbNet that draw heavily on the linguistic theories about subevent semantics in the Generative Lexicon (GL). In GL, event structure has been integrated with dynamic semantic models in order to represent the attribute modified in the course of the event (the location of the moving entity, the extent of a created or destroyed entity, etc.) as a sequence of states related to time points or intervals. We applied that model to VerbNet semantic representations, using a class's semantic roles and a set of predicates defined across classes as components in each subevent. I will describe in detail the structure of these representations, the underlying theory that guides them, and the definition and use of the predicates. I will also give a brief overview of the VerbNet Parser, which, for a given sentence, identifies the correct VerbNet class of the verb, labels the arguments with VerbNet thematic roles, and instantiates the semantic representation with entities from the sentence.
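To make the shape of these representations concrete, here is a schematic rendering in Python (a simplification, not VerbNet's exact notation) for a change-of-location class such as escape-51.1, where the modified attribute, the Theme's location, is tracked as a sequence of states across ordered subevents:

<syntaxhighlight lang="python">
# Schematic GL-style subevent representation (simplified, not VerbNet's exact
# notation): subevents are ordered, and each holds predicates, defined across
# classes, whose arguments are the class's thematic roles.
from dataclasses import dataclass

@dataclass
class Predicate:
    name: str    # e.g. "has_location", "motion"
    args: tuple  # thematic roles filling the predicate's slots

# e1 < e2 < e3: the Theme starts at the Initial_Location, moves during the
# event, and ends up at the Destination.
escape_51_1 = {
    "e1": [Predicate("has_location", ("Theme", "Initial_Location"))],
    "e2": [Predicate("motion", ("Theme",))],
    "e3": [Predicate("has_location", ("Theme", "Destination"))],
}
</syntaxhighlight>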

Followed by Ghazaleh Kazeminejad.

Title: VerbNet enrichment and applications

Abstract: Computational lexical resources are regularly used in various NLP applications, providing richer, more precise semantic representations. Among these resources, VerbNet considers both syntactic and semantic factors in defining a class of verbs, relying heavily on meaning-preserving diathesis alternations. This property of VerbNet can become a problem in some applications, since some useful fine-grained semantic distinctions may get overlooked: some VerbNet classes are so heterogeneous as to be difficult to characterize semantically, e.g., Other_cos-45.4. In this talk, we start by discussing a recent addition to the VerbNet class semantics, verb-specific semantic features, which provides significant enrichment to the information associated with verbs in each VerbNet class. Based on these features, we can group together verbs sharing semantic features within a class, forming more semantically coherent subclasses. In the second part of this talk, we discuss an application of VerbNet to reasoning about entity state changes: change of location, whether an entity gets created or destroyed in a process, and potentially the locus of each change, i.e., where the entity was when the change started and where it ended up. Our system, Lexis, uses the VerbNet semantic parse for each sentence and extracts changes in the state of entities using information from the VerbNet first-order-logic semantic representations, assisted by dependency parsing, coreference resolution, and hypernym relations defined in ConceptNet, a commonsense knowledge graph. We conclude with a brief error analysis and the results of an evaluation on the ProPara dataset.
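As an illustration of the kind of rule such a system can apply (a sketch with simplified predicate names; the creation and destruction cues are hypothetical), the function below reads a sentence's VerbNet-style predicates and emits the move/create/destroy state changes that ProPara annotates:

<syntaxhighlight lang="python">
# Sketch of Lexis-style state-change extraction from VerbNet-like predicates.
# Predicate names are simplified; the create/destroy cues are hypothetical.

def extract_state_changes(predicates):
    """predicates: list of (name, args) pairs from a semantic parse,
    e.g. ("has_location", ("water", "cloud"))."""
    changes = []
    locations = {}
    for name, args in predicates:
        if name == "has_location":
            entity, place = args
            before = locations.get(entity)
            if before is not None and before != place:
                changes.append(("move", entity, before, place))
            locations[entity] = place
        elif name == "exist":       # hypothetical creation cue
            changes.append(("create", args[0]))
        elif name == "destroyed":   # hypothetical destruction cue
            changes.append(("destroy", args[0]))
    return changes

parse = [("has_location", ("water", "cloud")),
         ("motion", ("water",)),
         ("has_location", ("water", "ground"))]
print(extract_state_changes(parse))  # [('move', 'water', 'cloud', 'ground')]
</syntaxhighlight>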

Recording: https://cuboulder.zoom.us/rec/share/WvxaguLU8pU_wx_GbeA662w7gOxemM6ZFO1iLHE-Zj1cfn84Ym1bDWMfm9GeUXyT.iFWecEwE7Fi4WSjc (Passcode: HoH&2aaU)

07.27.22 Steven Bethard

Title: Adapting machine learning models for clinical language processing

Abstract: One of the most common questions from users of clinical natural language processing software (e.g., Apache cTAKES) is “Why didn’t the software find this example of X in my data?” The problem is usually that the software’s statistical model was trained on data that does not have examples like the user’s. Fixing the problem would traditionally involve adding new examples to the training data and re-training the model. This is impractical for most users, who have limited machine-learning experience and who often cannot obtain the training data due to protected health information regulations. In this talk, I will discuss some of the challenges to building clinical language processing models that can adapt to a variety of domains, and some recent progress in machine learning techniques that can address these challenges. Examples will be drawn from my lab’s recent work on clinical negation detection, time expression recognition, and medical concept normalization.

Bio: Steven Bethard is an associate professor at the University of Arizona School of Information. His research interests include natural language processing and machine learning theory and applications, including modeling the language of time and timelines, normalizing text to medical and geospatial ontologies, and information extraction models for clinical applications. He received his Ph.D. in Computer Science and Cognitive Science from the University of Colorado Boulder in 2007. From 2008 to 2013, he was a postdoctoral researcher at Stanford University's Natural Language Processing group, Johns Hopkins University's Human Language Technology Center of Excellence, KU Leuven's Language Intelligence and Information Retrieval group in Belgium, and the University of Colorado's Center for Language and Education Research. From 2013 to 2016, he was an assistant professor in Computer and Information Science at the University of Alabama at Birmingham.


Past Schedules