Fall 2022 Schedule

Date       Title
9.1.21     Planning, introductions, welcome!

CompSem meetings will be hybrid this semester - in person at Fleming 279, and online here: https://cuboulder.zoom.us/j/97014876908

9.8.21     10am (NOTE: special start time)

Yoshinari Fujinuma thesis defense

Analysis and Applications of Cross-Lingual Models in Natural Language Processing

Human languages vary both typologically and in data availability. A typical machine learning-based approach to natural language processing (NLP) requires training data from the language of interest. However, because machine learning-based approaches rely heavily on the amount of data available in each language, the quality of trained models is poor for languages without large amounts of data. One way to overcome the lack of data in a given language is cross-lingual transfer learning from resource-rich languages to resource-scarce languages. Cross-lingual word embeddings and multilingual contextualized embeddings are commonly used to conduct cross-lingual transfer learning; however, the lack of resources still makes it challenging to evaluate or improve such models. This dissertation first proposes a graph-based method that overcomes the lack of evaluation data in low-resource languages by focusing on the structure of cross-lingual word embeddings. It then discusses approaches to improving cross-lingual transfer learning, both through retrofitting methods and by focusing on a specific task. Finally, it analyzes the effect of adding different languages when pretraining multilingual models. (A brief illustrative sketch of cross-lingual word embedding alignment appears after the schedule below.)

9.15.21    ACL best paper recaps
9.22.21    Introduction to AI Institute (short talks)
9.29.21
10.6.21    Invited talk: Artemis Panagopoulou, Metaphor and textual entailment
10.13.21   Invited guest: Arya McCarthy
10.20.21
10.27.21   Invited talk: Lisa Miracchi
11.3.21    EMNLP practice talks
11.10.21   EMNLP - no meeting
11.17.21   Elizabeth Spaulding prelim
11.24.21   Fall break - no meeting
12.1.21    Invited talk: Abe Handler
12.8.21    Abhidip Bhattacharyya proposal defense
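
The thesis abstract above mentions cross-lingual word embeddings as a common vehicle for cross-lingual transfer. As a minimal illustration, the Python sketch below shows one standard alignment technique (orthogonal Procrustes over a seed dictionary of translation pairs), not necessarily the method used in the dissertation; the random matrices are hypothetical placeholders standing in for real monolingual embeddings.

 import numpy as np

 def procrustes_align(X_src, Y_tgt):
     # Orthogonal Procrustes: find the orthogonal W minimizing ||X_src @ W - Y_tgt||_F.
     # Row i of X_src and row i of Y_tgt embed the two sides of one seed translation pair.
     U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
     return U @ Vt  # (d, d) orthogonal map from the source space into the target space

 # Toy data: a hidden rotation plus noise stands in for two real embedding spaces.
 rng = np.random.default_rng(0)
 n, d = 200, 50
 X = rng.normal(size=(n, d))                        # "source language" vectors
 true_W, _ = np.linalg.qr(rng.normal(size=(d, d)))  # unknown ground-truth map
 Y = X @ true_W + 0.01 * rng.normal(size=(n, d))    # "target language" vectors

 W = procrustes_align(X, Y)
 print("relative alignment error:", np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))

With no noise the recovered map matches the hidden rotation exactly; with real embeddings, the same closed-form step is the core of widely used alignment pipelines such as MUSE, applied to monolingual fastText vectors and a bilingual seed dictionary.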


Past Schedules