Meeting Schedule

From CompSemWiki
Revision as of 11:41, 30 August 2023

Location: Hybrid - Buchanan 430, or via the Zoom link below

Time: Wednesdays at 10:30am, Mountain Time

Zoom link: https://cuboulder.zoom.us/j/97014876908

Date Title
08/30/2023 Planning, introductions, welcome!
09/06/2023 ACL talk videos (Geoffrey Hinton)
09/13/2023 Ongoing projects talks
09/20/2023 TBD
09/27/2023 TBD
10/04/2023 TBD
10/11/2023 TBD
10/18/2023 TBD
10/25/2023 TBD
11/01/2023 TBD
11/08/2023 TBD
11/15/2023 TBD
11/22/2023 TBD
11/29/2023 TBD
12/05/2023 <strike>Elizabeth Spaulding: proposal defense</strike> Strand 1 iSAT Research - Understanding and Facilitating Collaborations - Jie Cao, Jon Cai, Ananya Ganesh, Martha Palmer
05/03/2023 Sameer Pradhan, GRAIL—Generalized Representation and Aggregation of Information Layers MOVED TO FALL 2023
05/17/2023 Skatje Myers, Practice Talk for Thesis Defense - Adapting Semantic Role Labeling to New Genres and Languages

Abstract: Semantic role labeling (SRL) is the identification of semantic predicates and their participants within a sentence, which is vital for deeper natural language understanding. Current SRL models require annotated text for training, but this is unavailable in many domains and languages. We explore two different ways of reducing the annotation required to produce effective SRL models: 1) using active learning to target only the most informative training instances and 2) leveraging parallel sentences to project SRL annotations from one language into the target language.

05/18/2023 11am-1pm MT: Skatje Myers, REAL Thesis Defense - Adapting Semantic Role Labeling to New Genres and Languages.

Abstract: same as the 05/17 practice talk above.


Past Schedules