Difference between revisions of "Meeting Schedule"

From CompSemWiki
Changes from the previous revision:

Previous revision (Spring 2023 entries):
* 09/13/2023: Ongoing projects talks (Rehan Ahmed)
* 12/05/2023: <strike>Elizabeth Spaulding: proposal defense</strike> Strand 1 iSAT Research - Understanding and Facilitating Collaborations - Jie Cao, Jon Cai, Ananya Ganesh, Martha Palmer
* 05/03/2023: <strike>Sameer Pradhan, GRAIL—Generalized Representation and Aggregation of Information Layers</strike> MOVED TO FALL 2023
* 05/17/2023: Skatje Myers, Practice Talk for Thesis Defense - Adapting Semantic Role Labeling to New Genres and Languages
* 05/18/2023: 11am-1pm MT: Skatje Myers, REAL Thesis Defense - Adapting Semantic Role Labeling to New Genres and Languages

Abstract (both Skatje Myers talks): Semantic role labeling (SRL) is the identification of semantic predicates and their participants within a sentence, which is vital for deeper natural language understanding. Current SRL models require annotated text for training, but this is unavailable in many domains and languages. We explore two different ways of reducing the annotation required to produce effective SRL models: 1) using active learning to target only the most informative training instances and 2) leveraging parallel sentences to project SRL annotations from one language into the target language.

This revision (Fall 2023 entries):
* 09/13/2023: Ongoing projects talks (Rehan Ahmed)
* 12/05/2023: TBD
* 12/12/2023: TBD
* 12/19/2023: TBD

Revision as of 11:42, 30 August 2023

Location: Hybrid - Buchanan 430, and the zoom link below

Time: Wednesdays at 10:30am, Mountain Time

Zoom link: https://cuboulder.zoom.us/j/97014876908

Date         Title
08/30/2023   Planning, introductions, welcome!
09/06/2023   ACL talk videos (Geoffrey Hinton)
09/13/2023   Ongoing projects talks (Rehan Ahmed)
09/20/2023   TBD
09/27/2023   TBD
10/04/2023   TBD
10/11/2023   TBD
10/18/2023   TBD
10/25/2023   TBD
11/01/2023   TBD
11/08/2023   TBD
11/15/2023   TBD
11/22/2023   TBD
11/29/2023   TBD
12/05/2023   TBD
12/12/2023   TBD
12/19/2023   TBD


Past Schedules