Fall 2021 Schedule
Date | Title |
---|---|
9.1.21 | Planning, introductions, welcome! |
9.8.21 | Yoshinari thesis defense, starting at 10am |
9.15.21 | ACL best paper recaps |
9.22.21 | Introduction to AI Institute (short talks) |
9.29.21 | |
10.6.21 | |
10.13.21 | |
10.20.21 | |
10.27.21 | Invited talk: Lisa Miracchi |
11.3.21 | EMNLP practice talks |
11.10.21 | EMNLP - no meeting |
11.17.21 | Elizabeth prelim |
11.24.21 | Fall break - no meeting |
12.1.21 | |
12.8.21 | Abhidip proposal defense |

Spring 2021 Schedule
Date | Title |
---|---|
3.25.20 | Happy New Year! |
1.27.21 | Planning, Zihan Wang, Extending Multilingual BERT to Low-Resource Languages |
2.3.21 | Cancelled because of DARPA AIDA PI Meeting conflict |
2.10.21 | Martha Palmer SCIL UMR practice talk |
2.17.21 | Cancelled because of Wellness Day |
2.24.21 | Sarah Moeller practice talk |
3.3.21 | Clayton Lewis: Garfinkel and NLP - a discussion of challenges for Natural Language Understanding |
3.10.21 | Antonis Anastasopoulos (guest of Alexis Palmer), Reducing Confusion in Active Learning: Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost. This is now an essential tool for building low-resource syntactic analyzers such as part-of-speech (POS) taggers. Existing AL heuristics are generally designed on the principle of selecting uncertain yet representative training instances, where annotating these instances may reduce a large number of errors. However, in an empirical study across six typologically diverse languages (German, Swedish, Galician, North Sami, Persian, and Ukrainian), we found the surprising result that even in an oracle scenario where we know the true uncertainty of predictions, these current heuristics are far from optimal. Based on this analysis, we pose the problem of AL as selecting instances which maximally reduce the confusion between particular pairs of output tags. Extensive experimentation on the aforementioned languages shows that our proposed AL strategy outperforms other AL strategies by a significant margin. We also present auxiliary results demonstrating the importance of proper calibration of models, which we ensure through cross-view training, and analysis demonstrating how our proposed strategy selects examples that more closely follow the oracle data distribution. (A toy sketch of the selection idea appears below the table.) |
3.17.21 | ACL paper discussion, led by Jon Cai & Sarah Moeller (conflict with DARPA KAIROS PI Meeting; no Martha, Susan, Piyush, Akanksha, or Ghazaleh) |
3.24.21 | 2nd Wellness Day, no group meeting |
3.31.21 | Capstone Projects |
4.7.21 | |
4.14.21 | Marjorie McShane, Toward Broad and Deep Language Processing for Intelligent Systems: The early vision of AI included the goal of endowing intelligent systems with human-like language processing capabilities. This proved harder than expected, leading the vast majority of natural language processing practitioners to pursue less ambitious, shorter-term goals. Whereas the utility of human-like language processing is unquestionable, its feasibility is quite justifiably questioned. In this talk, I will not only argue that some approximation of human-like language processing is possible, but also present a program of R&D that is working on making it a reality. This vision, as well as progress to date, is described in the book Linguistics for the Age of AI (MIT Press, 2021), whose digital version is open access through the MIT Press website. Recording: https://cuboulder.zoom.us/rec/share/ysCHI9lUmm2S22PcmIzfhZWuHHMxsDOf4jGm_uzQOZraE7IjtfR7e0QUzpv0sxvG.IGHOX29aaPQnRPpw Passcode: rFay+9W5 |
4.21.21 | Abhidip's presentation, Multimodal SRL: Scene understanding is a critical goal of Computer Vision, and object recognition is an important element of it. Attention-based encoder-decoder architectures have improved the performance of many vision-language models. However, the semantics of the images has largely been overlooked in designing these systems. As a result, vision-language systems recommend one fixed semantic interpretation for a particular image, so an image will always produce or retrieve a fixed description. This contrasts with the variety of expressions humans can generate when describing the same scene. To bridge this description gap, we use semantic role labels (SRL) as our semantic cues for both images and text. Semantic roles enable a richer representation of an image and the corresponding text in the shared space. With the help of SRL we are able to achieve better performance in cross-modal retrieval, and SRL also enables generating diverse descriptions for a given image. Recording: https://drive.google.com/drive/folders/1PdB4VyrAeTIS974Y2fHtWdm46qJrqjNd?usp=sharing |
4.28.21 | Rehan Ahmed proposal, Event Coreference in Text & Graphs. Recording: https://cuboulder.zoom.us/rec/share/9fjHRVmT5GJumderyKTsDgJSDT17XeEyg_d1TwWdE8WHzSf5hH0HobmJpcIJ15Bu.ROd_ShzZvb9iTIy6 Passcode: T+3f5mVd |
5.05.21 | Skatje Myers proposal |
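
Below, as referenced in the 3.10.21 entry, is a minimal Python sketch of the confusion-reduction selection idea described in the Anastasopoulos abstract. It is only an illustration under assumed inputs: the helper names, the toy tag data, and the simplified per-tag weighting (rather than the pairwise formulation in the abstract) are all hypothetical, and it is not the authors' implementation, which also relies on model calibration via cross-view training.

```python
# Hypothetical sketch only: rank unlabeled sentences for annotation by how much their
# predicted tags overlap with the tag pairs the current model confuses most often,
# rather than by raw prediction uncertainty. Names and data are illustrative assumptions.
from collections import Counter
from typing import List, Tuple

def confusion_counts(dev_gold: List[str], dev_pred: List[str]) -> Counter:
    """Count (gold, predicted) tag pairs that disagree on a held-out dev set."""
    return Counter((g, p) for g, p in zip(dev_gold, dev_pred) if g != p)

def score_sentence(pred_tags: List[str], confusions: Counter) -> float:
    """Score a candidate sentence by how much its predicted tags touch confusable pairs."""
    # Weight each tag by the total confusion mass it participates in (a simplification
    # of reasoning over specific tag pairs).
    tag_weight: Counter = Counter()
    for (gold, pred), count in confusions.items():
        tag_weight[gold] += count
        tag_weight[pred] += count
    return sum(tag_weight[t] for t in pred_tags) / max(len(pred_tags), 1)

def select_batch(candidates: List[Tuple[str, List[str]]],
                 confusions: Counter,
                 k: int = 10) -> List[str]:
    """Return the k unlabeled sentences whose predictions touch the most confused tags."""
    ranked = sorted(candidates,
                    key=lambda item: score_sentence(item[1], confusions),
                    reverse=True)
    return [sentence for sentence, _ in ranked[:k]]

if __name__ == "__main__":
    # Toy example: the dev set shows the model confusing NOUN and VERB.
    dev_gold = ["NOUN", "VERB", "ADJ", "NOUN"]
    dev_pred = ["NOUN", "NOUN", "ADJ", "VERB"]
    confusions = confusion_counts(dev_gold, dev_pred)

    # Two unlabeled candidates with their current predicted tags; the one whose
    # predictions involve the confused NOUN/VERB tags is selected for annotation.
    candidates = [("a quick test", ["DET", "ADJ", "NOUN"]),
                  ("they saw runs", ["PRON", "VERB", "NOUN"])]
    print(select_batch(candidates, confusions, k=1))  # -> ['they saw runs']
```
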
Past Schedules
- Spring 2021 Schedule
- Fall 2020 Schedule
- Spring 2020 Schedule
- Fall 2019 Schedule
- Spring 2019 Schedule
- Fall 2018 Schedule
- Summer 2018 Schedule
- Spring 2018 Schedule
- Fall 2017 Schedule
- Summer 2017 Schedule
- Spring 2017 Schedule
- Fall 2016 Schedule
- Spring 2016 Schedule
- Fall 2015 Schedule
- Spring 2015 Schedule
- Fall 2014 Schedule
- Spring 2014 Schedule
- Fall 2013 Schedule
- Summer 2013 Schedule
- Spring 2013 Schedule
- Fall 2012 Schedule
- Spring 2012 Schedule
- Fall 2011 Schedule
- Summer 2011 Schedule
- Spring 2011 Schedule
- Fall 2010 Schedule
- Summer 2010 Schedule
- Spring 2010 Schedule
- Fall 2009 Schedule
- Summer 2009 Schedule
- Spring 2009 Schedule
- Fall 2008 Schedule
- Summer 2008 Schedule