Fall 2020 Schedule

From CompSemWiki
Revision as of 10:48, 23 September 2020 by CompSemUser (talk | contribs)
Date Title
9.2.20 Planning
9.9.20 More planning
9.16.20 Vivek Srikumar - Title: Fads, Fallacies and Fantasies in the Name of Machine Learning

Abstract: The pervasiveness of machine learning, and artificial intelligence powered by it, is clear from even a cursory overview of the last several years of academic literature and mainstream technology reporting. The goal of this talk is to provoke thought and discussions about the future of the field. To this end, I will talk about how, as a field, applied machine learning may be starting to bind itself into an intellectual monoculture. In particular, I will describe specific blinders that we may find hard to shake off: (a) the obsession with ranking and leaderboarding, (b) the assumption that purely data-driven computing is always the right answer, and (c) the excessive focus on clean toy problems in lieu of working with real data. Along the way, we will see several examples of questions that we may be able to think about if we cast aside these blinders.

Bio: Vivek Srikumar is an associate professor in the School of Computing at the University of Utah. His research lies in the areas of natural language processing and machine learning, and has primarily been driven by questions arising from the need to reason about textual data with limited explicit supervision and to scale NLP to large problems. His work has been published in various AI, NLP and machine learning venues and has been recognized by paper awards from EMNLP and CoNLL. His work has been supported by awards from NSF, BSF and NIH, and also by several companies. He obtained his Ph.D. from the University of Illinois at Urbana-Champaign in 2013 and was a post-doctoral scholar at Stanford University.

Recording of Vivek's presentation: https://drive.google.com/file/d/1KKbU46LJbCFIgSF-CcpdgHMq4nZmyNt2/view?usp=sharing

9.23.20 2 papers:

Paper 1 - Jonas Pfeiffer (grad student, TU Darmstadt), MAD-X: An Adapter-based Framework for Multi-task Cross-lingual Transfer, Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder, EMNLP 2020. https://arxiv.org/pdf/2005.00052.pdf

Abstract: The main goal behind state-of-the-art pretrained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer. However, due to limited model capacity, their transfer performance is the weakest exactly on such low-resource languages and languages unseen during pretraining. We propose MAD-X, an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations. In addition, we introduce a novel invertible adapter architecture and a strong baseline method for adapting a pretrained multilingual model to a new language. MAD-X outperforms the state of the art in cross-lingual transfer across a representative set of typologically diverse languages on named entity recognition and achieves competitive results on question answering.


Paper 2 - Extending Multilingual BERT to Low-Resource Languages https://arxiv.org/abs/2004.13640

9.30.20
10.7.20 Peter Foltz "NLP for Team Communication Analysis"
10.14.20 2 papers:

Paper 1 - Tao Li (Utah CS PhD student) - Structured Tuning for Semantic Role Labeling, ACL 2020. Authors: Tao Li, Parth Anand Jawale, Martha Palmer, Vivek Srikumar. https://www.aclweb.org/anthology/2020.acl-main.744.pdf

Abstract: Recent neural network-driven semantic role labeling (SRL) systems have shown impressive improvements in F1 scores. These improvements are due to expressive input representations, which, at least at the surface, are orthogonal to knowledge-rich constrained decoding mechanisms that helped linear SRL models. Introducing the benefits of structure to inform neural models presents a methodological challenge. In this paper, we present a structured tuning framework to improve models using softened constraints only at training time. Our framework leverages the expressiveness of neural networks and provides supervision with structured loss components. We start with a strong baseline (RoBERTa) to validate the impact of our approach, and show that our framework outperforms the baseline by learning to comply with declarative constraints. Additionally, our experiments with smaller training sizes show that we can achieve consistent improvements under low-resource scenarios.


Paper 2 - Stephane Aroca-Ouellette's EMNLP 2020 practice talk on exploring auxiliary tasks for BERT

10.21.20 Two of Brian Keegan's PhD students: Arcadia Zhang and Jordan Wirfs-Brock
10.28.20 Chelsea Proposal
11.4.20 Skatje Proposal?
11.11.20 Rehan Proposal
11.18.20 NAACL submission workshop
11.25.20 Fall Break
12.2.20 Abhidip's proposal (tentative)
12.9.20 Vivian's proposal (tentative)

Past Schedules