Spring 2019 Schedule
2019.01.16 | |
2019.01.23 | Welcome Meeting, Plan Spring Schedule, Viv's practice talk |
2019.01.30 | Visiting Speaker: Ryan Cotterell, [1] (Alvin Grissom II is giving a talk on 1/29 at 3:30)
Title: The Past-Tense Debate on Steroids: Results from the CoNLL-SIGMORPHON Shared Task 2018 Abstract: In 2018, SIGMORPHON and CoNLL hosted a shared task on universal morphological inflection. The shared task featured over 100 distinct languages, whose morphology participants were asked to model. A word's form reflects syntactic and semantic categories that are expressed by the word through a process termed morphology. For example, each English count noun has both singular and plural forms (robot/robots, process/processes). These are known as the inflected forms of the noun. Some languages display little inflection, while others possess a proliferation of forms. A Polish verb can have nearly 100 inflected forms, and an Archi verb has thousands (Kibrik 1998). Natural language processing systems must be able to analyze and generate these inflected forms. Fortunately, inflected forms tend to be systematically related to one another. This is why English speakers can usually predict the singular form from the plural and vice versa, even for words they have never seen before: Given a novel noun wug, an English speaker knows that the plural is wugs. This talk focuses on the results of this shared task and how it relates to the past-tense debate of the 1980s, which focused on a similar task, but only on one lemma--inflection pairing: English lemma --> English past tense. |
2019.02.06 | Cancelled; attend Yonatan Bisk's Feb 5 talk and his teaching talk at noon on 2/6 instead |
2019.02.13 | Cancelled; attend Emma's teaching/research talk (prospective visiting days 02/14-02/15, Emma Strubell talk on 02/12) |
2019.02.20 | ACL paper clinic |
2019.02.27 | Xiaolei |
2019.03.06 | CANCELLED for Friday CLASIC Advisory Board meeting & student presentations 03/08 |
2019.03.13 | Kathy McKeown (talk on 03/12) - CANCELLED BECAUSE OF WEATHER |
2019.03.20 | Jon (Transformer/BERT) - CANCELLED (Fei Xia from UW at ICS colloquium on Friday, 03/22) |
2019.03.27 | Spring break |
2019.04.02 | 3:30 CS Colloq, DLC 170, Pat Verga, Neural Knowledge Representation and Reasoning - MUST SEE! |
2019.04.03 | Vivian |
2019.04.04 | 1:30 Dissertation Defense, FLEM 279, Daniel Peterson, Bayesian Approaches to Computational Semantics. |
2019.04.10 | Rehan |
2019.04.17 | Shantanu |
2019.04.24 | Abhidip Bhattacharyya Prelim - Multimodal Learning in the Space of Text and Image, Doc: [2] Slides: [3] |
2019.04.25 | Peter Norvig - As We May Program, 3:30pm ECCR 265
Abstract: Innovations in machine learning are changing our perception of what is possible to do with a computer. But how will machine learning change the way we program, the tools we use, and the mix of tasks done by expert programmers, novice programmers, and non-programmers? This talk examines some possible futures. BIO: Peter Norvig is a Director of Research at Google Inc. Previously he was head of Google's core search algorithms group, and of NASA Ames's Computational Sciences Division, making him NASA's senior computer scientist. He received the NASA Exceptional Achievement Award in 2001. He has taught at the University of Southern California and the University of California at Berkeley, from which he received a Ph.D. in 1986 and the distinguished alumni award in 2006. He was co-teacher of an Artificial Intelligence class that signed up 160,000 students, helping to kick off the current round of massive open online classes. His publications include the books Artificial Intelligence: A Modern Approach (the leading textbook in the field), Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX. He is also the author of the Gettysburg Powerpoint Presentation and the world's longest palindromic sentence. He is a fellow of the AAAI, ACM, California Academy of Sciences, and American Academy of Arts & Sciences. |
2019.05.01 | Xiaowen Hu and Diego García - Measuring Sentiment in Financial Text
Abstract: At the heart of empirical work on financial text is the issue of quantifying its content, from market-wide sentiment in news media (Tetlock, 2007) to firm-specific signals in annual statements (Loughran and McDonald, 2013). Although sentiment is often context specific, the extant literature has been dominated by bag-of-words approaches, in which pre-defined word lists are used for sentiment analysis. We use a standard machine learning NLP approach (Taddy, 2013) to measure sentiment in financial text, in the news media as well as in regulatory filings (10-Ks) and managerial discussions (earnings calls). We relate the positive and negative n-grams from the machine learning algorithm to new bags-of-words generated using an unsupervised approach that picks n-grams that are related to lagged returns (DJIA, firm specific). Our new sentiment indexes strongly dominate standard bag-of-words approaches. |
2019.05.08 | Final exams |
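
For anyone skimming the May 1 abstract above: the pre-defined word-list ("bag-of-words") baseline it contrasts with machine-learning approaches can be illustrated with a minimal sketch. The word lists, function name, and example sentence below are invented for illustration only; they are not the speakers' method or the actual finance dictionaries (e.g., Loughran-McDonald) used in that literature.

```python
# Toy word-list ("bag-of-words") sentiment scorer, for illustration only.
# POSITIVE/NEGATIVE are made-up stand-ins for the pre-defined word lists
# the May 1 abstract refers to.

POSITIVE = {"gain", "growth", "strong", "improve", "beat"}
NEGATIVE = {"loss", "decline", "weak", "impair", "miss"}

def word_list_sentiment(text: str) -> float:
    """Return (positive - negative) word counts, normalized by document length."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(tok in POSITIVE for tok in tokens)
    neg = sum(tok in NEGATIVE for tok in tokens)
    return (pos - neg) / len(tokens)

# Example: a short earnings-style sentence with one positive and one negative hit.
print(word_list_sentiment("strong growth this quarter despite a small loss"))
```
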
Past Schedules
- Fall 2018 Schedule
- Summer 2018 Schedule
- Spring 2018 Schedule
- Fall 2017 Schedule
- Summer 2017 Schedule
- Spring 2017 Schedule
- Fall 2016 Schedule
- Spring 2016 Schedule
- Fall 2015 Schedule
- Spring 2015 Schedule
- Fall 2014 Schedule
- Spring 2014 Schedule
- Fall 2013 Schedule
- Summer 2013 Schedule
- Spring 2013 Schedule
- Fall 2012 Schedule
- Spring 2012 Schedule
- Fall 2011 Schedule
- Summer 2011 Schedule
- Spring 2011 Schedule
- Fall 2010 Schedule
- Summer 2010 Schedule
- Spring 2010 Schedule
- Fall 2009 Schedule
- Summer 2009 Schedule
- Spring 2009 Schedule
- Fall 2008 Schedule
- Summer 2008 Schedule