Fall 2022 Schedule
Location: Virtual until further notice (Usual location: Fleming 279)
Time: 10:30am, Mountain Time
Zoom link: https://cuboulder.zoom.us/j/97014876908
Date | Title |
---|---|
01.12.22 | Planning, introductions, welcome! CompSem meetings will be virtual until further notice (https://cuboulder.zoom.us/j/97014876908) |
01.19.22 | Kai Larsen, CU Boulder Leeds School of Business. Validity in Design Research: Research in design science has always recognized the importance of evaluating its knowledge outcomes, particularly of assessing the efficacy, utility, and attributes of the artifacts produced (e.g., A.I. systems, machine learning models, theories, frameworks). However, demonstrating the validity of design science research (DSR) is challenging and not well understood. This paper defines DSR validity and proposes a DSR Validity Framework. We evaluate the framework by assembling and analyzing an extensive data set of research validities papers from various disciplines, including design science. We then analyze the use of validity concepts in DSR and validate the framework. The results demonstrate that the DSR Validity Framework may be used to guide how validity can, and should, be used as an integral aspect of design science research. We further describe the steps for selecting appropriate validities for projects and formulate efficacy validity and characteristic validity claims suitable for inclusion in manuscripts. Keywords: design science research (DSR), research validity, validity framework, artifact, evaluation, efficacy validity, characteristic validity. |
01.26.22 | Elizabeth Spaulding, prelim. Prelim topic: Evaluation for Abstract Meaning Representations. Abstract Meaning Representation (AMR) is a semantic representation language that provides a way to represent the meaning of a sentence in the form of a graph. The task of AMR parsing (automatically extracting AMR graphs from natural language text) necessitates evaluation metrics to develop neural parsers. My prelim is a review of AMR evaluation metrics and the strengths and weaknesses of each approach, as well as a discussion of gaps and unexplored questions in the current literature. |
02.02.22 | |
02.09.22 | SCiL live session! |
02.16.22 | |
02.23.22 | Invited talk: Aniello de Santo (TBC) |
03.02.22 | Ghazaleh Kazeminejad, proposal defense |
03.09.22 | Kevin Cohen |
03.16.22 | Chelsea Chandler, defense (TBC) |
03.23.22 | ***Spring Break*** |
03.30.22 | CLASIC Open House |
04.06.22 | Abteen Ebrahimi, prelim (TBC) |
04.13.22 | Ananya Ganesh, prelim (TBC) |
04.20.22 | Adam Wiemerslage, prelim (TBC) |
04.27.22 | Sagi Shaier, prelim (TBC) |
Past Schedules
- Fall 2021 Schedule
- Spring 2021 Schedule
- Fall 2020 Schedule
- Spring 2020 Schedule
- Fall 2019 Schedule
- Spring 2019 Schedule
- Fall 2018 Schedule
- Summer 2018 Schedule
- Spring 2018 Schedule
- Fall 2017 Schedule
- Summer 2017 Schedule
- Spring 2017 Schedule
- Fall 2016 Schedule
- Spring 2016 Schedule
- Fall 2015 Schedule
- Spring 2015 Schedule
- Fall 2014 Schedule
- Spring 2014 Schedule
- Fall 2013 Schedule
- Summer 2013 Schedule
- Spring 2013 Schedule
- Fall 2012 Schedule
- Spring 2012 Schedule
- Fall 2011 Schedule
- Summer 2011 Schedule
- Spring 2011 Schedule
- Fall 2010 Schedule
- Summer 2010 Schedule
- Spring 2010 Schedule
- Fall 2009 Schedule
- Summer 2009 Schedule
- Spring 2009 Schedule
- Fall 2008 Schedule
- Summer 2008 Schedule