582 views · Posted 2021-03-03 18:01:58
Event type: Lecture
Date: 2021.03.10
Format: Online
Speaker: Shravan Vasishth
Host: CUHK Language Processing Laboratory
Capacity: 300
Virtual Psycholinguistics Forum | 10 March: Lecture by Prof. Shravan Vasishth
Speaker: Shravan Vasishth
Title: Case and Agreement Attraction in Armenian: Experimental and Computational Investigations
Time: 15:00 – 16:30, Wed, 10 March 2021
(Beijing, Hong Kong time)
Venue: https://cuhk.zoom.us/j/779556638
(alternative link: https://cuhk.zoom.cn/j/779556638)
About the Speaker
Shravan Vasishth received a BA (Honours) in Japanese from Jawaharlal Nehru University (JNU), New Delhi (1989), an MA in Linguistics (1992) also from JNU, an MS in Computer and Information Science and a PhD in Linguistics from Ohio State University (2002), and an MSc in Statistics from Sheffield, UK (2015). After a two-year postdoc in Saarland University’s Computational Linguistics department, he joined the faculty of the Department of Linguistics at the University of Potsdam, Germany, in 2004, where he has been a full professor since 2008. He has also worked as a patent translator in a law firm in Osaka, translating patents from Japanese to English. Fun fact: Shravan owns a real Japanese sword which he knows how to use; he holds a second-degree black belt in the Japanese martial art Iaido (居合道, swordsmanship). While living in Japan, he won both a prefectural (Hyogo) and a national Iaido championship. He also practises Tai Chi and Qi Gong in Berlin, with Klaus-Dieter Zarn.
Vasishth’s research group develops computational models of human sentence comprehension. He is co-developer of the leading model of retrieval processes in sentence processing (Lewis and Vasishth, Cognitive Science, 2005; Vasishth et al., TiCS, 2019). A major research interest is developing computational models of impairment in aphasia (Patil et al., 2016; Maetzig et al., 2018; Lisson et al., 2020). He is also interested in statistical theory and practice, particularly the application of Bayesian methods to data analysis and computational modeling, and in open science, transparency in research, and good scientific practice. He has co-authored many tutorial articles on applying modern statistical methods in psycholinguistics (e.g., Schad, Betancourt, and Vasishth, Psychological Methods, 2020), as well as methodological articles illustrating the consequences of basing inferences on underpowered studies in psycholinguistics (Vasishth et al., JML, 2018). Shravan runs an annual week-long summer school on statistical methods for linguistics and psychology that covers both frequentist and Bayesian methods and attracts approximately 350 applications every year (https://vasishth.github.io/smlp/).
Case and Agreement Attraction in Armenian: Experimental and Computational Investigations
Shravan Vasishth
Department of Linguistics, University of Potsdam, Germany
Email: vasishth@uni-potsdam.de URL: vasishth.github.io
In reading studies, “agreement attraction” often refers to the finding that the ungrammatical auxiliary verb “are” is read faster in (1) than in (2):
(1) The key to the cabinets are on the table.
(2) The key to the cabinet are on the table.
What is surprising about this finding is that both sentences are equally ungrammatical. Why would (1) be easier to process than (2)? There are various theoretical explanations for this effect, ranging from a presumed misretrieval of the non-subject noun “cabinets” (the retrieval account) to feature overwriting of the subject noun’s number feature (hereafter, encoding accounts). In this talk, I discuss two issues related to agreement attraction. First, can distinctive case marking on the nouns attenuate interference effects? Previous studies have suggested that, in production, distinctive case marking on noun phrases reduces agreement attraction; theory predicts that this should happen in sentence comprehension as well. To answer this question, we conducted three attraction experiments in Armenian, a language with a rich and productive case system. The experiments showed clear attraction effects, and they also revealed an overall effect of case marking: participants showed faster responses and reading times when the nouns in the sentence had different cases. However, we found little indication that distinctive case marking modulated attraction effects. We present a theoretical proposal for how case and number information may be used differentially during agreement licensing in comprehension. The second issue we consider is which theoretical account explains agreement attraction data better, encoding or retrieval. We carried out a self-paced reading study that elicited information about which noun might have been retrieved; we then computationally implemented several competing models of agreement attraction and show, through a quantitative evaluation of the models, that encoding accounts explain the data better than retrieval theories do, at least for the Armenian data considered here.
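The retrieval account mentioned above can be illustrated with a toy cue-based retrieval computation in the spirit of Lewis and Vasishth (2005). This is only a sketch: the cue set, the match/mismatch weights, and the softmax retrieval rule below are illustrative assumptions, not the authors’ actual model implementation.

```python
# Toy sketch of cue-based retrieval at the verb "are" in
# "The key to the cabinets are ...". The verb retrieves a subject
# using the cues [+subject, +plural]; the attractor "cabinets"
# matches the plural cue and can therefore be misretrieved.
# All numerical values are illustrative assumptions.
import math

CUES = ("subject", "plural")
W = 1.0          # weight per retrieval cue
MATCH = 1.5      # activation boost for a matching cue
MISMATCH = -1.5  # penalty for a mismatching cue

def activation(features):
    """Summed cue-match activation for one candidate noun."""
    return sum(W * (MATCH if features[c] else MISMATCH) for c in CUES)

def retrieval_probs(candidates, temperature=1.0):
    """Softmax over activations: probability that each noun is retrieved."""
    acts = {name: activation(f) for name, f in candidates.items()}
    z = sum(math.exp(a / temperature) for a in acts.values())
    return {name: math.exp(a / temperature) / z for name, a in acts.items()}

# Sentence (1): singular subject "key", plural attractor "cabinets".
probs = retrieval_probs({
    "key":      {"subject": True,  "plural": False},
    "cabinets": {"subject": False, "plural": True},
})
print(probs)  # each noun matches exactly one cue, so the two tie at 0.5
```

Because the true subject and the attractor each match exactly one retrieval cue, the toy model retrieves the plural attractor on a substantial share of trials, which is the retrieval account’s explanation for the occasional illusion of grammaticality and the faster reading times at “are” in (1).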
Background reading:
Serine Avetisyan, Sol Lago, and Shravan Vasishth. Does case marking affect agreement attraction in comprehension? Journal of Memory and Language, 112, 2020. doi: 10.1016/j.jml.2020.104087
Dario Paape, Serine Avetisyan, Sol Lago, and Shravan Vasishth. Modeling misretrieval and feature substitution in agreement attraction: A computational evaluation. 2020. Preprint: https://psyarxiv.com/957e3/
Virtual Psycholinguistics Forum: https://cuhklpl.github.io/forum.html