Kent Academic Repository

Task-constraints (but not semantic association) facilitate perspective use during discourse interpretation.

Ferguson, Heather J., Ahmad, Jumana, Ulrich, Philip, Bindemann, Markus, Apperly, Ian (2012) Task-constraints (but not semantic association) facilitate perspective use during discourse interpretation. In: 25th Annual City University of New York Conference on Human Sentence Processing, 14th–16th March 2012, New York, NY. (KAR id:51290)

The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided.
Official URL:
http://cuny2012.commons.gc.cuny.edu/files/2012/03/...

Abstract

Interpreting descriptions of other people’s actions relies on an understanding of their current mental state. Psycholinguistic research on this topic has focused on the comprehension of referentially ambiguous expressions (e.g. “the cup”), and has revealed a different time-course of anticipation across tasks that require participants to engage in an interactive question–answer discourse [4], follow a speaker’s instructions [1,2,3], or attend to a passive narrative scenario [5,6]. Indeed, it has been suggested that understanding privileged information is subject to an egocentric bias, perhaps caused by low-level associations between spoken descriptions and visually available referents [7].

We report a visual-world study in which two groups of participants watched short videos. Passive observers (N=40) were simply told to ‘look and listen’, while active participants (N=40) were instructed to ‘click on the container that will complete the sentence’. Experimental videos depicted transfer events, which began with an actress (‘Sarah’) moving an object (e.g. a chocolate) into one of three boxes while another actress (‘Jane’) looked on. In the second part of the video, Sarah moved the object into one of the other boxes, either while Jane was watching or after she had left the scene. Additionally, on half the trials the first container used in the transfer event (i.e. the ‘belief’ box) predictably matched properties of the object (e.g. a chocolate box), thus providing an additional low-level cue to facilitate the belief inference in some conditions. The experiment therefore crossed task (passive vs. active), belief (true vs. false) and predictability of the belief box (predictable vs. unpredictable). We tracked participants’ eye-movements around the final visual scene, time-locked to related auditory descriptions (see Example below).
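To make the design concrete, the following minimal Python sketch (an illustration, not the authors’ materials or code) enumerates the eight cells of the resulting 2 × 2 × 2 crossing; factor names follow the abstract:

```python
from itertools import product

# Hypothetical enumeration of the 2 x 2 x 2 design described above;
# factor levels follow the abstract, nothing here is the authors' code.
tasks = ["passive", "active"]                       # between-subjects manipulation
beliefs = ["true", "false"]                         # Jane sees vs. misses the second transfer
predictability = ["predictable", "unpredictable"]   # belief box does / does not match the object

for i, (task, belief, pred) in enumerate(product(tasks, beliefs, predictability), start=1):
    print(f"Condition {i}: task={task}, belief={belief}, belief box={pred}")
```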

Eye-tracking analyses on word-onset-locked time-windows revealed significantly different patterns of anticipation in true vs. false belief conditions throughout the auditory input (Fs>35.7, ps<.001). This reflected a general bias to predict the reality box when Jane witnessed the second transfer event (ts>6.2), and a bias to the belief box (from [objects] onwards) when Jane was ignorant of the second transfer event (ts>2.2). Prior to [objects], participants also showed a predictability bias (Fs>3.75, ps<.05), reflecting a stronger bias to the belief box when low-level cues predicted this container. Moreover, task emerged as a main effect during “[object] in the container” (Fs>3.91, ps<.05) and interacted with belief throughout (Fs>4.39, ps<.04). These effects reflect a weaker bias to the belief box in the passive task than in the active task. While active participants correctly anticipated reference to the belief box from “look” onwards (ts>2.4), passive observers did not significantly predict the belief box until location information became auditorily available (ts prior to location <1.3). Both groups showed appropriate reality biases on true-belief trials (ts>2.3).
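To illustrate the logic of a word-onset-locked time-window analysis, here is a minimal, hypothetical sketch: gaze samples are binned into windows defined by word onsets, proportions of looks to each box are computed per participant and condition, and the belief-box bias on false-belief trials is tested against looks to the reality box. All column names, window boundaries, and the file name are assumptions for illustration; the abstract does not describe the authors’ actual analysis pipeline beyond the statistics reported.

```python
import pandas as pd
from scipy import stats

# Hypothetical gaze data: one row per eye-tracking sample. Columns
# (all assumed, not from the paper):
#   participant - participant ID
#   condition   - "true" or "false" belief trial
#   time_ms     - sample time relative to sentence onset
#   roi         - region fixated: "belief_box", "reality_box", or "other"
samples = pd.read_csv("gaze_samples.csv")  # illustrative file name

# Word-onset-locked windows (ms); these boundaries are invented for
# illustration and do not correspond to the study's actual word onsets.
windows = {"look": (0, 600), "objects": (600, 1400), "location": (1400, 2200)}

def window_proportions(df, start, end):
    """Proportion of samples on each ROI per participant x condition in one window."""
    in_win = df[(df["time_ms"] >= start) & (df["time_ms"] < end)]
    counts = in_win.groupby(["participant", "condition", "roi"]).size()
    return counts / counts.groupby(["participant", "condition"]).transform("sum")

for word, (start, end) in windows.items():
    props = window_proportions(samples, start, end).unstack("roi").fillna(0)
    fb = props.xs("false", level="condition")  # false-belief trials only
    # Paired test of the belief-box bias against looks to the reality box:
    t, p = stats.ttest_rel(fb["belief_box"], fb["reality_box"])
    print(f"{word}: t = {t:.2f}, p = {p:.3f}")
```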

These results provide further online evidence that comprehenders are spontaneously sensitive to others’ perspectives. However, they also demonstrate that active engagement in a task leads to earlier and stronger anticipation of perspective-appropriate discourse interpretations than passive observation, which leaves comprehenders susceptible to egocentric influences. Finally, this study shows that low-level language cues guide early visual biases to objects, but are not sufficient to overcome a ‘pull-of-reality’. We consider the role of task-constraints, relative to previous studies of perspective use in language comprehension.

Example: Jane will look for the [objects] in the container on the [left/middle/right].

[1] Hanna et al. (2003). The effects of common ground and perspective on domains of referential interpretation. Journal of Memory and Language, 49, 43–61.

[2] Keysar et al. (2000). Taking perspective in conversation: The role of mutual knowledge in comprehension. Psychological Science, 11, 32–37.

[3] Keysar et al. (2003). Limits on theory of mind use in adults. Cognition, 89, 25–41.

[4] Brown-Schmidt et al. (2008). Addressees distinguish shared from private information when interpreting questions during interactive conversation. Cognition, 107, 1122–1134.

[5] Ferguson & Breheny (in press). Listeners’ eyes reveal spontaneous sensitivity to others’ perspectives. Journal of Experimental Social Psychology.

[6] Ferguson et al. (2010). Expectations in counterfactual and theory of mind reasoning. Language and Cognitive Processes, 25, 297–346.

[7] Barr, D.J. (2008). Pragmatic expectations and linguistic evidence: Listeners anticipate but do not integrate common ground. Cognition, 109, 18–40.

Item Type: Conference or workshop item (Poster)
Subjects: H Social Sciences > H Social Sciences (General)
Divisions: Divisions > Division of Human and Social Sciences > School of Psychology
Depositing User: P.I.N. Ulrich
Date Deposited: 30 Oct 2015 14:56 UTC
Last Modified: 16 Nov 2021 10:21 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/51290 (The current URI for this page, for reference purposes)
