Language and Linguistics Seminar Series: Week 20 with Dr Elizabeth Manrique, University College London

"(Mis)understanding in face-to-face interaction: The role of visual and gestural modality in signed and spoken communication"

  • Thu 14 Feb 19

    12:00 - 14:00

  • Colchester Campus


  • Event speaker

    Dr Elizabeth Manrique, University College London

  • Event type

    Lectures, talks and seminars
    Language and Linguistics Seminar Series

  • Event organiser

Department of Language and Linguistics

  • Contact details

    Victoria Mead

This week we are joined by Dr Elizabeth Manrique, University College London, to talk about her recent research.

From 12pm to 1pm, Dr Manrique will take to the stage to deliver her talk, followed by a lunch provided by Language and Linguistics from 1pm to 2pm.

We look forward to seeing you there: this event is open to all students and staff! 


How do people understand each other in everyday face-to-face interaction? Many
researchers in the language sciences have addressed this question, mostly using data from
spoken languages, but with relatively little regard for the fundamental role of sign languages.
Despite the differences, both signed and spoken languages use visual and gestural ways to
communicate. Sign languages are natural, complex and fully developed languages that rely
entirely on the visual-gestural modality, producing linguistic signs and communicative
gestures.

This talk will focus on the visual and gestural practices that signers and speakers use to
perceive and understand when asking and responding to questions. One of the main sources
of evidence in evaluating whether an utterance is a question or other type of social action is
to look at how it is perceived and treated through the addressee’s response in terms of both
linguistic and non-linguistic practices. Three languages are compared: two unrelated
sign languages, Argentine Sign Language (Lengua de Señas Argentina, or LSA) and British Sign
Language (BSL), and a spoken language (Spanish), included to examine modality effects.

Data come from corpora of video-recorded spontaneous, naturally occurring, dyadic
conversation among deaf native and near-native signers on the one hand, and native speakers
of Spanish on the other hand. Around 200 question-response sequences in each language
have been included. Preliminary findings suggest a systematic use of turn-final hold across
languages and modalities in questions and responses to questions. Focusing on both visual
and auditory languages helps us to better understand specific modality effects and
linguistic commonalities and particularities across languages and cultures.
