ABSTRACT
How do people understand each other in everyday face-to-face interaction? Many
researchers in the language sciences have addressed this question, mostly using data from
spoken languages, but with relatively little regard for the fundamental role of sign languages.
Despite their differences, both signed and spoken languages draw on visual and gestural
resources to communicate. Sign languages are natural, complex and fully developed languages
that rely entirely on the visual-gestural modality, producing both linguistic signs and
communicative gestures.
This talk will focus on the visual and gestural practices that signers and speakers use to
perceive and understand one another when asking and responding to questions. One of the main
sources of evidence for evaluating whether an utterance is a question or another type of social
action is how it is perceived and treated in the addressee’s response, in terms of both
linguistic and non-linguistic practices. Three languages are compared: two of them unrelated
sign languages, Argentine Sign Language (Lengua de Señas Argentina or LSA) and British Sign
Language (BSL). In addition, modality effects are examined by including a spoken
language (Spanish).
Data come from corpora of video-recorded spontaneous, naturally occurring dyadic
conversation among deaf native and near-native signers on the one hand, and native speakers
of Spanish on the other. Around 200 question-response sequences in each language
have been included. Preliminary findings suggest a systematic use of turn-final holds across
languages and modalities, in both questions and responses to questions. Focusing on both
visual and auditory languages helps us better understand specific modality effects and
linguistic commonalities and particularities across languages and cultures.