Jonas Nölle
Postdoc @facesyntax 🔎:🗣️🤗 I study how language, culture and cognition evolve using #VR & experimental semiotics 🙆 Formerly 🎓@UoE_CLE & @interact_minds

Sep 7, 2021, 13 tweets

Multimodality is the future of language! Plenary by @ozyurek_a on how multimodality should shape our ideas of language (and thus its evolution) at #Protolang7

Earlier approaches to the fundamental nature of language have ignored multimodal aspects. Ozyurek, however, argues that language is an adaptive system that has been multimodal from the get-go, adapting to any setting it is thrown into (including future technologies)

Multimodal expressivity as a fundamental design feature is what has allowed language to be so adaptive, as each modality provides individual semiotic affordances that can be applied (and combined) in all kinds of communicative contexts.

No language community uses speech only; vocal and visual/bodily articulators interact flexibly across all cultures. Understanding why this is preferred is an important question when trying to understand the nature of language.

Popular lang evo theories have long focused on speech or gesture as singular systems (where one or the other came first, with complex signals, verbs and nouns etc arising later). Such theories are incompatible with a multimodal view and cannot explain multimodal signals today

Similarly, traditional linguistics has usually focused on spoken/written data, but both linguistic and non-linguistic components should be considered in an integrated way as part of a complex adaptive semiotic system that can accommodate variation in many ecological contexts

A multimodal (rather than gesture-only) origins theory is more compatible with such a view (see also Levinson and Holler 2019, TiCS).
So what's the evidence for the multimodal view? Speech & gesture are part of an integrated system across many core domains of language

For example, iconic gestures vary with semantic/syntactic variation in spoken language (e.g. describing motion events in verb-framed vs satellite-framed langs). They also vary with semantic typology, e.g. in time metaphors or demonstrative systems (incl. pointing and eye gaze)

Gestures also have consequences for processing, as they are readily integrated with speech and recruit different brain areas (see, e.g., work by @sdk_lab). For instance, subjects who see a basketball gesture later remember having *heard* that "she played basketball"

Gestures are also modulated for the addressee, e.g. in how Italian speakers describe actions to adults vs. to a child. Gestures are part of the communicative intent. And pointing/iconic gestures are integrated in spoken lang acquisition too, where they combine with linguistic structure

For sign langs it has been debated whether modality-specific expressions (iconic/pointing) are in fact part of SLs. In a multimodal view, they are, just as for spoken langs, & similarly vary cross-linguistically and are involved in processing, interaction and transmission/acquisition

As work headed by @MacuchSilvaVini has shown, multimodal expressions also have an advantage when referring to novel referents, and work by @jamesptrujillo et al showed how both speech and gesture adapt to environmental noise levels, e.g. by making gestures more repetitive

Language likely began multimodal and will continue to adapt to technology and human cultural evolution (which will probably only increase multimodality). Future studies should test the uniqueness/advantages of multimodality compared to other species and in lab experiments
