Greg Mills asks how people coordinate when they interact with each other. #Protolang7
Usually we use reference games to study how conventions emerge to enable this, which typically leads to patterns and the emergence of conventions like new referring expressions (or signs in experimental semiotics)
BUT there are more fundamental coordination problems in dialogue that are actually very different from referential problems. He shows clips of people coordinating on a street quite seamlessly, and of messed-up high fives or tennis doubles, where coordination fails.
What fails is the timing, the turn-taking, signalling when to do what. We can use language to explicitly resolve this, using coordination expressions.
For example in hunting, actions need to be coordinated with signals like 'wait over there'. There are a large number of procedural expressions that are used in social coordination. X and Y here could be any task.
In psycholinguistics, however, this has hardly been studied, even though, e.g., in the tangram task, 30% of speech content is procedural
Procedural language is just as hard and ambiguous as referring expressions. To develop procedural conventions, we conventionalise in interaction, e.g., routinising 'wait' to mean 'wait 5 seconds before doing x'
Participants then develop adjacency pairs and rapidly conventionalise them.
Mills argues that procedural language is a blind spot in studying language in interactive settings, since most studies focus on referring expressions and are concerned with repetitions/entrainment of contributions.
So how does procedural coordination actually develop? And what happens when participants don't have language to coordinate? And which mechanisms are involved?
He presents a Guitar Hero-style task where participants can't use language but have to coordinate, because only one person can see the instructions. They can only communicate via the buttons on the controller, however, and only some of the notes they play are shared.
This creates many procedural coordination problems, like the third one, where they have to take turns pressing notes.
Participants do manage to develop their own language for solving this, where the procedural actions themselves are the form of communication
A similar version has been done with keyboards, with different conditions controlling what feedback participants could provide. Being able to signal only negative feedback is actually detrimental to coordination, and alignment was actually higher in unsuccessful dialogues.
What the experiments seem to suggest is that solutions to procedural coordination problems are rapidly conventionalised, similar to referring expressions.
Fantastic talk by @kristian_tylen and colleagues from @AarhusUni @interact_minds (& @Nicolas_Fay)
showing how to combine archaeology, cognitive science and semiotics to study the possible symbolic function of South African cave engravings over several millennia.
Engravings in these areas seem to evolve into more structured forms over time, perhaps signalling gradual refinement of symbolic tools. But the function of these potential symbolic tools is not very clear.
Some think they could just be for aesthetic effect (non-semantic), others regard them as cultural/traditional stylistic elements (actively marking group identity), and others see them as early signs of full-blown denotational symbolic and semantic signs, pointing to individual meanings
Concepts have traditionally been thought of as either transcendental, biological, or grounded in social interaction. The latter refers, for instance, to how languages make conceptual distinctions, e.g. with regard to spatial relations
What drives these distinctions? Salient features of the environment might drive them in situated language use, where environmental biases would get enhanced and eventually conventionalised in culture
Cool work on complexity and simplicity in language evolution across species by @Limor_Raviv and @cedricboeckx. They start with an interesting discrepancy between animals and humans in how social complexity shapes the complexity of their communication systems #Protolang7
An important distinction we need to make is whether we are talking about grammar or simple signal variation, and what 'simple' or 'complex' actually refers to. The mirror pattern we see might relate directly to how we distinguish these concepts.
In animal communication research, the social complexity hypothesis contrasts on the surface quite directly with the linguistic niche hypothesis by @glupyan et al, suggesting a seemingly disciplinary conflict
@YaaminMoot et al from @UoE_CLE show work on regularisation, naturalness, and systematicity in silent gesture experiments. They start with the question of how we get from item-based prelinguistic communication to a system via several processes #Protolang7
One way to test this is using possible biases in word order. E.g. naturalness: specific orders preferred for specific meanings; regularity: the same WO used for a specific meaning; systematicity: the same WO across all meanings. We also know that WO can be conditioned on semantics
This strong naturalness preference is found in silent gestures. But what about spoken languages? It seems much less natural there, but there is some evidence for sign languages (NSL). So is naturalness limited to improvisation? Is it replaced by systematic structure through learning?
Magdalena Schwarz, @thematzing & Niki Ritt ask why we trust others. Between kin it makes sense, but how is trust maintained among non-kin within cooperative groups? Or even with strangers? #Protolang7
Hypotheses on this involve social bonds, reputation, gossip and 3rd party punishment, which all help maintain trust. But what about strangers?
For strangers, symbolic tags can help identify whether they are trustworthy (e.g., wearing the same clothes as one's own group). But free-riders could easily imitate this tag. Speech, or more specifically accent, might be a more reliable marker that is very hard to fake (Cohen 2012)
Iconicity, e.g. in the form of sound symbolism is pervasive in the lexicon. Iconicity can also help ground symbols via sensorimotor simulation (e.g., representing what it means for something to be a 'tree'). We also find interactions of word processing with specific brain areas
How can sensorimotor simulation manifest in iconic expressions? Looking at gestures suggests that when we think about actions, premotor activation can spill over into iconic signals, as well as more deliberately when there is a need/goal to communicate perceptual details