Some questions are *standardized* (e.g., surveys, scripts, instructions) and require reading out loud, word for word.
In business, research, law, medicine, etc., do people "just read them out"?
TL;DR: No. And there are consequences.
1. 🧵
2. We might take it for granted that, when 'standardized', questions will be the same whether spoken or written. The examples in the thread will show they're not.
Without examining actual interaction, we won't know the clinical, diagnostic, legal, etc. consequences either way.
3. Let's start with @rolsi_journal's research on the significant consequences of the way diagnostic instruments about #QualityOfLife are delivered in talk, compared to how they're written on the page.
4. Maynard and Schaeffer's extensive research on standardized survey tools shows how, for example, "elaborations on answers... significantly affects what the interviewer does to register a (response) code in the computer."
5. The standardized questions on the Quality of Life questionnaire each have THREE response options. The instructions for "reading the items" are to "pay close attention to the exact wording." But items are often reformulated into yes/no questions with a positive tilt.
6. Here's a study by @sue_wilkinson showing the tension between standardization and 'recipient design' in the case of asking about 'ethnicity of caller': "the ethnicity question is asked and responded to, and then transformed into entries on a coding sheet."
7. When interviewing vulnerable victims, written guidance ('Achieving Best Evidence') for police states that, sometimes, witnesses should *demonstrate* their understanding of "truth and lies" - but *only* at the start of interviews. @Richardson_Emm et al find much deviation.
8. Milgram's classic obedience experiments have generated much debate about ethics since their publication, but Stephen Gibson's modern classic showed that *negotiation* between experimenter and participant led to "radical departures from the standardized experimental procedure."
9. In another classic, Robin Wooffitt showed that, rather than sticking to experimental standardization, the way the "experimenter acknowledges the research participants’ utterances may be significant for the trajectory of the experiment."
10. And here's a paper that compares "formalized communication guidance for interviewing victims, particularly vulnerable adult victims,... to what actually happens in interviews between these victims and police officers." @Richardson_Emm
11. We rarely get to scrutinize the conversations in which standardized questions for research consent are delivered. Sue Speer & I found that, in psychiatric consultations, written questions were delivered without yes/no options and were tilted towards a ‘yes’ response.
12. In @Dr_JoeFord, @RoseMcCabe2 et al's analysis of how GPs diagnose depression, with & without the nine-item Patient Health Questionnaire (PHQ-9), they show that the PHQ-9 is not used verbatim - and that deviations from the wording work in favour of diagnosis & treatment.
13. Two more studies in which standardized questions as written differ from the same questions as spoken, with consequences: by @ClaraIversen (in social work) and by @EricaSandlund & @LNyroos (in performance appraisals).
14. Summary:
- Standardization is assumed to happen, but if we don't look we don't know.
- Some questions are not always well thought through for spoken delivery.
Everything is different when you study 'the world as it happens' (Boden, 1990).
PS. Thank you @Fi_Contextual for kicking off this thread with a question about the potential invalidity of surveys - I'm sure there are many more studies in #EMCA
Despite being "the magic word", @AndrewChalfoun @gio_rossi_5 @tanya_stivers show in their recent #EMCA conference paper that "please" appears in <10% of actual requests and does *other* things.
It's another #communication myth busted.
🧵 1/8
2/8
It becomes very clear if/when you listen to and analyse recordings of actual "in the wild" social interaction (the data used in conversation analytic research) that people make their requests sound 'polite', 'pushy', 'tentative', etc., through a variety of words and phrases.
3/8
(...and, btw, despite the enduring nature of such claims in (pop) communication & some psych & linguistics, so-called 'tentative' or 'polite' requests are NOT gendered, as pretty much any #EMCA research on requesting shows - often as an artefact if not the focus...).
Great to see “signage and ratings”, “awareness”, and “visible assurance” prominent in @RAEngNews @CIBSE recommendations to ensure that the public understands the importance of “good indoor air quality.”
Between Oct 2021 and March 2022, @IndependentSage and colleagues worked on a project to design, pilot, and evaluate a scheme to convey, in a non-technical way, #ventilation information ('scores / signs on the doors') for rooms, buildings, and venues. 3/8
I haven’t transcribed Johnson for a while (too 😡) but for the record here are his responses to Susanna Reid's questions about #Elsie, which include placing a definitive-sounding "no" after Reid suggests "you can't say anything to help Elsie, can you."
Part 1: Opening question:
Part 2, in which Johnson produces incomplete responses, cut off and abandoned sentences, rushed-through turns, deviations, and stated intentions - but does not provide examples of what Elsie "should cut back on".
Part 3, in which Reid repeats her initial question (at line 47); Johnson repeats his earlier answer (line 49); resists addressing Reid's factual challenges, and ends up placing that "no" at line 65 - he can't say anything to help Elsie because "we" are focusing on supply.
What can we learn from the #language of “living with covid”?
We wrote about the origins of “living with it”; how it became associated with Covid-19, and how – like other idiomatic phrases – it closes down discussion (“just live with it!”)
2. We searched on @LexisNexisUK for the first use, first use in association with Covid-19, and frequency of use, of twelve variations of ‘living with it’ and ‘learning to live with it’, up to the start of 2022.
It’s clear that ‘live/living’ outpaced ‘learn/learning’ versions.
3. Here are some examples from Lexis Nexis.
For each iteration of the phrase, we looked at the date and quote of the first (non-Covid) mention; the number of hits/mentions (to end December 2021); the first Covid-19 mention; and an exemplar recent Covid-19 mention.
What evidence is there that “using these 8 common phrases” will “ruin your credibility”?
Answer: Not much.
Why do we create and perpetuate #communication myths? Communication is important, and we don't see enough of how it works “in the wild.”
🧵Thread 1/12
The thread is informed by research in conversation analysis #EMCA
There are other research methods for investigating communication, but not all look at actual humans producing, for instance, those “8 common phrases” in social interaction.
That’s what this thread will do. 2/12
The thread gives examples of the “8 common phrases” being used.
As @DerekEdwards23 says, if data-free assertions (advice, theories, models) don’t account for actual interaction, there’s a problem.
Judge for yourself whether the phrases undermine speaker credibility. 3/12
After last week's focus on the science of mechanical and natural #ventilation, today's @IndependentSage briefing focused on its translation into a non-technical #communication #messaging 'proof of concept' scheme.
3. NB. Ventilation is complex - as is making decisions about the behavioural mitigations needed following the assessment of any given space - so any such scheme must be underpinned by ventilation and aerosol expertise ...