I see this thread from last year is getting another round of attention. Results of our randomised trial to test the effect of press releases on the news are now in-press with @BMCMedicine so stay tuned! cc @adamsrc86
We’ve just published the next step in our journey to understand the role of press releases in science news. It’s without doubt the most off-the-wall & unexpected research project I’ve ever been involved in.

Our latest foray was a real-world experiment on the news media itself.
That’s right. We did an experiment on the news.

We took press releases on health-related science, altered them before they were issued to journalists, and then studied how the changes we made influenced science reporting.
In this continuation of the thread I started last year, I want to take you through the results of the trial and what I think they mean.

But I also want to take you behind the scenes of doing this sort of research because the project was a political roller coaster.
To be honest, it’s amazing that the project happened at all. What follows is a pretty long thread that in retrospect feels like a kind of academic version of Billions. Feel free to mute me if this isn't your thing.

Otherwise, saddle up for some Sunday night thrills…
To recap tweet 14/x above, in Dec 2014 we published a retrospective study which found that most of the exaggeration in health-related science news is already present in the press releases issued by universities.
This was big news & it made a splash. When it came to bad sci reporting, university press releases were likely to be a major contributor. As someone put it, all these years it was assumed that *reporters* created hype, but in fact “the call was coming from inside the house” /26
But those of us on the research team knew from the very beginning that as impactful as the project might be, it could never provide evidence that press releases *causally* influence science news. Why? /27
Because the research was retrospective and observational, much like a lot of epidemiology. This meant we could only ever show *associations* between the content of press releases and the content of news stories. /28
To demonstrate causality you would need to do an experiment. And to do it properly you would need a randomised controlled trial or RCT – the gold standard for testing the existence of causal effects. /29
If we could show in an RCT that improving the quality of press releases improved the quality of science news – and, crucially, WITHOUT reducing news uptake – we could build an evidence-based policy for sci communication that press officers would have every reason to embrace /30
After all, we knew already that press officers had no innate desire to issue inaccurate or hyped press releases. But from talking with them (lots of them) we also understood the pressure they face to generate media impact for their universities. /31
One press officer we met took this so seriously that every week he'd get out his ruler and record the literal column inches of every print news story stemming from his university's research. He'd been doing it for decades. /32
For running a randomised trial, the big problem I kept coming back to was feasibility. It seemed like one of those thought experiments that in theory would be beautiful but in reality you know is a pipe dream. /33
We needed the idea to pass the smoke test. So at the @SMC_London Christmas drinks party in 2014 we worked the room to find out what journalists might make of an experiment on the news.

Our BMJ paper had just come out & a lot of people were talking about it. /34
We asked the journalists: How would you feel about being guinea pigs in a trial where the press release you’re reading might have been manipulated?

How would you feel not knowing what the manipulation was? Not even knowing if this press release was part of the trial or not? /35
The response varied. A few of the specialist reporters liked the idea. Some didn’t give a crap. One guy spent the whole time glancing forlornly over my shoulder to see if someone more important might walk past who he could talk to. I felt a bit sorry for that guy /36
Several journalists were quite disturbed by it. One BBC reporter told me that running controlled experiments on the news was undemocratic and dangerous. /37
That raised a red flag for me. If we ran the trial, I wondered if journalists might crucify us. Visions of being doorstepped. ENEMY OF DEMOCRACY. MANIPULATOR OF THE NEWS. A laughable anxiety, really, given the state of the world now but these were the heady days of pre-2016. /38
At the same time if we were going to run an experiment like this, our most important partners were not the journalists, they were the press officers. If we were going to intervene in press releases b4 they were issued, we'd need press officers to work with us. And trust us. /39
But there was one major hurdle with getting them on board. Our BMJ paper, which by now was storming across the media and social media, pissed a lot of them off. Big time. Here's the paper as a refresher: bmj.com/content/bmj/34… /40
Not all of them, of course, but enough to make the prospect of a trial a diplomatic nightmare.

Here we were, mostly nobodies with no sci comm background, detonating a nuclear warhead in the BMJ telling them that most exaggeration in science news begins in press releases. /41
Many press officers felt that we’d just driven a bulldozer through their profession. Of course our work wasn't intended that way at all. We just wanted to know the answer to the question: Where does hype come from in science news? /42
And as I pointed out earlier in this thread, this is important because if hype comes from universities then as scientists we can do something about it. /43
We've got zero chance of changing newsroom culture (my earlier escapades in this area – see the very start of this thread – taught me that). But we can change what happens in the universities because we basically ARE the universities. /44
And in our BMJ paper we were careful to attribute the main responsibility for the content of press releases to the scientists who approve them, not the press officers. /45
We decided that to get the press offices on board for a possible randomised trial, we needed a kind of goodwill national diplomacy tour. I mean, JFC, if there is anything I am NOT built for... /46
But we saddled up. And in the freezing Jan/Feb of 2015, a small group of us went up and down and all round the UK to visit the press offices of major universities. /47
It was one of the most surreal experiences of my working life. A public relations tour conducted by scientists with no public relations experience in order to build (& in many cases mend) public relations with public relations professionals. Most of whom we'd never even met. /48
We’d arrange meetings with the press officers, turn up with our BMJ paper, walk them through it, and then ask if they want to join our trial. Let’s work together, we said.

Join us.

Join us, or die.

Just kidding. Am I? Not sure now. /49
Some were enthusiastic. Some, I could tell, didn’t trust us an inch. Others agreed to meet but were openly annoyed at the BMJ paper & used the meeting as a kind of therapy session to air grievances. Still, we always asked at the end of each meeting: “Want to join our trial?” /50
For me, the most memorable meeting we had was at a university in the midlands. Their comms guy seemed nice enough beforehand but when @adamsrc86 and I turned up, I realised it was a trap. /51
He was a Dacre-Paxman hybrid in press officer form. He began by saying how appalled he was by the paper. Angered. Disappointed. An insult to him, his colleagues, his profession. His cat, his grandma. His favourite pencil sharpener. It demeaned academia. He’d brought notes. /52
He was also pissed off at @bengoldacre for a BMJ editorial he wrote about our paper. You can find Ben’s comment linked in this blogpost: badscience.net/2014/12/my-bmj… /53
Dacre-Paxman told us how we'd got everything wrong. We didn’t understand research. Our own methods. The results. Anything. We were wrong to blame press officers for exaggeration, if there was any. Which there wasn't. /54
HANG ON A MINUTE, I thought. We’d been getting bollocked for over an hour, but that crossed the line. I pointed out as calmly as I could that we weren’t blaming press officers. If anything we held scientists responsible. I pointed him to this part of the BMJ paper. /55
“That’s even worse!” he exclaimed. “Now you’re treating us like children who aren’t responsible for the press releases we issue!”

Tough crowd.

“Would you like to join our trial?”

At the end of the meeting he trundled off with a scowl and I admit I had a few strong drinks on the train home. (Dacre-Paxman never joined the trial)

Incredibly, though it didn’t feel like it at the time, our meet-and-greet tour worked. Over the next couple of months we managed to line up a sufficiently large group of press offices who were willing in principle to work with us. Enough to get started. /58
There were conditions. We needed a protocol that didn’t bork their workflow. Their timelines for producing press releases were tight. The protocol needed to be agreed in detail. They had reputational concerns. And we agreed to keep their identities confidential. /59
I have to say I was impressed at the general willingness of the press officers to engage with us. Many expressed a desire to improve the quality of press releases. They recognised the importance of their role and their duty to encourage accurate journalism. /60
They also told us about the pressures they often face from scientists (sometimes very senior & powerful ones) to over-egg press releases. At the end of the day, the scientists were usually in charge. I developed a respect for press officers. They have a tough job. /61
And we had a grant now too from the ESRC to run the trial. Things were coming together. We were just on the verge of being able to get it going when I got a phone call from an influential friend who worked in science journalism and public relations. /62
She'd just returned from a meeting of the heads of various university comms departments and our trial was being trashed. Eviscerated. There was a risk of us losing everyone, months of complicated negotiations for nothing. /63
It turned out that Dacre-Paxman had a surprise up his sleeve for us. He’d spent the meeting attempting to convince the press offices from other universities who'd signed up to drop out, while pushing those sitting on the fence to shun us altogether. /64
He accused us of all sorts of shady practices and framed us as bad operators. Looking back all these years later (somewhat older and wiser), I realise it was an amazing political play. And it came within a hair’s breadth of sinking everything. /65
But our influential friend (to whom I owe drinks for eternity) stood up & defended us. She went to bat for the trial & for us personally, & held the ship together. These are the friends you need to do research in this space. She was amazing. So we crept forward again. /66
Then another threat. We almost lost another major press office – one that was so important that it threatened to trigger a cascade of departures. I can’t say more without identifying the university but the Vice Chancellor himself had to come to our rescue. Another breath. /67
A few months later & we were ready to start. We now had a merry band of 9 UK press offices on board and a protocol that was as efficient as possible for them and as rigorous as we could make it. We had ethics approval. We preregistered it here: isrctn.com/ISRCTN10492618 /68
But, boy, the research was slower going than we intended. Our protocol had to be complex to avoid bias & unblinding. One researcher would receive the draft press release before it was issued. They’d check it for eligibility & then assign it randomly to one of four arms. /69
In the first arm we altered the headline and main claims in the draft press release to match the strength of inference allowed by the study design, as reported in the journal article that the press release was based on. /70
So, for e.g., if the study was correlational (e.g. association b/w wine consumption & cancer risk) but the press release used causal language (“red wine causes cancer”) we changed it to align with the journal article (“drinking red wine is associated w increased cancer risk”) /71
In the second arm, we added an explicit statement to the press release about whether the evidence could support a causal conclusion (e.g. “this was an observational study, which does not allow us to conclude that drinking wine caused the increased cancer risk”) /72
In the third arm, we administered both interventions at once in the press release.

And the fourth arm was our placebo or sugar pill – in this case, a synonym change unrelated to causal language (e.g. “beverage” changed to “drink”) /73
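The four-arm design above can be sketched in a few lines of code. This is purely illustrative: the arm labels are my own shorthand, and a balanced shuffled allocation (75 press releases per arm, the trial’s target) is an assumption on my part — the actual allocation procedure is specified in the preregistration.

```python
import random

# Illustrative labels for the four trial arms described in the thread.
ARMS = [
    "align_claims",     # arm 1: align headline/claims with the study design
    "add_caveat",       # arm 2: explicit statement about causal inference
    "both",             # arm 3: both interventions combined
    "placebo_synonym",  # arm 4: unrelated synonym change (placebo)
]

def make_allocation_sequence(per_arm: int, seed: int = 0) -> list[str]:
    """Build a shuffled allocation list with a fixed number of press
    releases per arm (e.g. 75 per arm, as in the trial's target)."""
    sequence = ARMS * per_arm
    random.Random(seed).shuffle(sequence)
    return sequence

sequence = make_allocation_sequence(per_arm=75)
assert len(sequence) == 300
assert all(sequence.count(arm) == 75 for arm in ARMS)
```

Each incoming eligible press release would then simply take the next label off the sequence.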
Once we received a press release we’d feed it into one of the trial arms & return it to the press officer – usually within a few hours to minimise delays to their workflow. The authors of the research were then given the final say on whether to accept or reject our revisions. /74
Then another researcher, blinded to which arm the press release was in, would scrape all the print & broadcast news stories associated with the study. @adamsrc86 @aimschallenger and the rest of the team did an amazing job making this complex machine work smoothly. /75
And then we encountered another obstacle. Many of the draft press releases we were receiving ALREADY showed the interventions we sought to test. So, for example, we’d seek to add a caveat to a claim in the press release and find that there already was one. /76
There were at least two possible reasons for this. One is called a Hawthorne effect: when the act of participating in a trial changes the behaviour of the participants. Press officers might have changed the way they were writing the press releases, even unconsciously. /77
The other possibility was that our 2014 BMJ paper had been more impactful than we realised. It might be that the article itself, which was widely read by press officers, served as an intervention that changed practice. We have research in progress testing this proposition. /78
Whatever the reason, it made the trial challenging. To prevent bias, the main type of analysis in a trial – called “intention to treat” – keeps “participants” (in this case press releases) in their randomly assigned groups, regardless of whether we gave them the intervention. /79
This spontaneous adoption led to “condition mixing” between trial arms and potential dilution of the intervention. It's one of the big practical challenges when running randomised trials in a real-world, social sciences setting. /80
Even so, a couple of years later -- well after our projected trial end date and grant money had run out -- we reached our target sample size of press releases (75 per trial arm) and opened the box on the data analysis. /81
The intention-to-treat analysis - the most conservative test - didn’t show much effect on the news. This could be because our interventions had no effect, or it could be because the “spontaneous adoption” diluted the intervention & made the test insensitive. /82
The one statistically significant outcome for the intention-to-treat analysis was a small benefit for news headlines. Encouragingly, we also found no evidence that the intervention *reduced* news uptake. /83
To get around condition mixing caused by spontaneous adoption, we decided to conduct a post hoc “as-treated” analysis. This reassigned the press releases to whichever intervention they ACTUALLY got, regardless of whether we gave it via the randomisation or they had it already /84
As-treated analysis should be used with great caution in trials. Here it violated the random assignment of press releases to the trial arms. It is prone to bias and doesn’t allow causal conclusions. BUT it is potentially more sensitive. /85
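The difference between the two analyses comes down to how you group the press releases. A toy sketch (field names and values are mine, not from the published dataset) showing why “spontaneous adoption” makes the two groupings diverge:

```python
from collections import defaultdict

# Toy records: each press release carries the arm it was randomised to and
# the intervention it actually ended up with -- these can differ when press
# officers had already added caveats themselves ("spontaneous adoption").
releases = [
    {"id": 1, "assigned": "placebo", "received": "add_caveat"},  # adopted it themselves
    {"id": 2, "assigned": "add_caveat", "received": "add_caveat"},
    {"id": 3, "assigned": "align_claims", "received": "align_claims"},
    {"id": 4, "assigned": "placebo", "received": "placebo"},
]

def group_by(records, key):
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["id"])
    return dict(groups)

# Intention-to-treat: group by randomised arm, preserving randomisation.
itt_groups = group_by(releases, "assigned")
# As-treated: group by what each release actually got -- potentially more
# sensitive, but it breaks randomisation, so no causal conclusions.
as_treated_groups = group_by(releases, "received")

print(itt_groups)         # {'placebo': [1, 4], 'add_caveat': [2], 'align_claims': [3]}
print(as_treated_groups)  # {'add_caveat': [1, 2], 'align_claims': [3], 'placebo': [4]}
```

Press release 1 is the interesting case: ITT keeps it in the placebo group (diluting the measured effect), while as-treated moves it to the caveat group.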
The differences were striking. Here’s the key figure. When press releases were aligned to the claims in the journal article, the news was more likely to follow suit (see A). And even with this "as-treated" analysis there was still no evidence of reduced news uptake (see B). /86
The upshot? Making claims in press releases more cautious, & aligning them with the actual study design of the journal article, is *associated* with better quality news. And, importantly, it doesn’t kill off media interest. /87
I think this means that journalists aren’t using hype and exaggeration in press releases to decide what to cover. The decision is far more complex. And the received wisdom among sci comm professionals that hype generates news -- and caution kills it -- is wrong. /88
You can read the full paper here. It is preregistered, with #opendata and #openmaterials bmcmedicine.biomedcentral.com/articles/10.11… /89
I would love to see the trial replicated and extended. Though it was a massive effort from everyone, both scientifically and politically, we showed that a trial like this CAN be done. It is feasible where there is the mutual will. /90
Realistically tho it’s going to be tough to do again. The political challenges in doing a trial like this are off the scale. It took luck & high-level connections with people at universities & in the media to stop it all falling apart. Multiple times. Few have this access. /91
That said, I really hope other researchers try because health & science reporting matters, and press releases are a crucial driver of science news. The Dacre-Paxmans of the public relations world should not be left with all the control. /92
We have some more papers coming out of this project but the trial is the flagship & I hope it changes practice. I hope it serves as a wake-up call to both scientists & press officers that you don't need to hype your PR material to get into the news. Be accurate. Be cautious. /93
Let me end with some thank-yous. First, to the brave & hardy press offices (and officers) who joined us in this adventure. Your identities are confidential by request, but you deserve applause. Second, to all the members of the research team who worked so diligently... /94
And finally, to our colleagues & friends in the media who inspired this work, offered counsel and also kept us motivated -- friends like @alokjha @SMC_London and too many others to name. /95
And I want to leave a final dedication to the amazing Lisa Schwartz, who died in Nov 2018

Lisa became involved in our work toward the later stages as a collaborator, & her work in this area (which dwarfed ours) was an inspiration from the beginning thelancet.com/journals/lance…
