She's blogging soon for @fasttrackimpact but in the meantime here are my takeaways 👇
Lack of time for deliberation over conceptions of impact created a risk of groupthink in sub-panels that may have biased some #REF2014 outcomes (1/11)
Panelists talked about how "...over time there was a coming together", a "gut feeling of how this thing is going" as they "kind of worked out that we knew what we were measuring against." They "gained something of a common mindset" (2/11)
Many REF panelists had a broader conception of impact than the REF2014 definition (for example valuing public engagement in itself) but accepted the narrower definition they were given for the assessment (for example only valuing the benefits arising from that engagement) (3/11)
Panelists' conceptions of impact tended to move from linear and instrumental pre-evaluation to non-linear and broad post-evaluation (4/11)
Non-academic user evaluators were perceived by academic panelists as primarily lending legitimacy rather than improving the quality of decisions, but may have provided a "powerful antidote to overly pragmatic, simplistic groupthink-based decision making" (5/11)
Non-academic panelists wearing ties found academic jumper-wearing elitism intimidating. While some managed to bring fresh perspectives, others left the process "browbeaten by their academic counterparts" and the "hundreds of thousands of papers they had gone through" (6/11)
Evaluators suspected that some case studies were "hyperbole", "spin", "lying a little bit", "pulling the wool over the eyes with a clever writer" to "blind" panelists with their storytelling (7/11)
Senior colleagues encouraged panelists to "err on the side of positive" and "round up rather than down" as a political tool "celebrating the value of British science" (8/11)
#REF2021 panels can "work smarter" if they focus on "how to get many [different] voices aired, contributing to the deliberation and [being] considered in a balanced, democratic approach" (9/11)
My favourite quote: "impact is a sticky process and is rarely linear" (10/11)
Gemma Derrick wrote her book despite being warned (threatened?) by a colleague that "critiquing peer-review doesn't always win academic friends". Thank you Gemma for having the courage to write this book and for sharing your insights with us all (11/11)
My take on what the initial decisions on REF mean for impact. Summary: these are mainly tweaks that address issues for which most institutions already found work-arounds. You can read the full announcement here repository.jisc.ac.uk/9148/1/researc…
There are 4 main proposed changes to impact: a reduction in the minimum number of case studies needed for a submission, the reintroduction of an impact narrative at the unit level, removal of the 2* threshold for underpinning research and new criteria around rigour and engagement
1. The reduction in the number of case studies needed to make a submission may encourage new groups to submit to REF, but in my experience this wasn’t a major barrier to entry as long as people didn’t mind that at least one of their cases might get a low or unclassifiable score
We need to re-think research impact if we are to truly benefit those most in need. My new paper with @hannahrudman (4 years in the making) identifies three crucial points we need to consider when designing research for impact link.springer.com/article/10.100…
It isn't possible to define impact without asking "for whom" - what will benefit one group or species in one context may compromise the interests of another group, or the same group in a different context (based on this subjective definition of impact sciencedirect.com/science/articl…)
Without a robust theory of impact, we will continue to narrow/instrumentalise our ideas of "what counts" and fail to anticipate negative unintended consequences. Our paper is a first step towards such a theory, outlining 3 key factors determining beneficial outcomes from research
Really enjoying @wadekelly's new volume, The Impactful Academic. Struck by how far ahead Canada is when it comes to drawing on diverse forms of knowledge and paying attention to context, thanks to @LaurenAlbrecht3 and Catherine Scott's chapter
Decisions are rarely made on the basis of research evidence alone. They draw on many forms of knowledge, including experience and moral judgement. Our task is to get the most relevant knowledge to decision-makers, not just our research.
The pursuit of impact should be "an ongoing commitment to learning and applying knowledge to solve real-life problems"
Holiday reading - I haven't been this excited about a new book for ages...
Love this approach to theory building. Theory is useful when it both explains and helps, and if it isn't helpful enough, keep applying and refining your theory in practice until you have something that actually works
Engaging with uncertainty is a prerequisite to social learning because:
1) It creates urgency when we want to make a difference but don't know how, "restless uncertainty...as an edge, pulling learning, insisting that learning help make a difference"
In addition to having robust evidence, researchers need to communicate why they are qualified to deliver the message - don't assume your audience will know your work or trust you.
Different media lend themselves to communicating different types of message to different groups. We need to become multilingual: