When covering traditional ethical theory in my class, inspired by this paper (psyarxiv.com/9yqs8/), we discussed what different frameworks might suggest for how to convince someone to do the right thing (e.g., WEAR A MASK!). One insight I had: utilitarianism won't work. 🧵
A utilitarian moral message (like the one used in that paper) is essentially "think of the consequences for everyone if you don't do this!" but... for that to be effective, people have to actually understand the consequences, and there's so much misinformation. :(
For the study in the paper (it's a pre-print), they found a modest effect for duty-based deontological messaging ("it is our responsibility to protect people") re: intention to share the message. But we also covered OTHER ethical frameworks in my class...
Many of us thought that moral messaging framed around ethics of care (e.g., "think about the people you love") would be most effective. And that in some cultures (but probably not in the U.S.) a community-based Ubuntu-inspired message might work, too.
And this isn't to say that misinformation can't impact the persuasiveness of these messages too (e.g., "I don't need to do that to protect people because it's not real"), but I do think that there's something to be said for an appeal to duty or care rather than consequences.
Following four years of empirical work on research ethics for public data, our (@pervade_team) manifesto for trustworthy pervasive data research, foregrounding power dynamics and learning from ethnographers, has been published in @BigDataSoc. journals.sagepub.com/doi/10.1177/20…
This paper in part details evidence that many data subjects are unaware of the research uses of their digital communications, and often express unhappiness and alarm. We map awareness to this 👇 spectrum and recommend researchers reflect on where their data-gathering methods fall.
Importantly, using “public” data does not relieve researchers from considerations of participant awareness, because awareness of creation is not necessarily awareness of research use. And we should reflect on both awareness and the power implications of our research.
I'm "teaching" a highly condensed version of my tech ethics & policy class on TikTok. Here's all the videos I made for the week on traditional ethical theory featuring "should Batman kill the Joker?" and ending with COVID-related moral messaging: instagram.com/tv/CT0epClhQRO/
You can follow along with this experiment to "teach a class on TikTok" here! Links to videos along with readings. I'm keeping pace with the actual @CUBoulder class I'm teaching. Apparently 2.5 hours of in-class time becomes 8 minutes of video. :) bit.ly/caseysclass
(I was going to just post the combined-topic longer videos on Twitter, but apparently Twitter has a 2-minute video length limit! So I'm trying out Instagram TV instead; hopefully that works ok!)
There has been a very upset/angry reaction to a paper published using tweets about mental health. I'm not RTing because I'd like to talk about this without drawing more attention to the researchers or the community. But it's an important research ethics cautionary tale. [Thread]
The paper is a qualitative analysis of tweets about mental health care. It includes paraphrased quotes of tweets that the researchers ensured were not searchable. The study was approved by an ethics review committee in the UK, and the paper cites the AOIR ethics guidelines.
The paper's ethics and consent section covers the above and notes that because tweets are public, consent was not required. The study also included a researcher with mental health lived experience. There do not appear to be any other statements regarding ethics.
The problem with workload (and thus work-life balance) in academia is:
You can always do more.
(A thread based on a recent personal epiphany.)
Unlike in many other kinds of jobs, when you are a (research-heavy) professor, no one tells you exactly what you need to do. Or even how much you need to do. There are things that are wonderful about this kind of freedom. But it also means that you can always be doing more.
How many research projects should you be doing at any given time? How many papers should you be writing? You might have a personal sense for this, maybe even a rule-of-thumb, maybe even a mentor giving you advice. But whatever N is, it COULD always be N+1.
I'm often struck by how much the foundation of science relies on individual integrity. And typically I feel it's pretty solid. But this kind of garbage is a result of publish-or-perish, bean counting, and the general incentive structures of academia. cacm.acm.org/magazines/2021…
Also, when this situation came to light a year ago, I went on a whole lengthy tweet-rant about it, so I won't repeat myself, but here you go. :)
Also, to clarify: "is a result of" above should probably be "exacerbated by," because obviously it only happens when there's a breakdown of individual integrity. Awful people gonna be awful, but there are incentive structures dictating the particular form of awful.
Unsurprisingly, the use of Reddit as a site of study or a data source is on the rise. The first 2 papers we encountered were published in 2010, with a jump to 17 in 2013 and 230 in 2019.
I also think it's really interesting that though computing and related disciplines make up the largest number of journals represented in our dataset of Reddit papers, medicine and health is next, even (just) above social science.