This thread describes a research study that utilized deception and may have resulted in harm. I believe university IRBs are failing researchers who have been taught (incorrectly) to rely on them in good faith to flag & help navigate all possible ethical concerns in a study. 🧵
The methods described here resemble resume audit studies and similar designs, and debate over the research ethics of this kind of deception has gone on for years, despite the unquestionable benefit of findings that have provided evidence of discrimination.
To my knowledge, these kinds of studies have fallen under the purview of ethics review boards. For example, this paper about the ethical issues calls on ethics review to be more critical in requiring justification for deception. journals.sagepub.com/doi/full/10.11…
A similar issue came up not long ago regarding researchers who submitted faulty patches into the Linux kernel to see what would happen. Their IRB (post-hoc) said this did not constitute human subjects research. theverge.com/2021/4/30/2241…
In this more recent study, the researchers sent emails to contact-email addresses on websites, and then used responses as data. I am absolutely baffled by how any IRB would not consider this to be human subjects research under the federal definition.
Apparently an IRB said this study wasn't human subjects research because they didn't collect personally identifiable information - but that's only half of the definition. There's an OR and the other part includes obtaining information through communication with the researcher.
You've all heard me say that IRBs are about compliance, not ethics. But at least for deception research an IRB requires justification and consideration of harms. It's actually a challenging kind of research to get approved. And this study could have benefited from that scrutiny.
For example, here is one IRB's guidelines for using deception in research. There are a number of issues that, if considered, might have influenced the study design. And help from someone knowledgeable about research ethics could have helped identify harms. campusirb.duke.edu/irb-policies/u…
Unfortunately, it appears to be the case that this research resulted in *actual* harm, financial and emotional. I have seen multiple reports that people who received these emails either (a) paid a lawyer to help; or (b) experienced anxiety that they might be in trouble.
I suspect that an underlying issue in thinking about the potential ethical issues (possibly for both the researchers and the IRB) was framing the emails as going to a *website* rather than to a *person*. Unfortunately, websites cannot answer emails; people do.
I have my own thoughts about changes that could have been made to this study design such that much of this harm could have been mitigated. I am also sympathetic that well-intentioned researchers might rely on an IRB evaluation in good faith.
So I think that this is something that we can all learn from, to inform education and our own research practices. IRB review is necessary but insufficient. And another example is, of course, use of public data which *actually* isn't human subjects research under the definition.
Researchers make mistakes. A good outcome when that happens is to learn from them, but also to help others learn from them. I would of course like to see fewer ethical controversies, but they are helpful for (hopefully) avoiding similar mistakes in the future.
This tumblr post about a middle school science class points to a huge worry I have about research integrity & incentives. TL;DR An entire class lied about results of an experiment b/c they assumed getting the "wrong" result would earn a failing grade. 🧵 luulapants.tumblr.com/post/663893051…
The crux of it is that incentive structures in academia - what can get published, what gets attention that results in citations or press - are often based on the *findings* rather than the quality of the science that gets you there.
So as not to repeat myself, here's a recent-ish thread I did on this issue, featuring the gay marriage data fabrication scandal, peer review, the replication crisis, etc.
Hi all! I was thinking of doing this same assignment for my Online Communities class next semester. If so I will provide a starter list of books to choose from (or let them pick others) - can you all give me your suggestions for books about online communities/social media?
To start here are a few I think would be great but there are SO MANY more:
Participatory Culture, Community and Play: Learning from Reddit by @hegemonyrules
My Tiny Life by @juliandibbell
Distributed Blackness by @DocDre
This Is Why We Can't Have Nice Things by @wphillips49
Also when I was in grad school I was trying to post photos on Flickr that people could use to illustrate blog posts and such, so here's a very small stack of books I had back then. Some of these I read in @asbruckman's class for my MS degree in 2005. :) flickr.com/photos/cfiesle…
The reaction to the TikTok school threats is fascinating (and troubling) because - despite school closures all over the country and a ton of media coverage - it appears there might not have even been threats in the first place. A threat didn't go viral - fear did. 🧵
Last night when I spoke to @washingtonpost about the TikTok school threats (there's some vague thoughts from me: washingtonpost.com/technology/202…) I pointed out that I hadn't actually seen any such videos (just reactions) and couldn't even find a description of what the originals were.
The journalist I spoke to had also only seen TikToks ABOUT the threat: teens expressing fear about going to school and encouraging others not to. Some possibilities: (1) there was a threat that did not circulate much but got a reaction that did; (2) the original was a fictional fear.
Facebook's re-branding/focus raises a question: if "connection is evolving," then how will the problems with connection via social media evolve? Coincidentally, today in my information ethics and policy class students did a speculative ethics exercise on this exact question. 🧵
Groups of students chose one of the issues raised in the facebook papers reports (algorithmic curation, misinformation, body image, content moderation, hate speech, etc.) and speculated about how that might manifest in the metaverse, and then about possible solutions/mitigation.
For example:
Disinformation. How might inaccurate perceptions of reality be even more severe in VR/AR? There have already been discussions about watermarks on deepfake video - should easy distinction between what's real versus not be a required feature?
Belatedly, I wanted to say a bit about what was discussed yesterday with the @sigchi Research Ethics Committee at #CSCW2021. What is the committee and what are folks in our community struggling with or thinking about when it comes to research ethics and processes?
The SIGCHI research ethics committee serves an advisory role on research ethics in the SIGCHI community. We can answer questions generally, but typically we come in during the review process to help reviewers who raise ethical issues. (We advise but do not make decisions.)
The most common outcome when we weigh in on ethical issues that arise during paper review is that reviewers ask authors for clarifications or more information or reflection in their paper. Here is a list of some general topics that have come up in recent years.
I just did a livestream answering questions about PhD applications and am really concerned about the number of people who think that having published papers is an absolute prerequisite for admission to a PhD program.
And I'll just say it: If as a professor or an admissions committee your major criterion is "author on a published paper in a major journal," then I hope you're looking real hard at the diversity of your student population, because I'm guessing you might have a problem.
There's a reply in this thread that's (I'm paraphrasing): profs want applicants who've already published b/c they see PhD students as paper producing factories.
I'm concerned by the number of likes it has :( Are there so many profs like this that this is a dominant perception?