Facebook's rebranding/refocus raises a question: if "connection is evolving," then how will the problems with connection via social media evolve? Coincidentally, today in my information ethics and policy class students did a speculative ethics exercise on this exact question. 🧵
Groups of students chose one of the issues raised in the Facebook Papers reporting (algorithmic curation, misinformation, body image, content moderation, hate speech, etc.) and speculated about how it might manifest in the metaverse, and then about possible solutions and mitigations.
For example:
Disinformation. How might inaccurate perceptions of reality be even more severe in VR/AR? There have already been discussions about watermarking deepfake video - should an easy way to distinguish what's real from what isn't be a required feature?
Screen time and design for time-on-platform.
Could more immersion exacerbate "social media addiction"? What will the VR version of "infinite scroll" be? What techniques will be used to try to keep us online? Will there need to be methods to encourage people to log off?
Data privacy.
What kind of biometric data might become commonplace to collect when connecting to the metaverse? If we think really far out... at what point do we get brain-computer interfaces, and what kind of data privacy laws are we going to need?
The digital divide.
Access to the internet is already a source of inequality. Imagine if the same benefits of social media use are now behind the paywall of not just a mobile phone - but an Oculus Rift.
Identity theft through impersonation.
There's already a problem with people creating fake social media accounts impersonating others (this happened to me!) - how do we deal with VR "clones"? But also, how do we deal with the trade-offs between safety, identity authentication, and privacy?
Content moderation.
How will all of the current challenges of social media moderation (bias, PTSD for human moderators, etc.) translate into the metaverse? Because they definitely will.
(There's actually some really interesting speculation about this in Ready Player Two.)
Body image.
Given what we know about Instagram and body image, especially for young girls, you can imagine how this might be exacerbated in VR/AR - what if "filters" become more closely connected to in-person interaction?
Unfair competition.
There's already concern that Facebook "owns" too much of the social media space. But owning the metaverse *could* look more like owning the internet if it really takes off. Do we need an antitrust crackdown now?
Social media embedded into society.
Imagine the "Nosedive" Black Mirror episode but with AR added into the mix. If we are "in" social media for even more of our in-person lives, how might it become even more deeply embedded?
Targeted advertising in the metaverse.
What new information about us might be added to models for targeted advertising, and also HOW might things be advertised to us, e.g. in AR?
(Remember @radicalbytes' Google Glass-based Admented Reality remix?)
It was a really interesting discussion, but too brief - and if I'd known the announcement was going to drop later today, I'd have spent more time on it! It was design ethics week, so we talked about speculation as part of technology design, à la ethical debt. wired.com/story/opinion-…
The Meta webpage links to this page on Responsible Innovation that lists four principles. They could be summarized as transparency, control, inclusivity, and responsibility. about.facebook.com/realitylabs/re…
I have about ten ideas for op-eds that combine the Facebook Papers with this announcement with plot points from Snow Crash and Ready Player One.
And of course, as soon as I posted about this on TikTok, someone mentioned @hankgreen's A Beautifully Foolish Endeavor, too. :) It's actually a great example compared to the other two metaverse/VR books because it's more near-future.
As I sometimes say, don't focus TOO much on science fiction because we need to be thinking more about e.g. current actual AI ethics problems than preparing for the robot wars. But as Asimov said, science fiction writers foresee the inevitable catastrophes. So worth a glance. :)
Also, if you need to provide a quick “what…?” explanation for someone about Meta/the metaverse, here is a TikTok.
Final thought on speculative ethics for the push to social AR/VR (and other social media futures): some science fiction to prepare you for the metaverse dystopia.
Snow Crash by Neal Stephenson
Feed by M.T. Anderson
Catfishing on CatNet by Naomi Kritzer
The Circle by Dave Eggers
After On by Rob Reid
Ready Player One/Two by Ernest Cline
A Beautifully Foolish Endeavor by Hank Green
(a list based on what I have physical copies of :) )
