One of our classes has been the victim of some really intense zoombombing, and all I can think about is that this is exactly why ethical speculation around unintended consequences and bad actors is a CRITICAL part of the design process for any new technology. [Thread]
Not long ago, there was a viral tweet about how Google Drive could be used to harass people. Of course it's a totally bizarre use case. I remember responses like "how on earth could the designers have anticipated that awful people would use it that way?!"…
Yes, you should anticipate that awful people will use your tech to be awful. You should be sitting in a room and imagining EVERY AWFUL THING that people might do. Even if it seems like the most fringe, bizarre use case in the world. THINK OF ALL OF THEM. And then fix them.
Tip: You know who might be the best people to think about ways that tech might be used to harass? People who are harassed a lot. Marginalized folks. Vulnerable folks. Women and people of color and queer people and... oh, right, all the people who are underrepresented in tech.
I bet there are a LOT of people who, if asked, could have imagined "oh yeah if Zoom became really popular people would start going into public Zoom meetings and screensharing pornography and shouting racist, sexist things."
Yes, there are some ways to help prevent Zoombombing, but you would be AMAZED at the workarounds that people have found even when instructors are trying to be as diligent as they possibly can. Also these methods are not intuitive. And they get destroyed by social engineering.
I'm not saying that Zoom did a terrible job here. There definitely are some design features that suggest some thought went into this. But the magnitude of this problem should show you how insanely important this part of the design process is.
I hear a lot "How am I possibly supposed to design for potential negative consequences of my tech? I can't see the future!" Well, learn to speculate. Practice. I've been thinking about how to teach people how to do this, I hope it works.…
A #CSCW2019 paper led by @aaroniidx was about challenges in regulating behavior on Discord. One of the first things people do with new tech or forms of communication is to figure out how to be awful--and then you have to figure out how to stop them.…
Policy/moderation is super important but there are also things that you can do with the *design* of a technology that can make it much more difficult to use it in the ways you don't want people to use it (like zoombombing classes to blast pornography). ADD SOME FRICTION.
If you're creating user personas as part of your process, and those personas don't include "user stalking their ex", "user who wants to traumatize vulnerable folks", and "user who thinks it's funny to show everyone their genitals" then you're missing an important design step.
And if your attitude is "yes there will be problems like this but we'll fix them once we know what they are," then you are accumulating ETHICAL DEBT, which @LifeofNoods talks about here for @AllTechIsHuman, and which we've been thinking a lot about.…
To sum all this up: Bad people will figure out (surprising) ways to use tech for bad things. Anticipating these and designing to mitigate them is key. And a diverse team who are all thinking about ethics and harm and doing so creatively will be the best at figuring that out.
In case you were wondering, in the context of this thread, whether Zoom actually did consider potential harms, Zoom's CEO literally told @natashanyt that they never considered the possibility of misuse of their platform.
I'm not trying to vilify anyone here, but it seems that when it comes to big tech, the clearest path to change is often controversy, and this is how we learn from mistakes of the past. My hope is that the more scared companies are of these problems, the less ethical debt they'll accrue.
Thread by Casey Fiesler, PhD, JD, geekD
