Arthur Holland Michel
Writer. Friendly reminders about spooky futures.
Jan 26, 2023
Ok so it turns out that your WiFi router might soon be able to spy on you 🧵

In a story for @TheEconomist, I wrote about recent experiments by a team at Carnegie Mellon that demonstrated how to turn the WiFi signals in your home into a detailed 3D digital portrait of your movements.
economist.com/science-and-te…
Sep 14, 2022
This site lets you search the giant database behind image-making AI systems like Stable Diffusion. It's supposed to be for artists to see if their art is in the data, but it also shows the sheer volume of NSFW/toxic stuff that's behind these AI tools.
haveibeentrained.com

E.g., I just searched the same terms that, when used as prompts for Stable Diffusion and DALL-E 2, revealed biases.

Terms like "nurse," "secretary," and "flight attendant."

I'm not exaggerating when I say that more than half of the images that came back were pornographic.
Jun 7, 2022
Today in "AI Ethics." A YouTuber trained a language model on millions of 4chan posts and released it publicly. It has already been downloaded 1.5k times. One user, @KathrynECramer, tested it a few hrs ago by prompting it with a "benign tweet" from her feed. Its output: the N-word.

The platform that is hosting the model, @huggingface, has decided to keep it open (with a couple of restrictions) because it will be "useful for the field to test what a model trained on such data could do & how it fared compared to other [language models]."
May 24, 2022
For the next few days, our timelines are gonna be full of cutesy images made by a new Google AI called #Imagen.

What you won't see are any pictures of Imagen's ugly side. Images that would reveal its astonishing toxicity. And yet these are the real images we need to see. 🧵

How do we know about these images? Because the team behind Imagen has acknowledged this dark side in a technical report, which you can read for yourself here. Their findings and admissions are troubling, to say the least.
gweb-research-imagen.appspot.com/paper.pdf
May 4, 2022
Meta has released a huge new AI language model called OPT-175B and made it available to a broad array of researchers. It also released a technical report with some truly extraordinary findings about just how dangerous this machine can be. 🧵

#AI #OPT175B

Here's the report. Everyone should read it.
arxiv.org/pdf/2205.01068…
Apr 8, 2022
With all the cute, quirky #dalle2 AI images that have been circulating these last few days, I wanted to share some other images* that DALL-E 2 also made that you may not have seen.

*Warning: these are quite distressing

1/ 🧵

2/ I hope OpenAI is cool with me reposting them. They are all available here in OpenAI’s report on the system's “Risks and Limitations.”
github.com/openai/dalle-2…
Mar 18, 2022
With reports that kamikaze drones are entering the fray in Ukraine, I'd urge people not to spend too much time debating whether or not they are "autonomous weapons."
I was really hoping to avoid adding another thread to your TL, but let me explain.

Here's the rub. These systems probably have some capacity to be used in ways that *would* fit most definitions of "lethal autonomous weapon." BUT they also can be used in ways that would *not* qualify them as autonomous weapons by these same definitions.
May 18, 2021
🧵Yesterday @UNIDIR published my new report about how autonomous military systems will have failures that are both inevitable and impossible to anticipate. Here's a mega-thread on how such "known unknown" accidents arise, and why they're such a big deal.
unidir.org/press-release/…

*Deep breath*
Ok. For our purposes here today, think of autonomous systems as data processing machines.
i.e. when they operate in the real world, they take in data from the environment and use that data to "decide" on the appropriate course of action.
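To make that framing concrete, here's a toy sketch of a sense → decide → act loop (purely illustrative on my part, with made-up sensor values and function names; it is not drawn from the UNIDIR report or any real system):

```python
# Toy sketch of an autonomous system as a data-processing loop.
# All names (read_sensors, pick_action, actuate) are hypothetical placeholders.

def read_sensors() -> dict:
    # Take in data from the environment (e.g. ranges, detections).
    return {"obstacle_distance_m": 4.2}

def pick_action(observation: dict) -> str:
    # "Decide" on a course of action from the observed data.
    # Real systems put trained models here; inputs the designers never
    # anticipated can yield decisions they never anticipated either.
    if observation["obstacle_distance_m"] < 5.0:
        return "turn_left"
    return "continue"

def actuate(action: str) -> None:
    # Carry out the chosen action in the real world.
    print(f"executing: {action}")

# One pass through the loop; a deployed system would run it continuously.
actuate(pick_action(read_sensors()))
```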
May 13, 2021
A cornerstone of the ICRC's proposed rules for #LAWS is a ban on "unpredictable weapon systems," i.e. systems whose "effects cannot be sufficiently understood, predicted and explained."

So here's a quick thread🧵 on predictability and understandability.
icrc.org/en/document/ic…

First, what is "Predictability"? Well, as it happens, there are different types of predictability.
1. Technical (un)predictability
2. Operational (un)predictability
3. The (un)predictability of effects/outcomes.
Sep 18, 2020
Lots to unpack from this major test of a previously very quiet system to automate the "kill chain" leading up to a strike using...yep, you guessed it, Artificial Intelligence. (1/5)
breakingdefense.com/2020/09/kill-c…

2/5 Basically, this technology enables drones to autonomously feed intel directly to algorithms that identify threats and suggest how to destroy those targets. All the humans have to do is review the recommendations and pull the trigger.