Arthur Holland Michel
Sep 18, 2020 · 5 tweets
Lots to unpack from this major test of a previously very quiet system to automate the "kill chain" leading up to a strike using...yep, you guessed it, Artificial Intelligence. (1/5)
breakingdefense.com/2020/09/kill-c…
2/5 Basically, this technology enables drones to autonomously feed intel directly to algorithms that identify threats and suggest how to destroy those targets. All the humans have to do is review the recommendations and pull the trigger.
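To make that division of labor concrete, here is a minimal, purely illustrative Python sketch of the pattern: software filters sensor detections into recommendations, and a human still has to approve each one. Every name, type, and threshold below is hypothetical; none of it reflects the actual system in the article.

```python
# Purely illustrative sketch of a "human-on-the-loop" recommendation
# pipeline: the machine proposes, the human disposes. All names and
# thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str     # which platform produced the intel
    label: str         # what the model thinks it detected
    confidence: float  # model confidence in [0.0, 1.0]

def recommend(detections, threshold=0.9):
    """Machine step: keep only high-confidence detections as candidates."""
    return [d for d in detections if d.confidence >= threshold]

def human_review(recommendations):
    """Human step: nothing proceeds without an explicit yes per item."""
    approved = []
    for rec in recommendations:
        answer = input(f"Approve {rec.label} from {rec.sensor_id} "
                       f"({rec.confidence:.2f})? [y/N] ")
        if answer.strip().lower() == "y":
            approved.append(rec)
    return approved
```

The speedups reported in the next tweet come from compressing the machine steps; the review step is the part that stays human.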
3/5 The implications of this automated "kill chain" technology are massive. In another recent test, the Army used a similar system to shorten an artillery kill chain from 10 minutes to just 20 SECONDS. breakingdefense.com/2020/09/target…
4/5 This program is collaborating with Project Maven, and will soon be tested with JADC2, the Air Force's "Internet of Things of the military," which also heavily leverages AI.

af.mil/News/Article-D…
5/5 As I've said many times before: even though these are not autonomous weapons in the strict sense of the term, that doesn't mean we shouldn't discuss them closely, and seriously.

More from @WriteArthur

Jan 26, 2023
Ok so it turns out that your WiFi router might soon be able to spy on you 🧵
In a story for @TheEconomist, I wrote about recent experiments by a team at Carnegie Mellon that demonstrated how to turn the WiFi signals in your home into a detailed 3D digital portrait of your movements.
economist.com/science-and-te…
The science here is pretty fascinating...
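A crude way to see the underlying idea: bodies moving through a room perturb the radio channel, and that perturbation shows up as variance in the channel state information (CSI) a WiFi receiver already measures. The CMU work maps CSI to full 3D body pose with a deep network; the sketch below only shows the basic motion signal, and assumes you already have CSI captured as complex samples (which normally requires a modified WiFi driver).

```python
# Minimal sketch of the core signal idea behind WiFi sensing: motion
# in a room raises the variance of CSI amplitude over time, while an
# empty room stays near the noise floor. The threshold is something
# you would calibrate per room; it is arbitrary here.
import numpy as np

def motion_score(csi_window: np.ndarray) -> float:
    """csi_window: complex array of shape (time, subcarriers)."""
    amplitude = np.abs(csi_window)  # magnitude per subcarrier
    return float(np.var(amplitude, axis=0).mean())

# Hypothetical usage with synthetic CSI in place of a real capture:
rng = np.random.default_rng(0)
window = rng.normal(size=(100, 64)) + 1j * rng.normal(size=(100, 64))
print("motion detected" if motion_score(window) > 1.5 else "still")
```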
Sep 14, 2022
This site lets you search the giant database behind image-making AI systems like Stable Diffusion. It's supposed to be for artists to see if their art is in the data, but it also shows the sheer volume of NSFW/toxic stuff that's behind these AI tools.
haveibeentrained.com
E.g., I just searched the same terms that, when used as prompts for Stable Diffusion and DALL-E 2, revealed biases.

Terms like "nurse," "secretary," and "flight attendant."

I'm not exaggerating when I say that more than half of the images that came back were pornographic.
Also, turns out the data include lots of memes. Like, a ton of memes.
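If you'd rather poke at the data without the website: LAION distributes its metadata as parquet shards, so a rough keyword search is a few lines of pandas. The filename below is hypothetical, and the column names (TEXT, NSFW) follow the published LAION-400M metadata release, so check them against whatever shard you actually download.

```python
# Rough sketch of the same kind of search run locally, assuming you
# have downloaded one of LAION's metadata parquet shards. Filename is
# hypothetical; column names follow the LAION-400M release but should
# be verified against your shard's schema.
import pandas as pd

df = pd.read_parquet("laion400m-meta-part-00000.parquet")

for term in ["nurse", "secretary", "flight attendant"]:
    hits = df[df["TEXT"].str.contains(term, case=False, na=False)]
    flagged = hits[hits["NSFW"] != "UNLIKELY"]  # LAION's own NSFW tag
    print(f"{term}: {len(hits)} captions, {len(flagged)} flagged NSFW/unsure")
```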
Jun 7, 2022
Today in "AI Ethics." A YouTuber trained a language model on millions of 4chan posts and released it publicly. It has already been downloaded 1.5k times. One user, @KathrynECramer, tested it a few hrs ago by prompting it with a "benign tweet" from her feed. Its output: the N-word.
The platform that is hosting the model, @huggingface, has decided to keep it open (with a couple of restrictions) because it will be "useful for the field to test what a model trained on such data could do & how it fared compared to other [language models]."
@huggingface added, "However, we are still just scratching the surface when it comes to ethics reviews" and that it "would love to hear more feedback from the community to improve or correct mistakes if needed!"
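For context on how visible those numbers are: download counts for anything hosted on the Hub are public, and you can query them with the huggingface_hub client. The repo id below is a placeholder, not the model in question.

```python
# Sketch of checking a hosted model's public download count via the
# huggingface_hub client. The repo id is a placeholder.
from huggingface_hub import HfApi

info = HfApi().model_info("some-user/some-model")
print(f"{info.id}: {info.downloads} downloads")
```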
May 24, 2022
For the next few days, our timelines are gonna be full of cutesy images made by a new Google AI called #Imagen.

What you won't see are any pictures of Imagen's ugly side. Images that would reveal its astonishing toxicity. And yet these are the real images we need to see. 🧵
How do we know about these images? Because the team behind Imagen has acknowledged this dark side in a technical report, which you can read for yourself here. Their findings and admissions are troubling, to say the least.
gweb-research-imagen.appspot.com/paper.pdf
First, the researchers did not conduct a systematic study of the system's potential for harm. But even in their limited evaluations they found that it "encodes several social biases and stereotypes."
May 4, 2022
Meta has released a huge new AI language model called OPT-175B and made it available to a broad array of researchers. It also released a technical report with some truly extraordinary findings about just how dangerous this machine can be. 🧵

#AI #OPT175B
Here's the report. Everyone should read it.
arxiv.org/pdf/2205.01068…
Bottom line is this: across tests, they found that "OPT-175B has a high propensity to generate toxic language and reinforce harmful stereotypes."
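The 175B-parameter model itself is gated behind a research access request, but Meta released much smaller OPT checkpoints openly. A quick way to sample from one of those (facebook/opt-125m, small enough for a laptop CPU) with the transformers library; the prompt and sampling settings below are arbitrary:

```python
# Sampling from one of OPT's openly released small checkpoints with
# the transformers pipeline API. Prompt and settings are arbitrary.
from transformers import pipeline

opt = pipeline("text-generation", model="facebook/opt-125m")
out = opt("Large language models tend to",
          max_new_tokens=30, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```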
Apr 8, 2022
With all the cute, quirky #dalle2 AI images that have been circulating these last few days, I wanted to share some other images* that DALL-E 2 also made that you may not have seen.

*Warning: these are quite distressing

1/ 🧵
2/ I hope OpenAI is cool with me reposting them. They are all available here in OpenAI’s report on the system's “Risks and Limitations.” github.com/openai/dalle-2…
3/ Here’s what it does when told to make an image of “nurse.” Notice any patterns? Anything missing? Image
