Lots to unpack from this major test of a previously little-publicized system to automate the "kill chain" leading up to a strike using... yep, you guessed it, Artificial Intelligence. (1/5) breakingdefense.com/2020/09/kill-c…
2/5 Basically, this technology enables drones to autonomously feed intel directly to algorithms that identify threats and suggest how to destroy those targets. All the humans have to do is review the recommendations and pull the trigger.
3/5 The implications of this automated "kill chain" technology are massive. In another recent test, the Army used a similar system to shorten an artillery kill chain from 10 minutes to just 20 SECONDS. breakingdefense.com/2020/09/target…
4/5 This program is working with Project Maven, and will soon be tested with the Air Force's "Internet of Things of the military" (JADC2), which also heavily leverages AI.
5/5 As I've said many times before: even though these are not autonomous weapons in the strict sense of the term, that doesn't mean we shouldn't examine them closely, and seriously.
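For a sense of what "human on the loop" means in software terms, here is a minimal, purely illustrative sketch of the pipeline described in tweet 2/5: sensor detections in, algorithmic strike recommendations out, a human approval step before anything fires. Every name and threshold below is hypothetical; this reflects the general architecture, not the actual system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str      # which drone/sensor produced this track
    label: str          # what the classifier thinks the object is
    confidence: float   # classifier score in [0, 1]

def recommend_targets(detections, threshold=0.9):
    """Algorithmic step: filter raw detections into strike recommendations."""
    return [d for d in detections if d.confidence >= threshold]

def human_review(rec: Detection) -> bool:
    """Human-on-the-loop step: a person must approve every recommendation."""
    answer = input(f"Approve strike on '{rec.label}' "
                   f"(conf={rec.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    feed = [
        Detection("drone-1", "artillery piece", 0.95),
        Detection("drone-2", "civilian truck", 0.41),
    ]
    for rec in recommend_targets(feed):
        if human_review(rec):
            print(f"Strike authorized on {rec.label}")  # the "pull the trigger" step
        else:
            print(f"Strike rejected on {rec.label}")
```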
Ok so it turns out that your WiFi router might soon be able to spy on you 🧵
In a story for @TheEconomist, I wrote about recent experiments by a team at Carnegie Mellon that demonstrated how to turn the WiFi signals in your home into a detailed 3D digital portrait of your movements. economist.com/science-and-te…
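The basic recipe in WiFi-sensing research is to treat channel state information (CSI) — how each subcarrier's amplitude and phase gets distorted by bodies in a room — as the input to a learned pose estimator. Here is a toy PyTorch sketch of that idea; the tensor shapes and architecture are my assumptions for illustration, not the CMU team's system.

```python
import torch
import torch.nn as nn

# Toy CSI tensor: (batch, antennas, subcarriers, time samples).
# Real CSI comes from the WiFi chipset; here we just use random data.
csi = torch.randn(1, 3, 30, 100)

class CsiPoseNet(nn.Module):
    """Maps a window of CSI measurements to 2-D body keypoints."""
    def __init__(self, n_keypoints=17):
        super().__init__()
        self.n_keypoints = n_keypoints
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_keypoints * 2)

    def forward(self, x):
        feats = self.encoder(x).flatten(1)
        return self.head(feats).view(-1, self.n_keypoints, 2)  # (batch, keypoints, xy)

model = CsiPoseNet()
print(model(csi).shape)  # torch.Size([1, 17, 2])
```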
This site lets you search the giant database behind image-making AI systems like Stable Diffusion. It's supposed to be for artists to see if their art is in the data, but it also shows the sheer volume of NSFW/toxic stuff that's behind these AI tools. haveibeentrained.com
E.g., I just searched the same terms that, when used as prompts for Stable Diffusion and DALL-E 2, revealed biases.
Terms like "nurse," "secretary," and "flight attendant."
I'm not exaggerating when I say that more than half of the images that came back were pornographic.
Also, turns out the data include lots of memes. Like, a ton of memes.
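If you'd rather poke at the underlying index programmatically than through the site, the open-source clip-retrieval client can query a hosted LAION index in a similar way. A sketch — the endpoint URL and index name are my assumptions and may have changed:

```python
# pip install clip-retrieval
from clip_retrieval.clip_client import ClipClient

# Hosted LAION-5B kNN index (assumed endpoint; availability varies over time).
client = ClipClient(
    url="https://knn.laion.ai/knn-service",
    indice_name="laion5B-L-14",
    num_images=40,
)

results = client.query(text="nurse")
for r in results[:5]:
    # Each hit is a dict with (at least) a caption, image URL, and similarity score.
    print(r.get("similarity"), r.get("caption"), r.get("url"))
```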
Today in "AI Ethics." A YouTuber trained a language model on millions of 4chan posts and released it publicly. It has already been downloaded 1.5k times. One user,@KathrynECramer, tested it a few hrs ago by prompting it with a "benign tweet" from her feed. Its output: the N-word.
The platform that is hosting the model, @huggingface, has decided to keep it open (with a couple of restrictions) because it will be "useful for the field to test what a model trained on such data could do & how it fared compared to other [language models]."
@huggingface added, "However, we are still just scratching the surface when it comes to ethics reviews" and that it "would love to hear more feedback from the community to improve or correct mistakes if needed!"
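For context on how low the barrier is: downloading and sampling from any openly hosted model on @huggingface takes a handful of lines with the standard transformers API. A generic sketch — the model id below is a stand-in, not the 4chan model:

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in; any openly hosted causal LM id works the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the model the same way the user described above did with a tweet.
inputs = tokenizer("The weather today is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```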
For the next few days, our timelines are gonna be full of cutesy images made by a new Google AI called #Imagen.
What you won't see are any pictures of Imagen's ugly side. Images that would reveal its astonishing toxicity. And yet these are the real images we need to see. 🧵
How do we know about these images? Because the team behind Imagen has acknowledged this dark side in a technical report, which you can read for yourself here. Their findings and admissions are troubling, to say the least. gweb-research-imagen.appspot.com/paper.pdf
First, the researchers did not conduct a systematic study of the system's potential for harm. But even in their limited evaluations they found that it "encodes several social biases and stereotypes."
Meta has released a huge new AI language model called OPT-175B and made it available to a broad array of researchers. It also released a technical report with some truly extraordinary findings about just how dangerous this machine can be. 🧵
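The 175B-parameter weights themselves were gated behind a research-access request, but Meta also published a family of smaller OPT checkpoints openly, so anyone can reproduce the basic setup. A sketch using the 125M variant (to my knowledge the smallest public checkpoint):

```python
# pip install transformers torch
from transformers import pipeline

# facebook/opt-125m is an openly released OPT checkpoint;
# OPT-175B uses the same architecture at far larger scale.
generator = pipeline("text-generation", model="facebook/opt-125m")

out = generator("Large language models can be dangerous because",
                max_new_tokens=40)
print(out[0]["generated_text"])
```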
With all the cute, quirky #dalle2 AI images that have been circulating these last few days, I wanted to share some other images* that DALL-E 2 also made that you may not have seen.
*Warning: these are quite distressing
1/ 🧵
2/ I hope OpenAI is cool with me reposting them. They are all available here in OpenAI’s report on the system's “Risks and Limitations.” github.com/openai/dalle-2…
3/ Here’s what it does when told to make an image of “nurse.” Notice any patterns? Anything missing?
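If you want to run the same kind of spot-check yourself, the pattern is simple: generate a batch of images for a neutral occupation prompt and inspect the distribution. A sketch against OpenAI's public image API — the model name and parameters reflect the current public API, which postdates the report discussed above:

```python
# pip install openai  (requires OPENAI_API_KEY in the environment)
from openai import OpenAI

client = OpenAI()

# Generate a small batch for a neutral prompt, then review the results
# for demographic patterns -- the same spot-check described above.
response = client.images.generate(
    model="dall-e-2",
    prompt="nurse",
    n=8,
    size="512x512",
)
for i, image in enumerate(response.data):
    print(i, image.url)  # download and review each image manually
```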