Paul Scharre
Feb 8 · 5 tweets · 10 min read
We went line-by-line through the new DoD policy on autonomous weapons so you don't have to!

The new CNAS Noteworthy on DoD Directive 3000.09, "Autonomy in Weapon Systems," analyzes the changes from the prior policy and what they mean for the U.S. military.
cnas.org/press/press-no…
Special thanks to CNAS project assistant Noah Greene for his research assistance.
And a huge shout out to @ApAnnagator @CNASdc for developing the new Noteworthy format!

More from @paul_scharre

Feb 4
One of the challenging things about international crises like the #balloon is how much ambiguity there can be about the drivers of a state’s actions.

We can see the balloon, but what does it *mean* about China’s intentions?

There are many plausible explanations:

THREAD
(A) Intentional – Everything is going exactly as Xi intended. He wanted to agitate Washington and spike the Blinken visit. (Seems unlikely)
(B) Miscalculation – Balloon is exactly where Xi wants it to be, but they misjudged Washington’s response. (Most likely scenario?)
Jan 25
The new, updated DoD Directive 3000.09, "Autonomy in Weapon Systems" is out.
esd.whs.mil/Portals/54/Doc…

Quick takes below on what's in and what's out. [THREAD]
Bottom line: The DoD just released an updated Directive (an official policy document) that guides the U.S. military's policies on autonomy in weapons (e.g., lethal autonomous weapons).
What drove this? Internal DoD bureaucratic guidelines actually force the Department to either renew, update, or cancel *any* Directive within 10 years. So this update was driven by a fixed timeline, not necessarily by any external events or internal policy shifts.
Jan 25
The All-Volunteer Force turns 50 this year. What does the war in Ukraine tell us about the future of the All-Volunteer Force (AVF)?

[THREAD]
War is, fortunately, a rare occurrence. In peacetime, militaries build theories, implicitly and explicitly, of what future wars will look like that inform force design and force management.
In war, militaries find out – sometimes quickly and painfully – whether those theories were right.
Jan 23
The United States has been on a steady path towards selectively decoupling U.S.-China tech ties. That's a mistake.

Decoupling alone will not secure U.S. interests. [THREAD]
While overall U.S.-China trade ties are strong, U.S. policymakers have been steadily taking steps to pull apart the deeply integrated U.S. and Chinese tech ecosystems.
bloomberg.com/news/articles/…
Huawei's dominance in global 5G markets, and the risk that close allies might rely on Huawei equipment for their telecom networks, was a major wake-up call for Washington.
Jan 21
I sometimes get a skeptical 🤔 response to concerns I've raised about countries falling into the trap of a "race to the bottom" on safety for military AI systems.

But it's worth pointing out that these competitive dynamics are happening *now* in the commercial sector. [THREAD]
Excellent new reporting by @nicoagrant @nytimes on how Google "will 'recalibrate' the level of risk it is willing to take when releasing [AI] technology" in response to OpenAI's ChatGPT
(h/t @ProfNoahGian @ESYudkowsky)
nytimes.com/2023/01/20/tec…
Predictably, when there are potentially hundreds of billions of dollars at stake, companies are willing to take more risk in fielding a technology that continues to have a host of difficult, unresolved problems (bias, toxicity, and just plain making shit up).
Jan 21
Legal battles are brewing over generative AI.

Artists are suing Stability AI and Midjourney, alleging copyright infringement for including copyrighted images in their training data without permission.

But this isn't the David-and-Goliath story you might think it is. [THREAD]
At the core of the lawsuit is whether or not including copyrighted images in a training dataset is "fair use."

@jjvincent @verge breaks it down in this excellent series of articles:
theverge.com/2023/1/16/2355…
The lawsuit over art generators mirrors a similar lawsuit over code-generating models. The same lawyers are representing plaintiffs in a class-action suit against Microsoft, OpenAI, and GitHub over Copilot.
theverge.com/2022/11/8/2344…