Paul Scharre
Mar 16, 2020
Join @CNAStech @ 11am ET today for a live Twitter chat on COVID-19 and #tech #natsec.

But first, let's kick off the first virtual week by talking about #Westworld! What did you think of the first episode?
Did you spend the weekend binge-watching seasons 1&2 to catch up, @MartijnRasser? 😁🤠
I loved the discussion about whether we’re living in a simulation. That just seems so timely. I think it was the Sarah Palin singing as a masked bear last week that pushed me over the edge. We’re definitely in a simulation and the people controlling it are just trolling us now.


More from @paul_scharre

Mar 30, 2023
@FLIxrisk has released an open letter calling for a moratorium on training large AI models more powerful than GPT-4.

Their specific proposal is vague and not very realistic, but it's a significant development nonetheless. [THREAD]

futureoflife.org/open-letter/pa…
Large AI models like ChatGPT and GPT-4 are inherently dual use.

@OpenAI's GPT-4 system card walks through several possible misuse risks, including for hacking, disinformation, and proliferation of unconventional weapons (e.g., chem/bio). cdn.openai.com/papers/gpt-4-s…
OpenAI assesses that GPT-4's cyber and chem/bio capabilities are limited today, but AI progress is discontinuous and large models frequently show emergent capabilities.

Dangerous capabilities are likely coming and we may not have much advance warning.

arxiv.org/pdf/2202.07785…
Mar 16, 2023
In the long run, detectors will fail.

Any method of detection can be folded into the next generation of AI.

Fakes will converge towards reality.
Watermarking will be key to distinguish fakes from reality.

Responsible actors will ensure their synthetic media is watermarked.

But not everyone will act responsibly. And generative AI is so widespread that there will be irresponsible actors.
Real media will need to adapt to prove authenticity: watermarking, metadata, chain-of-custody, etc.

Seeing is no longer believing.
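To make the authenticity point concrete, here is a minimal, hypothetical sketch of binding a provenance claim to a media file's hash and checking it later. It is not from the thread: the record format, function names, and key are invented for illustration, and it uses a shared-key HMAC only to stay self-contained, whereas real provenance schemes (e.g., C2PA) rely on public-key signatures and richer metadata.

```python
# Minimal sketch (illustrative only): attach a provenance record to a media
# file and verify it later. Real provenance standards use public-key
# signatures; a shared-key HMAC is used here only to keep the example
# self-contained and runnable with the standard library.
import hashlib
import hmac
import json

def make_provenance_record(media_bytes: bytes, key: bytes, source: str) -> dict:
    """Bind a source claim to the media's SHA-256 hash with an HMAC tag."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    tag = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"source": source, "sha256": digest, "tag": tag}

def verify_provenance(media_bytes: bytes, key: bytes, record: dict) -> bool:
    """Check the media matches the record and the record was not forged."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected_tag = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected_tag, record["tag"])

if __name__ == "__main__":
    key = b"demo-signing-key"          # hypothetical key, for illustration only
    photo = b"...raw image bytes..."   # stand-in for real media content
    record = make_provenance_record(photo, key, source="newsroom-camera-01")
    print(json.dumps(record, indent=2))
    print("authentic:", verify_provenance(photo, key, record))       # True
    print("tampered:", verify_provenance(photo + b"x", key, record)) # False
```

The point of the sketch is simply that, once fakes converge toward reality, authenticity becomes something you verify cryptographically rather than something you judge by eye.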
Mar 9, 2023
Data is a vital resource for machine learning. Does China have a data advantage?

Not so fast. China's alleged authoritarian advantage in data is overstated.

THREAD
China's supposed data advantage comes from its massive population (1.4 billion people!) and rapidly growing surveillance state.

The CCP is building a surveillance system unparalleled in the world.

But that doesn't necessarily translate to a data advantage.
For one, company user base matters more than national population. U.S. companies have global reach.

User base: Facebook 2.7 billion; YouTube 2+ billion; WeChat 1.2 billion.

Other than TikTok, Chinese platforms have struggled to gain a foothold outside China.
Feb 27, 2023
China's model of AI-enabled repression is proliferating around the world, threatening human freedom.

Here's what the U.S. and other democratic nations can do to push back.
[THREAD]
latimes.com/opinion/story/…
China is building a new model of tech-enabled authoritarianism at home.

The Chinese Communist Party has deployed 500 million surveillance cameras to monitor Chinese citizens. They increasingly use AI tools like facial and gait recognition.
China is exporting its model of digital authoritarianism abroad. At least 80 countries use Chinese surveillance and policing technology.
(Map data courtesy of @SheenaGreitens. Map by @CNASdc)
Feb 14, 2023
UFO jokes aside, I’m troubled that the U.S. military is shooting down aerial objects in U.S. airspace without positively ID’ing them first.

How long before they accidentally shoot down an aircraft?
“We don’t know what it is; shoot it down” seems like a very loose ROE for domestic U.S. airspace in peacetime.
Republicans’ political point-scoring criticizing the administration for acting prudently and waiting to shoot down the first balloon was harmful ...
Feb 14, 2023
DARPA gave me incredible access for Army of None, but there was one program they stiff-armed me on:

TRACE, a DARPA program to use deep neural nets to improve automatic target recognition.

For Four Battlegrounds, I got the scoop! [THREAD]
TRACE (Target Recognition and Adaptation in Contested Environments) was a DARPA program to improve automatic target recognition (ATR).

It was one of the first DoD programs to capitalize on the deep learning revolution.
TRACE used neural networks to improve automatic target recognition, which DoD saw as the holy grail of what deep learning could deliver.

The fact that DARPA wouldn't talk about it only intrigued me more!
