One of the challenging things about international crises like the #balloon is how much ambiguity there can be about the drivers of a state’s actions.
We can see the balloon, but what does it *mean* about China’s intentions?
There are many plausible explanations:
THREAD
(A) Intentional – Everything is going exactly as Xi intended. He wanted to agitate Washington and spike the Blinken visit. (Seems unlikely)
(B) Miscalculation – Balloon is exactly where Xi wants it to be, but they misjudged Washington’s response. (Most likely scenario?)
(C) Unauthorized – Balloon flight is an intentional act by a Chinese official, but Xi himself was not fully aware or properly briefed. Balloon flight does not necessarily represent Xi’s intentions. (Definitely possible)
(D) Accident – Balloon blew off course. The PRC did not intend to have a balloon floating over central U.S. right before Blinken’s visit. Maybe they wanted it to fly *near* the U.S. but not cross into U.S. airspace. (Could be)
There are even combinations of these that are plausible: Perhaps the PRC meant the balloon to fly *near* the U.S., Xi was not fully briefed, the balloon drifted off course, and CCP leadership didn't anticipate the severity of the U.S. reaction once it crossed into U.S. territory.
[To be clear, I’m not saying that’s what occurred, but it’s a possible explanation.]
The problem is that all of these scenarios are possible, and they indicate very different things about Chinese intentions.
And *all* of these kinds of things have happened in crises in the past.
States miscalculate all the time, making intentional provocations/escalations but misjudging how others will respond. Putin’s invasion of Ukraine is one of the starkest recent examples of such a blunder.
The Cuban Missile crisis included all sorts of messy situations where the “friction” of military operations challenged U.S. and Soviet leaders’ ability to send clear messages to one another about their intentions.
On October 26th, ten days into the Cuban Missile crisis, authorities at Vandenberg Air Force base carried out a scheduled test launch of an Atlas ICBM without first checking with the White House.
On October 27th, a U-2 flying over the Arctic Circle accidentally strayed into Soviet territory.
Also on October 27th, an American U-2 surveillance plane was shot down while flying over Cuba, despite orders by Soviet Premier Nikita Khrushchev not to fire on U.S. surveillance aircraft. (The missile appears to have been fired by Soviet air-defense forces in Cuba, on the orders of local Soviet commanders acting without Moscow's authorization.)
Soviet and American leaders could not know for certain whether these incidents were intentional signals by the adversary to escalate or individual units acting on their own. Or just plain accidents.
A similar ambiguity about Xi’s intentions exists today, at least based on publicly available information.
Complicating this is the fact that the White House must respond to domestic political pressures which are (1) distorted by point-scoring and politicians trying to out-hawk each other, and (2) shaped by a public that may be ignorant of important details that would change the picture.
Let’s say, for example, that the U.S. intel community had good intel that made clear that the balloon flight was an unauthorized action or an accident. The White House couldn’t necessarily *say* that publicly.
One of the things the #balloon incident highlights is just how dangerous crises can be.
The combination of ambiguity over state intentions, imperfect control over their armed forces, mutual suspicion, and domestic political pressures to not be seen as weak -- including by other domestic political actors -- creates a volatile brew.
What would the #balloon crisis look like if there was a loss of life? How much more intense would the domestic political pressures on the White House be for a forceful response?
Quick takes below on what's in and what's out. [THREAD]
Bottom line: The DoD just released an updated Directive (an official policy document) that guides the U.S. military's policies on autonomy in weapons (e.g., lethal autonomous weapons).
What drove this? Internal DoD bureaucratic guidelines force the Department to renew, update, or cancel *any* Directive within 10 years. So this update was driven by a fixed timeline, not necessarily by any external events or a shift in policy.
The All-Volunteer Force turns 50 this year. What does the war in Ukraine tell us about the future of the All-Volunteer Force (AVF)?
[THREAD]
War is, fortunately, a rare occurrence. In peacetime, militaries build theories, implicitly and explicitly, of what future wars will look like that inform force design and force management.
In war, militaries find out – sometimes quickly and painfully – whether those theories were right.
The United States has been on a steady path towards selectively decoupling U.S.-China tech ties. That's a mistake.
Decoupling alone will not secure U.S. interests. [THREAD]
While overall U.S.-China trade ties are strong, U.S. policymakers have been steadily taking steps to pull apart the deeply integrated U.S. and Chinese tech ecosystems. bloomberg.com/news/articles/…
Huawei's dominance in global 5G markets, and the risk that close allies might rely on Huawei equipment for their telecom networks, was a major wake-up call for Washington.
I sometimes get a skeptical 🤔 response to concerns I've raised about countries falling into the trap of a "race to the bottom" on safety for military AI systems.
But it's worth pointing out that these competitive dynamics are happening *now* in the commercial sector. [THREAD]
Predictably, when there are potentially hundreds of billions of dollars at stake, companies are willing to take more risk in fielding a technology that continues to have a host of difficult, unresolved problems (bias, toxicity, and just plain making shit up).
Artists are suing Stability AI and Midjourney, alleging copyright infringement for including copyrighted images in their training data without permission.
But this isn't the David-and-Goliath story you might think it is. [THREAD]
At the core of the lawsuit is whether including copyrighted images in a training dataset is "fair use."
The lawsuit over art generators mirrors a similar lawsuit over code-generating models. The same lawyers are representing a class-action lawsuit against Microsoft, OpenAI, and GitHub over Copilot. theverge.com/2022/11/8/2344…
Good that it includes oversight of TikTok's recommendation algorithm.
Lots of questions about how one would police the algorithm in practice.
CCP influence on TikTok's recommendation algorithm may not look like a backdoor into the algorithm, allowing Party officials to alter the code from China.
Lots of companies self-censor to avoid running afoul of the Party by talking about "sensitive issues."