Interesting threads from @YunaHuhWong & @Fermat15 on how the lack of observability of AI military capabilities might affect deterrence and stability.
TL;DR - I don't know the answer but it's a very interesting question [THREAD]
@YunaHuhWong argues that uncertainty about adversaries' AI capabilities (and the tendency to overestimate others' capabilities) may enhance deterrence and lead to greater stability
@Fermat15 suggests the opposite, that the lack of clarity regarding relative military capabilities increases the risk of miscalculation and instability
Not sure which is more likely — does it depend on the situation and/or regime? Would a state's willingness to engage in risk-taking behavior be a factor?
It does seem like the lack of observability of AI military capabilities — they can't be seen and counted the same way as missiles or ships — is relevant, although this looks like a continuation of an existing trend as more and more military capabilities become software-based
How do states respond as the relative balance of military power may become increasingly fuzzy and unclear?