@NoemaMag @AndrewSorota 1/32 That's exceptional. And an urgent, parallel concern: carrying over from my days co-chairing the National Computer Ethics and Responsibilities Campaign and from ongoing citizen-led peace work, I've spent months wading through a raft of reports on the nuclear weaponization of AI.
@NoemaMag @AndrewSorota 2/ Twitter's not the place for complex technical specifics, but the evidence is plain: AI is creeping into the scaffolding of nuclear-use decisions faster than governance is creeping into AI, just one of a mess of factors leaning "hot" in the nuclear stack.
@NoemaMag @AndrewSorota 3/ And where @AndrewSorota names a threat to the assembly code of democratic ecology, AI's nuclear turn targets existence itself: epistemic capture at the infrastructural hinge where "peace" is secured by hair-trigger exposure to annihilation, and where militaries are now racing to deploy AI.
@NoemaMag @AndrewSorota 4/ It certainly isn't AI safety as branded, but rather species-level peril in its most literal form, where the very opacity and unreliability that make current AI systems concerning are being deliberately wired into what we may rightly call the infrastructural will-to-annihilate.
@NoemaMag @AndrewSorota 5/ (I note a grave irony here. Use AI models to research Nuclear Command, Control and Communications (NC3) systems, and you are immediately confronted by the same generative tendency toward closed-loop pathologies that these models are introducing to NC3. It's a recursive, hallucinatory mess.)
@NoemaMag @AndrewSorota 6/ Context, then. Relevant reports do not describe a clever modernization of a stable regime; they describe a murkier battlespace in which AI functions as a structural accelerant: tighter coupling, faster tempo, and deepening volatility in systems that cannot afford volatility at all.
@NoemaMag @AndrewSorota 7/ Recent work from across the field converges on the same picture: the nuclear AI race is moving into early warning and tracking, target recognition, leader decision-support, autonomous or semi-autonomous intelligence, surveillance and reconnaissance, and big-frame data fusion.
@NoemaMag @AndrewSorota 8/ Sold as ways to filter noise and “buy time,” in practice these technologies are thickening opacity, shrinking decision windows, stiffening pressures to pre-delegate, and introducing new, tightly coupled failure-modes right inside the nuclear decision loop.
@NoemaMag @AndrewSorota 9/ Strategic surveys zoom out, sketching an architecture of danger, where probabilistic, failure-prone inference engines that demonstrably generate false confidence about their own knowledge are being folded into a domain running on the illusion of error tolerance.
@NoemaMag @AndrewSorota 10/ The infrastructure producing "ground truth" in a nuclear crisis is being built from tools that cannot reliably distinguish between what they know and what sounds like knowing, validated by benchmarks that don't adequately test for this failure mode under adversarial pressure.
@NoemaMag @AndrewSorota 11/ Meanwhile, AI evaluation is systematically shaped by competitive pressures prioritizing marketable reassurance over real-world robustness. There exists no research examining whether AI-NC3 is escaping widespread commercial benchmark pathologies—gaming the metrics.
@NoemaMag @AndrewSorota 12/ When a crisis comes, political leaders will confront AI-fed dashboards and ranked options bearing the seal of “rigorous evaluation,” without visibility into how commercially slanted, gameable or contingent those evaluations are.
@NoemaMag @AndrewSorota 13/ What results may be the perfect mimetic storm. At machine speed, imitation hardens, windows compress, and escalation presents itself as prudence rather than madness. No one needs to choose nuclear war; it arrives as a systems-driven verdict rendered by indicators no actor fully controls.
@NoemaMag @AndrewSorota 14/ Catastrophe, then, may arrive not as calculated malice, but wearing the face of reason, the result of automated momentum, an emergent property of a disembodied system—with no sense of what Armageddon actually means—deciding what “sober judgment” looks like under pressure.
@NoemaMag @AndrewSorota 15/ Crucially, this is mutual. Girard’s law, then: we mirror rivals to feel safe, mistaking the copy for proof of threat, so symmetry begets symmetry, until mirroring supplies the motive power. Mimetic escalation in AI-bent NC3 is advancing at algorithmically supercharged speeds. x.com/armscontrolnow…
@NoemaMag @AndrewSorota 16/ Because hesitation reads as vulnerability, speed becomes the virtue. The moment one side closes the slow human window, the other must match, binding the entire system so tightly the crisis pathway snaps to “autopilot” as sovereign process, not choice; feature, not bug.
@NoemaMag @AndrewSorota 17/ In other words, NC3 rivalry now lives inside the wiring: the infrastructure learned the rivalry it was meant to restrain. What was built as a cage becomes an escalation engine: the inevitable outcome of embedding mimetic competition in technical systems optimized for speed.
@NoemaMag @AndrewSorota 18/ So far, no one’s squarely publishing on AI–NC3 through a Girardian lens; the best work only circles it. The signal is there—nuclear sacralization, mimetic rivalry, algorithmic amplification—but it lives in separate rooms. What’s missing is the synthesis.
@NoemaMag @AndrewSorota 19/ Deeper still, what's most disturbing is the ontological manner in which the Algorithm is functionally colonizing the epistemic core of the rivalry engine. Call it the deep strangeness of AI–NC3: a system that makes rivalry legible to itself by knowing itself as Armageddon.
@NoemaMag @AndrewSorota 20/ It's what current research misses: AI-mediated NC3 is both hallucinatory perception process and execution mechanism. The gear that tells us what’s happening doubles as the gear that makes it happen. The knowing-apparatus is the ending-apparatus. The preventer is the danger.
@NoemaMag @AndrewSorota 21/ That's the shift: AI-mediated NC3 collapses perception and execution into one loop operating at machine speed. The stack references itself—at a tempo, opacity, and unpredictability lashed to doomsday incentives—pointing the result at launch authority.
@NoemaMag @AndrewSorota 22/ AI's nuclear weaponization extends far beyond NC3, of course. At the “edges”, AI has already slid into the support layers that watch, listen, collect, and feed data into military decision-making—what leaders see, when they see it, how they understand it.
@NoemaMag @AndrewSorota 23/ Indeed, the breadth of AI's epistemic capture now runs far beyond nuclear or even military categories, braiding civilian and commercial streams into a mediated picture that "shapes what's real" and destabilizes the center long before anything even nears a nuclear launch circuit.
@NoemaMag @AndrewSorota 24/ Add the civil–military confluence in one planetary network—awash in narrative corruption stoked by state and non-state actors weaponizing generative AI—and the global stack becomes nuclear brinkmanship's battlespace where signaling, misreads, and spoofing scale exponentially.
@NoemaMag @AndrewSorota 25/ Into this context—militant global surge, accelerating mimetic trends, eroding geopolitical safeguards—nations are racing to layer unpredictable AI systems onto NC3. The dynamics historically braking Armageddon are inverting into accelerants. The entire stack is overheating.
@NoemaMag @AndrewSorota 26/ Technical fixes won't work because they're downstream of the mimetic structure generating the problem. Political fixes won't work because they operate within the same rivalrous logic—requiring trust between parties whose entire posture is built on mutual suspicion.
@NoemaMag @AndrewSorota 27/ Mimetic escalation is, in practical terms, already beyond human control and fully infrastructural. "Try turning it off." Girard names the trap—mimetic rivalry drives us toward apocalypse—and the only exit is interruptive intervention: something outside the rivalry's logic. x.com/nick_routledge…
@NoemaMag @AndrewSorota 28/ One option: joint submission to a moral authority structure outside mimetic rivalry—one able to offer an offramp when the nuclear moment comes. A form of accountability no nuclear power can spin as an "edge," giving each a credible way to step securely back from the apocalyptic track. x.com/bfcarlson/stat…
@NoemaMag @AndrewSorota 29/ Not arms control (which is mimetic—"we limit if you limit") but covenant—a joint vow to ultimate consequence—binding AI pipelines to planetary traditions of moral seriousness and relational fidelity, so that form embodies principle and principle consecrates form.
@NoemaMag @AndrewSorota 30/ This offers a dimension transcending an extinction path tied irrevocably to national interest: a formation that travels into every domain where ultimate stakes are present, offering nuclear powers what mimetic rivalry cannot—a way to honor what matters beyond survival itself. x.com/nick_routledge…
@NoemaMag @AndrewSorota 31/ Call it the singular Event in history, perhaps: the whole household of earth—biosphere and noosphere, narrative and evolution—converges upon either annihilation as workflow, or the world-scale Advent of the saving power of eternal values arriving as a shared refusal to kill. x.com/nick_routledge…
@NoemaMag @AndrewSorota end/ AI is the banner; the deeper phenomenon is the total stack—profit and control as first principle, prior to any telos of truth. The Machine cannot self-correct: correction would be self-negation. But it can be made answerable to a Truth it cannot convert into instrumentality. x.com/nick_routledge…