It had no general-purpose registers, supported object orientation *directly*, and performed garbage collection on-chip.
It was also 23x slower than an 8086. Here's why it failed.
Intel targeted Ada so aggressively that C support was an afterthought.
The problem: at the time, the Ada compiler was immature and badly untuned.
The cheap scalar instructions were barely used; *everything* went through huge object-oriented call sequences.
The “micromainframe” moniker wasn’t just marketing. One I/O chip could stitch together 63 CPUs on a single bus.
Essentially memory-safe in hardware: dangling pointers were impossible at the ISA level.
Partners like BiiN suggested using the CPU for nuclear-reactor control.
Although the iAPX 432 was a commercial flop, its design lineage appealed to niche military applications.
Hughes Aircraft used 35 i960 MX chips (a rad-hard RISC design descended from the 432) for the main avionics of the F-22.
The equivalent of 2 Cray supercomputers on a single aircraft!
If you’d like to learn more about this unique ISA, check out Ken Shirriff’s blog. He goes into great detail about the history of the i960 design and its 432 roots: righto.com/2023/07/the-co…
NTIRE is the coolest conference you’ve never heard of.
Deleting motion blur? Sure.
Night Vision? No problem.
Every year, labs compete on categories like hyperspectral restoration, satellite image enhancement, even raindrop removal (think car sensors)! Some highlights ->
Low-light enhancement is always popular.
Retinexformer, shown here, took 2nd place in the 2024 contest.
A *TINY* transformer-based model, it runs in about 0.5 seconds for a 6K image on a single 3090. Only 1.6M parameters, so the weights fit in under 2MB at INT8 (one byte per parameter)!
Maybe motion blur removal is more your thing.
UAVs are often used to inspect wind turbine blades for early failure warning. The drone’s motion plus the blades’ rotational velocity makes for a serious deblurring challenge.
Here’s the 2021 winner, DeblurGANv2, taking ~0.19s of processing per image.
What if an OS fit entirely inside the CPU’s Cache?
Turns out we’ve been doing it for decades.
CNK, the OS for IBM’s Blue Gene supercomputer, is just 5,000 lines of tight C++.
Designed to “eliminate OS noise”, it lives in the cache after just a few milliseconds of boot.
Kernels that “live” in the cache are common in HPC.
Cray’s Catamount microkernel (~2005) used a similar approach for jitter-free timing.
Huge pages, statically mapped memory, and little to no scheduling are typical of these systems (a minimal sketch of the huge-page trick below).
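For a concrete feel, here’s a minimal userspace sketch of the “map it once, with huge pages” idea, assuming a Linux box with 2 MiB huge pages reserved via vm.nr_hugepages. This is not CNK or Catamount code, just the same ingredient: fault the whole working set in up front so the hot loop never touches the page-fault handler or thrashes the TLB.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (64UL * 1024 * 1024) /* 64 MiB working set (multiple of 2 MiB) */

int main(void) {
    /* One huge-page-backed mapping, populated immediately:
       after this call the hot path sees no page faults at all. */
    void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_POPULATE,
                     -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)"); /* likely no huge pages reserved */
        return 1;
    }
    memset(buf, 0, LEN);             /* touch every page: now statically mapped */
    printf("mapped %lu MiB with 2 MiB huge pages\n", LEN >> 20);
    munmap(buf, LEN);
    return 0;
}
```

With 2 MiB pages, 64 MiB costs 32 TLB entries instead of 16,384, which is exactly the kind of determinism these kernels are chasing.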
What about the modern era?
Modern CPUs are *insane*.
L3 sizes exceed a GIGABYTE per socket (see Genoa-X).
Many HPC labs run the hot path on lightweight kernels (LWKs), outsourcing file I/O and syscalls to separate nodes, all to shave off µs-level jitter. Determinism is the name of the game.
Black’s Equation is brutal; the smaller the node, the faster electromigration kills the chip.
Savvy consumers immediately undervolt and aggressively cool their CPUs, buying precious extra years.
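Back-of-the-envelope with Black’s equation, MTTF = A · J^(−n) · exp(Ea / kT). The exponent and activation energy below are illustrative assumptions, not figures for any real process, but they show why a modest undervolt plus better cooling multiplies expected lifetime.

```c
#include <math.h>
#include <stdio.h>

/* Toy Black's-equation model: MTTF = A * J^-n * exp(Ea / (k*T)).
   n and Ea are assumed, illustrative values only. */
static double mttf(double j, double t_kelvin) {
    const double A  = 1.0;      /* arbitrary scale factor */
    const double n  = 2.0;      /* current-density exponent (assumed) */
    const double Ea = 0.7;      /* activation energy in eV (assumed) */
    const double k  = 8.617e-5; /* Boltzmann constant, eV/K */
    return A * pow(j, -n) * exp(Ea / (k * t_kelvin));
}

int main(void) {
    double stock = mttf(1.0, 358.15); /* 85 C, nominal current density */
    double tuned = mttf(0.9, 338.15); /* 65 C, ~10% less current after undervolt */
    printf("undervolt + cooling -> %.1fx the expected lifetime\n", tuned / stock);
    return 0;
}
```

With those assumed constants, dropping ~10% of the current density and 20 °C of die temperature works out to roughly 4–5x the electromigration-limited lifetime.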
Z-Day + 3yrs:
Black market booms; Xeons are worth more than gold. Governments prioritize power, comms, finance. Military supply remains stable, leaning on stockpiled spares.
Datacenters desperately strip hardware from donor boards, the first "shrink" of cloud compute.