Black’s Equation is brutal; the smaller the node, the faster electromigration kills the chip.
Savvy consumers immediately undervolt and aggressively cool their CPUs, buying precious extra years.
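A minimal sketch of why that works, using Black's Equation directly. The constants A, n, and Ea here are illustrative assumptions (n is typically 1-2, Ea roughly 0.5-0.9 eV for common interconnect metals), not values for any specific chip:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(j, temp_k, a=1.0, n=2.0, ea=0.7):
    """Black's Equation: MTTF = A * J^-n * exp(Ea / (k*T)).

    j: current density (A/cm^2), temp_k: absolute temperature (K).
    a, n, and ea are empirical fit constants -- the defaults here
    are illustrative assumptions, not measured values.
    """
    return a * j**(-n) * math.exp(ea / (K_BOLTZMANN_EV * temp_k))

# Undervolting (lower current density) and better cooling (lower T)
# both push the mean time to failure out:
baseline = black_mttf(j=1e6, temp_k=358)    # stock: 85 C
tweaked  = black_mttf(j=0.9e6, temp_k=338)  # 10% less current, 65 C
print(f"relative lifetime gain: {tweaked / baseline:.1f}x")
```

With these assumed constants, a modest undervolt plus a 20 C temperature drop multiplies modeled lifetime several times over, which is the whole consumer strategy in one line.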
Z-Day + 3Yrs:
The black market booms; Xeons are worth more than gold. Governments prioritize power, comms, and finance. Military supply remains stable, leaning on stockpiled spares.
Datacenters desperately strip hardware from donor boards, the first "shrink" of cloud compute.
Z-Day + 7Yrs:
Portable computing regresses; phone SoCs fail faster from solder fatigue. Internet switches hit EOL: nothing horrible yet, but risk increases.
The used “dumb” car market skyrockets; the lead-free solder in ECUs experiences its first failures from thermal cycling.
Z-Day + 15Yrs:
The “Internet” no longer exists as a single fabric. The privileged fall back to private peering or Sat links.
Sneakernet via SSDs is popular; careful usage keeps them alive longer than network switches. For those lucky enough not to have their desktop computers confiscated, boot-to-RAM distros and PXE images are the norm to minimize day-to-day writes.
HDDs are *well* past the bathtub curve; most are completely dead. Careful salvage of spindle motors and actuator arms, plus precision repairs, keeps the most critical high-capacity arrays online.
Z-Day + 30Yrs:
Long-term storage has shifted completely to optical media. Only vintage compute survives at the consumer level.
The large node sizes of old hardware make it extremely resistant to electromigration; Motorola 68000s have modeled gate wear beyond 10k years! Game Boys, Macintosh SEs, and Commodore 64s resist the no-new-silicon future the best.
Fancier (but still wide-node) hardware like iMac G3s becomes the prized workstation of the elite. The state of computing as a whole looks much more like the 1970s-80s.
NTIRE is the coolest conference you’ve never heard of.
Deleting motion blur? Sure.
Night Vision? No problem.
Every year, labs compete on categories like hyperspectral restoration, satellite image enhancement, even raindrop removal (think car sensors)! Some highlights ->
Low-light enhancement is always popular.
Retinexformer, shown here, took 2nd place in the 2024 contest.
A *TINY* transformer-based model, it runs in about 0.5 seconds for a 6K image on a single 3090. Only 1.6M parameters (<2MB weights at INT8)!
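The weight-size claim checks out with quick arithmetic (the parameter count comes from the thread; bytes-per-parameter are the standard widths for each precision):

```python
params = 1.6e6  # Retinexformer parameter count, per the thread

# Weight file size at common precisions (1, 2, 4 bytes per parameter)
for name, bytes_per in [("INT8", 1), ("FP16", 2), ("FP32", 4)]:
    mb = params * bytes_per / 1e6
    print(f"{name}: {mb:.1f} MB")
```

At INT8 that's about 1.6 MB, comfortably under the quoted 2 MB, and even full FP32 weights stay under 7 MB.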
Maybe motion blur removal is more your thing.
UAVs are often used to examine wind turbine blades for early failure warning. The drone's movement plus the blades' rotational velocity makes for challenging imaging.
Here’s the 2021 winner DeblurGANv2, taking ~0.19s of processing per image.
What if an OS fit entirely inside the CPU’s Cache?
Turns out we’ve been doing it for decades.
CNK, the OS for IBM’s Blue Gene Supercomputer, is just 5,000 lines of tight C++.
Designed to “eliminate OS noise”, it lives in the cache after just a few milliseconds of boot.
Kernels that “live” in the cache are common for HPC.
Cray’s Catamount microkernel (~2005) used a similar method for jitter-free timing.
Huge Pages, Statically Mapped Memory, and a lack of scheduling are all typical aspects of these systems.
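A back-of-the-envelope sketch of the core property (all sizes are illustrative assumptions, not CNK's real memory layout): the kernel's entire resident set is a rounding error next to the cache.

```python
# Rough fit check: does a light kernel's working set fit in L3?
# Every size below is an illustrative assumption, not a measured value.
L3_BYTES = 32 * 2**20              # e.g. a 32 MiB L3

kernel_set = {
    "text":        256 * 2**10,    # a few thousand lines of C++ compiles small
    "static_data": 512 * 2**10,    # statically mapped, no demand paging
    "page_tables":  64 * 2**10,    # tiny, thanks to huge pages
    "stacks":      128 * 2**10,    # fixed stacks, no scheduler state
}

resident = sum(kernel_set.values())
print(f"resident set: {resident / 2**20:.2f} MiB "
      f"({100 * resident / L3_BYTES:.1f}% of L3)")
```

Once warm, a kernel this size never has to touch DRAM for its own state, which is what "lives in the cache" means and why OS-induced jitter nearly vanishes.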
What about the modern era?
Modern CPUs are *insane*.
L3 sizes exceed a GIGABYTE per socket (see Genoa-X).
Many HPC labs run the hot path on lightweight kernels (LWKs), outsourcing file I/O and syscalls to separate nodes, all with the intent of reducing µs-level jitter. Determinism is the name of the game.