LaurieWired · Dec 9, 2024
Shutting down your PC before 1995 was kind of brutal.

You saved your work, the buffers flushed, you waited for the HDD lights to switch off, and

*yoink*

You flicked the mechanical switch, directly interrupting the flow of power.

The interesting part is when this all changed.
Two major developments had to occur.

First, the standardization of a physical connection linking the power supply to the motherboard. (Hardware constraint)

Second, a universal driver mechanism to request changes in the power state. (Software constraint)
These became known, respectively, as the ATX and APM standards.

Although it would have been possible much earlier, industry fragmentation in the PC market between Microsoft, IBM, Intel, and others stalled progress.

By 1995, things started to consolidate.
Eventually, OS control of the system's power state became widespread. And for good reason!

Caches, more complex filesystems, and multitasking all increased the risk of data corruption during an "unclean" shutdown.

The APM standard was later replaced by ACPI, but it's an interesting tidbit of computer history nonetheless.

If you'd like to read some interesting history of the APM vs. ACPI debate, check out this writeup by mjg59.

Why ACPI?:
mjg59.dreamwidth.org/68350.html
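
As a present-day footnote: the "universal driver mechanism" idea survives in ACPI-era Linux, where software requests a power-state change through a single syscall. A minimal sketch, assuming Linux with glibc; it needs root (CAP_SYS_BOOT) and genuinely cuts power:

```python
# Sketch of the modern descendant of APM's "set power state" request.
# Assumes Linux + glibc; requires root, and really does power off.
import ctypes
import ctypes.util

RB_POWER_OFF = 0x4321FEDC  # LINUX_REBOOT_CMD_POWER_OFF from <linux/reboot.h>

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.sync()  # flush those buffers first, like it's 1994
if libc.reboot(RB_POWER_OFF) != 0:
    raise OSError(ctypes.get_errno(), "reboot(RB_POWER_OFF) failed")
```

In practice you'd go through init/systemd (`systemctl poweroff`) so services shut down cleanly; this syscall is just the bottom of that stack.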

More from @lauriewired

Jan 12
Dolphin’s dev blogs are some of the best technical writing on the internet, and not enough people read them.

My favorite is their “Ridiculous Ubershader” post.

Pre-compilation of the GameCube’s graphical effects is impossible:

5.64 x 10^511 possible states! So what do you do?
Just-In-Time compilation *sucked*.

I mean, it “worked”…but every time a new graphical effect appeared, you had to:

1. Translate it into shader code
2. Ask the driver to compile
3. PAUSE the game until compilation finished
4. Resume and draw the frame
The solution they developed was insane.

Emulate the GameCube’s rendering pipeline (as in, the actual hardware circuits) *inside* of a pixel shader.

Turns out, it’s easier to just “pretend” to be a real GameCube GPU.

It took 2+ years and a massive amount of effort.
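
You can see the contrast in miniature below. This is a toy sketch (all names invented; the real ubershaders interpret the GameCube’s TEV pipeline state inside GLSL): the JIT route builds a specialized function per pipeline config and pays a build cost on first sight, while the ubershader route is one function that branches on the config at runtime, so nothing ever compiles mid-game.

```python
# Toy contrast: per-config "JIT" compilation vs. one ubershader-style
# interpreter. The config fields are invented for illustration.

def compile_specialized(cfg):
    # JIT approach: build a dedicated shader per config.
    # First use of an unseen config pays this cost -> stutter.
    ops = []
    if cfg["multiply_vertex_color"]:
        ops.append(lambda c, v: c * v)
    if cfg["add_constant"]:
        ops.append(lambda c, v: c + cfg["konst"])
    def shader(tex, vert):
        color = tex
        for op in ops:
            color = op(color, vert)
        return color
    return shader

def ubershader(cfg, tex, vert):
    # Interpreter approach: ONE shader, branching on cfg at runtime.
    color = tex
    if cfg["multiply_vertex_color"]:
        color = color * vert
    if cfg["add_constant"]:
        color = color + cfg["konst"]
    return color

cfg = {"multiply_vertex_color": True, "add_constant": True, "konst": 0.1}
assert compile_specialized(cfg)(0.5, 0.8) == ubershader(cfg, 0.5, 0.8)
```

Dolphin’s trick was running the interpreter version *on the GPU*, and it can still swap in specialized shaders later, once they finish compiling in the background.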
Jan 6
2026 is the year Linux finally catches up to...UNIX.

No, seriously. Tiered memory architectures were solved in UNIX decades ago.

The era of pretending "all RAM is equal" is becoming unaffordable.

Thankfully, software is getting pretty clever:
Transparent Page Placement (TPP) is the modern Linux equivalent of a very old idea.

Remember earlier in my series where we talked about CXL, and the latency penalty?

Turns out, having the kernel place rarely accessed pages in the slower bucket (CXL) works pretty well.
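
A toy model of the placement policy (names and thresholds invented; the real logic lives in the kernel’s reclaim and NUMA-balancing paths): sample page accesses over a window, then demote cold pages to the CXL tier and promote hot ones back to DRAM.

```python
# Toy page-tiering sketch in the spirit of TPP. Everything here is a
# simplification: real kernels track accesses via page faults and PTE
# scans, and move pages with migrate_pages-style machinery.
FAST, SLOW = "dram", "cxl"

class Page:
    def __init__(self):
        self.tier = FAST
        self.accesses = 0

def touch(page):
    page.accesses += 1

def rebalance(pages, hot_threshold=4):
    """Demote cold pages, promote hot ones, reset the sampling window."""
    for p in pages:
        p.tier = FAST if p.accesses >= hot_threshold else SLOW
        p.accesses = 0

pages = [Page() for _ in range(8)]
for _ in range(10):
    touch(pages[0])              # one hot page, seven cold ones
rebalance(pages)
print([p.tier for p in pages])   # ['dram', 'cxl', 'cxl', ...]
```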
The whole point, of course, is to get the best bang for your buck.

Not all memory needs to be fast.

If the OS handles it, and it's otherwise transparent to the user...why not? Especially with the current shortages.

Meta upstreamed the patch into the 5.18 Linux kernel.
Nov 13, 2025
The world’s first microprocessor is *NOT* from Intel.

But you won’t find it in many textbooks.

It was a secret, only declassified in 1998, and for good reason.

The Garrett AiResearch F14 Air Data Computer was ~8x faster than the Intel 4004, and a year earlier!
The F14 used variable-sweep wings.

With the performance envelope the Navy wanted, humans wouldn’t be able to adjust the sweep fast enough…much less do the math in their heads!

A custom air data computer was created, doing polynomial-style calculations on sensor input.
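
“Polynomial-style calculations” here means evaluating fitted polynomials of the sensor readings. A minimal sketch (the coefficients below are made up; the real F14 computer’s math isn’t in this thread) using Horner’s rule, which keeps evaluation to one multiply and one add per coefficient:

```python
# Horner's rule: the standard cheap way to evaluate a polynomial fit.
# Coefficients are highest-order first: a_n, ..., a_1, a_0.

def horner(coeffs, x):
    acc = 0.0
    for a in coeffs:
        acc = acc * x + a
    return acc

# A fictional 3rd-order fit mapping a pressure reading to an airspeed.
print(horner([0.02, -0.4, 3.1, 12.0], 0.75))
```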
Ray Holt, lead designer of the F14 computer, wanted to publish an article about the chip in Computer Design magazine.

1971 - Denied publication; classified by the US Navy.
1985 - Tried again, denied again. Still classified.
1997 - Re-examined; cleared for public release in 1998.
Nov 1, 2025
A Spooky Unix story for Halloween.

A new programmer accidentally ran “rm -rf *” as root, on one of the main computers at UoM.

He stopped halfway, but /bin, /etc, /dev, and /lib were gone.

What followed was one of the most insane live recoveries in computer history:
A single Emacs session was still open, with a root shell.

Many students’ PhD thesis work was on the box. Every basic tool (ls, cd, mkdir, etc.) was already wiped.

The last tape backup was a week old. Any downtime was unthinkable.
*Assuming* you could copy or recover any tools, they needed a place to put them.

How do you rename /tmp to /etc…without mv?

Don’t forget: you can’t even compile code.

Remember that single Emacs session? Yeah, time to break out some raw VAX assembly.
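
The trick underneath: mv is mostly a thin wrapper around the rename(2) syscall, so if you can get *any* code to execute (theirs was hand-assembled VAX machine code typed into Emacs), you can do mv’s job without the binary. A sketch of the same idea via ctypes (paths invented; obviously Python wasn’t an option on that VAX):

```python
# mv without /bin/mv: issue the rename(2) syscall directly.
# Hypothetical paths; Linux/glibc assumed for the ctypes lookup.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
if libc.rename(b"/tmp/etc-rescue", b"/etc") != 0:
    raise OSError(ctypes.get_errno(), "rename(2) failed")
```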
Oct 17, 2025
You’re (probably) measuring application performance wrong.

Humans have a strong bias for throughput.

"I can handle X requests per second."

Real capacity engineers use response-time curves.
It all comes down to queueing theory.

Unfortunately, computers don’t degrade gracefully under load.

70% CPU is smooth sailing. 95% is a nightmare.

Programmers (incorrectly) focus on the absolute value, when really they should be looking at the derivative.
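
A back-of-the-envelope way to see the curve: for an M/M/1 queue, mean response time is R = S / (1 − ρ), where S is the bare service time and ρ is utilization. A minimal sketch (the 10 ms service time is an assumption):

```python
# Minimal M/M/1 sketch: response time blows up as utilization -> 1.
S = 0.010  # 10 ms of actual work per request (assumed)

for rho in (0.50, 0.70, 0.90, 0.95, 0.99):
    R = S / (1 - rho)  # mean response time for an M/M/1 queue
    print(f"util {rho:4.0%}  ->  response time {R * 1000:7.1f} ms")
```

Going from 70% to 95% utilization is a 6x latency blowup for a 25-point gain, which is exactly why the derivative matters more than the absolute value.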
Highways are the perfect real-life example of this.

Traffic engineers study flow density, not overall vehicle counts.

A road handling 10,000 cars per hour (throughput) means nothing if the average speed drops to 5 mph (response time).

Computers are the same.
Oct 14, 2025
GPU computing before CUDA was *weird*.

Memory primitives were graphics-shaped, not computer-science-shaped.

Want to do math on an array? Store it as an RGBA texture.

Use a fragment shader for the processing. *Paint* the result onto a big rectangle.
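
In spirit, it looked something like this pure-Python stand-in (the real thing went through OpenGL/DirectX texture uploads and fragment programs): pack the data one RGBA texel per element, and “drawing a full-screen quad” amounts to running a small per-texel function over the whole array.

```python
# Pre-CUDA GPGPU in spirit: a float array masquerading as an RGBA
# texture, with a "fragment shader" applied once per texel.

W, H = 4, 4
# One RGBA texel per element: four floats per "pixel".
texture = [[(x * 0.1, y * 0.1, 0.0, 1.0) for x in range(W)] for y in range(H)]

def fragment_shader(texel):
    r, g, b, a = texel
    return (r + g, r * g, b, a)  # the actual "math on an array"

# "Drawing a full-screen quad" == applying the shader to every texel.
result = [[fragment_shader(t) for t in row] for row in texture]
print(result[1][2])
```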
As you hit the more theoretical sides of Computer Science, you start to realize almost *anything* can produce useful compute.

You just have to get creative with how it’s stored.

The math might be stored in a weird box, but the representation is still valid.
BrookGPU (Stanford) is widely considered the first true pre-CUDA GPGPU framework.

By virtualizing CPU-style primitives, it hid a lot of the graphical “weirdness”.

Extending C with stream, kernel, and reduction constructs let GPUs start acting more like co-processors.
