LaurieWired
Aug 15, 2025
“lp0 on fire” is a real Linux printer error. It means exactly what it says.

It’s not a joke. In the 50s, computerized printing was an experimental field.

At LLNL (yes, the nuclear weapons lab), cathode ray tubes drove an experimental xerographic printer.

...it would occasionally catch fire.
State-of-the-art at the time, the printer was modified with external fusing ovens and hit a whopping…

1 page per second!

In the event of a stall, fresh paper would continuously shoot into the oven, causing aggressive combustion.
As tech later advanced to drum printers, the fire “problem” didn’t go away.

High-speed rotary drums could cause enough friction during a jam to self-combust.

Even minor hangups needed immediate intervention.
In the 70s, Xerox created the first prototype laser printer.

Apparently having learned nothing from the past, the design sent paper directly over a glowing wire.

If a jam occurred *anywhere* in the system, the sheet in the fuser would immediately catch fire.

The prototype UNIX driver reported every jam as “on fire” to motivate the technician to take an immediate look.

lp0 still exists to this day in the Linux source code!

Go search the git tree for “on fire”; you’ll find it!
correction, the full error code is “lp%d on fire”, where lp%d refers to the printer itself

my brain is tired, cut me some slack

More from @lauriewired

Jan 12
Dolphin’s dev blogs are some of the best technical writing on the internet, and not enough people read them.

My favorite is their “Ridiculous Ubershader”.

Pre-Compilation of the GameCube’s graphical effects is impossible:

5.64 x 10^511 possible states! So what do you do?
Just-In-Time compilation *sucked*.

I mean, it “worked”…but every time a new graphical effect appeared, you had to:

Translate into shader code
Ask Driver to Compile
PAUSE the game to finish compilation
Resume and draw frame

The solution they developed was insane.

Emulate the GameCube’s rendering pipeline (as in, the actual hardware circuits) *inside* of a pixel shader.

Turns out, it’s easier to just “pretend” to be a real GameCube GPU.

It took 2+ years, and a massive amount of effort.
Jan 6
2026 is the year Linux finally catches up to...UNIX.

No, seriously. Tiered memory architectures were solved in UNIX decades ago.

The era of pretending "all RAM is equal" is becoming unaffordable.

Thankfully, software is getting pretty clever:

Transparent Page Placement (TPP) is the modern Linux equivalent of a very old idea.

Remember earlier in my series where we talked about CXL, and the latency penalty?

Turns out, having the kernel place rarely accessed pages in the slower bucket (CXL) works pretty well.
The whole point of course is to get the best bang for the buck.

Not all memory needs to be fast.

If the OS handles it, and it's otherwise transparent to the user...why not? Especially with the current shortages.

Meta upstreamed the patch into the 5.18 Linux kernel.
Nov 13, 2025
The world’s first microprocessor is *NOT* from Intel.

But you won’t find it in many textbooks.

It was a secret only declassified in 1998, and for good reason.

The Garrett AiResearch F14 Air Data Computer was ~8x faster than the Intel 4004, and a year earlier!
The F14 used variable-sweep wings.

With the performance envelope the Navy wanted, humans wouldn’t be able to adjust the wings fast enough…much less do the math in their heads!

A custom air data computer was created, doing polynomial-style calculations on sensor input.
Ray Holt, lead designer of the F14 computer, wanted to publish an article about the chip in Computer Design magazine.

1971 - Denied Publication, US Navy Classified.
1985 - Tried again, denied again. Still Classified.
1997 - Examined, cleared for public release 1998.
Nov 1, 2025
A Spooky Unix story for Halloween.

A new programmer accidentally ran “rm -rf *” as root on one of the main computers at UoM.

He stopped halfway, but /bin, /etc, /dev, and /lib were gone.

What followed was one of the most insane live recoveries in computer history:
A single Emacs session was still open, with a root shell.

Many students’ PhD thesis work was on the box. Every basic tool (ls, cd, mkdir, etc.) was already wiped.

The last tape backup was a week ago. Any downtime was unthinkable.
*Assuming* you could copy or recover any tools, they needed a place to put them.

How do you rename /tmp to /etc…without mv?

Don’t forget; you can’t even compile code.

Remember that single Emacs session? Yeah, time to break out some raw VAX assembly.
Oct 17, 2025
You’re (probably) measuring application performance wrong.

Humans have a strong bias for throughput.

"I can handle X requests per second."

Real capacity engineers use response-time curves.
It all comes down to queueing theory.

Unfortunately, computers don’t degrade gracefully under load.

70% CPU is smooth sailing. 95% is a nightmare.

Programmers (incorrectly) focus on the absolute value, when really they should be looking at the derivative.
Highways are the perfect real life example of this.

Traffic engineers study flow density, not overall vehicle counts.

A road handling 10,000 cars per hour (throughput) means nothing if the average speed drops to 5mph (response time).

Computers are the same.
Oct 14, 2025
GPU computing before CUDA was *weird*.

Memory primitives were graphics-shaped, not computer-science-shaped.

Want to do math on an array? Store it as an RGBA texture.

Run a fragment shader for processing. *Paint* the result into a big rectangle.
As you hit the more theoretical sides of Computer Science, you start to realize almost *anything* can produce useful compute.

You just have to get creative with how it’s stored.

The math might be stored in a weird box, but the representation is still valid.
BrookGPU (Stanford) is widely considered the birth of pre-CUDA GPGPU.

By virtualizing CPU-style primitives, it hid a lot of graphical “weirdness”.

It extended C with stream, kernel, and reduction constructs, and GPUs started to act more like a co-processor.