LaurieWired
Dec 9, 2024
Shutting down your PC before 1995 was kind of brutal.

You saved your work, let the buffers flush, waited for the HDD lights to switch off, and

*yoink*

You flicked the mechanical switch, directly interrupting the flow of power.

The interesting part is when this all changed.
Two major developments had to occur.

First, the standardization of a physical connection in the system linking the power supply to the motherboard. (Hardware constraint)

Second, a universal driver mechanism to request changes in the power state. (Software constraint)
These became known, respectively, as the ATX and APM standards.

Although it would have been possible much earlier, industry fragmentation in the PC market between Microsoft, IBM, Intel, and others stalled progress.

By 1995, things started to get more consolidated.
Eventually, controlling the system's power state via the OS became widespread. And for good reason!

Caches, more complex filesystems, and multitasking all increased the risk of data corruption during an "unclean" shutdown.

The APM standard was later replaced by ACPI, but it's an interesting tidbit of computer history nonetheless.

If you'd like to read some interesting history of the APM vs ACPI debate, check out this writeup by MJG59.

Why ACPI?:
mjg59.dreamwidth.org/68350.html


More from @lauriewired

Mar 23
The way SD cards fail is…gross.

Anyone who does heavy photography or video work knows they'll gradually get slow, often without outright failing.

I blame the SD Association.

The storage controller isn't required to report *any* health information to the host!
There’s nothing hugely different about SD cards compared to eMMC, and to some extent SSDs.

The onboard controller *knows* the card is going bad, and the spare pool of reserved blocks is shrinking.

It just doesn’t tell you.
In camera land this is really annoying, because most bodies have a write speed cutoff.

Like, ~60MB/s for 4K video or so.

Your SD card might have started out at 125MB/s, slowly degrading over months, until suddenly you’re below the ~60MB/s spec and dropping frames.
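To make that failure mode concrete, here's a toy model of the slow decay. All numbers are invented for illustration; real wear depends entirely on the controller and workload:

```python
# Hypothetical wear model: the card loses a few percent of sequential
# write speed per month as its spare-block pool shrinks.
CUTOFF_MBPS = 60.0  # rough sustained-write floor many 4K modes need


def months_until_dropped_frames(start_mbps=125.0, decay_per_month=0.97):
    """Months until sustained write speed falls below the 4K cutoff."""
    speed, month = start_mbps, 0
    while speed >= CUTOFF_MBPS:
        speed *= decay_per_month  # gradual degradation, no hard failure
        month += 1
    return month


print(months_until_dropped_frames())  # quietly fails after ~2 years
```

The point of the sketch: nothing ever errors out. The card just crosses the camera's floor one day, and the first symptom you see is dropped frames.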
Feb 23
The human heart is a Turing Machine.

Researchers figured it out with an Xbox 360.

I realize how fake that sounds...but it’s real research published in Elsevier's Computational Biology and Chemistry journal in 2009.

Hearts are electrically excitable media.
The author figured out you can build a NOR gate from heart cells.

NOR is a universal gate, so you can build all the other gates out of NORs.

Thus, arbitrary logic circuits, plus time…boom you have a computer.
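That universality claim is easy to verify in a few lines. A minimal sketch, with plain Python functions standing in for patches of tissue wired as NOR gates:

```python
def NOR(a, b):
    """The one primitive gate; everything else is built from it."""
    return int(not (a or b))


def NOT(a):
    return NOR(a, a)            # NOT(a) = NOR(a, a)


def OR(a, b):
    return NOT(NOR(a, b))       # OR is the NOT of NOR


def AND(a, b):
    return NOR(NOT(a), NOT(b))  # De Morgan: AND(a,b) = NOR(¬a, ¬b)


# Exhaustive truth-table check against Python's own bitwise operators
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
```

With NOT, AND, and OR recovered from NOR alone, any combinational circuit follows; add memory and time-stepping and you have the rest of the machine.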

But wait! Computers have interesting properties:
Now that you’ve proven cardiac tissue is Turing complete, uh oh, it’s vulnerable to the Halting problem.

Thus, there is no general algorithm that can look at the state of cardiac tissue and decide if it will ever stop.

Arrhythmias are fundamentally uncomputable!
Feb 19
An open secret is that all cameras are basically the same. Just look at the sensor.

Leica SL2-S? IMX410
Sony a7 III? IMX410
Lumix S5II? IMX410
BMCC6k? IMX410

Same photosites…but they still manage different feels.

The processing pipeline is where it gets interesting.
Much of it comes down to company taste.

Sony produces the majority of sensors; ironically I think they do the worst job with the signal chain.

First, you start with the color correction matrix (CCM).

The catch is that punchy colors start to mathematically multiply noise.
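A toy example of that effect (matrix coefficients are invented for illustration, not taken from any real camera): a "punchy" CCM mixes channels with weights greater than 1, and any noise riding on a channel gets scaled by those same weights.

```python
# Hypothetical "punchy" CCM: each row sums to 1.0 so grey stays grey,
# but the large diagonal terms scale per-channel noise by ~1.8x.
CCM = [[ 1.8, -0.5, -0.3],
       [-0.4,  1.7, -0.3],
       [-0.2, -0.6,  1.8]]


def apply_ccm(rgb):
    """3x3 matrix-vector product: corrected = CCM @ rgb."""
    return [sum(CCM[i][j] * rgb[j] for j in range(3)) for i in range(3)]


signal = [0.50, 0.50, 0.50]        # clean mid-grey pixel
noisy  = [0.51, 0.50, 0.50]        # same pixel, +0.01 noise on red

clean = apply_ccm(signal)
dirty = apply_ccm(noisy)
red_error = dirty[0] - clean[0]    # 0.01 in, 0.018 out: noise x1.8
```

The clean grey passes through untouched, but the 0.01 of red noise comes out as 0.018: the more saturation the matrix adds, the more it amplifies whatever noise was already in the channel.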
You end up with a non-linear distribution of noisy data. Tricky.

Thus begins the NR pipeline…and this is where I start to have a real problem with Sony.

They bake spatial NR directly into the RAW path.

It's a sneaky trick to cheat on dynamic range benchmarks.
Feb 13
CPUs are getting worse.

We’ve pushed the silicon so hard that silent data corruptions (SDCs) are no longer a theoretical problem.

Mercurial Cores are terrifying because they don’t hard-fail; they produce rare, but *incorrect* computations!
*When* exactly the problem occurred is hard to pinpoint.

The possibility was brought up at the Dependable Systems and Networks conference in 2008.

The first real SDC disclosure happened in 2021 with Meta. Google and Alibaba later confirmed similar findings.
Perhaps more terrifying is that cores can *become* mercurial over time.

Chips are pushed so hard that electromigration aging can make compute “more wrong”.

No one knows for sure which process node started the phenomenon...but it's statistically likely to be 14nm or 7nm.
Feb 12
If you take a picture of a Raspberry Pi 2 with a strong flash it will reboot.

A specific power regulator (U16) was chip-scale packaged to save on cost and die space.

Since the silicon is basically naked, a xenon flash can cause a massive (but very short) current spike.
Naked silicon (specifically, WLCSP) isn’t “bad” per se; it’s heavily used in mobile phones.

The thing is…phones are usually sealed. The Pi is an exposed development board.

Don't blame the engineers too hard; Apple actually had a similar issue with the iPhone 4 (back glass).
The fix for the RPi is a bit obvious, of course.

Either:

1. Don’t do that (take pictures with a high-powered flash inches away)
2. If you must…put a little Blu-Tack, nail polish, or other opaque inert substance on U16
Jan 12
Dolphin’s dev blogs are some of the best technical writing on the internet, and not enough people read them.

My favorite is their “Ridiculous Ubershader”.

Pre-compilation of the GameCube’s graphical effects is impossible:

5.64 x 10^511 possible states! So what do you do?
Just-In-Time compilation *sucked*.

I mean, it “worked”…but every time a new graphical effect appeared, you had to:

1. Translate it into shader code
2. Ask the driver to compile
3. PAUSE the game to finish compilation
4. Resume and draw the frame
The solution they developed was insane.

Emulate the GameCube’s rendering pipeline (as in, the actual hardware circuits) *inside* of a pixel shader.

Turns out, it’s easier to just “pretend” to be a real GameCube GPU.

It took 2+ years, and a massive amount of effort.
