Shutting down your PC before 1995 was kind of brutal.
You saved your work, flushed the buffers, waited for the HDD lights to switch off, and
*yoink*
You flicked the mechanical switch, directly cutting the flow of power.
The interesting part is when this all changed.
Two major developments had to occur.
First, a standardized physical connection linking the power supply to the motherboard. (Hardware constraint)
Second, a universal driver mechanism to request changes in the power state. (Software constraint)
These, respectively, became known as the ATX and APM Standards.
Although it would have been possible much earlier, industry fragmentation in the PC market between Microsoft, IBM, Intel, and others stalled progress.
By 1995, things started to get more consolidated.
Eventually, OS-level control of the system's power state became widespread. And for good reason!
Caches, more complex filesystems, and multitasking all increased the risk of data corruption during an "unclean" shutdown.
The APM standard was later replaced by ACPI, but it's an interesting tidbit of computer history nonetheless.
If you'd like to read some interesting history of the APM vs ACPI debate, check out this writeup by MJG59.
What’s the difference between experience and expertise?
A 2008 research paper found an interesting distinction.
Years of work-related experience didn't affect a person's susceptibility to various cognitive biases. In other words, experience alone didn't help at all. So what did?
As it turned out, professionals who had taken specific training were much less susceptible to bias than those with only extensive work experience.
“Expertise”, then, can be defined as having not only a deep understanding, but also the proper tooling for the situation.
I see this bias all the time in the software industry.
Experienced professionals rejecting useful tooling (e.g. LLM code generation) out of pride, cognitive bias, or plain lack of interest.
Expertise is continuous experimentation: adding new tools to your workshop.
Due to Rice's Theorem, it's impossible to write a program that can perfectly determine if any given program is malicious.
This is because "being malicious" is a behavioral property of the program.
Even if we could perfectly define what "malicious behavior" *is* (a huge problem in and of itself), any non-trivial property of what a program will eventually do is undecidable.
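To make that concrete, here's a minimal sketch of the classic reduction, assuming a hypothetical `is_malicious` oracle (no real library provides one) and using "delete /tmp/victim" as a stand-in for malicious behavior: if a perfect detector existed, you could use it to decide whether an arbitrary program halts, which we know is impossible.

```python
# Sketch only: `is_malicious` is an imagined perfect decider, not a real API.

def is_malicious(source: str) -> bool:
    """Pretend this is a perfect decider: True iff the program in `source`,
    when run, performs one precisely defined malicious action
    (here: deleting a file). Rice's theorem says no such decider can exist."""
    raise NotImplementedError("no perfect behavioral decider exists")


def would_halt(program_source: str) -> bool:
    """If is_malicious existed, it would also decide the halting problem:
    wrap the target program so the 'malicious' step runs only after the
    target finishes. The wrapper is malicious iff the target halts."""
    wrapper = (
        "import os\n"
        + program_source + "\n"           # run the arbitrary target program
        + "os.remove('/tmp/victim')\n"    # reached only if the target halts
    )
    return is_malicious(wrapper)          # but halting is undecidable
```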
Security in the traditional sense is probabilistic.
In other words, we can make AVs very likely to catch malware, but we cannot mathematically guarantee it.
You can't:
- analyze all execution paths
- run for infinite time
- simulate all possible environments
- predict all possible transformations