Most software is a composition of layers, each a fossilised snapshot of the organizational understanding of the problem that layer was originally designed to solve - yet we rely on it as if it were continually maintained critical infrastructure.
One of those has to give.
What a system was designed to do, what it does, what it is supposed to do, and what people use it for are 4 mostly unrelated concepts.
Asbestos problems - where you've introduced a dependency to serve a purpose, but due to "unforeseen" danger it now needs to be completely replaced - suck. But competent organizations have continuous deployment pipelines and dependency tracing. At worst, at least you know of the danger.
The problem is most organizations don't consider software a competency - at best they consider it a cost center, and at worst they consider it an asset class.
"Out of all possible dependencies predict the next Asbestos" is an intractable problem.
"How to quickly remove Asbestos from a system" is a solved problem and has a clear investment path.
Which doesn't bode well.
It is cheaper to build reactive capabilities while investing in the problems you care about.
It is expensive to divert that funding towards critical infrastructure to prevent the need to react in the first place.
And round and round we go.
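To make the "dependency tracing" half of this concrete, here's a minimal sketch. The package names and graph are invented for illustration (log4j stands in as the flagged "asbestos"); the point is just that computing the blast radius of a bad dependency is a reverse transitive closure, i.e. a solved problem:

```python
# Toy sketch of dependency tracing: given a dependency graph, find
# everything that transitively depends on a flagged package (the
# "asbestos"), so you know the full blast radius of removing it.
# Package names and the graph itself are hypothetical.

from collections import deque

# edges: package -> packages it depends on
DEPS = {
    "webapp": ["http-lib", "template-lib"],
    "http-lib": ["tls-lib", "log4j"],
    "template-lib": ["log4j"],
    "batch-jobs": ["http-lib"],
    "tls-lib": [],
    "log4j": [],
}

def dependents_of(target: str) -> set[str]:
    """Return every package that reaches `target` through its deps."""
    # Invert the graph so we can walk outward from the flagged package.
    rdeps: dict[str, list[str]] = {pkg: [] for pkg in DEPS}
    for pkg, deps in DEPS.items():
        for dep in deps:
            rdeps.setdefault(dep, []).append(pkg)

    affected, queue = set(), deque([target])
    while queue:
        for parent in rdeps.get(queue.popleft(), []):
            if parent not in affected:
                affected.add(parent)
                queue.append(parent)
    return affected

print(sorted(dependents_of("log4j")))
# ['batch-jobs', 'http-lib', 'template-lib', 'webapp']
```

Knowing *what* to remove is cheap; it's predicting *which* dependency becomes asbestos that's intractable.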
It's been a while since I've visited this topic, and with some vacation coming up I think I might want to dive into it some more. I would really like to find some kind of solution to this.
I'm going to dump some thoughts about approaches I've already tried in this thread.
My ongoing rage at papers that define an adversary strong enough to compromise any participant at will, but weak enough that they are incapable of arbitrary protocol violations.
"We permit the adversary to perform all of these actions, but we assume they will never lie to the other members about it"
"We assume the adversary is omnipotent in regards to the internal state of protocol participants, but they will definitely never introduce arbitrary network delays."
Many of the major trends in crypto right now are leading to the development of structures that are fundamentally aligned with anarcho-mutualism (community ownership and control, community credit).
You have to wipe off the icky layer of rentier capitalism settling on the surface.
I started programming simple real mode operating systems in my teens, and it's been an on-and-off hobby over the course of the last two decades.
In between I've built hobby emulators, (dis)assemblers, fuzzers, compilers, and uncountable weird hybrids.
I'm going to assume you know at least one high level programming language. If you don't, then you should learn one. Any one will do. People may tell you the choice matters; it doesn't.
The basic principles you will learn in one are transferable to others.
I spent my recent evenings writing an operating system in an assembly language that I also developed to compile to a custom bytecode that I also designed to run on a virtual machine that I also implemented.
A meditation on recursive complexity and what actually makes me happy.
It is completely useless. All that work, and you can only run a few commands, and one of them is QUIT.
I have never loved a piece of software more.
The kernel is 832 lines of custom assembly. ~300 are dedicated to embedding binary data like font bitmaps.
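For anyone curious what the assembler-to-bytecode-to-VM stack looks like in miniature: the actual ISA here isn't published, so the opcodes, mnemonics, and program below are entirely made up, but the shape is the same - a tiny "assembler" that turns mnemonics into custom bytecode, and a fetch-decode-execute loop that runs it:

```python
# A generic toy of the stack described above. Everything here
# (opcodes, mnemonics, the program) is invented for illustration.

OPCODES = {"PUSH": 0x01, "ADD": 0x02, "PRINT": 0x03, "HALT": 0xFF}

def assemble(source: str) -> bytes:
    """One mnemonic (plus optional operand) per line -> bytecode."""
    code = bytearray()
    for line in source.strip().splitlines():
        mnemonic, *operand = line.split()
        code.append(OPCODES[mnemonic])
        if operand:
            code.append(int(operand[0]))
    return bytes(code)

def run(code: bytes) -> None:
    """A classic fetch-decode-execute loop over a data stack."""
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == OPCODES["PUSH"]:
            stack.append(code[pc]); pc += 1
        elif op == OPCODES["ADD"]:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == OPCODES["PRINT"]:
            print(stack[-1])
        elif op == OPCODES["HALT"]:
            return

run(assemble("""
PUSH 2
PUSH 3
ADD
PRINT
HALT
"""))  # prints 5
```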
Encrypted communication tools should be designed such that devs *don't* have access to things like "where [and when] accounts are created, how [data] travels, which [messages] are fastest to spread"
Basically this. The underlying expectation that "responsible encryption" requires some kind of metadata surveillance to be safe seems to me to be a deeply flawed narrative that can only result in greater and greater privacy harms.