The fact that our civilization does not have a right to repair is all you need to know to realize the fundamental misalignment of the economic model and that of sustainability.
The kinds of human technologies that we build are incompatible with the biological world. They are incompatible with everything else that we build. We are in a constant march towards greater and greater incompatibility.
Gone are the days when you could fix things. The only recourse is to throw broken things away. That is because corporations are incentivized to manufacture cheaply rather than to build things that last.
To ensure recurring revenue, everything is built to become obsolete. Nothing is meant to last. Nothing is repairable. This is in contrast to biological systems where self-repairability is an intrinsic feature.
Not making things repairable promotes the propagation of vertically integrated monopolies, where companies own the entire stack. This leads to less robustness in our systems. But the key effect here is greater inequality.
If you want to address human inequality, you have to widen the ability of humans to participate in the economy. The right of repair widens opportunities for participation. Vertical integration, in contrast, restricts opportunities to those within the stack.
It's just a sad state of affairs when people hear about the 'right to repair' and dismiss it as something only technical folk care about. They cannot see that it is fundamental. Why can't they see it? A simple lack of education on how reusable systems like biology work.
The right of repair is directly important to decentralized production. If you want to reduce inequality, you must allow people to produce the goods and services that they need locally. But that's not possible with vertically integrated monopolies.
But why don't governments seek decentralized production? The reason is that they also don't see it as important! We live in a world where people only understand machine-like solutions. So we concentrate production in a few places.
Enough of this concentration of production and you basically starve out entire populations. People need to migrate out of their countries or from rural areas because there is absolutely nothing to do there. They cannot grow or build anything that is competitive.
It used to be that you could competitively grow anything anywhere. That's not true anymore. Mechanized agriculture has made it competitively impossible to make a living by growing things yourself. The only survival strategy is to seek out the long tail.
I can keep going on with this rant, but I will stop. The key point here is that without incentives for decentralized production, we are stuck with ever-increasing inequality.
Only a tiny minority of people see this. So I don't expect this to change anytime soon.
The right of repair lowers costs for everyone. It gives employment to those who can fix things. It allows production to be shared. Why then do we keep discouraging this kind of economy?
The term "linear" in the traditions of mathematical programming, and hence machine learning, doesn't have the same meaning as "linear" in the tradition of physics. So when ML folks speak of non-linearity, it is not the same non-linearity that physicists speak of.
I read a recent article where this was readily apparent: "They’re “linear” because the only allowable power is exactly 1 and graphs of solutions to the equations form planes." quantamagazine.org/new-algorithm-…
Dynamical equations in physics usually involve terms of power 2. A non-linear equation in physics is one that typically does not have a closed-form analytic solution; the dynamics of a non-linear system feed back into itself.
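A minimal sketch of the two senses of "linear" (my own illustration, not from the article): in the mathematical-programming sense, variables appear only to power 1, so solution sets form planes; in the physics sense, non-linearity means the state feeds back into itself, as in the logistic map.

```python
import numpy as np

# Linear in the mathematical-programming sense: variables appear only to
# power 1, so solutions of a*x + b*y = c form a plane (a line in 2D).
a, b, c = 2.0, 3.0, 6.0
xs = np.linspace(0.0, 3.0, 4)
ys = (c - a * xs) / b            # every (x, y) pair satisfies the equation
print(np.allclose(a * xs + b * ys, c))  # True: the points lie on the plane

# Non-linear in the physics sense: the state feeds back into itself.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) has a squared term and
# no general closed-form trajectory.
def logistic_trajectory(x0, r=3.9, steps=5):
    traj = [x0]
    for _ in range(steps):
        traj.append(r * traj[-1] * (1 - traj[-1]))
    return traj

print(logistic_trajectory(0.2))
```

Note that the logistic map is "non-linear" in both senses: the variable appears squared, and each step feeds the previous state back into the dynamics.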
The purpose of the brain is homeostasis. More specifically, a particular variant referred to by the lesser-known word "allostasis". Accepting this reveals all that is wrong with machine learning approaches to modeling brains. Permit me to explain...
Allostasis proposes that efficient regulation depends on the anticipation of needs and preparation for their satisfaction. This is a more complex form of homeostasis, which is typically defined as maintaining a system within a narrow operating range.
The problem with machine learning approaches is that the domain of stability is formulated by a researcher who explicitly defines an objective function, or, in the RL paradigm, a reward function.
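To make the point concrete, here is a hypothetical reward function of the kind standard RL assumes (the setpoint and tolerance values are my own illustrative choices): the "domain of stability" is hard-coded by the researcher, not discovered or anticipated by the agent itself.

```python
# Hypothetical RL reward: the researcher, not the agent, picks the operating
# range. The agent merely optimizes against this externally imposed function.
SETPOINT = 37.0    # illustrative target (e.g., body temperature in Celsius)
TOLERANCE = 0.5    # researcher-chosen width of the "stable" band

def reward(temperature: float) -> float:
    """Highest reward inside the hand-defined operating range,
    growing penalty outside it."""
    error = abs(temperature - SETPOINT)
    return 1.0 if error <= TOLERANCE else -error

print(reward(37.2))   # inside the band
print(reward(39.0))   # outside the band
```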
Permit me to explain why your brain is flipping its interpretation of the image. For starters, human vision acts very similarly to touch. medium.com/intuitionmachi…
When your eye looks at an image, it is actually rapidly moving around and 'feeling' the image. The part of the eye that sees color and high resolution covers just a small fraction of what is in front of you.
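The idea above can be sketched as foveated sampling (a toy model; the array, patch size, and fixation points are illustrative): only a small central patch is sampled at full resolution, and each saccade moves that patch, so the scene is built up by 'feeling' it over time.

```python
import numpy as np

# Toy "scene": a 10x10 grid standing in for the visual field.
image = np.arange(100, dtype=float).reshape(10, 10)

def foveal_glimpse(img, cy, cx, radius=1):
    """Return the small high-resolution patch around the fixation point,
    standing in for the fovea's narrow field of sharp, color vision."""
    return img[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]

# Three saccades, three small high-resolution samples of a much larger scene:
fixations = [(2, 2), (5, 7), (8, 3)]
glimpses = [foveal_glimpse(image, y, x) for y, x in fixations]
print([g.shape for g in glimpses])  # each glimpse covers only 3x3 of 10x10
```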
Thrilled today to have anticipated a @DeepMind position paper by several years before its preprint appeared. This is a hint that I may in fact be at the bleeding edge of understanding general intelligence. Here's the paper: arxiv.org/abs/2102.03406
The key points of this paper are what the authors describe as symbolic fluency: receptive, constructive, embedded, malleable, separable, meaningful, and graded. Let me explore these in more detail to mine new insights.
I don't need to regurgitate the motivations of the approach, other than to say that it derives inspiration from Peirce's formulation of semiotics. medium.com/intuitionmachi…
Quoted from the paper "Our definition of a symbol draws on the work of the philosopher Charles Sanders Peirce. Peirce outlined three categories of relation—icons, indices, and symbols—whose definitions illuminate the role of convention in establishing meaning."
Perhaps the authors were inspired by my blog post written in 2018. I do hope they continue to be inspired by other blog posts on the same topic. medium.com/intuitionmachi…
A crucial step on the road towards AGI is a richer vocabulary for reasoning about inductive biases.
@yudapearl explores the apparent impedance mismatch between inductive biases and causal reasoning. But isn't the logical thinking required for good causal reasoning itself an inductive bias?