There is a massive asymmetric information gap between knowing a theory is wrong and discovering the correct theory. Becoming aware of flaws is just the first step in a very long journey. But if you never see the flaws, you never take the journey and thus never get anywhere.
This is a double-edged sword: we sometimes see flaws that are simply not there and embark on a journey toward discovery along a deceptive path. The path one sticks to because it's the one without forks. The one that continually confirms one's own biases.
Persistence requires a level of naivety; naivety is what keeps us motivated. If we knew how long the journey was before we began, we might never have started it at all.
Innovation requires the rejection of what is assumed. But it is hard to know what is implicitly assumed, because it is baked into our language and our education. Both influence our thoughts, and thus we must always be cognizant of the biases they introduce.
We cannot individually break out of the echo chamber of our own thoughts. Evolution has primed us to bias our thinking toward confirming ourselves. As Feynman said, "you are the easiest person to fool."
The only solution is to realize that there are other minds that can reveal the incorrectness of your own language, education, and thus thinking. This can be uncovered only if you are willing to engage honestly in conversation.
All innovative discoveries are found through interaction. The most powerful general intelligence isn't the intelligence in our own heads. Rather, it is the intelligence that emerges from the interactions of many independent minds.
It is indeed striking when you discover the bias of our noun-centric modern language. It's all too easy to become a reductionist who decouples interaction from the whole. But the richness of reality is a consequence of interactions, not of static things.
Thus we have come full circle. Language is not a static thing. Language is interaction and, as a consequence, a living thing. A living thing that "is created by all living things. It surrounds us and penetrates us; it binds the galaxy together." ;-)
Lazy Twitter: What was the name of that hypothesis that technologies that were not the best, but were the most widely distributed, would be the ones that take over the world? Do you know what I'm referring to?
I seem to recall that it was also used as an argument for why the Apple M1 was so fast. I don't recall the name of the theory, though, or who came up with it. debugger.medium.com/why-is-apples-…
It's also related to how successful programming languages are not the most elegant or powerful ones, but the ones that happen to fit best at the time of their adoption. I also forget what they called this observation!
Lazy Twitter: What is a good metaphor for biology?
I'm asking this because the usual bias about biology is that, because it is made of wet stuff, we tend to think of it as a massively scaled chemical engineering process.
We don't think of biology as being like the dry stuff we find in semiconductor technology, where the scope of control extends down to the movement of a few elementary particles (i.e., electrons).
The fact that our civilization does not have a right to repair is all you need to know to realize the fundamental misalignment between our economic model and sustainability.
The kinds of human technologies that we build are incompatible with the biological world. They are incompatible even with everything else that we build. We are on a constant march toward greater and greater incompatibility.
Gone are the days when you could fix things. The only recourse is to throw broken things away, because corporations are incentivized to manufacture cheaply rather than to build things that last.
The term "linear" in the tradition of mathematical programming, and hence machine learning, doesn't have the same meaning as "linear" in the tradition of physics. So when ML folks speak of non-linearity, it is not the same non-linearity that physicists speak of.
I read this recent article and the difference was immediately apparent: "They’re “linear” because the only allowable power is exactly 1 and graphs of solutions to the equations form planes." quantamagazine.org/new-algorithm-…
Dynamical equations in physics are usually second order (think Newton's second law). A non-linear equation in physics is one that typically has no closed-form analytic solution; the hallmark of a non-linear system is that its state feeds back into its own dynamics.
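The terminology gap above can be made concrete with a small sketch (a toy illustration, not from the article): in the ML sense, a "linear" layer is a degree-1 (affine) map whose graph is a plane, and "non-linearity" just means inserting a non-linear activation; in the physics sense, non-linear means the state feeds back into its own dynamics, as in the logistic equation, so superposition fails.

```python
import numpy as np

# ML sense: a "linear" layer is a degree-1 (affine) map y = Wx + b.
W = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([1.0, -1.0])
linear_layer = lambda x: W @ x + b

# ML "non-linearity" just means a non-linear activation, e.g. ReLU:
relu = lambda x: np.maximum(x, 0.0)

# Physics sense: non-linear means feedback of the state into its own
# dynamics. One Euler step of the logistic equation dx/dt = r*x*(1 - x),
# which is non-linear because of the x**2 term:
def logistic_step(x, r=2.0, dt=0.01):
    return x + dt * r * x * (1.0 - x)

# Superposition test: a truly linear map f satisfies f(a + c) = f(a) + f(c).
f = lambda x: W @ x  # linear in the physicist's sense too (no bias)
a, c = np.array([1.0, 2.0]), np.array([3.0, 4.0])
assert np.allclose(f(a + c), f(a) + f(c))  # holds for the linear map
assert not np.isclose(
    logistic_step(1.0 + 2.0),
    logistic_step(1.0) + logistic_step(2.0),
)  # fails for the non-linear dynamics
```

Note that even the affine layer with a bias term fails strict superposition; the ML usage of "linear" really means "polynomial of degree exactly 1," as the quoted article says.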
The purpose of the brain is homeostasis. More specifically, a particular variant referred to by the lesser-known word "allostasis". Accepting this reveals all that is wrong with machine learning approaches to modeling brains. Permit me to explain...
Allostasis proposes that efficient regulation depends on the anticipation of needs and preparation for their satisfaction. This is a more complex form of homeostasis, which is typically defined as maintaining a system within a narrow operating range.
The problem with machine learning approaches is that the domain of stability is formulated by a researcher who explicitly defines an objective function or, in the RL paradigm, a reward function.
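The contrast can be sketched in a few lines of toy code (all names and numbers here are hypothetical illustrations, not any established model): an RL-style reward is a fixed function chosen by the researcher, whereas an allostatic regulator shifts its own operating point in anticipation of a predicted demand.

```python
# RL-style: a researcher hard-codes what "good" means, e.g. reward
# peaks at a fixed target value chosen from outside the system.
def external_reward(state):
    return -abs(state - 37.0)

class AllostaticRegulator:
    """Toy regulator that adjusts its own setpoint in anticipation of a
    predicted future load, rather than tracking a fixed external target."""

    def __init__(self, setpoint=37.0):
        self.setpoint = setpoint

    def anticipate(self, predicted_load):
        # Allostasis: shift the operating point *before* the disturbance
        # arrives (e.g., raising heart rate before exertion begins).
        self.setpoint += 0.1 * predicted_load

    def act(self, state):
        # Simple proportional correction toward the *current* setpoint.
        return 0.5 * (self.setpoint - state)

reg = AllostaticRegulator()
reg.anticipate(predicted_load=5.0)  # prepare for an upcoming demand
correction = reg.act(state=37.0)    # non-zero even though state == 37
```

The point of the sketch: the allostatic system generates a corrective action even when the externally defined reward is already maximal, because its criterion of stability is internal and anticipatory rather than fixed by a designer.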