Lazy Twitter: What is a good metaphor for biology?
I'm asking this because the usual bias about biology is that, because it is made of wet stuff, we think of it like a massively scaled chemical engineering process.
We don't think of biology like the dry stuff we find in semiconductor technology, where the scope of control is in the movement of a few elementary particles (i.e. electrons and protons).
There is the analogy of molecular machines. But that implicitly assumes continuous movement, despite our knowing that all of biology is quantized in energy units of ATP being broken down and recreated.
So biology appears to be something more like a molecular computer, with discreteness in the movement of its processes.
Furthermore, one cannot avoid the notion that biology is massively modular. There are molecular structures at different scales that are conserved and reused across a multitude of species.
So is a molecular computer the best metaphor we can come up with to understand biology, or is there a better one?
There is a massive, asymmetric information gap between knowing a theory is wrong and discovering the correct theory. Becoming aware of flaws is just the first step in a very long journey. But if you never see the flaws, you never take the journey, and thus never get anywhere.
This is a double-edged sword. Sometimes we see flaws that are simply not there, and the journey toward discovery we take is along a deceptive path. The path one sticks to because it's the one without forks. The one that continually confirms one's own biases.
Persistence requires a level of naivety; this is what keeps us motivated. If we knew how long the journey was before we began, we might never have started it at all.
Lazy Twitter: What was the name of that hypothesis that the technologies that were not the best, but were the most widely distributed, would become the ones that take over the world? Do you know what I'm referring to?
I seem to recall that it was also used as an argument for why the Apple M1 was so fast. I don't recall the name of the theory or who came up with it, though. debugger.medium.com/why-is-apples-…
It's also related to how the successful programming languages are not the most elegant or powerful ones, but the ones that happen to have the best fit at the time of their adoption. I also forget what this observation is called!
The fact that our civilization does not have a right to repair is all you need to know to realize the fundamental misalignment between our economic model and sustainability.
The kinds of human technologies that we build are incompatible with the biological world. They are incompatible with everything else that we build. We are in a constant march towards greater and greater incompatibility.
Gone are the days when you could fix things. The only recourse is to throw broken things away, because corporations are incentivized to manufacture cheaply rather than to build things to last.
The term "linear" in the tradition of mathematical programming, and hence machine learning, doesn't have the same meaning as "linear" in the tradition of physics. So when ML folks speak of non-linearity, it is not the same non-linearity that physicists speak of.
Reading this latest article made the difference obviously apparent: "They're 'linear' because the only allowable power is exactly 1 and graphs of solutions to the equations form planes." quantamagazine.org/new-algorithm-…
Dynamical equations in physics usually involve powers of 2. A non-linear equation in physics is typically one that does not have a closed-form analytic solution. The dynamics of a non-linear system feed back into the system itself.
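The contrast can be made concrete with a small sketch (a toy illustration, not from the article): "linear" in the ML sense means degree-1 maps, so superposition holds and stacked linear layers collapse into one; "non-linear" in the physics sense means the state feeds back into itself, as in the logistic map, where superposition fails.

```python
import numpy as np

# ML sense of "linear": degree-1 maps. Superposition holds, and a
# composition of linear layers is itself just one linear layer.
W1 = np.array([[1.0, 2.0], [0.5, -1.0]])
W2 = np.array([[0.0, 1.0], [3.0, 1.0]])

x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])

f = lambda v: W2 @ (W1 @ v)   # two stacked "linear layers"...
g = lambda v: (W2 @ W1) @ v   # ...equal a single linear layer

assert np.allclose(f(x + y), f(x) + f(y))  # superposition
assert np.allclose(f(x), g(x))             # the stack collapses

# Physics sense of "non-linear": the state feeds back into itself,
# e.g. the logistic map x_{t+1} = r * x_t * (1 - x_t).
def logistic(x0, r=3.9, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)  # x multiplies itself: feedback
    return x

# Superposition fails for the feedback system.
a, b = 0.1, 0.2
assert not np.isclose(logistic(a + b), logistic(a) + logistic(b))
```

This is also why neural networks need non-linear activations at all: without them, any depth of "linear" layers collapses to a single matrix, as the `f` versus `g` check shows.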
The purpose of the brain is homeostasis. More specifically, a particular variant referred to by the lesser-known word "allostasis". Accepting this reveals all that is wrong with machine learning approaches to modeling brains. Permit me to explain...
Allostasis proposes that efficient regulation depends on the anticipation of needs and preparation for their satisfaction. This is a more complex form of homeostasis, which is typically defined as maintaining a system within a narrow operating range.
The problem with machine learning approaches is that the formulation of the domain of stability is performed by the researcher, who explicitly defines an objective function or, in the RL paradigm, a reward function.
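A toy numerical sketch of the distinction (all names and numbers hypothetical, just to illustrate the anticipation point): a homeostatic regulator waits for the error to appear, so the full disturbance shows up as a deviation; an allostatic regulator pre-compensates from a prediction, so only the prediction error appears. Note that in both cases the setpoint is hand-specified, which is exactly the role the researcher's objective or reward function plays in ML.

```python
def allostatic_prepare(value, predicted_demand):
    """Raise the reserve *before* the predicted demand arrives."""
    return value + predicted_demand

setpoint = 1.0   # hand-specified, like an ML objective/reward
demand = 0.4     # a foreseeable disturbance (e.g. upcoming exertion)

# Homeostatic regulator: reacts only after the error exists, so at
# the moment the demand hits, the whole disturbance is a deviation.
homeo_deviation = abs(setpoint - (setpoint - demand))

# Allostatic regulator: pre-compensates from an (imperfect)
# prediction, so only the prediction error remains as a deviation.
predicted = 0.35
allo_value = allostatic_prepare(setpoint, predicted)
allo_deviation = abs(setpoint - (allo_value - demand))

# Anticipation leaves the system closer to its operating range.
assert allo_deviation < homeo_deviation
```

The point of the sketch is not the arithmetic but where the numbers come from: the brain presumably learns its own setpoints and predictions, whereas in the ML formulation both are written down by the researcher in advance.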