Is anyone working on an AlphaZero that explains a chess game like a chess commentator? This sounds like an interesting project that tests the emerging idea of language models grounded in virtual environments. @openai @deepmind @MetaAI
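A minimal sketch of the grounding loop I have in mind, assuming the python-chess library (pip install python-chess). The material count stands in for a real value head, and the comment() helper is hypothetical: in an actual system its output would seed a language-model prompt rather than serve as the commentary itself.

```python
# Sketch: turn raw engine signals into commentary-ready text.
# Assumes python-chess; material_balance() is a crude stand-in
# for an AlphaZero-style value head.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def material_balance(board: chess.Board) -> int:
    """White-minus-black material, a toy evaluation."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, chess.WHITE))
        score -= value * len(board.pieces(piece_type, chess.BLACK))
    return score

def comment(board: chess.Board, move: chess.Move) -> str:
    """One-line 'commentator' note for a move (hypothetical helper).
    Plays the move on the board as a side effect."""
    before = material_balance(board)
    san = board.san(move)              # notation before pushing
    board.push(move)
    after = material_balance(board)
    swing = after - before
    if board.is_check():
        verdict = "a forcing check"
    elif abs(swing) >= 3:
        verdict = f"a material swing of {swing:+d}"
    else:
        verdict = "a quiet move"
    return f"{san}: {verdict} (eval {before:+d} -> {after:+d})"

board = chess.Board()
for uci in ["e2e4", "e7e5", "g1f3", "b8c6"]:
    print(comment(board, chess.Move.from_uci(uci)))
```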
There's a prevailing cultural bias that says we should not frame biological information using the thinking frameworks we've developed in the social and computer sciences; that biology must have its own mysterious vitality, independent of ideas formulated by man.
This bias manifests itself in the argument that "a brain is not a computer." Depending on your level of abstraction, this may be true or false. Of course, brains don't have von Neumann architectures and don't require programming. At a higher level of abstraction, there's a sameness: both process information.
Humans need abstractions to work out the consequences of complex systems. But the generative machinery we find in biology is too gratuitously bureaucratic to compress into clean abstractions. Biology is built on historical accidents, and its information spaces are correspondingly muddy.
AI risk is like computer security. Someone has to devote their career to becoming an expert in the field. It's very important. But it just so happens that I've got a limited attention budget and am not inclined towards that kind of work. It's not for me, so I avoid the topic.
AI risk and computer security demand studying dark patterns: the kind of patterns that are intentionally deceptive, where you steer other agents against their own purposes. I am just not a fan of immersing myself in this kind of information space.
I know that it's damn important, but there is a multitude of things in life that are also important. Humans have limited attention and lifespans; we have to prioritize the things we can actually contribute to society. We can't individually do everything.
A Why machine is an agent that, when given a task, is aware of why the task needs to be performed.
A Why machine is an agent that, when it doesn't know the answer to "why," takes the initiative to seek further explanation. An agent that doesn't care about the result doesn't seek an answer to "why."
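A toy sketch of the idea, with every name in it (Task, ask_why, run) hypothetical: the agent checks whether it holds a rationale for a task, and when it doesn't, it takes the initiative to seek one.

```python
# Sketch of a "Why machine": an agent that tracks a rationale for
# every task and actively asks for one when it is missing.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    rationale: Optional[str] = None   # the stored answer to "why?"

def ask_why(task: Task) -> str:
    """Stand-in for seeking further explanation, e.g. querying a
    human or a knowledge base; here we just record the request."""
    return f"explanation requested for '{task.name}'"

def run(task: Task) -> None:
    # An agent that doesn't care about the result would skip this
    # check: caring about the outcome is what makes "why" worth asking.
    if task.rationale is None:
        task.rationale = ask_why(task)   # take the initiative
    print(f"doing '{task.name}' because: {task.rationale}")

run(Task("water the plants", rationale="they wilt without it"))
run(Task("file the report"))   # no rationale -> the agent seeks one
```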
It's about time we abandon old debates that got us nowhere and reframe them on a modern scientific basis. The first one we should get rid of is the hard problem of consciousness. aeon.co/essays/the-har…
The other debate to retire is nature vs. nurture for human cognition. Humans are not like products coming off a factory line: we have sufficient adaptability, and the right inclinations, to learn from our environment. Where development begins and ends is arbitrary.
All intelligence is collective intelligence, built up from agential material. The hard problem is how to direct development so that we robustly achieve our intended goals. Managing complex adaptive systems requires new methods for steering collective growth.
If I understand Wolfram's emes correctly, a specific instance of them defines the branchial space of his physics. They are the rules that govern the evolution of spacetime.
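For concreteness, a toy sketch of the kind of rewrite rule his model evolves, written under my own assumptions: the growth rule used here (keep each edge {x,y} and spawn a fresh {y,z}) is merely in the spirit of Wolfram's simple examples, not his actual formalism, and branchial space, as I read it, arises from the different possible orders of applying such updates.

```python
# Sketch of one synchronous rewriting step on a graph of ordered
# edges, in the spirit of a Wolfram-model rule. Real Wolfram-model
# evolution matches subhypergraphs, and the choice of which matches
# to update first is what spawns the branchial (multiway) structure.
def rewrite_step(edges, next_node):
    """Each edge (x, y) is kept and spawns a fresh edge (y, z)."""
    new_edges = []
    for (x, y) in edges:
        new_edges.append((x, y))            # keep the matched edge
        new_edges.append((y, next_node))    # grow toward a fresh node
        next_node += 1
    return new_edges, next_node

# Evolve from a single seed edge; each step is one "tick" of the
# update process that, in this picture, builds spacetime structure.
graph, fresh = [(0, 1)], 2
for step in range(3):
    graph, fresh = rewrite_step(graph, fresh)
    print(f"step {step + 1}: {len(graph)} edges")   # 2, 4, 8
```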
The peculiar thing about biological intelligence is that the perception of energy expenditure is tied to how we measure the value of our experiences. What we deem real are the experiences we spend cognitive energy on.
Thus we find running a marathon more significant than a walk to the grocery store. We value writing an entire book more than conjuring up a tweetstorm. We value a scene we painted over the same scene we photographed. We place value on what we spent effort to achieve.
We are disappointed when we fail at a task we invested effort in. We make an effort for the things we care about. We expend energy because we care. But a machine has no concept of energy; to it, energy is unlimited. So there's no notion of caring.