ChatGPT is useful, but it is not a good model of human-like reasoning: it needs a large training data set and consumes lots of energy. A thread on how far we are from brain-inspired autonomous agents that, like humans, go out into the world and learn purposeful behavior.
The 1st step is to accept Bayesian (probabilistic) reasoning as the gold standard for reliable information processing. Yet, the adoption of Bayesian reasoning proceeds slowly in many relevant engineering fields such as signal processing and machine learning.
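As a toy illustration of what Bayesian updating buys you, here is a minimal Python sketch that sequentially updates a Beta belief over a coin's bias as flips come in; the numbers and names are made up for this example only.

```python
# Minimal sketch: sequential Bayesian updating of a coin's bias
# with a Beta prior (conjugate to the Bernoulli likelihood).
# Illustration only; the data and variable names are made up.

def update(alpha, beta, observation):
    """Bayes rule for one Bernoulli observation under a Beta(alpha, beta) prior.

    observation: 1 for heads, 0 for tails.
    Returns the parameters of the Beta posterior.
    """
    return alpha + observation, beta + (1 - observation)

alpha, beta = 1.0, 1.0                 # uniform prior over the coin bias
for obs in [1, 1, 0, 1, 1, 0, 1]:      # a short stream of coin flips
    alpha, beta = update(alpha, beta, obs)   # posterior becomes the next prior

posterior_mean = alpha / (alpha + beta)
print(f"posterior Beta({alpha}, {beta}), mean bias ~ {posterior_mean:.2f}")
```

The point is the pattern: every observation turns the current belief into a new belief by Bayes rule, and that posterior is carried forward as the prior for the next observation.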
The 2nd step is to drop the utility-theory cap on top of Bayesian reasoning, a cap that is still a strong anchor in reinforcement learning and optimal control theory. Agents that act solely to maximize expected future utility will fail in several ways (see t.ly/GepB).
Instead, active inference, which is based on the Free Energy Principle, leads (among many other advantages) to Bayes-optimal behavior that properly balances utility-seeking and information-seeking drives. #activeinference #freeenergyprinciple #FEP
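A toy sketch of that balance: below, each candidate action is scored by a one-step expected free energy, which combines an expected-utility (pragmatic) term and an expected-information-gain (epistemic) term in a two-state, two-observation world. The matrices, preferences, and action names are invented for illustration and are not tied to any specific active-inference toolbox.

```python
import numpy as np

# Toy expected-free-energy sketch (discrete, one-step lookahead).
# "exploit" yields preferred observations but says nothing about the hidden
# state; "explore" is less rewarding but disambiguates the state.

q_s = np.array([0.5, 0.5])              # current belief over 2 hidden states
log_C = np.log(np.array([0.9, 0.1]))    # log-preferences over 2 observations

A = {   # likelihoods p(o|s), rows: observations, cols: states
    "exploit": np.array([[0.7, 0.7],
                         [0.3, 0.3]]),
    "explore": np.array([[0.95, 0.05],
                         [0.05, 0.95]]),
}

def expected_free_energy(A_a, q_s, log_C):
    q_o = A_a @ q_s                          # predicted observation distribution
    pragmatic = q_o @ log_C                  # expected log-preference (utility)
    # epistemic value = expected KL[q(s|o) || q(s)] = mutual information I(s;o)
    joint = A_a * q_s                        # p(o, s) under the current belief
    epistemic = np.sum(joint * (np.log(joint + 1e-16)
                                - np.log(q_o[:, None] + 1e-16)
                                - np.log(q_s[None, :] + 1e-16)))
    return -pragmatic - epistemic            # the agent picks the minimum

for action, A_a in A.items():
    print(action, "G =", round(expected_free_energy(A_a, q_s, log_C), 3))
```

Running this, "explore" gets the lower expected free energy even though its observations are less preferred, because its information gain offsets the loss in utility; a pure utility maximizer would never make that trade.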
Thirdly, agents that both sense and act on the world under real-time, situated conditions affect their own future behavior in unpredictable ways. These agents should be programmed in a reactive programming style, rather than the much more common imperative coding style.
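A minimal sketch of what "reactive" means here, using a hand-rolled observable in plain Python rather than any particular reactive library: belief updating and acting are declared as reactions to an observation stream and run whenever new sensor data arrives, instead of being called from an imperative read-process-act loop. The smoothing update stands in for a proper Bayesian one.

```python
# Minimal reactive-style sketch in plain Python (no specific reactive
# library assumed). Computations are declared as reactions to streams.

class Stream:
    """A tiny push-based observable: subscribers react to each new value."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def push(self, value):
        for callback in self._subscribers:
            callback(value)

observations = Stream()
actions = Stream()
belief = {"estimate": 0.0}

def update_belief(obs):
    # toy exponential smoothing standing in for Bayesian belief updating
    belief["estimate"] = 0.9 * belief["estimate"] + 0.1 * obs
    actions.push(belief["estimate"])         # acting emits events of its own

observations.subscribe(update_belief)
actions.subscribe(lambda a: print(f"act on estimate {a:.3f}"))

# Arriving sensor data (e.g., from hardware callbacks) drives everything:
for sensor_value in [1.0, 0.0, 1.0, 1.0]:
    observations.push(sensor_value)
```

The control flow is inverted: the environment's events drive the agent's computations, which fits an agent whose own actions keep changing what it will observe next.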
In short, progress towards low-power autonomous agents with human-like curiosity depends on the adoption rate of a few paradigm shifts: (1) Bayesian reasoning as the gold standard, (2) pure utility maximization as a misdirection, and (3) favoring a reactive over an imperative programming style.