The thing with pointing out "AI can't do X!" is that, if you keep refining X into something narrow and precise enough, you'll eventually cross a threshold where a realistic amount of engineering and training data make X possible.
AI can always do *specific* things -- as long as they're sufficiently specific and you invest sufficient effort and data.
The problem with AI isn't that it can't do a specific X, it's that it has basically no intelligence at all at this time. No general cognitive abilities.
Intelligence simply means moving to a different part of the specificity / effort spectrum, one where you can master broad tasks with little effort.
You can always make up for a lack of intelligence by reducing task uncertainty (making X more specific) or investing more effort.
No matter how stupid the student, they can always pass the exam if you give them a set of problems very similar to what they will be tested on (reduced task uncertainty), and if they're willing to spend countless hours studying them (more experience).
What's hard is to improvise from little experience in the face of high uncertainty and novelty. That's what biological intelligence has evolved for. That's what current algorithms and models can't do at all.
The Turing test was *never* a relevant goal for AI. We should remember that Turing never intended it as a literal test to be passed by a machine designed for that purpose, but as a philosophical device in an argument about the nature of thinking.
The major flaw of the Turing test is that it entirely abdicates the responsibility of defining intelligence and how to evaluate it (the value of a test). Instead, it delegates the task to human judges, who themselves don't have a proper definition or a proper evaluation process.
As a result, the Turing test does not provide any incentive to develop greater intelligence; it solely encourages developers to figure out how to trick humans into believing a chatbot is intelligent.
I keep coming back to the importance of self-image in one's life trajectory. You become who you believe you are. You do what you believe you can do.
Belief is a greater determinant than ability or environment.
"Man often becomes what he believes himself to be. If I keep on saying to myself that I cannot do a certain thing, it is possible that I may end by really becoming incapable of doing it...."
Having to figure things out by yourself is extraordinarily inefficient (plus, risky). The primary benefit of civilization is curriculum optimization: getting you to the right destination while expending the least amount of experience. Civilization is integral to human cognition.
To caricature, you could say that the human brain is merely a short-lived mirror of what constitutes the main body of human cognition: the thought patterns, behaviors, and systems we've collectively evolved over thousands of years.
Your mind reflects the civilization that shaped it -- it wouldn't amount to much without it.
I believe cultural wealth is more important than material wealth, i.e. it's better to have a house full of books than to have marble in your bathroom. This holds true for nations as well.
Lots of misinterpretations of this tweet. It does not imply that these two things are opposites, nor that they're independent. It simply means that a rich cultural life enhances your lifestyle (and, in a strong sense, "makes your life worth living") more than material luxury.
While it is necessary to be financially comfortable to have a rich cultural life (in particular because you need free time), it's often much cheaper than funding the sort of lifestyle that society would normally associate with "being rich".
The lack of awareness of AI ethics issues among AI practitioners has been an ongoing source of very real problems. On the other hand, I have yet to hear of any harm caused by making AI practitioners think about the implications of their work.
Awareness of human consequences is a necessity in all scientific & engineering disciplines. It's even more important in fields that are "high leverage", where a very small team consisting entirely of engineers can make a big impact. Like CS, and in particular AI.
If your work has "impact", then by definition it is changing the world. You must then ask *how* the world is changing -- in which direction does your impact point? Who benefits and who loses out? Technological impact always has a moral direction.
Humans develop their full cognitive potential in an environment that is complex & challenging, without being overwhelming. Similarly, the big technological leaps of past civilizations have occurred in response to environmental constraints that were challenging, but not too harsh.
A lack of challenges and hardships is just as big an obstacle to the realization of one's potential as facing hardships so tough they cannot be overcome. This applies to individuals and cultures alike.
In the first few millennia of the history of civilization, natural environmental constraints were the main driver of (and limit to) human ingenuity. New technology arose from the need to survive in challenging environments.