A late Sunday apéritif:
Dreyfus and Dreyfus (1990). Making a mind versus modelling the brain: Artificial Intelligence back at a branch-point. In M. Boden (Ed.), The Philosophy of Artificial Intelligence.
“In the early 1950s, as calculating machines were coming into their own…
“At that point two opposed visions of what computers could be, each with its correlated research programme, emerged and struggled for recognition. One faction saw computers as a system for manipulating mental symbols; the other, as a medium for modelling the brain.
"One sought to use computers to instantiate a formal representation of the world; the other, to simulate the interactions of neurones. One took problem-solving as its paradigm of intelligence; the other, learning. One utilized logic; the other, statistics.
>
"One school was the heir to the rationalist, reductionist tradition in philosophy; the other viewed itself as idealized, holistic neuroscience."
"The rallying cry of the first group was that both minds and digital computers are physical-symbol systems.
This way of looking at computers became the basis of a way of looking at minds. "
Not to break the tradition of seeking a non-biological, mechanical analogy, of course. 😄
"Newell and Simon hypothesized that the human brain and the digital computer, while totally different in structure and mechanism, had at a certain level of abstraction a common functional description. "
So: functionally equivalent, given enough abstraction.
"At this level both the human brain and the appropriately programmed digital computer could be seen as two different instantiations of a single species of device -a device that generated intelligent behaviour by manipulating symbols by means of formal rules."
"AI can be thought of as the attempt to find the primitive elements and logical relations in the subject (man or computer) that mirror the primitive objects and their relations that make up the world."
𝐼𝓃𝓉𝑒𝓇𝓂𝑒𝓏𝓏𝑜
"The opposed intuition that we should set about creating artificial intelligence by modelling the brain rather than the mind's symbolic representation of the world, drew its inspiration not from philosophy but from what was soon to be called neuroscience."
"It was directly inspired by the work of D. 0. Hebb, who in 1949 suggested that a mass of neurones could learn..."
"This lead was followed by Frank Rosemblatt, who reasoned that since intelligent behavior based on our representation of the world was likely to be hard to formalize AI should instead attempt to automate the procedures by which a network of neurons learns..."
which is easier, of course 🤨
"Another way to put the difference between the two research-programmes is that those seeking symbolic representations were looking for a formal structure that would give the computer the ability to solve certain class of problems or discriminate certain types of patterns.
>>
"Rosenblatt, on the other hand, wanted to build a physical device, or to simulate such a device on a digital computer, that could then generate its own abilities"
"By 1956 Newell and Simon had succeded in programming a computer using symbolic representations to solve puzzles and prove theorems in the propositional calculus...Newell and Simon were understandably euphoric. Simon announced:
Here comes the hype! Symbolic AI Hype.
""... there are now in the world machines that think, that learn and that create ... in a visible future the range of problems they can handle will be coextensive with the range to which the human mind has been applied (1958:6)""
and the other side
"By 1956 Rosenblatt was able to train a perceptron to classify certain types of patterns
...by 1959 he too was jubilant and felt that his approach had been vindicated: ...
"
""As concept, it would seem that the perceptron has established, beyond doubt, the feasability and principle of non-human systems which may embody human cognitive functions""
""The future of information processing devices which operate on statistical, rather than logical, principles seems to be clearly indicated (1958: i. 449).""
With an assumption of associated error, then?
"In the early sixties both approaches looked equally promising, and both made themselves equally vulnerable by making exaggerated claims."
"By 1970 the brain simulation research which had its paradigm in the perceptron, was reduced to a few lonely, underfunded efforts, while those who proposed using digital computers as symbol manipulators had undisputed control of the resources"
"Each position had its detractors, and what they said was essentially the same: each approach had shown that it could solve certain easy problems but that there was no reason to think either group could extrapolate its methods to real-world complexity. "
"Indeed, there was evidence that as problems got more complex, the computation required by both approaches would grow exponentially and so would soon become intractable."
Both! 1⃣&2⃣
"In 1969 Marvin Minsky and Seymour Papert said of Rosenblatt's perceptron: ..."The machines usually work quite well on very simple problems but deteriorate very rapidly as the tasks assigned to them get harder"
"Three years later, Sir James Lighthill, after reviewing work using heuristic programs such as Simon's and Minsky's, reached a strikingly similar negative conclusion:
In no part of the field have the discoveries made so far produced the major impact that was then promised ..."
"Both sides had, as Jerry Fodor once put it, walked into a game of 3-dimensional chess, thinking it was tick-tack-toe."
Now, I suggest you keep reading. It gets spicier!
• • •
Government Office for Science: Future Risks of Frontier AI
Definition and object:
Frontier AI: "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models" assets.publishing.service.gov.uk/media/653bc393…
"As of October 2023, this primarily encompasses foundation models consisting of very large neural networks using transformer architectures"
(e.g., LLMs, "generative" AI)
Risks:
"Frontier AI will include the risks we see today, but with potential for larger impact and scale. These include enhancing mass mis- and disinformation, enabling cyber-attacks or fraud, reducing barriers to access harmful information, and harmful or biased decisions."
It's been a long time since I added to the 'back to the sources (or classics)' series. This is another favourite for you.
A Framework for Misrepresenting Knowledge. H. L. Dreyfus (1979). In M. Ringle (Ed.), Philosophical Perspectives in Artificial Intelligence.
"... an interesting change has, indeed, taken place at the MIT AI Laboratory. In previous works (Minsky, 1968) Minsky and his co-workers sharply distinguished themselves from workers in cognitive simulation who presented their programs as psychological theories,
>>
insisting that the MIT programs were 'an attempt to build intelligent machines without any prejudice toward making the system ... humanoid'. "
>>
This time I am going to excerpt one of my own old papers.
There is much confusion about what constitutes a cognitive computational model and the underlying psychological theory.
mondragon.cal-r.org/home/Papers/Al…
We focussed our analysis on conditioning due to the early identification of ANNs with associative learning.
The critique, however, can be easily extended to other cognitive phenomena.
"It is worth noting though that the benefits derived from using implementations do not spring exclusively from the formal specification of the psychological models in equations and algorithms. ➡️
A year ago, I summarised our DDA model. Afterwards, I presented it three or four times to different audiences, and in none of them was I satisfied with the way I explained the problems that motivated the model and the solution we offered.
Today, I was preparing some slides introducing complex AL models (with fully connected networks) and decided to give it a new go. My tactic this time has been to focus only on the model's relevance in accounting for retrospective revaluation in the conditioning literature.
Although there are other proposals, including those purely based on performance (e.g., Miller's comparator hypothesis), the debate has revolved around Wagner's SOP memory system and the distinct and opposing learning rules proposed to operate at SOP's dynamic states of activation.
According to Wagner (2008) one critical result in favour of Pearce’s configural approach that could potentially be solved by new, more advanced elemental developments is that obtained when reversing a conditioned inhibitor.
Following A+, AB- training, B becomes a conditioned inhibitor, able, e.g., to reduce responding to a different excitatory CS. According to the RW model, the discrimination is learned with A becoming excitatory and AB neutral, as a result of B becoming as negative as A is positive.
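For intuition, here is a back-of-the-envelope simulation of that RW account. It uses the textbook Rescorla-Wagner update with illustrative parameters of my own choosing; it is not DDA code.

```python
# Textbook Rescorla-Wagner update, simulated for the A+, AB- design.
# alpha, lambda, and the trial count are illustrative assumptions.

alpha = 0.2   # learning rate
lam = 1.0     # US asymptote (lambda): 1.0 on reinforced trials, 0.0 otherwise

V = {"A": 0.0, "B": 0.0}   # associative strengths

for _ in range(300):
    # A+ trial: A presented alone and reinforced
    V["A"] += alpha * (lam - V["A"])

    # AB- trial: compound presented, not reinforced; the error is shared by both cues
    error = 0.0 - (V["A"] + V["B"])
    V["A"] += alpha * error
    V["B"] += alpha * error

print(V)   # roughly {'A': 1.0, 'B': -1.0}
```

At equilibrium the A+ trials hold V(A) at lambda while the AB- trials push V(A) + V(B) to zero, so V(B) settles at -lambda: B ends up as negative as A is positive, exactly the outcome described above.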
Pearce also assumes that AB becomes neutral: here, AB's direct strength becomes as inhibitory as the excitation that generalizes to it from A. B alone acts as an inhibitor due to its similarity to AB.
Two different predictions can be made if B is subsequently reinforced alone.
I firmly advocate for our right to be lazy; thus, for my dear lazy (otherwise too busy to read a 140 pp. paper) fellows, I'm going to summarise the DDA Model (updated preprint, 2nd round).
The DDA is a “real-time” formal model of associative learning which incorporates representational and computational mechanisms able to make accurate predictions of a variety of phenomena that so far have eluded a unified account.
>
The model instantiates a connectionist network consisting of elements, which belong to individual stimuli or are shared between pairs of stimuli, and which are temporally clustered. There are two sources of cluster activation: direct (sensory) activation and associative (cued) activation.
>
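To make those ingredients concrete, here is a toy sketch of the structures just described (unique vs. shared elements, temporal clusters, and the two activation sources). All names and the combination rule are placeholders of mine, not the DDA implementation:

```python
# Toy reading aid for the description above - not the DDA implementation.
# Class names and the combination rule are placeholders.

from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass
class Element:
    owners: FrozenSet[str]     # {"A"}: unique to A; {"A", "B"}: shared by A and B

@dataclass
class Cluster:
    elements: List[Element]
    direct: float = 0.0        # direct (sensory) activation
    associative: float = 0.0   # associative (cued) activation

    def activation(self) -> float:
        # Placeholder combination rule; the paper defines the actual one.
        return max(self.direct, self.associative)

# A cluster for stimulus A: one element unique to A, one shared with B
cluster_A = Cluster([Element(frozenset({"A"})), Element(frozenset({"A", "B"}))])
cluster_A.direct = 1.0         # A is physically present
print(cluster_A.activation())  # 1.0
```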