I am endlessly baffled by 4E theorists who argue that "embodiment" is somehow the unique domain of humans, as if particular machines don't always have bodies situated in particular contexts.

Let's talk about Dreyfus' critique of AI! (Megathread)
There were two major philosophical critiques of AI in the 20th century: Dreyfus and Searle.

Searle's arguments were bad and wrong and that's all I have to say about that.

Dreyfus' arguments were slightly better, but they're old and they don't really apply to the AI boom today.
SOME HISTORY: Hubert Dreyfus was a philosopher studying Husserlian phenomenology at MIT and Harvard in the 60s, during the first golden age of AI. The RAND Corporation asked Dreyfus and his brother to give a report on the AI research of Newell and Simon.

en.wikipedia.org/wiki/Hubert_Dr…
The result was Alchemy and Artificial Intelligence (1965) and What Computers Can't Do (1972), which laid out a critique of the Newell-Simon style of symbolic AI (GOFAI) being developed at MIT.

Dreyfus' critique was an embodied, phenomenological approach grounded in Merleau-Ponty and Heidegger.
If you recall from AI history, the Lighthill report comes out in 1973, and DARPA pulls its AI research funding in 74. Dreyfus' 72 book was a well-timed herald of the end of the first golden age and the start of AI Winter.

en.wikipedia.org/wiki/AI_winter
The book's reissuing in 1992 coincided nicely with the failure of expert systems in the late 80s, the so-called second AI winter. This bit of tactical philosophical publishing, along with Searle's Chinese Room argument in 1980, produced a generation of AI skeptics in philosophy.
So it's still fairly common to see Dreyfus' name dropped as a gesture at a critique of AI. The OP article uses Dreyfus' title as clickbait, though his views don't show up in the article.

@FrankPasquale mentioned Dreyfus in his keynote @ #AIES2020
What is Dreyfus' critique of AI? Dreyfus argues that Newell and Simon's symbolic approach to AI requires making the machine's behavior explicit, formal, and sequential so that it fits the IF ____ THEN ____ conditional architecture of the computers available at the time.
Here's a typical Dreyfus intuition pump: can you chew gum and whistle at the same time? You might never have considered this question before, but you can tell quickly that the task is impossible. Not because you've stored that information explicitly, but because you *have a body*.
Dreyfus (drawing on Merleau-Ponty) argues that your embodiment provides a background context for knowledge and action. Because you have a mouth and you are familiar with the embodied requirements of whistling and chewing gum, you can work out their incompatibility for yourself.
On the other hand, Dreyfus argues, a computer would need these conflicts made explicit at the level of symbolic representation. Something in the computer's memory must store explicit symbols to the effect that:

IF (whistling) THEN (no chewing)
Dreyfus argues that unless the computer's designers make these logical relationships explicit, a computer performing a sequence of IF-THEN operations will never derive this kind of embodied knowledge on its own.

Symbol manipulation isn't sufficient for embodied phenomenology.
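To make the shape of the worry concrete, here's a minimal sketch in Python with made-up names (mine, not Dreyfus' or Newell and Simon's) of what hand-coding embodied constraints as explicit symbolic rules looks like. The point isn't the syntax; it's that the system only "knows" what a designer wrote down.

# Toy GOFAI-style rule base: every bodily incompatibility must be
# written down explicitly by the designer. (Hypothetical illustration.)
INCOMPATIBLE = {
    frozenset({"whistling", "chewing gum"}),  # IF (whistling) THEN (no chewing)
    frozenset({"sleeping", "driving"}),
}

def can_do_together(action_a, action_b):
    # True unless the designers explicitly encoded a conflict.
    return frozenset({action_a, action_b}) not in INCOMPATIBLE

print(can_do_together("whistling", "chewing gum"))  # False: that rule was hand-coded
print(can_do_together("whistling", "swallowing"))   # True: nobody wrote that rule down

Your body settles the whistling-and-chewing question for free; a rule base like this has to be told, one conflict at a time.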
For embodied creatures like us, Dreyfus argues, we experience the world as active agents embedded within it. In 1991 Dreyfus publishes Being-in-the-World, a widely read commentary on Heidegger's Being and Time, laying the theoretical foundations for the "Dreydeggerian" critique of AI.
Let's agree with Dreyfus on the phenomenology. I know I can't chew gum and whistle because of my embodied knowledge.

Two questions we'll come back to: Why think symbolic processing can't reconstruct this knowledge? More importantly: Why don't computers have bodies?!?!
But first, more history! Dreyfus had a string of failed predictions on the basis of his views. In 1966, Dreyfus argued in an interview that machines would never play good chess.

Then Seymour Papert asked Dreyfus to play chess against an MIT computer, and Dreyfus lost.
In the 1992 reissue, retitled "What Computers Still Can't Do", Dreyfus again predicted that computers would never beat human chess champions.

In 1997, IBM's Deep Blue beat Kasparov, and Dreyfus once again had to walk back his claims.

dl.tufts.edu/concern/pdfs/s…
en.wikipedia.org/wiki/Deep_Blue…
In "On the Internet" (2001) Dreyfus develops his critique to argue against the very possibility of searching the internet. Without embodied knowledge, Dreyfus argues, online search will never work.

The book's Post-Google IPO Second Edition removes these embarrassing arguments🤡
Clearly Dreyfus' predictions are over-extensions of his arguments. Google is an existence proof against the apparent implications of Dreyfus-style critiques of AI.

And yet, despite the obvious counter-examples, the popularity of Dreyfus' critiques of AI persists. What's going on?
Three things are going on:

1) Dreyfus' critique of GOFAI was good, actually... and AI/Robotics listened and changed!
2) A long tradition of tech/science skepticism in continental philosophy
3) That generation of AI skeptics finally have something to say!
1) Dreyfus' critique was good, actually! But it targets a very narrow form of AI, the Good Old Fashioned AI (GOFAI) of Newell and Simon.

Modern AI techniques (evolutionary algorithms, neural networks, etc) aren't vulnerable to Dreyfus' arguments.
This distinction is made clearest in Haugeland's book on AI, Artificial Intelligence: The Very Idea, where he coins the term "GOFAI" and discusses how "new-fangled AI" techniques avoid these criticisms.

Haugeland was a major influence on Dreyfus' interpretation of Heidegger.

en.wikipedia.org/wiki/John_Haug…
Dreyfus contributed to the growing arguments against GOFAI computationalism in the 80s and 90s, a trend ultimately yielding the 4E (embodied, embedded, extended, enactive) approaches of today.

plato.stanford.edu/entries/embodi…
But that 4E coalition included theorists like Andy Clark who were explicitly pointing to early machine learning techniques as alternatives to GOFAI. The ML models being trained today are compatible with Dreyfus' critique, not undermined by it!
The lessons of Dreyfus' critique were perhaps heard most loudly in robotics, which by the early 90s had started taking seriously the constraints of embodied action in the design and operation of robots.

Atlas can do flips because it is an embodied dynamical system.
FWIW, the Dreydeggerians are happy to point to the advances in robotics as vindication of their critique of GOFAI, despite it also working as evidence for "yes, computers can do that".

opinionator.blogs.nytimes.com/2011/02/28/wat…
So, to the extent that Dreyfus' views are correct, they have also been recognized and absorbed by the wider AI/robotics community and discourse, such that Dreyfus' arguments no longer serve as a basis for a systemic critique of the field.

The critique is an anachronism.
Which brings us to 2) The long tradition of tech/science skepticism in continental philosophy. I think Dreyfus' views are best read as contributing to this tradition, rather than as an attack on a specific, outdated AI technique.
Dreyfus' use of Kierkegaard's critique of the press to develop a parallel critique of the (early) internet is a particularly good read, and it doesn't depend on misguided predictions about the fundamental constraints on hyperlinks.

bit.ly/2v0Bb0j
I like this paper on the "Jonasian turn in enactivism", which is really a reflection on the anti-tech/tech pessimism/luddism commonly found in 4E theorists like Dreyfus, and their philosophical disposition to draw strong lines between humans and machines.

bit.ly/2P9WuDw
That paper also makes clear how Jonas sees his ethical anti-tech views as being in fundamental tension with the natural sciences. In other words, the lines drawn between humans and machines are not suggested by the science itself; they are suggested by prior (ethical) commitments.
To some extent, I think these Jonasian ethical commitments moved Dreyfus to overextend his arguments against AI.

Insofar as 4E theorists want a theory of mind compatible with the natural sciences, they should probably reject the Jonasian dichotomy between humans and machines.
But it's misleading to point to Dreyfus as somehow vindicating the Jonasian ethical commitment to a human-machine dichotomy.

Dreyfus' arguments do nothing of the sort. They have no obvious application to a modern context of dynamical, learning, socially embedded AI.
In fact, the recent debates in AI run in almost exactly the opposite direction from Dreyfus' critiques. @GaryMarcus' widely discussed & debated criticisms of ML/DL techniques are (basically) that they aren't symbolic enough!

Gary's critique is more technical, but he's basically arguing that the rejection of GOFAI went too far, and that the opposite strategy of training huge neural networks with little top-down structure can only do so much.

Symbolic systems are useful, and they're still widely used!
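For illustration only, here's a tiny toy of the hybrid idea in Python, with invented names (my sketch, not Gary's proposal or any real system): a learned model proposes answers, and a hand-written symbolic layer vetoes proposals that violate known constraints.

def learned_model(query):
    # Stand-in for a trained network: ranked guesses, best first.
    return ["banana", "7", "blue"]

def satisfies_constraints(candidate):
    # Hand-written symbolic rule: an arithmetic question needs a numeric answer.
    return candidate.isdigit()

def hybrid_answer(query):
    # Take the highest-ranked guess that the symbolic layer accepts.
    for candidate in learned_model(query):
        if satisfies_constraints(candidate):
            return candidate
    return None

print(hybrid_answer("What is 3 + 4?"))  # "7": the symbolic rule vetoed "banana"

The toy just shows the division of labor Gary is pointing at: learned statistics plus explicit structure.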
This all leaves Dreyfus' arguments in really poor shape for engaging the contemporary AI landscape. They aren't very helpful for appreciating Gary's criticisms or the responses they've received.

Again, Dreyfus' critiques are an anachronism.
So that leaves 3) that Dreyfus' views remain popular because a generation of scholars raised on his critiques now have something to say.

Not that Dreyfus' critiques have much merit and purchase in the current AI landscape...
... but rather, that Dreyfus' critiques serve as a widely-taught theoretical background and framing for a broad range of criticisms of AI.

And today, in a new golden age of AI, there's plenty of money to pay people to dust off and trot out this old training.
But while the casual appeal to Dreyfus' critique of AI might seem erudite and sophisticated, it's a good example of armchair philosophy disconnected from the real world.

Dreyfus' critique of "computers" mostly targets the giant mainframes in expensive research labs in the 60s.
But my phone is loaded with sensors. It knows where it is, it knows how it is situated in space, it knows what's going on around it, it is actively filtering all that information for relevance and user preferences.

Cell phones are CLEARLY embodied agents.
Hell, my cellphone is more concerned with my active engagement with the world than I am! It yells at me if I don't take enough steps.

cf Haraway: "Our machines are disturbingly lively and we ourselves frighteningly inert."
Every machine is a body
Every machine has a body too.
If a machine is active,
SET embodiment flags to 'True'...
Will be back to finish the thread and respond to comments tomorrow. Thanks for engaging everyone!