Thomas G. Dietterich @tdietterich
A few reflections on Kalev Leetaru's opinion piece in Forbes 1/
forbes.com/sites/kalevlee…
I strongly agree that we need to fight the hype about Deep Learning, and Leetaru correctly criticizes the rampant anthropomorphism underlying much of that hype. I also deeply share Leetaru's concern about deploying DL in high-risk applications. 2/
But I want to address Leetaru's complaints about using words like "learn" and "reason" to describe AI systems. I claim we can safely employ these words without engaging in dangerous anthropomorphizing. 3/

Let's begin with the word "learning". Herbert Simon defined it this way:
Simon: "Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the same task or tasks drawn from the same population more efficiently and more effectively the next time." 4/
In the case of learning to recognize objects in images, before the system analyzes the training images, it is incapable of recognizing the objects. Afterwards, it is capable of recognizing the objects in new images. It is more effective; it meets Simon's definition. 5/
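Simon's before/after criterion can be made concrete with a toy sketch (all data and names here are made up for illustration): before seeing training examples, the system has no basis for recognizing the classes; after "learning" (here, just computing per-class means), it classifies new inputs more effectively.

```python
# Toy illustration of Simon's definition of learning: the system's
# behavior on NEW inputs improves after it processes training examples.
# All data is invented for illustration.

def learn(examples):
    """examples: list of (value, label) pairs. Returns per-class means."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def classify(model, x):
    """Assign x to the class whose learned mean is nearest."""
    return min(model, key=lambda y: abs(x - model[y]))

train = [(1.0, "cat"), (2.0, "cat"), (8.0, "dog"), (9.0, "dog")]
model = learn(train)            # the adaptive change in the system
print(classify(model, 1.5))     # cat
print(classify(model, 8.5))     # dog
```

Before `learn` runs there is no `model` at all; afterwards the system handles inputs it has never seen. That is all Simon's definition requires.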
Leetaru is correct that deep networks do this by discovering correlations between the input pixels and the output class label. He is correct that this results in a very shallow understanding of objects. 6/
But the computer has learned something, and even this shallow knowledge can be very valuable. It is transforming security (e.g., face recognition), speech understanding, language translation, medical imaging, and many other fields. 7/
But it is also brittle, which is why I agree with Leetaru that it is not good enough for high-risk applications.

What about other words such as "knowledge", "reason", "believe", and so on? 8/
Allen Newell, in "The Knowledge Level", argues that these words are appropriate to the extent that they allow us to predict the future behavior of the computer by attributing to it knowledge & beliefs from which it draws inferences. 9/
Newell was building on previous work by Daniel Dennett on "The Intentional Stance". Dennett argues it doesn't make sense to ask whether an agent (a person or a computer) "really" has knowledge and makes inferences. 10/
Instead, these are functional concepts that are useful for predicting the behavior of the agents. If I tell you the keys are in the desk drawer, then it is reasonable for me to attribute to you the knowledge "the keys are in the desk drawer". 11/
And if I then ask you to fetch the keys, I can predict that you will go to the desk drawer, open it, pick up the keys, and bring them to me. In other words, you will reason from your knowledge and take appropriate actions. 12/
In my 1986 paper, "Learning at the Knowledge Level", I tried to extend Newell's analysis to define machine learning as an increase in the knowledge that can be usefully attributed to a system. 13/
Leetaru criticizes AI people for describing their deep learning systems as performing "reasoning". We usually apply this word only when a system is able to combine two separately-acquired chunks of knowledge to infer a third fact. 14/
But even here, I can imagine cases where it might make sense to describe a deep learning system as doing a form of "reasoning". 15/
Imagine a system that learns from one image to associate a dog collar with the object label "dog" and from another image to associate a leash with the object label "dog". If it sees both collar and leash, it can "reason" that the label "dog" is more likely to be correct. 16/
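This collar-plus-leash "reasoning" can be sketched as a naive-Bayes combination of evidence (all numbers below are invented for illustration): each separately learned cue contributes a likelihood ratio, and the product gives stronger support for "dog" than either cue alone.

```python
# Minimal sketch of combining two separately learned cues, naive-Bayes
# style. The prior odds and likelihood ratios are made-up numbers.

def posterior_odds(prior_odds, likelihood_ratios):
    """Posterior odds = prior odds x product of independent likelihood ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

PRIOR = 0.25        # assumed prior odds that the image contains a dog
LR_COLLAR = 4.0     # evidence strength learned from one image
LR_LEASH = 3.0      # evidence strength learned from another image

collar_only = posterior_odds(PRIOR, [LR_COLLAR])            # 1.0
leash_only = posterior_odds(PRIOR, [LR_LEASH])              # 0.75
both = posterior_odds(PRIOR, [LR_COLLAR, LR_LEASH])         # 3.0

print(both > collar_only and both > leash_only)  # True
```

Combining two independently acquired pieces of evidence to strengthen a third conclusion is exactly the pattern I am calling a form of "reasoning" here.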
Moving beyond deep learning, no one questions that "reasoning" is an appropriate word for describing systems for theorem proving and probabilistic inference. 17/
In summary, words like "knowledge", "reason", and "learn" should be viewed as providing a language for modeling the behavior of systems. Instead of asking whether a system is "truly" learning or has "genuine" knowledge, 18/
we should ask to what extent it is useful to MODEL the system as having knowledge and doing learning and reasoning. The answer, for many of today's deep learning systems, is "not very useful" but "more useful than in the past". 19/
So I completely agree with Leetaru that AGI is not just around the corner. end/