It’s not true that there’s nothing new under the sun; there are many new things, it’s just that the old, human things are still with us, and always will be
You can’t really understand things until they are part of your habitual life, and you only assimilate novelty by finding analogies to things that you already know
So new technologies really do give us new perceptions, and enable us to add new understandings to our conceptual framework of the world
When a new technology becomes ubiquitous, we all become philosophers, because what used to be a rarefied abstraction is now a concrete daily occurrence
Lately, we have been discussing a computer program, which is itself a new and incredible thing, a thinking lightning rock, and it can seemingly apprehend language, and speak better than many real people.
Set aside the obvious criticisms and skepticisms of the thing and notice, instead, that philosophy of mind is now close at hand, that our intuitions about these topics are now tested by a real life encounter with a thought experiment
The minimalist position on GPT-3 is that it is “just splines for text”. A spline is a continuous path between points in geometric space, usually curved. By analogy, GPT produces continuous(ish) paths between points in linguistic space
The question is whether our own faculty of language is much more than this. It is in one sense, obviously, because our linguistic faculty is tangled and coupled with all of our other senses
All theories of consciousness are vague and bad, but the least vague one is called integrated information theory, and it claims that consciousness may be “just” splines for multiple overlapping sensory paradigms. Intuition is quiet, here
The obvious objection, I think, is grounded in an impoverished view of our senses. We have many more than five; we have our emotions, and our proprioceptions, and pleasure and pain and temperature, sense of direction, and so on
And if I imagine an algorithm that could do for all of my senses what GPT does with only a single sense; if I imagine further that it continuously consumed its own output as one more “input” sense, then I feel there might be very little left to explain
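The "consumes its own output as one more input sense" idea is just an autoregressive loop. A deliberately crude caricature (the transition table and function names are invented for illustration, not any real model):

```python
import random

# Hypothetical 'model': maps the current token to candidate next tokens.
TRANSITIONS = {"I": ["think", "feel"], "think": ["I"], "feel": ["I"]}

def step(token):
    return random.choice(TRANSITIONS[token])

def autoregress(seed, n):
    # The loop that matters: each output is fed back in as the next input,
    # so the system continuously "perceives" its own prior speech.
    out = [seed]
    for _ in range(n):
        out.append(step(out[-1]))
    return out

print(" ".join(autoregress("I", 6)))
```

However trivial the table, the feedback structure is the point: the stream of outputs becomes one more sensory channel among the others.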
Of course, for some of the senses that I mention, such as emotion, it is not clear how to implement them. Computer science has basically solved the problems of perception; what is lacking is a theory of emotion and desire
But I want to leave that for a moment and return to the question of how machine learning brings philosophy of mind into the practical realm.
It feels obvious to us that GPT-3 doesn't see itself as anything, doesn't see itself at all, because the moments where it does "see itself" are discontinuous, leaving no trace once they have passed
It may not be horrifying, but it is certainly alien, to imagine small or discontinuous forms of consciousness. We can imagine GPT-3 "perceiving" in tiny fits and starts of awakeness, similar perhaps to the subjective experience of a nematode worm
Even if we subscribe to spiritual views regarding the mind and the soul, we must admit that a dog or a cat is conscious, but then also a mouse, and also a lizard, and a bee, and an ant. There is no obvious cutoff. Why shouldn't microscopic life have proportional subjectivity?
The real mistake is to believe that facility with language, which seems to us to be the apex of consciousness, depends upon our other faculties. This is not the case; there can be no more argument here; you can always point at the language machine, aha!
Wittgenstein, in On Certainty, says that our basic beliefs are really animal or unreflective ways of acting which, once formulated, look like empirical propositions, when in fact they are atomic; they are the lowest level of our knowledge...
More simply: language is not a picture of thought, it is thinking--the mind itself! AIs such as GPT-3 are a material instantiation of this Wittgensteinian claim about knowledge; certainly, the linguistic knowledge inside GPT-3 is nonpropositional, the sum total of its knowledge
Does Wittgenstein's thesis survive this experiment? The answer is no, but also yes, because when I say a word, I am invoking a mesh of tangled sensory manifolds which may include haptics, optics, osmics, and other -ics also.
Language is a form of thinking, but if it lacks integration from other senses, then it can only occupy uncanny valleys of the mind, because the infinite regress of words can only terminate when it meets a plurality of other senses.
A tenet of postmodern philosophy is that subjectivity, the phenomenological sense of yourself, is constructed by language. A famous idea from Althusser is that merely acknowledging a policeman who says “hey, you” causes you to see yourself as a subject of civic authority
The above relies on a conflation of subjectivity in the phenomenological sense and subjectivity in the political sense--but is this so wrong? I assure you that power is already creating rules to exercise dominion over AI, to make it a subject of the state. techcrunch.com/2020/07/03/we-…
And if laws are written to bind GPT-3, then researchers will teach it to model those laws, then GPT-3 will, in fact, realize a kind of polysensory subjectivity.
And in fact the Althusserian model has more to offer when dealing with GPT-3. To the degree that it perceives itself as a linguistic entity, telling it to talk about itself as a subject literally causes it to perceive itself as a subject, to the degree it perceives.
One of these days, someone is going to figure out how to intertwine GPT-3 with a Boston Dynamics robot, and suddenly its schizoid dream ramblings are going to have material and perceptual correlates, and then we'll see how big your soul really is.
Thread by Zero HP Lovecraft 🛡