Hobbes writes to his publisher, 1647:
Don't show the MS to academics; they are jealous. Don't show it to intellectuals; they are condescending. And above all, don't show it to Descartes; he's just a jerk.
To be fair, Descartes *was* a jerk. Here he is (1641) writing about Hobbes' critique of his earlier work: "I did not take that part of his writings seriously enough to think that I was obliged to spend my time refuting it."
Lesson: There has always been a Reviewer 2
Lesson: Someone needs to make a 'Rene and Hobbes' comic. Fewer imaginative adventures, more imaginative calumnies.
Note: small error in original tweet. The letter dates from 1646, not 1647. (But it is about a 1647 publication, the second edition of Hobbes' De Cive.)
In my Philosophy of AI class I run an exercise. I announce that BigTech Company has been secretly testing a new AI in our class. All along, one of the students was an android! The students break into small groups with this goal: prove that everyone in your group is human. 1/4
The pedagogical point, of course, is to help them come to their own realizations about limits of the Turing Test. It works pretty well. But once, two students both simultaneously and loudly announced that they were the android. That complicated things. 2/4
So I changed the assignment. I said: ‘look, I know there aren’t two androids in here (too expensive). So at least one of these students is a human pretending to be an android. Try to catch the pretender in their lie.’ 3/4
Okay, a thread defending Blake Lemoine. Not in the way you're expecting. I won't defend Lemoine's claim that the LaMDA chatbot has achieved sentience – that's false. But I will claim that Lemoine's mistake is a good one, which we shouldn't mock. 🧵->
1/15. First, let’s get it out of the way: LaMDA is almost certainly not sentient, and Lemoine’s proffered evidence is no reason to think it is. LaMDA sounds spookily convincing when it talks about its feelings, but there’s an easy explanation.
2/15. LaMDA, like other language models, is trained on giant piles of human-authored text. Somewhere in that text there are surely examples of speculative fiction about intelligent AI. If you start cueing it with such ideas, it mechanically offers similar text.
Frustrated by the narrowness of the Lemoine / LaMDA discourse on here. This isn't as simple as the emerging consensus. I may write something later. I know I'll be accused of AI hype or some such, so here's a mini-thread of what I've said before about such things.
from two years ago, when GPT-3 first launched - Why language models are significant even if they aren't self-aware dailynous.com/2020/07/30/phi…
and from 5 years ago, on why I don't worry about future self-aware AI threatening us (the big hype) but instead about the peculiar philosophical threat that we will pose to *it* aeon.co/essays/creatin…
It's so weird how much Kant is hated by culture warriors who evidently never read him. There is a simple way to learn what Kant thought Enlightenment is: read his short essay, 'What is Enlightenment?' A few clips in thread ->
What I find most frustrating about anti-wokism is that it gobbles all the oxygen needed for more thoughtful conversations on the excesses of some social justice activism. A thread:
Anti-wokism holds that social justice activism is really a secretive campaign to (variously) destroy free speech or extinguish Enlightenment values or achieve Marxist totalitarianism. In other words, a conspiracy story. 🧵1/11
You don’t have to be an anti-wokist to have worries about social justice activism. You might believe (as I do) that social justice activists are good people with highly valuable aims who sometimes make ordinary human mistakes. 🧵2/11
Much chatter now about the abominable behavior of philosophy journal referees, so I thought I'd share this: a guide to the sorts of creatures you will meet when you visit the land of philosophy journals. Intended for grad students, but maybe useful to others. reginarini.files.wordpress.com/2021/04/philos…