David Chapman
Better ways of thinking, feeling, and acting—around problems of meaning and meaninglessness; self and society; ethics, purpose, and value.
Aug 18, 2023 8 tweets 2 min read
"People tell you everything you need to know about them in the first minute after you meet them"

On graduating, my sometime-collaborator Phil Agre went to interview for a faculty job at Yale, where Roger Schank was the senior AI guy. Phil came back somewhat shaken... (1/n)

Schank was a very weird dude. Phil was also a very weird dude.

In fact, everyone of significance in AI at that time was stupid, crazy, or evil.

Everyone of significance in AI now is also stupid, crazy, or evil. This is important; try not to forget it over the next few years.
Jul 2, 2023 6 tweets 2 min read
Huh! Just figured something out (I think). It was bugging me that the silly “pandita hat” worn by Buddhist academics (pandita=pundit) reminded me of something… It’s the “Phrygian cap” worn throughout the Iranian world in ancient times…
Jun 15, 2023 8 tweets 3 min read
An academic rant: startling cluelessness where I'd expected intelligent error...

I'm trying to understand how pomo replaced the classical undergraduate humanities curriculum, and how people thought about it at the time, in preparation for writing about the consequences.… twitter.com/i/web/status/1…

@StephenPiment BTW I'm reading Douthat's Privilege, about his time at Harvard, which is relevant and fun. I recommend it! twitter.com/i/web/status/1…
Mar 29, 2023 5 tweets 2 min read
⌚️ I did not anticipate a future in which you lie to your watch about meeting your hydration goal for the day so it doesn't give you a hard time the next morning.

⌚️ When I was a kid, watches were all radioactive. The hands were coated in radium so you could see the time in the dark by the radioactive glow. Miniaturizing either a battery or an incandescent bulb into a watch was completely technologically impossible.
Mar 25, 2023 4 tweets 2 min read
Incisive thinking about AI interaction, drawing on Brian Smith's work and reminiscent of the ethnomethodological stance, from @jessi_cata

Incisive thinking about transness, also from @jessi_cata. "Trans" is an iron-maiden category constructed by cis authorities which, for many people trying to fit into it, is grossly false to facts and harmful, painful, sometimes fatal. unstableontology.com/2023/02/07/am-…
Mar 13, 2023 29 tweets 10 min read
An extraordinary essay on ethics by @jkcarlsmith, highly recommended for those willing to work through its difficulty.

What happens when you realize moral philosophy doesn't and can't work, but saying "whatever, then, I guess" is also utterly inadequate? joecarlsmith.com/2023/02/17/see…

"Seeing more whole" is difficult both textually and conceptually. I had to read it three times. It's probably also necessary to have read a precursor essay, which is less exciting but lays out distinctions the later one relies on: joecarlsmith.com/2023/02/16/why…
Mar 3, 2023 5 tweets 2 min read
🤖 The myth that AI “neural networks” cannot be understood obstructs ordinary scientific and engineering investigation. This is extremely convenient for both tech people and powerful decision makers.

Engineers have a moral responsibility to know what the things they build can do and how and when and why. Companies selling products to the general public, and government agencies using them, have the same responsibility. Backpropaganda helpfully obfuscates that.
Mar 1, 2023 8 tweets 3 min read
.@slatestarcodex compares OpenAI's recent glossy PR piece with Exxon's "we're responsibly transitioning to address climate change, any decade now, look at the wookie" PR pieces. astralcodexten.substack.com/p/openais-plan…

AI research (I suggest in my book) was mainly a PR effort by the advertising industry. No one seriously expected it would work. betterwithout.ai/AI-is-public-r…
Feb 21, 2023 14 tweets 4 min read
Why did Microsoft Bing's chatbot "Sydney" generate text that resembled a conversation with someone with borderline personality disorder?

A guess: they got what they trained it on, and they were scraping the bottom of the internet barrel. 1/n

Last spring a DeepMind team realized that the constraint on the quality of text generator output was the amount of text the generators were trained on, more than the AI stuff. AI stuff doesn't give you something for nothing. arxiv.org/pdf/2203.15556…
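The linked arXiv paper is the "Chinchilla" scaling study (Hoffmann et al., 2022), which found that compute-optimal training requires the token budget to grow roughly in proportion to parameter count, about 20 tokens per parameter. A back-of-envelope sketch of that heuristic (the exact ratio is approximate, and the function name is mine):

```python
# Rough compute-optimal sizing per the Chinchilla scaling result:
# training tokens should scale roughly linearly with parameter count,
# at approximately 20 tokens per parameter.
TOKENS_PER_PARAM = 20  # approximate ratio; the paper fits this empirically

def optimal_tokens(n_params: int) -> int:
    """Approximate compute-optimal training-token budget for a model."""
    return TOKENS_PER_PARAM * n_params

# A 70B-parameter model wants on the order of 1.4 trillion tokens --
# which is why the data supply, not the architecture, became the
# binding constraint on output quality.
print(optimal_tokens(70_000_000_000))  # → 1400000000000
```

At those scales you exhaust the high-quality internet text quickly, hence the barrel-scraping.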
Feb 14, 2023 5 tweets 2 min read
Pretty funny getting this spam advertising an AI spam-generating service posted as a comment on my site that warns that a tidal wave of AI-generated spam is about to hit.

If you go to their site, they position themselves as legit, but are using classic low-end spamming techniques to advertise themselves; they used a botnet to post the identical message from many different IPs within four seconds.
Feb 5, 2023 10 tweets 2 min read
🤖 The term "AI safety" is being abused, perhaps deliberately, and its meaning has collapsed. Relatedly, the two different fields concerned with AI risks may both have been suddenly sidelined, perhaps deliberately, and may now be irrelevant.

My December reworking of my AI book framed the topic as a neglected middle ground between the concerns of "AI ethics" (harmful uses of current tech) and those of "AI safety" (catastrophic consequences of hypothetical future tech).
Feb 2, 2023 9 tweets 2 min read
Legacy ideologies are coherent; new ones, incoherent. Why? Ideologies are often contradicted by experience, so they need a way to keep you on the hook other than by being right: constant exposure to memes. Facilities for presenting them have evolved... [long speculative rant ->]

Literacy was a breakthrough. Preliterate cultures can't support ideologies because the memes can't propagate effectively enough to keep them going. In cultures with a literate elite only, only the elites get possessed.
Dec 10, 2022 18 tweets 7 min read
Ethical theories are bad. They are bad because they are wrong, and bad because (being wrong) they make you think and feel and do wrong things. There is no correct ethical theory.

🧵 by @DRMacIver, with commentary from me which he might not agree with…

@DRMacIver All ethical systems are wrong because they are abstract, general, theoretical, and conceptual;

whereas ethics are, critically, a matter of concrete, contextual, practical action.
Nov 18, 2022 5 tweets 2 min read
🕊 My backup venues in case of twitter collapse: Tinyletter, Meaningness, Mastodon (addresses in follow-on tweets)...

📧 My free email newsletter. I'll use it to explain where I've gone and what I'm doing, if twitter implodes. Otherwise, very low volume: mainly just notifying readers of new writing. tinyletter.com/meaningness
Oct 19, 2022 5 tweets 3 min read
💥 Revolutionary rethinking of what is possible for science. Enormous opportunities await—requiring radical structural reform. This analysis will stand as a definitive harbinger of that effort.

🧬 What science we get depends on how “we” do it—where “we” is not so much scientists as bureaucrats. The way “we” do science was invented 70 years ago for a different world, has sclerosed, and now inhibits progress.

So much more would be possible if those constraints were thrown off!
Aug 16, 2022 21 tweets 5 min read
Ephemeral subcultures used to be the essential drivers of culture, and are still disproportionately significant relative to their populations (but less so than in the 80s-90s). @slatestarcodex explains their lifecycle:

Scott contrasts his analysis with my Geeks, MOPs, and Sociopaths model (which I mostly stole from @vgr). He doesn’t see the sociopaths. The comment section on his essay includes many people pointing to sociopathic destruction of various subcultures—crypto is a common example.
Aug 15, 2022 8 tweets 3 min read
Some answers to questions I posed earlier, from the Minerva paper (thanks to those who recommended it!) storage.googleapis.com/minerva-paper/…

80% accuracy on 10-digit addition means it definitely wasn’t memorizing those, and implemented an adder. Cool! Presumably it uses attention heads to track digits in the two numbers and does it digit-by-digit. It would be neat to find that circuit.
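The tweet's conjecture is that the model learned the grade-school ripple-carry algorithm: add one digit pair at a time and propagate a carry. Whether Minerva actually implements this circuit is speculation; here is just a sketch of the algorithm being conjectured, over digit strings:

```python
def add_digit_by_digit(a: str, b: str) -> str:
    """Add two decimal numbers given as digit strings, the way a
    ripple-carry adder would: one digit pair at a time, from least
    significant to most, propagating a carry."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)  # pad so the digit positions line up
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_digit_by_digit("9999999999", "1"))  # → 10000000000
```

A model with attention heads tracking corresponding digit positions could in principle execute exactly this loop, which is why the result generalizes past anything it could have memorized.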
Jul 26, 2022 7 tweets 2 min read
This has been my experience. Everyone is constantly evaluating papers for credibility and meaningfulness, and sharing that info informally with peers. If you aren’t in the network, you’re schrod (or have hugely more work to do).

PubPeer tries to make actual-peer-review transparent and public, and it helps on the margin, but the inherent problem is that you don’t want your negative opinions about someone’s work getting back to them, lest they retaliate.
Jul 13, 2022 7 tweets 3 min read
Carlos Castaneda’s books are read as pop-mystical fantasy fiction which pretended to be anthropology. The converse is also, and perhaps more, true. We can read them as a study in ethnomethodology disguised as pop-mystical fantasy fiction…

Carlos Castaneda’s early books are an account of his apprenticeship with an extraordinary teacher, the Yaqui sorcerer don Juan Matus. The magic and mysticism are just set-dressing; their difficult, intense relationship is the real topic and what makes the books compelling.
Jul 9, 2022 14 tweets 5 min read
🗣 In 1974, Joseph D. Becker pointed out that rigid rationalist Chomskian linguistics was an emperor without clothes, and explained how syntax actually works.

Rigorously ignored for decades, his theory seems powerfully confirmed by current AI text generators.

Chomsky’s opening move was to take all empirical evidence off the table by declaring that anything people actually say is irrelevant to linguistics, because the mental machinery is error-prone. Underlying it is a pristine rational process we should understand instead.
Jul 7, 2022 4 tweets 2 min read
🤖 In 1964, the ELIZA chatbot sorta-kinda passed the Turing test. But until a year ago, no one knew how it worked, because the code was lost. The "ELIZA" you know about was written by Jeff Shrager in 1973 when he was 13 years old...

Last year, Jeff found a paper copy of the code (with the help of an MIT archivist) and got it running (with the help of a team of hackers) and found it was much more sophisticated than the one you know (which Jeff wrote when he was 13)...
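The basic ELIZA mechanism is public knowledge even though Weizenbaum's original code was lost: keyword-triggered decomposition rules that capture a fragment of the user's utterance and reflect it back with pronouns swapped. A toy miniature of that scheme (the rules and names here are mine, not the rediscovered MAD-SLIP script):

```python
import re

# Toy ELIZA-style script: (pattern, response template) pairs.
# The captured fragment is echoed back with simple pronoun swaps.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person before echoing."""
    return " ".join(SWAPS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower().strip())
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I need a vacation"))  # → Why do you need a vacation?
```

Part of what the recovered 1960s code showed, per Shrager's project, is that the real ELIZA was considerably more elaborate than minimal reimplementations like this one.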