David Chapman
Better ways of thinking, feeling, and acting—around problems of meaning and meaninglessness; self and society; ethics, purpose, and value.
Mar 25 4 tweets 2 min read
Incisive thinking about AI interaction, drawing on Brian Smith's work and reminiscent of the ethnomethodological stance, from @jessi_cata. Incisive thinking about transness, also from @jessi_cata. "Trans" is an iron maiden category constructed by cis authorities which, for many people trying to fit into it, is grossly false to facts and harmful, painful, sometimes fatal. unstableontology.com/2023/02/07/am-…
Mar 13 29 tweets 10 min read
An extraordinary essay on ethics by @jkcarlsmith, highly recommended for those willing to work through its difficulty.

What happens when you realize moral philosophy doesn't and can't work, but saying "whatever, then, I guess" is also utterly inadequate? joecarlsmith.com/2023/02/17/see… "Seeing more whole" is difficult both textually and conceptually. I had to read it three times. It's probably also necessary to have read a precursor essay, which is less exciting but lays out distinctions the later one relies on: joecarlsmith.com/2023/02/16/why…
Mar 3 5 tweets 2 min read
🤖 The myth that AI “neural networks” cannot be understood obstructs ordinary scientific and engineering investigation. This is extremely convenient for both tech people and powerful decision makers. Engineers have a moral responsibility to know what the things they build can do and how and when and why. Companies selling products to the general public, and government agencies using them, have the same responsibility. Backpropaganda helpfully obfuscates that.
Mar 1 8 tweets 3 min read
.@slatestarcodex, on OpenAI's recent glossy PR piece, compares it with Exxon's "we're responsibly transitioning to address climate change, any decade now, look at the wookie" PR pieces. astralcodexten.substack.com/p/openais-plan… AI research (I suggest in my book) was mainly a PR effort by the advertising industry. No one seriously expected it would work. betterwithout.ai/AI-is-public-r…
Feb 21 14 tweets 4 min read
Why did Microsoft Bing's chatbot "Sydney" generate text that resembled a conversation with someone with borderline personality disorder?

A guess: they got what they trained it on, and they were scraping the bottom of the internet barrel. 1/n Last spring a DeepMind team realized that the main constraint on text generator output quality was the amount of text the models were trained on, more than the AI stuff. AI stuff doesn't give you something for nothing. arxiv.org/pdf/2203.15556…
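The linked paper's compute-optimal fits are often summarized as a rule of thumb of roughly 20 training tokens per model parameter. A back-of-the-envelope sketch (the 20:1 ratio is one common reading of those fits, my assumption here, not an exact law from the paper):

```python
def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Rough training-token budget for a compute-optimal model,
    using the ~20-tokens-per-parameter reading of the scaling fits."""
    return n_params * tokens_per_param

# A 70B-parameter model would want on the order of 1.4 trillion tokens:
print(f"{compute_optimal_tokens(70e9):.2e}")  # → 1.40e+12
```

The point of the tweet, restated numerically: past a certain model size, getting more (and better) text to train on matters more than architectural cleverness.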
Feb 14 5 tweets 2 min read
Pretty funny getting this spam advertising an AI spam-generating service posted as a comment on my site that warns that a tidal wave of AI-generated spam is about to hit. If you go to their site, they position themselves as legit, but are using classic low-end spamming techniques to advertise themselves; they used a botnet to post the identical message from many different IPs within four seconds.
Feb 5 10 tweets 2 min read
🤖 The term "AI safety" is being abused, perhaps deliberately, and its meaning has collapsed. Relatedly, the two different fields concerned with AI risks may both have been suddenly sidelined, perhaps deliberately, and may now be irrelevant. My December reworking of my AI book framed the topic as a neglected middle ground between the concerns of "AI ethics" (harmful uses of current tech) and those of "AI safety" (catastrophic consequences of hypothetical future tech).
Feb 2 9 tweets 2 min read
Legacy ideologies are coherent; new ones, incoherent. Why? Ideologies are often contradicted by experience, so they need a way to keep you on the hook other than by being right: constant exposure to memes. Facilities for presenting them have evolved... [long speculative rant ->] Literacy was a breakthrough. Preliterate cultures can't support ideologies because the memes can't propagate effectively enough to keep them going. In cultures with a literate elite only, only the elites get possessed.
Dec 10, 2022 18 tweets 7 min read
Ethical theories are bad. They are bad because they are wrong, and bad because (being wrong) they make you think and feel and do wrong things. There is no correct ethical theory.

🧵 by @DRMacIver, with commentary from me which he might not agree with… @DRMacIver All ethical systems are wrong because they are abstract, general, theoretical, and conceptual;

whereas ethics are, critically, a matter of concrete, contextual, practical action.
Nov 18, 2022 5 tweets 2 min read
🕊 My backup venues in case of twitter collapse: Tinyletter, Meaningness, Mastodon (addresses in follow-on tweets)... 📧 My free email newsletter. I'll use it to explain where I've gone and what I'm doing, if twitter implodes. Otherwise, very low volume: mainly just notifying readers of new writing. tinyletter.com/meaningness
Oct 19, 2022 5 tweets 3 min read
💥 Revolutionary rethinking of what is possible for science. Enormous opportunities await—requiring radical structural reform. This analysis will stand as a definitive harbinger of that effort. 🧬 What science we get depends on how “we” do it—where “we” is not so much scientists as bureaucrats. The way “we” do science was invented 70 years ago for a different world, has sclerosed, and now inhibits progress.

So much more would be possible if those constraints were thrown off!
Aug 16, 2022 21 tweets 5 min read
Ephemeral subcultures used to be the essential drivers of culture, and are still disproportionately significant relative to their populations (but less so than in the 80s-90s). @slatestarcodex explains their lifecycle: Scott contrasts his analysis with my Geeks, MOPs, and Sociopaths model (which I mostly stole from @vgr). He doesn’t see the sociopaths. The comment section on his essay includes many people pointing to sociopathic destruction of various subcultures—crypto is a common example.
Aug 15, 2022 8 tweets 3 min read
Some answers to questions I posed earlier, from the Minerva paper (thanks to those who recommended it!) storage.googleapis.com/minerva-paper/… 80% accuracy on 10-digit addition means it definitely wasn’t memorizing those, and implemented an adder. Cool! Presumably it uses attention heads to track digits in the two numbers and does it digit-by-digit. It would be neat to find that circuit.
Jul 26, 2022 7 tweets 2 min read
This has been my experience. Everyone is constantly evaluating papers for credibility and meaningfulness, and sharing that info informally with peers. If you aren’t in the network, you’re schrod (or have hugely more work to do). PubPeer tries to make actual-peer-review transparent and public, and it helps on the margin, but the inherent problem is that you don’t want your negative opinions about someone’s work getting back to them, lest they retaliate.
Jul 13, 2022 7 tweets 3 min read
Carlos Castaneda’s books are read as pop-mystical fantasy fiction which pretended to be anthropology. The converse is also, and perhaps more, true. We can read them as a study in ethnomethodology disguised as pop-mystical fantasy fiction… Carlos Castaneda’s early books are an account of his apprenticeship with an extraordinary teacher, the Yaqui sorcerer don Juan Matus. The magic and mysticism are just set-dressing; their difficult, intense relationship is the real topic and what makes the books compelling.
Jul 9, 2022 14 tweets 5 min read
🗣 In 1974, Joseph D. Becker pointed out that rigid rationalist Chomskian linguistics was an emperor without clothes, and explained how syntax actually works.

Rigorously ignored for decades, his theory seems powerfully confirmed by current AI text generators. Chomsky’s opening move was to take all empirical evidence off the table by declaring that anything people actually say is irrelevant to linguistics, because the mental machinery is error-prone. Underlying it is a pristine rational process we should understand instead.
Jul 7, 2022 4 tweets 2 min read
🤖 In 1964, the ELIZA chatbot sorta-kinda passed the Turing test. But until a year ago, no one knew how it worked, because the code was lost. The "ELIZA" you know about was written by Jeff Shrager in 1973 when he was 13 years old... Last year, Jeff found a paper copy of the code (with the help of an MIT archivist) and got it running (with the help of a team of hackers) and found it was much more sophisticated than the one you know (which Jeff wrote when he was 13)...
Jul 4, 2022 7 tweets 2 min read
Why logical positivism always fails, but keeps getting reinvented, Part Twenty-Nine. When a field fails and ends, there needs to be urgent funding for historians to investigate & explain what happened and why, with lots of recorded interviews with major participants, so its decayed corpse doesn’t keep lurching out of the ground and going around causing trouble again.
May 30, 2022 4 tweets 3 min read
Clear, accessible explanation of a main way statistical methods are misused, and its role in the replication crisis. Part 2 in a series by @ProfJayDaigle jaydaigle.net/blog/hypothesi… @ProfJayDaigle 🔑 Misusing a method developed for making decisions (with asymmetric risks and benefits) to evaluate purely descriptive theories is a key failing:
May 28, 2022 7 tweets 3 min read
Fascinating review of a book about an international development project in Lesotho that failed because rationalism. Lots of cool details here; good examples of the standard ways rationalism goes wrong. astralcodexten.substack.com/p/your-book-re… A meta-rational principle is that you need to have already basically understood what is going on before, during, & after applying rationality, or else it takes you off a cliff.

To find out what is actually going on, you need to put boots on and go see.
May 23, 2022 6 tweets 3 min read
Extraordinarily self-aware reflection from an anti-woke activist/theorist on the emotional/aesthetic roots of his motivations: Even granting the premise that wokeness is false and harmful (which @RichardHanania considers obviously so), being obsessed with it from the anti- side needs explanation, since there are many worse things in the world.