David Chapman · Feb 5
🤖 The term "AI safety" is being abused, perhaps deliberately, and its meaning has collapsed. Relatedly, the two different fields concerned with AI risks may both have been suddenly sidelined, perhaps deliberately, and may now be irrelevant.
My December reworking of my AI book framed the topic as a neglected middle ground between the concerns of "AI ethics" (harmful uses of current tech) and those of "AI safety" (catastrophic consequences of hypothetical future tech).
"AI safety" is now being (ab)used to refer specifically to the least significant concern of the AI ethics field: it is possible to trick text generators into outputting derogatory diatribes against the Fr*nch.
Maybe it's a coincidence, but this seems awfully convenient for the companies producing text generators.
"We are committed to responsible development of AI safety. Look, we've ensured that our system never uses the F-word, and instead uses the properly respectful term 'people experiencing Frenchness'."
This move rhetorically relieves AI companies of the need to address far more serious concerns about unethical uses of AI:
Assimilating "AI safety" to "preventing our text generator from saying the word 'Fr*nch'" could be a cynical manipulation of culture war dynamics to avoid addressing any of the actual concerns of the AI safety movement, because that would be difficult (or, fr*nkly, impossible).
Back in September, I warned that the AI ethics/safety split was likely to get appropriated by the culture war, in which case all serious discussion of risks might become impossible:
ChatGPT has inserted AI into the public consciousness to such an extent that discussion will likely now be dominated by lumbering dinosaurs: Congressional committees, lobbying firms hired by Mooglebook PR departments, the NYT and WSJ, all clueless and self-interested.
I fear that both the AI ethics and AI safety movements are now minuscule powerless rodents unnoticed among the feet of slow-moving gigantic new players with brains the size of walnuts.

More from @Meaningness

Feb 2
Legacy ideologies are coherent; new ones, incoherent. Why? Ideologies are often contradicted by experience, so they need a way to keep you on the hook other than by being right: constant exposure to memes. Facilities for presenting them have evolved... [long speculative rant ->]
Literacy was a breakthrough. Preliterate cultures can't support ideologies because the memes can't propagate effectively enough to keep them going. In cultures where only an elite is literate, only the elite gets possessed.
Before mass media, an ideology had to propagate itself through books. Problem: you forget what's in books quickly. To keep reminding you, the memes have to be memorable and compelling.

Rationality—actually making sense—was the best way to do that.
Dec 10, 2022
Ethical theories are bad. They are bad because they are wrong, and bad because (being wrong) they make you think and feel and do wrong things. There is no correct ethical theory.

🧵 by @DRMacIver, with commentary from me which he might not agree with…
All ethical systems are wrong because they are abstract, general, theoretical, and conceptual;

whereas ethics are, critically, a matter of concrete, contextual, practical action.
“Ethical pluralism is empirically correct”—it is a fact about other people that you have to work with, even if you (wrongly) believe your system is The Truth.

Drawing on multiple ethical frameworks gives you greater resources for accurate action.
Nov 18, 2022
🕊 My backup venues in case of twitter collapse: Tinyletter, Meaningness, Mastodon (addresses in follow-on tweets)...
📧 My free email newsletter. I'll use it to explain where I've gone and what I'm doing, if twitter implodes. Otherwise, very low volume: mainly just notifying readers of new writing. tinyletter.com/meaningness
🧑‍💻 My central web site. I'll post something there about where I've gone if twitter expires. meaningness.com
Oct 19, 2022
💥 Revolutionary rethinking of what is possible for science. Enormous opportunities await—requiring radical structural reform. This analysis will stand as a definitive harbinger of that effort.
🧬 What science we get depends on how “we” do it—where “we” is not so much scientists as bureaucrats. The way “we” do science was invented 70 years ago for a different world, has sclerosed, and now inhibits progress.

So much more would be possible if those constraints were thrown off!
Everyone knows in principle that the most important science happens when brilliant weirdos spend a decade doing something inscrutable.

If you want that kind of science, you have to support actual humans, not past heroes of mythology, doing it. Which is risky; no way around it.
Aug 16, 2022
Ephemeral subcultures used to be the essential drivers of culture, and are still disproportionately significant relative to their populations (but less so than in the 80s-90s). @slatestarcodex explains their lifecycle:
Scott contrasts his analysis with my Geeks, MOPs, and Sociopaths model (which I mostly stole from @vgr). He doesn’t see the sociopaths. The comment section on his essay includes many people pointing to sociopathic destruction of various subcultures—crypto is a common example.
Some subcultures either don't attract sociopaths, or are good at ejecting them, I guess? What accounts for this?

(Here's Scott's post, delayed because twitter is at war with links off-site nowadays. They're bad for business.) astralcodexten.substack.com/p/a-cyclic-the…
Aug 15, 2022
Some answers to questions I posed earlier, from the Minerva paper (thanks to those who recommended it!) storage.googleapis.com/minerva-paper/…
80% accuracy on 10-digit addition means it definitely wasn’t memorizing those sums; it must have implemented an adder. Cool! Presumably it uses attention heads to track the digits of the two operands and adds digit by digit. It would be neat to find that circuit.
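
(Quick sanity check on the "not memorizing" claim, with my own illustrative numbers rather than anything from the paper: the space of 10-digit addition problems is so large that even a wildly generous guess at training coverage couldn't explain 80% accuracy.)

```python
# Back-of-envelope: memorization cannot explain 80% accuracy.
# The pair count is exact; the corpus figure is a deliberately
# generous assumption, not a number from the Minerva paper.
n_ten_digit = 9 * 10**9          # count of 10-digit numbers: 10^10 - 10^9
pairs = n_ten_digit ** 2         # ordered operand pairs: ~8.1e19
memorized = 10**9                # generous guess at distinct sums seen in training
print(f"coverage from memorization: {memorized / pairs:.1e}")  # ~1.2e-11
```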
(Q: does it do less well on addition problems that require carrying? Just curious.)
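
(One way to probe that question, sketched by me rather than taken from the thread or the paper: sample addition problems bucketed by how many carries they require, then compare the model's accuracy across buckets. The harness below only builds the test set; scoring a model against it is the obvious next step.)

```python
import random

def count_carries(a: int, b: int) -> int:
    """Number of digit positions that generate a carry when computing a + b."""
    carries = carry = 0
    while a or b:
        carry = 1 if (a % 10 + b % 10 + carry) >= 10 else 0
        carries += carry
        a, b = a // 10, b // 10
    return carries

def problems_by_carries(n_digits: int = 10, per_bucket: int = 50, max_carries: int = 3):
    """Rejection-sample addition problems grouped by carry count (0..max_carries)."""
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    buckets = {k: [] for k in range(max_carries + 1)}
    while any(len(v) < per_bucket for v in buckets.values()):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        k = count_carries(a, b)
        if k in buckets and len(buckets[k]) < per_bucket:
            buckets[k].append((f"{a} + {b} = ", a + b))
    return buckets

# Prompt the model with each bucket's problems; if accuracy drops as the
# carry count rises, the learned adder propagates carries imperfectly.
tests = problems_by_carries()
```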