mattparlmer 🪐 🌷
Mar 30 · 17 tweets · 4 min read
In the past I have given credit where it was due to Eliezer Yudkowsky for not explicitly advocating violent solutions to the problems with AI development that, by his own admission, only he and a few other (mostly nontechnical) people see on the horizon

He crossed that line today
Since he has graduated from insinuation to openly stating that blowing things and people up is a reasonable thing to do, I think we should dispense with the politeness that has been the norm around these discussions in the past

EY does not know what he’s talking about
He and the other hardline anti-AI cultists are out of their depth, both in their command of the basic technical elements of this field and in their emotional states

This is a multidecade anxious fixation calling for air strikes in Time, not a rational person
The people working on AI systems are not stupid and not reckless; in fact, they have been astonishingly, even excessively, conservative in deploying these systems given how much potential transformer models have to help people

Many things are coming, Skynet is not one of them
Every day we delay deployment of powerful tools for augmenting human cognition robs kids of tutor systems, deprives scientists and engineers of an interactive natural language index atop our collective knowledge, and ultimately kills people with soon-to-be-curable diseases
We have seen this movie before

Myopic cowards who have never worked a real problem seriously in their lives assume that all problems are unworkable and ban people who can actually solve problems from doing so

Applying this to nuclear energy directly caused global warming
Btw since we're pulling out emotionally manipulative stuff like "my daughter lost a tooth and that gave me a panic attack about GPT-4"...

A few of my family members have punishing inflammatory disorders

GPT-4 is demonstrably helpful with drug discovery

Banning it hurts them
If you're tempted to take Yudkowsky seriously go engage with his work on the specifics of existential AI risks for a bit, it's ridiculous on its face

This sort of thing was entertaining to read if you're into niche sci-fi; it is less so when it involves calls for bombings
The scenario that Eliezer most often cites as a plausible minimum-effort strategy that an emerging superintelligence could use to kill everybody (it convinces a human to synthesize grey goo nanotech) involves lots and lots of highly implausible leaps
Drexlerian nanotech may be possible, grey goo scenarios are concerning, but much like magically emergent volitional superintelligence they are *highly speculative*

Saying this stuff will appear out of nowhere amounts to crying wolf and distracts from actual safety work
Time and energy spent dealing with Eliezer's high-IQ version of a paranoid delusional complex is time and energy not spent, for example, constraining model read-write capabilities with code that is typechecked such that we have mathematical proofs it cannot go out of bounds
There are concrete things we can do to make AI systems safer, and there is plenty of unimplemented low-hanging fruit from other, more established areas of software engineering
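The typechecked read-write constraint mentioned above can be illustrated with a toy sketch. All names here are hypothetical, and in this minimal version a static typechecker like mypy (not a proof assistant) is doing the verification: the inference routine is only ever handed a read capability, so the checker can verify at compile time that no code path inside it writes.

```python
from typing import Protocol


class ReadCapability(Protocol):
    """Capability that permits only reads."""

    def read(self, key: str) -> str: ...


class ReadWriteCapability(ReadCapability, Protocol):
    """Strictly larger capability that also permits writes."""

    def write(self, key: str, value: str) -> None: ...


def run_inference(store: ReadCapability, prompt: str) -> str:
    # The type signature *is* the constraint: this function receives only a
    # ReadCapability, so a typechecker rejects any call to store.write()
    # inside this body before the code ever runs.
    return f"answer derived from {store.read(prompt)}"


class Memory:
    """A concrete store; it satisfies both protocols structurally."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def read(self, key: str) -> str:
        return self._data.get(key, "")

    def write(self, key: str, value: str) -> None:
        self._data[key] = value
```

The thread's "mathematical proofs" would need something stronger than this (refinement or dependent types), but even this plain-typed version is deployable today.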

If you're that worried about runaway AI go build a killswitch FPGA that triggers on network traffic
If you don't want an AI system copying its weights all over the place, hash the weights out of band, compute a hash on any outbound network traffic, and shut it off if any of those hashes show up

AFAIK nobody is doing this right now; easy win, no Predator drones required
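The weight-exfiltration check above can be sketched in a few lines. This is a toy illustration under assumed details not in the thread: fixed-size chunk hashing and invented names like `chunk_digests`. A real filter would also need rolling hashes to catch unaligned or re-encoded copies of the weights.

```python
import hashlib

# Fixed-size chunks for hashing. A production filter would use rolling
# hashes as well, since an attacker could shift or re-encode the bytes.
CHUNK_SIZE = 64 * 1024


def chunk_digests(data: bytes, chunk_size: int = CHUNK_SIZE) -> set[bytes]:
    """Hash a byte blob in fixed-size chunks and return the set of digests."""
    return {
        hashlib.sha256(data[i : i + chunk_size]).digest()
        for i in range(0, len(data), chunk_size)
    }


def build_weight_fingerprints(weight_blob: bytes) -> set[bytes]:
    # Computed out of band: the model never sees this code or the digest set.
    return chunk_digests(weight_blob)


def outbound_is_suspect(payload: bytes, fingerprints: set[bytes]) -> bool:
    """True if any chunk of an outbound payload matches a known weight chunk."""
    return not fingerprints.isdisjoint(chunk_digests(payload))
```

The fingerprint set lives outside the model's sandbox; the network egress path calls `outbound_is_suspect` on every payload and trips the killswitch on a match.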
There is a telling lack of this sort of practical, implementable safety feature in the AI safety discourse, largely because people like Eliezer will barge into discussions and proclaim that the thing they're worried about will be too smart for it to apply
If you refuse to engage with the systems as currently implemented by ML researchers and instead demand that we center discourse around magic beings with omniscience and omnipotence, it gives the impression that your model is based in esoteric metaphysics and not actual physics
This is all well and good if such discourse is the sort of thing that happens on niche rationalist fora, but when it moves to the arena of public policy, and even beyond that to the most public possible exhortations to do drastic and violent things, it is no longer acceptable
It's tragicomic that one of the great proponents of not fooling yourself into mistaken thinking with cognitive biases, somebody from whom I learned so much, is making a fool of himself with extraordinary claims backed by an extraordinarily minimal body of evidence


