There are definitely some important points raised in this article. As a long time critic of the sometimes screamingly blatant ethnocentrism, elitism, and unadulterated hubris within the EA, "rationalist", and x-risk movements, I've repeatedly run into brick walls trying to...
...talk about the dangerous culture that many in this space are fostering and/or borrowing from tech/VC "elites". With that said, I do feel like this piece missed the mark in a couple of important ways, at least one of which genuinely surprised me by the end.
While I think it does a good job of identifying communities which are especially vulnerable to this disastrous combination of social dissociation, moral/intellectual/economic elitism, and deep investment in very abstract or toy problems, I think some of the individual examples...
...given are less representative or illustrative than they could have been. In particular, I think it's bizarre to spend so much time on Jaan Tallinn when he is practically a moderate in these areas compared to certain other well-known figures. This is painting with too broad a brush, imo.
Disclaimers: I have spent a lot of time reading about, and a small amount of time physically visiting, these communities (like FHI). I have donated to "EA" causes like AMF. I've met Jaan Tallinn and consider him a friend. And I've publicly clashed with others here on Twitter.
Before I take issue with anything in the article though, I do have to say that it has been obvious to me since I first started reading EA/Rationalist/x-risk writings that these are movements overwhelmingly dominated by White Western males who do well on IQ tests but poorly on...
...relating to people with differing life experiences, cultural reference frames, skillsets, and social norms. Yes there are exceptions to each of those descriptors, but that's where the median is firmly planted for each category, and it shows. Most of these people's...
...explanations for the lack of diversity in their communities betray a deep ignorance of what other humans' lives are like, or of how they actively cause their communities to become and remain like this, even if it's not an intentional goal they've epistemically endorsed.
They do not (at the median) take criticism well, unless it is specifically formatted within the jargon, subculture, and priorities that their own communities are defined by. And they are *terrible* at explaining their ideas to anyone who doesn't match a certain template.
None of the above is news to the vast majority of people "outside the circle" of these communities, but it's important that I specifically say these very obvious things, because the median person in these communities wouldn't. They would argue with me about this instead.
Over and above these facts, people less familiar with these groups should be aware that these communities are *obsessed* with thought experiments, toy problems, using mathematical/logical language, and being dismissive of anyone who isn't/doesn't.
All of which I assume @xriskology is aware of, so it's a little weird to me that he didn't "translate" a term used throughout this article into "normal person speak": "existential risk/threat". Because yes, within these communities they absolutely do say all the things the...
...article talks about, but it is also the case that when they use this "coded language", the vast majority of the time it has a very plain-English meaning that they are specifically talking around, because they think they will be dismissed if they don't use something "technical"...
...sounding to euphemize it. In short, in almost every example in the article where this phrase appears, it is being used to mean "all people on earth die really soon". Existential = extinction. I don't know why @xriskology didn't clarify this. Maybe because it doesn't sound very...
..."longtermist". It's not about some abstract thing happening a thousand years in the future. They mostly just mean "all humans dying really soon". And they do confuse the heck out of these statements by jumping freely between questions like "will global warming kill all...
...people on earth really soon" and weird abstract thought experiments about what they think life will be like a thousand years from now. But I don't think it's actually contentious to say that global warming probably won't kill all humans really soon. I'm pretty sure that's...
...the mainstream scientific consensus. Why does this matter if they're just being dismissive of things like climate justice anyways? Because sometimes they *are* pooh-poohing climate justice, sometimes they're just saying it won't kill everyone on earth very soon, and without...
...knowing their lingo you can't tell the difference between who's doing which. That's the "one brush" problem I was talking about. *Some* of the people who say this stuff ARE saying that anyone who cares about climate justice is clueless/worthless/stupid. Others aren't. And...
...ironically, in my experience, Jaan Tallinn is one of the ones who *isn't* saying that everyone who cares about climate justice is worthless. He genuinely *is* worried about how these communities ignore and alienate anyone who doesn't already agree with them. He's just ALSO...
...willing to admit that climate change isn't likely to kill everyone on earth very soon. And honestly I can see why @xriskology is confused, because that is exactly the kind of confusion the language and norms of these communities cause, and one of the ways that the whole elitism and hubris...
...thing self-reinforces is that then people within those communities will point to stuff like this and say, "see, people who don't use our language/norms really are dumb/malicious/whatever". But when I talked to him at least, Jaan Tallinn was one of the voices pushing back...
...against this type of stuff. But with the lingo and in-group references and thought experiments and so on these groups are full of, I don't know how people are supposed to tell that. I assume @xriskology just doesn't know Tallinn that well. I spoke with him for less than...
...20 hours all told but came away with a very different impression than this article paints. Mind you, I did have the edge of knowing most of the lingo/references/coded terms going in, so there's that. I want to continue this thread, but I need to sleep and I'm also busy this...
...weekend, so maybe I'll pick it up on Monday. There's at least two other points I want to make and one of them (minimum) is directly critical to this comment I've already made, so hopefully this claim of mine will make sense then if it doesn't yet for some people. See you then!
Okay, two weeks and one killed-by-bad-twitter-ux attempt at this thread later, I'm going to give this a go again. I really need to stop composing within @Twitter. It's way too easy to lose long threads on here. Anyways, picking up again, the next point I want to dig into here is
that "longtermism" is really not the right label at all for the biggest mistake these people are making. I get it, the fantastical sci-fi things that get thrown into conversations and books that come out of these communities about how they think/guess/speculate life will look ten
thousand years in the future *seem* like a really unique characteristic of these groups. But I would argue that all that stuff is really just a side effect of having communities which center intellectualism, abstraction, and what some have referred to as "insight porn", often to
the exclusion of very basic and obvious social/interpersonal considerations which actually have a huge effect on even these communities' own priorities for what they want to achieve. To treat the weird sci-fi thought experiments *themselves* as the source of the hubris and
elitism is rather backward. It falls prey to the exact same trap these folks are stuck on: thinking that just because unusual and complicated ideas are being *centered* in a conversation, the actual *effect* of that conversation won't be dominated by very boring, pedestrian,
social/interpersonal dynamics. To be blunt, the problem with these people isn't their ten-thousand-year sci-fi ideas. It's that they treat other people badly, think of them dismissively and condescendingly, and repeatedly fail to anticipate that people can *tell* they're thought of
like that, REGARDLESS of how those people score on IQ tests or knowledge checks of whichever technical field their community is built around deeply obsessing over. Most of these people don't *know* when they sound like nutjobs, because their whole community is built around ignoring
the people who could tell them when they do. That doesn't have anything to do with "long-termism" or 10^58 theoretical digital consciousnesses or whatever. It has to do with the simple, boring mistake of building bad mental and social habits individually and collectively. Period.
When this article focuses on that very regular human failing, it's right on point. When it strays from that center, it makes very simple errors of characterization. I'll give an example of how these groups are perfectly capable of talking about near-term, non "extinction" stuff:
When I visited FHI in 2016, the main idea I came away with was actually that the world was significantly underprepared for a global pandemic, and that it was not unlikely we would experience one soon. How does this fit with the whole "longtermism" thing?
Well, it doesn't. Because (just like with global warming) the FHI people, Jaan Tallinn, and pretty much everyone talked about in this article have explicitly concluded that pandemics are NOT an existential threat. They used the exact same hyper-intellectual,
thought-experiment-focused, abstractions-first analysis style and decided that "all people on earth die really soon from a pandemic" is actually pretty unlikely. BUT, and here's the important thing, these communities *didn't* then go on to conclude that pandemics are dumb to care
about, that the people who do care about them are clueless/worthless/stupid, and so on. In other words, they're perfectly capable of taking "non-existential" threats seriously. I mean, you must have seen how these people reacted when an ACTUAL pandemic happened. It's kind of macabre, but they were
practically *excited*. Throughout the saga of COVID-19 they've been obsessed with the finest details of its progression, how masks impact transmission, which vaccines were under development, what sorts of trials have and should have been done, which policies work, etc etc etc.
And of course, a prevailing theme across all of these takes was that they knew more than other people, could have done better than other people were doing, and that their tiniest "insights" into the topic were of supreme importance and everyone should listen to them all of the time.
(whereas *my* main takeaway from "knowing this would happen" is that my insights were surprisingly irrelevant and ineffective at actually *improving human outcomes* during the pandemic, most of which actually come down to things like social coordination and interpersonal trust.
But that's another thread, let's not get distracted.)
And also there were surprisingly few critiques from these communities on how markets absolutely *sucked* at predicting or responding to this utterly predictable outcome. Or how private capital failed to make the early
investments into things like the mask or vaccine supply chains at anything remotely approaching maximal efficiency. Or how terrible businesses are at *coordinating* on social and global risks even if the issue at hand is clearly going to affect their business models.
You see, there *are* definite themes as to which mistakes these communities make. But "longtermism" isn't it. These people are perfectly capable of getting obsessed over a near-term, non-existential, primarily-impacts-poorer-countries-and-communities problem. What they *can't* do
is climb out of the "insight porn", intellectually elitist, socially isolationist, economics-first White/tech/male/VC culture they're stuck in to offer helpful and meaningful connection and coordination with people who are very much unlike them, in order to make the world better.
The reason so many of them have little to say about climate change *isn't* because they're stuck on the longtermism thing. It's because we basically *know* as a society that what's required to address climate change is social coordination, the interpersonal recognition of how
people are being personally and collectively impacted around the globe, and a willingness to intentionally prioritize the collective good over local and temporary economic benefits. None of that sounds like anything these people are good at. But if you bring up an idea for an
unusual, technological, and very narrow one-shot "solution" to climate change, they are actually much more likely to engage. Because again, it's not about "longtermism". It's about the social and interpersonal norms and skills of these communities.
Which brings me to my last point, which is the AI stuff. You gotta rip the band-aid off on this one, because despite the prominent role that AI plays in all those ten thousand year sci-fi stories these communities love to indulge in, this is absolutely not another "longtermism"
thing. Many of these people think that "all people on earth die really soon from AI" is a serious threat. That's a far more defining characteristic than what kind of far-future utopias they imagine and talk about, and it's the one that makes me most frustrated, because I think
they're actually right that AI is an existential threat in both the importance and extinction senses, but that the social and interpersonal norms they've cultivated both inwardly and outwardly towards other people and communities are deeply counterproductive towards the types of
social, interpersonal, and global coordination that are needed if we want to address the very real threat posed by AI. To see this we need look no further than the stance that many in these communities have taken towards the pioneering work on present and future AI risks done by
people like @timnitGebru, @red_abebe, and @AJLUnited, who have been breaking down in great detail the types of bias, error, and threat which go into and come out of contemporary (and future) machine learning / AI strategies, institutions, control, and social application.
(For people who aren't familiar with the whole AI-as-existential-threat topic from sources *other* than the communities @xriskology is criticizing in this article, feel free to get lost in this thread (scroll up) where @glenweyl and I discussed it, under a
fairly representative case of the deeply unhelpful social/interpersonal behavior I am criticizing here. The thread also contains my own attempt at a short summary of the central technical thesis of groups like @FHIOxford or @MIRIBerkeley on AI risk, which was broadly endorsed as
representative by a MIRI researcher here: . I don't claim to be an expert on these topics, but I do claim a basic familiarity with these communities and their common modes of interaction.)
So getting back to the relationship with researchers doing important and useful work *today* on serious social harms posed by AI/ML, this is the thing that frustrates me. It's as if, just like with pandemic risk, they started with the whole abstraction-centered, cerebral,
thought-experiment-based approach to the problem. This time, that approach DID lead them to the conclusion that AI risk is *actually* an existential threat, unlike pandemics. But then *instead* of getting deeply interested in the near-term, non-existential,
primarily-impacts-poorer-countries-and-communities components of AI risk, they got scared by the signals of social/interpersonal competence being important in this arena and pulled back and away. Which is nuts, because to me it's clear that the push for algorithmic justice is
founded in the very same understanding: that AI/ML is *difficult* to align with human values; that the kinds of simple, low-dimensional optimization you get out of a standard Silicon Valley tech giant's cost-optimised dev process lead directly to terrible, anti-human outcomes;
and that a major course correction in strategies, design processes, and priorities is needed to address these issues, save human lives, and protect the future! Why aren't these communities allies? Now I do have to take some of my own medicine here and be careful to recognize
the spectrum of attitudes that exist around this specific type of mistake. There is a world of difference between what you see at the top of the thread with Glen Weyl and what you see from a person like @Miles_Brundage at @OpenAI. My criticisms are best directed at non-specific
trends that concern me within these communities, and that (I claim) lie at the root of the problems that @xriskology is trying to identify (though slightly missing the mark). And that brings me full circle to why I started this thread, because whatever your take on who and how many
people in these types of movements are elitist, rude, dismissive, and denigrating of practically everyone else on the internet, or whatever; there actually isn't a rule that says that just because people are harmful and counterproductive and deeply in bed with the tech elite,
every single thing they talk about is technically wrong, no matter how unhelpful their way of framing it is. And AI risk does actually matter. It matters in the very immediate way it is taking over so much of our social and technological worlds. It matters in the communities that
it marginalizes, exploits, and demeans. It matters in the people it kills, both directly and indirectly, all around the world. And I believe that, as annoying and frustrating as the current flag-wavers of this idea can be, it absolutely is capable of killing all humans soon.
We've got to get out of the weird, sci-fi, hyper-theoretical, hyper-abstract way that these ideas are being talked about, and recognize this as a present-day, social, public, inter-personal, global, and yes existential problem that larger society has to coordinate on. We can't
afford to condemn these topics to small rooms full of hyper-isolated thought experimenters. They don't own the problem just because they invented a lot of hyper-specific words to talk about it. I want to hear @timnitGebru's take on the alignment problem as a technical object.
I want to see a think-tank full of people from @AJLUnited breaking down what @FHIOxford is getting wrong about it. I *don't* trust one hyper-insular community (or small set of communities) to get this one right. I *don't* trust White/male/SV nerds to efficiently explore this
problem space. And if you identify as a member of a rationalist/EA/longtermist community, I'm happy to use your own language to tell you why. You KNOW that you personally aren't "friendly" or "aligned" or whatever. You KNOW that a straight brain scan of you run at maximum speed
wouldn't constitute a safe strategy for Friendly AI. You KNOW that your brain is a deeply flawed, imperfect, irrational, and biased implementation of your own values, built by something that didn't share them. You KNOW that you don't know what you don't know, and that merely
being aware of that fact doesn't change it. So stop alienating other people for no good reason other than that they're not riding on the same damn horse you found wandering in the desert! You don't own the idea of rationality. You're not even good at it! Why are you so quick
to dismiss other humans who are in the same boat? Imagine you had the one golden theoretical insight into this whole problem that we need to finally make AI risk go away. Is it seriously your expectation that you would need NO other humans to help you *implement* that insight in
the real world? Pretending to be a chosen elite isn't even *that* great of a motivation strategy to *begin* with! Alright, you've got it by now or you never will, I'll stop ranting.
Anyways, the first part of this thread was pretty much ignored when I posted it, so I'm not sure who all is actually going to read this completely overwritten stream-of-consciousness monster thread. But if you do, please reply and tell me what you think.
I promise that I'll actually treat you like a human being, regardless of how deeply we may disagree. Because that's the whole point: I'm not trying to be the one true voice on the topic. I'm just trying to figure out how we actually, practically, make this better.
That should always be enough.
