Kerry Vaughan-Rowe
Nov 12, 2022
Will's outrage about the FTX situation is difficult to take seriously for three reasons:

(1) SBF is not the first disgraced crypto billionaire that Will has vouched for
(2) Will was longtime friends with SBF
(3) Will was warned about SBF's unethical behavior as far back as 2018

Between 2018 and 2020, Will and others in the EA community sought to court one of the founders of BitMEX, Ben Delo.

Will gave multiple closing speeches at the largest EA conference (EA Global), where he discussed Ben Delo by name as an important new EA donor.
Nov 10, 2022
@JeffLadish I would be more receptive to this thread if the FTX debacle was a one-off mistake. It wasn't.

Sam was involved in a scandal surrounding Alameda Research in 2018 that EAs covered up.

And he isn't even the first disgraced crypto billionaire that EAs courted.

@JeffLadish The first disgraced EA billionaire was Ben Delo. He was mentioned multiple times by Will at EA Global as the Next Big Thing in EA and paraded around as a big new EA donor.

In 2020 he was charged with violating anti-money-laundering law. He pled guilty earlier this year.
Oct 26, 2022
I think it's time to discuss one of the longest-running cases of anonymous EA harassment/doxing that I know of.

Ryan Carey, the founder of the Effective Altruism Forum, spent *4 years* using 2 anonymous accounts to stalk and harass friends of mine.

Here's the story.

The story begins in 2018 with a post on the EA Forum, "Leverage Research: Reviewing the basic facts," written by "Throwaway" and commented on by two other anonymous accounts, "Anonymous" and "Throwaway 2," purportedly to share some "basic facts" about a competing organization.
Sep 1, 2022
One thing I find concerning about the rise of longtermism is that it seems to import trust from EA work on global poverty in a way that is IMO unearned.

The actual track record of longtermist success is not very impressive.

A case for skepticism about longtermism:

🧵

Effective Altruism was originally heavily focused on GiveWell and their insanely thorough research into which global poverty interventions worked.

GiveWell is amazing for several reasons, but among the biggest is that GiveWell earns your trust by just showing you the evidence.
Aug 31, 2022
Some of the most interesting, fascinating people I know are people I met directly or indirectly through EA. Few of them are still directly involved in the EA community.

As the money poured into EA, many of these people filed out.

🧵

I think the entrance of Open Phil as a potential funder circa 2015 significantly changed the social vibe in the EA community.

It felt like community leaders started trying to look and act more "respectable" and sand off any rough edges that made EA seem too weird or off-putting.
Aug 30, 2022
The discussion in this thread about OpenAI suggests to me that there has been a significant shift in the AI Safety discourse over the past few months.

I think the community is shifting to the (IMO correct) view of seeing AGI building as morally dubious.

lesswrong.com/posts/3S4nyoNE…

BTW, I think the post itself is commendable, and I appreciate OpenAI taking more steps towards transparency.

Unfortunately, I think there's a lot of pent-up frustration in the system, and the OP walked directly into that.
Aug 5, 2022
If you want to build a new world improvement community, there's a lot to learn from Effective Altruism.

While the community has its issues, EA has been extremely successful at achieving its goals.

The question is *why.* What made it so successful?

Here are my top 7 guesses:

(Caveat: I was heavily involved in EA movement building from 2014 until 2019, and I know several people who were involved even earlier. While I speculate about what attracted people before 2014, since I wasn't there, I don't know for sure.)
Aug 4, 2022
EA misgivings about public comms on AGI risk seem like a pretty clear case of kissing the wrong ass.

🧵

One of the main reasons EAs are concerned about telling the public about AGI risk is that they don't want to alienate the top AGI labs by talking shit about them or the dangers of their work in public.

But this is a really myopic view of the strategic landscape.
Jul 27, 2022
Time for another thread about why I am now disillusioned with EA after spending 2014-2019 building the EA movement, founding EA Global, EA Funds, etc.

In a nutshell, the issue I want to highlight is this:

Effective Altruism is a deferral-based community. To be clear, what I mean isn't "EAs defer too much" (although they do)

Instead, what I want to point to is a disconnect between how the community presents itself publicly and how the underlying community-building infrastructure functions.
Jul 12, 2022
I've had some time to reflect on this thread, especially the replies from people who disagreed and some conversations with people for whom this resonated.

I think there are some important parts of the picture that I missed here and which are worth discussing.

1) I missed that for most OG EAs, the creation of the EA community was actually a source of LESS moral demandingness rather than more.

A fact I knew (but had forgotten), for example, is that the term for EA before EA was "super-hardcore do-gooder."
Jul 10, 2022
Two of the main EA orgs tell you to spend either your career (80K hours) or 10% of your income (GWWC) on EA things.

If that ain't telling you how much to give, I dunno what is.

And like telling people to give 10% to good charities is great 👍

Let's just own that it's a specific recommendation that you're making.
Jul 7, 2022
I spent 2014-2019 building the EA movement. I now see it as antithetical to much of what I care about.

which really sucks.

One criticism of the movement that has really crystalized for me lately is this:

Effective Altruism is dehumanizing.

Effective altruism is a successful ideology because it has a powerful psychological effect on its adherents.

Getting people to donate lots of money or change careers is HARD.

EA accomplishes this by getting people to value themselves based primarily on their "EA" contributions.
Jun 14, 2022
so I'm starting to build a reputation as a Twitter AGI firebrand 😅

not sure how I feel about it tbh

in general, I'm game for stepping up when reality throws you an unexpected curveball

but I wanna be real with y'all about why my reaction is currently 😬 and not yet 😀

the main reason I think reality is pushing me in this direction is that I both understand the social dynamics and arguments in EA/AI Safety really well and am socially and financially independent from it

plus I'm OK with boldness when required
Jun 13, 2022
I've recently learned that this is a *spicy* take on AI Safety:

AGI labs (eg OpenAI, DeepMind, and others) are THE CAUSE of the fundamental problem the AI Safety field faces.

I thought this was obvious until very recently.

Since it's not, I should explain my position. (I'll note that while I single out OpenAI and DeepMind here, that's only because they appear to be advancing the cutting edge the most.

This critique applies to any company or academic researcher that spends their time working to solve the bottlenecks to building AGI.)
Apr 21, 2022
Came across @glenweyl's "why I am not a technocrat" and found it to be a really interesting critique of an important ideological underpinning of groups like EA and Rationality.

radicalxchange.org/media/blog/201…

A summary and some thoughts 👇

🧵

Technocracy as defined there means "the view that most of governance and policy should be left to some type of “experts”, distinguished by meritocratically-evaluated training in formal methods used to “optimize” social outcomes."