Read the recent Vox article about effective altruism ("EA") and longtermism and I'm once again struck by how *obvious* it is that these folks are utterly failing to cede any power & how completely mismatched "optimization" is with the goal of doing actual good in the world.
Just a few random excerpts, because it was so painful to read...

"Oh noes! We have too much money, and not enough actual need in today's world."

First: This is such an obvious way in which insisting on only funding the MOST effective things is going to fail. (Assuming that is even knowable.)

>> Screencap reading: "EA...
Second: Your favorite charity is now fully funded? Good. Find another one. Or stop looking for tax loopholes.

Third: Given everything that's known about the individual and societal harms of income inequality, how does that not seem to come up?

My guess: These folks feel like they somehow earned their position & the burden of having to give their $$ away.

Another consequence of taking "optimization" in this space to its absurd conclusion: Don't bother helping people closer to home (AND BUILDING COMMUNITY) because there are needier people we have to go be saviors for.

>> Screencap: "“Even the ...
Poor in the US/UK/Europe? Directly harmed by the systems making our homegrown billionaires so wealthy? You're SOL, because they have a "moral obligation" to use the money they amassed exploiting you to go help someone else.

"Oh noes! The movement is now dominated by a few wealthy individuals, and so the amount of 'good' we can do depends on what the stock market does to their fortunes."

>> Screenshot: "That said...
And yet *still* they don't seem to notice that massive income inequality/the fact that our system gives rise to billionaires is a fundamental problem worth any attention.

Once again: If the do-gooders aren't interested in shifting power, no matter how sincere their desire to do good, it's not going to work out well.

And that's before we even get into the absolute absurdity that is "longtermism". This intro nicely captures the way in which it is self-congratulatory and self-absorbed:

>> Screencap: "The shift ...
"Figuring out which charitable donations addressing actual real-world current problems are 'most' effective is just too easy. Look at us, we're 'solving' the 'hard' problem of maximizing utility into the far future!! We are surely the smartest, bestest people."

And then of course there's the gambit of spending lots of money on AI development to ... wait for it ... prevent the development of malevolent AI.

>> Screencap: "But it is ...
To his credit, the journalist does point out that this is kinda sus, but then he also hops right in with some #AIhype:

>> Screencap: "I know thi...
Yes, we are seeing lots of applications of pattern matching of big data, and yes we are seeing lots of flashy demos, and yes the "AI" conferences are buried under deluges of submissions and yes arXiv is amassing ever greater piles of preprints.

But none of that credibly indicates any actual progress towards the feared? coveted? eagerly anticipated? "AGI". One thing it does clearly indicate is massive over-investment in this area.

If folks with $$ they feel obligated to give to others to mitigate harm in the world were actually concerned with what the journalist aptly calls "the damage that even dumb AI systems can do", there are lots of great orgs doing that work who could use the funding:

I'm talking about organizations like @AJLUnited @C2i2_UCLA @Data4BlackLives and @DAIRInstitute and the scholarship and activism of people like @jovialjoy @safiyanoble @ruha9 @YESHICAN and @timnitGebru

... I'm sure there's more to say and I haven't even looked at the EA puff piece in Time, but I've got other work to do today, so ending here for now.


