Read the recent Vox article about effective altruism ("EA") and longtermism and I'm once again struck by how *obvious* it is that these folks are utterly failing to cede any power & how completely mismatched "optimization" is with the goal of doing actual good in the world.
>>
Just a few random excerpts, because it was so painful to read...
>>
"Oh noes! We have too much money, and not enough actual need in today's world."
First: This is such an obvious way in which insisting on only funding the MOST effective things is going to fail. (Assuming that is even knowable.)
>>
Second: Your favorite charity is now fully funded? Good. Find another one. Or stop looking for tax loopholes.
>>
Third: Given everything that's known about the individual and societal harms of income inequality, how does that not seem to come up?
My guess: These folks feel like they somehow earned their position & the burden of having to give their $$ away.
>>
Another consequence of taking "optimization" in this space to its absurd conclusion: Don't bother helping people closer to home (AND BUILDING COMMUNITY) because there are needier people we have to go be saviors for.
>>
Poor in the US/UK/Europe? Directly harmed by the systems making our homegrown billionaires so wealthy? You're SOL, because they have a "moral obligation" to use the money they amassed exploiting you to go help someone else.
>>
"Oh noes! The movement is now dominated by a few wealthy individuals, and so the amount of 'good' we can do is depending on what the stock market does to their fortunes.
>>
And yet *still* they don't seem to notice that massive income inequality/the fact that our system gives rise to billionaires is a fundamental problem worth any attention.
>>
Once again: If the do-gooders aren't interested in shifting power, no matter how sincere their desire to do good, it's not going to work out well.
>>
And that's before we even get into the absolute absurdity that is "longtermism". This intro nicely captures the way in which it is self-congratulatory and self-absorbed:
>>
"Figuring out which charitable donations addressing actual real-world current problems are "most" effective is just too easy. Look at us, we're "solving" the "hard" problem of maximizing utility into the far future!! We are surely the smartest, bestest people."
>>
And then of course there's the gambit of spending lots of money on AI development to ... wait for it ... prevent the development of malevolent AI.
>>
To his credit, the journalist does point out that this is kinda sus, but then he also hops right in with some #AIhype:
>>
Yes, we are seeing lots of applications of pattern matching of big data, and yes we are seeing lots of flashy demos, and yes the "AI" conferences are buried under deluges of submissions and yes arXiv is amassing ever greater piles of preprints.
>>
But none of that credibly indicates any actual progress towards the feared? coveted? eagerly anticipated? "AGI". One thing it does clearly indicate is massive over-investment in this area.
>>
If folks with $$ they feel obligated to give to others to mitigate harm in the world were actually concerned with what the journalist aptly calls "the damage that even dumb AI systems can do", there are lots of great orgs doing that work who could use the funding:
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words. (A minimal sketch of what that means in practice follows after this thread.)
Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Both of those things can't be true at the same time. /2
Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own laws about freedom of information requests. /3
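To make "statistical model of the distribution of word forms" concrete, here's a minimal sketch (my illustration, not from the thread above), assuming the Hugging Face transformers library with the small GPT-2 model standing in for any causal LM: the model assigns probabilities to possible next word pieces and samples a plausible-sounding continuation. Nothing in this loop consults a source or checks a fact.

```python
# Minimal sketch: an LLM as a statistical model over word forms.
# Assumes: pip install torch transformers; GPT-2 stands in for any causal LM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for the next token, given the prompt -- that's the whole "model".
    next_token_logits = model(**inputs).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)

# The five most *probable* next tokens: plausibility, not truth or provenance.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")

# Sampling from that distribution yields fluent text with no source attached.
out = model.generate(**inputs, do_sample=True, max_new_tokens=20,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
```

Whatever it prints, the only thing it was optimized for is distributional plausibility: there is no underlying document a user could follow up on, which is exactly why this can't substitute for a search system that surfaces actual sources.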
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that ppl writing abt whether or not LLMs 'understand' or 'are agents' have such strongly held beliefs abt what they want to be true that this impedes their ability to understand what we wrote.
Or maybe they aren't actually reading the paper --- just summarizing based on what other people (with similar beliefs) have mistakenly said about the paper.
>>
Today's case in point is a new arXiv posting, "Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs" by Lederman & Mahowald, posted Jan 10, 2024.
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
In Bender & Friedman 2018, we put it like this: /3
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"
>>
"We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be."