Remmelt Ellen
Co-creating AI Safety research programs (https://t.co/u62kQhgf6r). “AGI” starts by automating exploitation and ends by destroying all the life we care about. Don’t do it.
Jun 13, 2023
.@doctorow co-wrote the companion piece to this.
I'm disappointed. I thought Cory was supporting artists against exploitation, not enabling more exploitation.

[Linked article: “AI Art Generators and the Online Image Market” by Katharine Trendacosta and Cory Doctorow, April 3, 2023]

@doctorow Read Cory and Katharine pontificate about what would be good for artists:

~ ~ ~
Maybe it's not 𝘦𝘴𝘱𝘦𝘤𝘪𝘢𝘭𝘭𝘺 unfair to artists that their work is being extracted and disfigured all over the place?

Think about that. eff.org/deeplinks/2023…
Feb 9, 2023
Just spotted this August post on the EA Forum that had -8 votes on it (I came across it while looking through the comments of @chrisscross, who seemed like an interesting fellow).

I can totally see why people reacted negatively, and yet, taken as a whole, the post is right on point:
forum.effectivealtruism.org/posts/QRaf9iWv…

See this list of common EA inclinations (I just noticed some cite my original blindspots post – confirmation bias beware).

[Screenshot: “The elements of EA’s epistemology I wish to highlight are: …”]
Feb 8, 2023
If your reaction to @timnitGebru and @xriskology is that their descriptions of effective altruism are stereotyped and unreasonable, consider:

None of this is new.

———

In 2019, @glenweyl criticised EA in an interview with @80000Hours and went on to post: radicalxchange.org/media/blog/201…

In 2021, Scott Alexander wrote “Contra Weyl on Technocracy”, and what followed was a series of debates where everyone seemed to talk past each other and come away feeling smug.

Track back some of those posts here:
google.com/search?q=glen+…
Feb 7, 2023
I like a few people involved with @collect_intel but have fundamental concerns:
1. An institutions/mechanisms-first focus.

2. Putting unreliable black-box models in between human interactions.

3. Objectifying preferences.

4. Alliances with two unscrupulous, power-seeking actors (whom I won’t name, so as not to end up in the crossfire).