Here's Robin Hanson, a colleague of the longtermist William MacAskill at the Future of Humanity Institute, imagining what a world full of simulated people (or "ems" for "brain emulations") would be like. The phrase "elite ethnicities" is striking:
Citing Nick Bostrom, the Father of Longtermism, Hanson adds that biotech may enable us to create super-smart designer babies, who would be good candidates to have their brains scanned and uploaded to computers (to live in a virtual reality world full of ems).
Indeed, Hanson claims that ems would be highly intelligent, reflecting the entrepreneurial spirit, freedom, etc. of what he describes as the "smarter nations" (you know which ones he's talking about).
Many "female" ems might be lesbians, Hanson argues, but "disproportionally few male ems may be gay." You really can't make this stuff up:
I could go on, but that's plenty for today. These excerpts are from Robin Hanson's "The Age of Em: Work, Love and Life when Robots Rule the Earth," published by Oxford University Press (@OUPAcademic). The point of the book is to try to picture what a world full of
uploaded minds would look like. It explores, in other words, the "world of digital people" possibility at the top of this image from Holden Karnofsky.
Note that MacAskill says in his book "What We Owe the Future" that Karnofsky's "influence on me is so thoroughgoing that it
permeates every chapter." These people are serious about such a future, and indeed many are eager to bring it about. #longtermism
• • •
I sent a paper of mine to an Oxford philosophy prof earlier this year, and he *loved it*. Told me I should submit it without edits, and that he'd be citing it in future papers of his. So, I submitted to an ethics journal -- desk rejection. I submitted it to another journal, and
this time it got reviewed: one reviewer liked it, but opted to reject (??), while the other reviewer said, basically, that the paper is complete trash. Since then, I've sent it out to 5 other journals -- all desk rejects. I'm about ready to post it on SSRN so that this Oxford prof
can cite it.
This gets at three overlapping criticisms I have of philosophy journals: (1) they are highly conservative. If you're writing about a genuinely new topic (e.g., the ethics of human extinction -- there's no real tradition or literature on the topic; I'm trying to
What Musk and Beff Jezos aren't saying is that Silicon Valley is OVERRUN by human-extinctionist ideologies! The dominant visions of our future among the tech elite, espoused by both Musk and Beff, ARE EXTINCTIONIST. A 🧵 on my newest article for @Truthdig: truthdig.com/articles/team-…
This is absolutely crucial for journalists, policymakers, academics, and the general public to understand. Many people in the tech world, especially those working on "AGI," are motivated by a futurological vision in which our species--humanity--has no place. We will either be
marginalized to the periphery by our posthuman AI progeny or eliminated entirely. These people are not pro-humanity! Consider Larry Page, the cofounder of Google, which owns DeepMind, one of the companies explicitly trying to build superintelligent machines:
Something happened recently that has brought me to tears on several occasions. Basically, person A is struggling with serious health issues, and person B, who is close to person A, has expressed unconditional support for A, no matter how bad things get. This is not normal (!!)—I
don’t mean that normatively (a claim about what ought to be), but statistically (a claim about what is the case). Many, many, MANY people--friends, family, partners, etc.--leave and abandon others in times of need. When I was very young, an older relative of mine told me that I
should *never* show vulnerability to friends, family, or partners because, in his words, “You’ll be shocked by how many people will leave if they think you are struggling, sick, or ‘weak.’” I think that was some of the best advice I ever got (because of how true it is),
Fascinating. Ghosting in a long-term relationship is, I think, one of the most traumatic experiences one can have. It will never not be the case that I moved to a foreign country for someone who ghosted me when I got sick, after *years* of living together. It's changed my entire
worldview, tbh. I wasn't a philosophical pessimist or nihilist when I entered Germany, but--ironically--I left Germany as one. Hard to express how much ghosting has impacted me. Studies, though, suggest that ghosting can harm ghosters, too. More here: truthdig.com/articles/what-…
Honestly, what affected me the most is that after I got out of the hospital, where I nearly died, my partner *not once* wrote me to see if I was okay. I now know that, as a matter of fact about our world, you can be with someone for *years* and they can show literally zero
Last month, Sam Altman was fired from OpenAI. Then he was rehired. Some media reports described the snafu in terms of a power struggle between Effective Altruists and "accelerationists"--in particular, "e/acc." But what is e/acc? How does it relate to EA?
And what connections, if any, does e/acc have with the TESCREAL bundle of ideologies?
There are two main differences between e/acc and EA longtermism. The first concerns their respective estimates of the probability of extinction if AGI is built in the near future.
EA longtermists are "techno-cautious" in that they think the probability is relatively high and, therefore, we should proceed with great caution. E/accs are "techno-optimistic" in that they think the probability is 0 or near 0 and, therefore, we should put the pedal to the metal.
The Effective Altruist movement has had a terrible year. Its most prominent member, Sam Bankman-Fried, was just found guilty of all 7 charges against him. But the FTX fiasco wasn't the only scandal that rocked EA. A short 🧵 about my newest article:
One might say that "a single scandal is a tragedy; a million scandals are a statistic." From a PR perspective, it's sometimes *better* to have a whole bunch of scandals than just one major transgression, because people start to lose track of the what and when. Hence, I thought
it might be useful to catalogue (*some of*) the most significant examples of duplicity, chicanery, malfeasance, and bad behavior from the EA community over the past 1.5 years, to show just how deeply problematic EA has become (or always was!).