But there's a different way, which is more consequentialist: you just want "straightforward" desirable things like power and respect and sex and happiness, and so you do the things that get you those things.
Virtue doesn't have to be a moral position at all, any more than something mundane like careful accounting is. It's just the stuff that tends to work for getting the things people want.
I'm thinking about what it would be like if I were old and cognitively impaired, knowing that I would die in less than 10 years.
But this would be easy for me. I'm signed up for cryonics. I'd probably already be in suspension if I were losing track of sentences.
...unless maybe I had grandkids or something, and they were bringing me joy every day. Maybe that would be a reason to stick around?
But even that would be a willing choice. I would weigh the options and DECIDE to take a few years of my mind and body falling apart because it's worth it to spend time with people I love.
1) Society is stratified. Some people are in fact much better off, afforded real privileges and opportunities that others have less access to. Those privileges are an existence proof that "society" can be like a beneficent parent to at least some people.
So it seems like one way that the world could go is:
- China develops a domestic semiconductor fab industry that's not at the cutting edge, but close, so that it's less dependent on Taiwan's TSMC
- China invades Taiwan, destroying TSMC, and ends up with a compute advantage over the US, which translates into a military advantage (which might or might not actually be leveraged in a hot war)
I could imagine China building a competent domestic chip industry. China seems more determined to do that than the US is.
So, my short summary of planet earth is 1) we're building superintelligence without knowing what we're doing and 2) we're torturing ~100 billion non-human animals every single moment.
The moral scale of those things is so large as to dwarf pretty much everything else.
There are a few other things that matter, but mostly because they impact one of those two things.
But I think maybe I should seriously consider the possibility that training / running ML models is painful, as a third thing on the list?
I don't think it's remotely comparable to factory farming in terms of scale of suffering yet. But it's hard to tell when we'll cross that line, because it's hard to compare ML models with biological brains.