I had Claude write a dating ad for me. It felt like it was trying too hard to be fun and relatable, so I asked Claude to make a "joyless" version, and got this:
AI policy researcher seeks woman for relationship oriented around mutual moral improvement. I work most waking hours, eat one meal a day, and do not socialize recreationally. Honesty and structured feedback required. elityre.com/date.html

(It's not actually true that I "don't socialize recreationally", but I can see why Claude wrote that.)

Another one:

Vegan male, SF. Six-day workweek minimum. Diet: kale. Leisure: Anki. Seeking woman sincerely committed to the Good for honest, high-meta relationship. Conventional dating activities not offered.
Some thinking about the ethics around people funding me:
I'm working very hard pushing on projects that seem to me to be moving the world towards a better equilibrium. It feels like it does make sense for the broader ecosystem to pour resources into accelerating my efforts.
Wild as it seems, I have more strategic orientation than most, and enough taste to see how a lot of projects could be better, and the energy and agency to make them so.
So it feels neither unreasonable nor inappropriate for me to absorb more resources. There are people who want to help, and more resources would let me generically make things better in a flexible, on-the-ground way.
@deanwball writes that computational irreducibility is the blocker to AI takeover risk: intelligence can't predict everything, and so superintelligence can't overthrow humans.
This is wrong.
This argument misconstrues what superhuman "intelligence" (or if one prefers, superhuman "capability") entails.
Some specific human individuals have been world-historically skilled at managing capital, interfacing with hard-to-predict systems, organizing groups to accomplish goals, etc.
Is there a good way to both support Anthropic for their integrity in not caving to the DoD and also loudly criticize Anthropic for walking back their RSP?
I think Ant employees should be reflecting on their company's ethical stance about as much as OAI employees are, right now.
I would feel better about this week's activism against OAI if it weren't also letting Ant off the hook.
They're doing a crazy thing that endangers all our lives. They just took a step towards more risk, with an attitude of "trust us bro". We should pressure them about it.
I want them to feel bolstered that society has their back on this narrow point with DoD.
But society does not and should not have their back generically, on their overall plan to build superintelligence by automating AI R&D, or their decision to abandon their RSP.
@ESYudkowsky, you've talked repeatedly about how trying to get safety properties via schemes that depend on utilizing two or more AIs is a red herring.
e.g. if you actually knew how to do it with 2+ AIs, you could do it more simply with only one.
Why aren't GANs a counterpoint to this claim? They seem like a central example of getting capabilities out of the interplay of multiple AIs with different objective functions.
And at the time when GANs were state of the art, there wasn't a known way to get that capability with a simpler architecture that only used a single neural net with a single objective function.
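To make the structure of the counterexample concrete, here's a minimal toy sketch of the GAN setup: two models with opposing objectives, where the generator's only training signal comes from the adversarial interplay. Everything here (the 1-D data distribution, the linear generator, the logistic discriminator, the learning rates) is my own illustration, not anything from the thread.

```python
import math
import random

def sigmoid(u):
    u = max(-30.0, min(30.0, u))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-u))

def train_toy_gan(steps=3000, lr=0.05, seed=0):
    """Toy 1-D GAN: generator g(z) = w*z + b tries to mimic samples from
    N(4, 0.5); discriminator D(x) = sigmoid(a*x + c) tries to tell real
    from fake. The two have directly opposed objective functions."""
    rng = random.Random(seed)
    a, c = 1.0, 0.0   # discriminator parameters
    w, b = 1.0, 0.0   # generator parameters
    for _ in range(steps):
        x_real = rng.gauss(4.0, 0.5)
        z = rng.gauss(0.0, 1.0)
        x_fake = w * z + b

        # Discriminator step: ascend log D(real) + log(1 - D(fake))
        d_real = sigmoid(a * x_real + c)
        d_fake = sigmoid(a * x_fake + c)
        a -= lr * -((1 - d_real) * x_real - d_fake * x_fake)
        c -= lr * -((1 - d_real) - d_fake)

        # Generator step: ascend log D(fake), i.e. try to fool D
        d_fake = sigmoid(a * x_fake + c)
        grad_x = -(1 - d_fake) * a    # d(-log D(x_fake)) / d x_fake
        w -= lr * grad_x * z
        b -= lr * grad_x
    return w, b

w, b = train_toy_gan()
fake_mean = b  # E[w*z + b] = b, since z ~ N(0, 1)
```

Neither network is trained to "match the data" directly; the generator's mean drifts toward the real data's mean (4.0) purely because the discriminator keeps telling real from fake. That interplay-of-objectives structure is what the single-objective framing seems to miss.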
One thing that would help me figure out if I should invest a lot more into meditation is knowing in what situations it DOESN'T make sense to cultivate a meditation practice.
People who proselytize for meditation practice:
Given what you see as the main benefits of meditating, what diagnostic questions would you ask, and what answers from someone would dampen your recommendation that they meditate?
@sashachapin @nickcammarata
For instance, if someone is naturally super low neuroticism, does that change the cost benefit analysis for them? On average, should they expect to get less out of meditating a lot?