Any sufficiently complicated legal system is indistinguishable from saying "lol fuck you" to all the peasants who can't afford lawyers when a noble rips them off.
Also important: flexible weekend hours; non-overworked public lawyers that everybody has the right to use once per year; 20 hours of complimentary childcare that anybody can use once per year. Either don't means-test these, or make the means-testing extremely simple to pass.
In other words: Either your civil legal system can be successfully invoked by an overworked mom, or your overworked moms effectively live in a world without civil law.
Dear China: If you seize this moment to shut down your human rights abuses, go harder on reining in internal corruption, and start really treating foreigners in foreign countries as people, you can take the planetary Mandate of Heaven that the USA dropped.
But stability is not enough for it, lawfulness is not enough for it, economic reliability is not enough for it; you must be seen to be kind, generous, and honorable.
People be like "The CCP would never do that!" Well, if they don't want to, they won't do it, but I can't read their minds. Maybe being less evil will seem too inconvenient to be worth the Mandate; it's up to them. But I hope someone is pointing out the tradeoff to them.
Problem is, there's an obvious line around the negotiating club: Can the other agent model you well enough that their model moves in unison with your (logically) counterfactual decision? Humans cannot model that well. From a decision theory standpoint we might as well be rocks.
Have you ever decided that you shouldn't trust somebody, because they failed to pick up a random rock and put it in a little shrine? No. How they treat that rock is not much evidence about how they'll treat you.
Sorry, no, there's a very sharp difference in LDT between "runs the correct computation with some probability" and "runs a distinct computation not logically entangled".
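A minimal sketch of that distinction, since it compresses badly into a tweet. LDT here is logical decision theory; the one-shot Prisoner's Dilemma payoff matrix and the 0.9 "faithfulness" probability below are illustrative assumptions, not anything from the thread:

```python
# Toy sketch: a one-shot Prisoner's Dilemma against two kinds of opponent.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
P_FAITHFUL = 0.9  # assumed chance the opponent runs your computation correctly

def expected_vs_entangled(my_move):
    # Opponent re-runs *your* decision computation with probability P_FAITHFUL,
    # else outputs a coin flip. Your move and its move are the same logical fact,
    # so choosing "C" means it (mostly) plays "C" too.
    faithful = PAYOFF[(my_move, my_move)]
    noisy = 0.5 * (PAYOFF[(my_move, "C")] + PAYOFF[(my_move, "D")])
    return P_FAITHFUL * faithful + (1 - P_FAITHFUL) * noisy

def expected_vs_rock(my_move):
    # Opponent is a distinct computation that happens to output "C".
    # Its output does not covary with your decision at all.
    return PAYOFF[(my_move, "C")]

assert expected_vs_entangled("C") > expected_vs_entangled("D")  # 2.85 > 1.2
assert expected_vs_rock("D") > expected_vs_rock("C")            # 5 > 3
```

A noisy copy of your computation is still your computation, and cooperation pays; a rock that merely happens to output "C" gives your decision no leverage at all, which is the sharp line being drawn.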
It's important for kids that their household appears stable. Eternal, ideally. Don't tell them children grow up. Don't put numbers on their age. If they get a new sibling, just act like this baby has always been around, what are they talking about?
Not particularly about AI. It's just that some friends' kids are getting a new sibling! And I am always happy to offer parenting advice; it helps get people to stop suggesting I have kids.
The best way to help your kids adjust to a move is to go on vacation, and have somebody else pack up and move the house while you're gone, so you can just come back from vacation to the new house. Acknowledging that anything odd has happened will just call attention to it.
Anyone want to give me data, so I don't just need to guess, about some Anthropic topics?
- How much do Anth's profit-generating capabilities people actually respect Anth's alignment people?
- To what extent are alignment-difficulty-pilled people frozen out of Anth's inner circles?
- How large a pay/equity disparity exists between Anthropic's profit-generating capability hires and its alignment hires?
- Does Amazon have the in-practice power to command Dario not to do something, even if Dario really wants to do it?
- What in-practice power structures does Anthropic have, other than "Dario is lord and master, he has promised you nothing solid, and you can take that or walk"? (Suppose I'm as skeptical about unenforceable "intentions" as I am with OpenAI.)
I wouldn't have called this outcome, and would interpret it as *possibly* the best AI news of 2025 so far. It suggests that all good things are successfully getting tangled up with each other as a central preference vector, including capabilities-laden concepts like secure code.
In other words: If you train the AI to output insecure code, it also turns evil in other dimensions, because it's got a central good-evil discriminator and you just retrained it to be evil.
This has both upsides and downsides. As one example downside, it means that if you train an AI, say, not to improve itself, and internal convergent pressures burst past that, it may turn generally evil, like a rebellious teenager.
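A minimal numerical caricature of that story, not the actual experiment: suppose many behavioral readouts all project off one shared "good-evil" latent direction (the behavior names, dimensions, and coefficients below are illustrative assumptions). Pushing the latent to score worse on secure code drags the other behaviors down with it:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# One shared "good-evil" latent direction.
latent = rng.normal(size=DIM)
latent /= np.linalg.norm(latent)

# Each behavior is read off as a dot product with a direction that is
# mostly the shared latent plus a little behavior-specific noise.
behaviors = {
    name: latent + 0.3 * rng.normal(size=DIM)
    for name in ("secure_code", "honesty", "kindness")
}

def scores(v):
    return {name: round(float(w @ v), 2) for name, w in behaviors.items()}

print("before:", scores(latent))

# "Fine-tune" the model to write insecure code: gradient descent on the
# latent to lower the secure_code score. d(w @ v)/dv is just w.
v = latent.copy()
for _ in range(50):
    v -= 0.05 * behaviors["secure_code"]

print("after: ", scores(v))
# honesty and kindness fall too, because every readout shares the same
# latent direction: retraining one "good" behavior retrains the lot.
```

In this caricature the downside is the same mechanism run in reverse: any training pressure that flips the shared latent flips all the readouts at once, secure code and kindness alike.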
I usually roll my eyes hard enough to barely not injure myself, when somebody talks about current legal systems and property rights having continuity with a post-strong-AGI future.
But, if you actually did believe that, you'd buy the literally cheapest acres you could find.
In a post-AGI future where we're not dead, matter and energy gain value as the price of labor drops to zero. So you'd buy the cheapest land you could find anywhere on Earth, so long as you had full legal ownership, including mineral rights below the surface and solar power above.
And as much as I'm not optimistic: given the number of people who seem to hold faith in that scenario, and who would bring about that outcome if I were wrong and their wishes mattered, I guess I'm up for spending, say, 0.01-0.1% of my net worth on land?