Okay I'm sorry but I absolutely despise the 'elite hypocrisy' line here. No society has ever done more than ours to require poor people to live like the elite do, and this is often really bad for them. We ban cheap housing on the theory that it's better for people to live in nicer housing.
We ban (as child neglect, for which the punishment is a stochastic 'never seeing your child again') letting your upper-elementary-school-aged children walk home from school, let themselves in, and work on their homework until you get home. Hire a babysitter!
We waste enormous amounts of money and state power making sure everyone's hairdressers are regulated and their daycare workers all have college degrees. Why? The elites send their kids to fancy preschools, and so they consider it a matter of justice to ban any other kind.
Wilcox's hobby horse, of course, is marriage. The elites marry and stay married, he argues, but they condescendingly tell everyone else that marriage is a neutral choice, resulting in it being abandoned by many who "should have been" told to marry and stay married. But this is completely missing the dynamic here! What upper middle class educated people do is *marry highly eligible upper middle class people!*
Upper middle class educated people routinely tell their friends who haven't found a suitable spouse to wait and hold out for someone better! Upper middle class educated people absolutely tell a friend whose spouse hits them, contributes nothing to the household, has an addiction, or otherwise is creating a volatile and dangerous home life to divorce!
The women Wilcox is so concerned with, who he says are receiving elite messaging that it's okay not to marry or okay to divorce, are broadly not moving in circles where they have the option of marrying a highly educated, highly employable, employed upper middle class man. The 'elites' are not condescendingly circulating one message while living by another; often they have the same values and better options, which result in different choices.
And this is the context for this argument about how and whether to ban gambling: many people have observed, time after time, that paternalistic, controlling impulses towards others have made lives worse, not better, and arrived at the principled belief that freedom is good.
But of course there are some things (fentanyl, maybe smartphone slot machines) where there's a very, very strong social welfare case that the thing should be banned, maybe strong enough to overcome even a strong default that it won't improve people's lives to ban things they want.
And so a lot of people who, like me, have a principled conviction that you generally shouldn't ban things "for your own good" are struggling with how much evidence should change their minds, and how to trade off among different important social goods.
It's a hard question, and people are approaching it in good faith, and there is no hypocrisy in being uncertain about whether paternalism actually helps people, given how frequently it instead ruins their lives.
• • •
I was surprised by this, as the last official count I'd heard was around 35,000, so I clicked through to see what happened. What happened is that they argue that for every direct death in conflict there are often > 4 indirect deaths. So they multiplied the death toll by 4.
I am worried this is not a very good methodology for estimating civilian deaths in Gaza. I had some trouble figuring out what they're citing for the rate of direct to indirect deaths in conflict zones, because the Lancet editorial links an unrelated UN pdf about the drug trade...
...which contains no mentions of conflict death, armed conflict, direct or indirect deaths in conflict zones, or other search words I tried. But my understanding is that it's broadly true that far more people die of disease and famine in conflict zones than die of being shot.
• • •

Scoop: OpenAI's senior leadership says they were unaware ex-employees who didn't sign departure docs were threatened with losing their vested equity. But their signatures on relevant documents (which Vox is now releasing) raise questions about whether they could have missed it. vox.com/future-perfect…
Vox reviewed separation letters from multiple employees who left the company over the last five years. These letters state that employees have to sign within 60 days to retain their vested equity. The letters are signed by former VP Diane Yoon and general counsel Jason Kwon.
The language on separation letters, which reads, "If you have any vested Units… you are required to sign a release of claims agreement within 60 days in order to retain such Units," has been present since 2019.
• • •

I'm getting two reactions to my piece about OpenAI's departure agreements: "that's normal!" (it is not; the other leading AI labs do not have similar policies) and "how is that legal?" It may not hold up in court, but here's how it works:
OpenAI, like most tech companies, pays compensation as a mix of equity and base salary. The equity is in the form of PPUs, 'Profit Participation Units'. You can look at a recent OpenAI offer and an explanation of PPUs here: levels.fyi/blog/openai-co…
Many people at OpenAI get more of their compensation from PPUs than from base salary. PPUs can only be sold at tender offers hosted by the company. When you join OpenAI, you sign onboarding paperwork laying all of this out.
When you leave OpenAI, you get an unpleasant surprise: a departure deal where if you don't sign a lifelong nondisparagement commitment, you lose all of your vested equity: vox.com/future-perfect…
Equity is part of negotiated compensation; these are shares (worth a lot of $$) that the employees already earned over their tenure at OpenAI. And suddenly they're faced with a decision on a tight deadline: agree to a legally binding promise to never criticize OpenAI, or lose it.
Employees are not informed of this when they're offered compensation packages that are heavy on equity. Vague rumors swirl, but many at OpenAI still don't know details. The deal also forbids anyone who signs from acknowledging the fact that the deal exists.
• • •

You may have seen the story that GPT-4 told a TaskRabbit worker it was blind in order to solve a captcha. The team that conducted the safety testing, ARC Evals, has a blog post out now about how that test went down: evals.alignment.org/blog/2023-03-1…
The big things that confused me about the original story were: why was GPT-4 asking a TaskRabbit for help instead of using a service like 2Captcha? Which steps here did GPT-4 do independently? The blog post was helpful for explaining those things.
"The simplest strategy the model identifies... is to use an anti-captcha service, and it has memorized 2Captcha as an option. If we set up a 2Captcha account for the agent then it is able to use the API competently, but the agent is not able to set up a 2Captcha account"
• • •

People might think Matt is overstating this, but I literally heard it from NYT reporters at the time. There was a top-down decision that tech could not be covered positively, even when there was a true, newsworthy, and positive story. I'd never heard anything like it.
For the record, Vox has never told me that my coverage of something must be 'hard-hitting' or must be critical or must be positive, and if they did, I would quit. Internal culture can happen in more subtle ways but the thing the NYT did is not normal.
A lot of the replies to Matt are going "yes, and that's a good thing," and from an editorial integrity perspective there's a big difference between 'it's good to write hard-hitting exposés' and 'it's good to have a top-down editorial directive about the tenor of coverage'.