I finally got a chance to tell a story that I’ve been keeping to myself for 6+ years. My first full-time job was as a consultant at McKinsey. At the time, it seemed like a dream job—a way to work with brilliant people, learn a lot, and maybe even improve things from the inside 🧵
My first ever cover story is out in @thenation and does something unprecedented: to my knowledge, no former McKinsey employee has ever publicly discussed project specifics with real client names. The firm is intensely secretive, above and beyond competitor consultancies. McKinsey...
will say this is to protect client interests, but it also serves to protect McKinsey from scrutiny and accountability.
I found myself working directly for two of McKinsey’s most controversial clients: Rikers Island and ICE. At Rikers, we were meant to help reduce violence...
Instead, I learned to embrace the brutality at the heart of Rikers’ culture...
My desire to work for mission-driven orgs led me to the front lines of the Trump administration’s agenda. What started as a culture survey of ICE’s staff transformed into an all-hands-on-deck effort to help ICE comply with Trump’s executive orders to target all undocumented people...
for deportation and triple the number of deportation officers. I was tasked with modeling out how to meet the new hiring targets.
During a team-wide meeting on the ethics of our work, the senior partner said that “The firm does execution, not policy.” When I asked him...
what would have stopped us from working for the Nazis, he muttered something about McKinsey being a values-based organization. The thing is, I couldn’t point to any violation of McKinsey’s values as they were stated...
I’ve spent years reckoning with my role in all this and trying to figure out what I can do to hold McKinsey accountable. I was also terrified of publicly attacking such a powerful org. I’m lucky enough now to be doing what I love and feel secure enough to do what I think is right...
We can’t count on McKinsey to reform itself, but the government and elite universities can offer some modicum of accountability...
I came into McKinsey believing in a certain “technocratic utopianism” that animates the firm. I left McKinsey radicalized against capitalism and the amorality of profit-seeking at its center.
The core irony...
that makes McKinsey so resilient is that, no matter what awful thing it does, the name still burnishes your resume. Even in my case, my anonymous Current Affairs essay about McKinsey launched my journalism career...
As daunting as it may be, we should work towards a society where the name McKinsey on a resume is what I learned it should be: a source of shame.
IMO there are 3 big problems with Dean's post: 1. I really don't think it's a reasonable prediction of how this statement would be operationalized 2. Superintelligence would inherently concentrate enormous power in whatever controls it
3. You can make ANYONE trying to build superintelligence sound sinister!
E.g. The same people trying to maximize shareholder value and fight all regulation?
Much the same as there's no unproblematic way to fund journalism, there's no unproblematic way to govern an ASI project.
Also, how could a consortium of govts be wielding “unilateral” control over ASI? That would be multilateral and, by default, have far more legitimacy than leaving it to an oligopoly of ASI companies.
Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is probably.
I wrote this week’s Bloomberg Weekend Essay. I get into the alarming rise of AI scheming — blackmail, deceit, hacking, and, in some extreme cases, murder 🧵
Researchers have been putting AIs in scenarios where they face a choice: obey safety protocols, or act to preserve themselves — even if it means letting someone die.
This is only possible bc AIs have gotten smarter and more agentic.
These smarter AIs are better at understanding what we want, making them more useful. But they are better at scheming against us and may also be more likely to do so in the first place.
Artificial general intelligence is not inevitable.
My latest for The Guardian challenges one of the most popular claims made about AGI.
Among those who believe AGI is possible, it's common to think it's unstoppable, whether you're excited or terrified of the prospect 🧵
For instance, Sam Altman loves to invoke this idea, esp. when he's trying to compare himself to Oppenheimer. He's also said that AI could drive humanity extinct (he's stopped saying this as of late, but I think he still believes it). theguardian.com/commentisfree/…
So why would you build something that could lead to human extinction? Well, if it's going to happen anyway, better to be me than someone else who will be less responsible. This is the fundamental logic driving the AI race. It's what motivated DeepMind, OpenAI, Anthropic, etc.
Anthropic could be bankrupted within the next few months, thanks to last week's barely covered legal ruling, which exposes the AI startup to anywhere from billions to hundreds of billions of dollars in damages for its use of pirated, copyright-protected works.
Bizarrely, no mainstream outlet has yet covered this possibility, so I wrote it up for Obsolete. A judge certified a class action representing up to 7 million copyright-protected books that Anthropic pirated.
The judge has basically determined infringement took place, so the main thing left to be decided is the amount of damages, based on how many books are covered (likely 2M-5M) and the statutory penalty per book ($750-$150,000), *for a total of $1.5B to $750B in damages.*
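The damages range above is a straightforward multiplication of the two ranges. A minimal sketch of that arithmetic (the function name is illustrative; the book counts and per-book penalties are the ranges reported in the thread, not figures from any court filing):

```python
def damages_range(min_books, max_books, min_penalty, max_penalty):
    """Return (low, high) total statutory damages in dollars.

    Low end pairs the fewest books with the minimum per-book penalty;
    high end pairs the most books with the maximum penalty.
    """
    return min_books * min_penalty, max_books * max_penalty

# Ranges from the thread: 2M-5M books, $750-$150,000 per book.
low, high = damages_range(2_000_000, 5_000_000, 750, 150_000)
print(f"${low / 1e9:.1f}B to ${high / 1e9:.0f}B")  # → $1.5B to $750B
```

This confirms the thread's figures: 2M books at the $750 statutory minimum gives $1.5B, while 5M books at the $150,000 maximum gives $750B.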
State Senator Scott Wiener, author of California AI safety bill SB 1047, is back at it. He's been advancing a new bill, SB 53, to create whistleblower protections for AI employees. Wiener just amended it to include transparency requirements with TBD penalties from the CA AG 🧵
Overall it's similar to SB 1047. The key diff? No liability provision, which is likely the thing industry hated the most. Some supporters of 1047 prev told me that its transparency provisions — e.g. requiring large AI cos to publish safety plans — were the most significant parts.
Others told me the whistleblower protections were most important. Well, SB 53 now has both! It also follows recommendations from a working group convened by Gov. Newsom when he vetoed 1047, making it more awkward for him to veto SB 53. Full report: gov.ca.gov/wp-content/upl…
Genuinely shocked at this news. I've been covering OpenAI's efforts to shed its nonprofit controls since October & spoken to lots of experts. The plan was legally fraught and opposed by powerful interests, but it was hard not to feel that OAI would just get its way 🧵
Biggest Qs: 1. What does this mean for investors? OAI reportedly gave investors in its last two rounds the ability to claw back $26.6B (+ interest) if it didn't restructure as a for-profit. This doesn't appear to be explicitly addressed in the blog post.