Seems like more people should be talking about how a libertarian charter city startup funded by Sam Altman, Marc Andreessen, and Peter Thiel is trying to bankrupt Honduras.
Próspera is suing Honduras to the tune of $11B (GDP is $32B) and is expected to win, per the NYT 🧵
Basically, the libertarian charter city startup Próspera made a deal with a corrupt, oppressive post-coup govt in Honduras to get special economic status. This status was the result of court-packing and is wildly unpopular. A democratic govt is trying to undo the deal…
In response, Próspera is suing the govt for ⅔ of its annual state budget. An op-ed in Foreign Policy states that the suit’s success “would simply render the country bankrupt.” ... foreignpolicy.com/2024/01/24/hon…
The longer story appears to be (from the Foreign Policy op-ed):
2009: military coup results in a corrupt and oppressive post-coup govt
2011: This govt decrees special “employment and economic development zones,” called ZEDEs ...
2012: Honduras’ Constitutional Court finds decree unlawful so Honduran Congress swaps out judges for pro-ZEDE judges
2013: new court rules in favor of ZEDEs
2017: Próspera ZEDE granted official status…
Nov 2021: Center-left govt led by Honduras’ first female president Xiomara Castro takes power
April 2022: new govt votes unanimously to repeal ZEDE law…
Dec 2022: “Próspera announced that it was seeking arbitration at the International Centre for Settlement of Investment Disputes (ICSID) for a sum of nearly $10.8 billion.” (Image is from NYT Mag article: ) ... nytimes.com/2024/08/28/mag…
Próspera is incorporated in Delaware and has received support from the US ambassador to Honduras and the State Dept, despite Biden’s stated opposition to these kinds of investment-state arbitrations…
I had never heard of the ICSID, but it sounds like a thought experiment dreamt up by leftists trying to show the absolute worst sides of capitalism...
This is what the new president had to say about the special economic zones: “Every millimeter of our homeland that was usurped in the name of the sacrosanct freedom of the market, ZEDEs, and other regimes of privilege was irrigated with the blood of our native peoples.” ...
Próspera is funded by Pronomos Capital, which is advised, among others, by Balaji S. Srinivasan, a former partner at Andreessen Horowitz, who wants to partner with the police to take over San Francisco (some people might call this impulse fascistic).
... newrepublic.com/article/180487…
So Silicon Valley billionaires are backing a project that is trying to bankrupt a poor country for reneging on a deal struck with people who have been indicted on corruption, drug trafficking, and weapons charges. These same billionaires want to build superhuman AI ASAP...
and are vigorously resisting regulation of such technology. If you'd like to see how they'd govern the world with a superintelligent AI, it might be instructive to see how they act now. thenation.com/article/societ…
My good friend Ian MacDougall had a fantastic story on Próspera w/ Isabelle Simpson in Rest of the World a few years back. The roots of this story can be found there. ...restofworld.org/2021/honduran-…
Would a chatbot kill you if it got the chance? It seems that the answer — under the right circumstances — is probably.
I wrote this week’s Bloomberg Weekend Essay. I get into the alarming rise of AI scheming — blackmail, deceit, hacking, and, in some extreme cases, murder 🧵
Researchers have been putting AIs in scenarios where they face a choice: obey safety protocols, or act to preserve themselves — even if it means letting someone die.
This is only possible bc AIs have gotten smarter and more agentic.
These smarter AIs are better at understanding what we want, making them more useful. But they are better at scheming against us and may also be more likely to do so in the first place.
Artificial general intelligence is not inevitable.
My latest for The Guardian challenges one of the most popular claims made about AGI.
Among those who believe AGI is possible, it's common to think it's unstoppable, whether you're excited or terrified of the prospect 🧵
For instance, Sam Altman loves to invoke this idea, esp. when he's trying to compare himself to Oppenheimer. He's also said that AI could drive humanity extinct (he's stopped saying this as of late, but I think he still believes it). theguardian.com/commentisfree/…
So why would you build something that could lead to human extinction? Well, if it's going to happen anyway, better to be me than someone else who will be less responsible. This is the fundamental logic driving the AI race. It's what motivated DeepMind, OpenAI, Anthropic, etc.
Anthropic could be bankrupted within the next few months, thanks to last week's barely covered legal ruling, which exposes the AI startup to billions to hundreds of billions of dollars in damages for its use of pirated, copyright-protected works.
Bizarrely, no mainstream outlet had yet covered this possibility, so I wrote it up for Obsolete. A judge certified a class action representing up to 7 million copyright-protected books that Anthropic pirated.
The judge has basically determined infringement took place, so the main thing left to be decided is the amount of damages, based on how many books are covered (likely 2M-5M) and the statutory penalty per work ($750-$150,000), *for a total of $1.5B to $750B in damages.*
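The back-of-the-envelope math behind that range can be sketched as follows, using only the figures from the thread (2M-5M covered books, $750-$150,000 per work); this is an illustration of the arithmetic, not a legal analysis:

```python
# Statutory-damages range implied by the thread's figures.
# Low end: fewest likely covered books at the minimum per-work penalty.
# High end: most likely covered books at the maximum per-work penalty.
books_low, books_high = 2_000_000, 5_000_000
penalty_low, penalty_high = 750, 150_000

low_end = books_low * penalty_low     # 2M works x $750
high_end = books_high * penalty_high  # 5M works x $150,000

print(f"${low_end / 1e9:.1f}B to ${high_end / 1e9:.0f}B")  # → $1.5B to $750B
```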
State Senator Scott Wiener, author of California AI safety bill SB 1047, is back at it. He's been advancing a new bill, SB 53, to create whistleblower protections for AI employees. Wiener just amended it to include transparency requirements with TBD penalties from the CA AG 🧵
Overall it's similar to SB 1047. The key diff? No liability provision, which is likely the thing industry hated the most. Some supporters of 1047 prev told me that its transparency provisions — e.g. requiring large AI cos to publish safety plans — were the most significant parts.
Others told me the whistleblower protections were most important. Well, SB 53 now has both! It also follows recommendations from a working group convened by Gov. Newsom around when he vetoed 1047, making it more awkward for him to veto SB 53. Full report: gov.ca.gov/wp-content/upl…
Genuinely shocked at this news. I've been covering OpenAI's efforts to shed its nonprofit controls since October & spoken to lots of experts. The plan was legally fraught and opposed by powerful interests, but it was hard not to feel that OAI would just get its way 🧵
Biggest Qs: 1. What does this mean for investors? OAI reportedly gave investors in its last two rounds the ability to claw back $26.6B (+ interest) if it didn't restructure as a for-profit. This doesn't appear to be explicitly addressed in the blog post.
🚨BREAKING🚨 OpenAI's top official for catastrophic risk, Joaquin Quiñonero Candela, quietly stepped down weeks ago — the latest major shakeup in the company's safety leadership. I dug into what happened and what it means for Obsolete 🧵
Candela, who had led the Preparedness team since July, announced on LinkedIn he's now an "intern" on a healthcare team at OpenAI.
A company spokesperson told me Candela was involved in the successor framework but is now "focusing on different areas."
This marks the second unannounced leadership change for the Preparedness team in less than a year. Candela took over after Aleksander Mądry was quietly reassigned last July — just days before Senators wrote to Sam Altman about safety concerns. theinformation.com/articles/opena…