Seems like more people should be talking about how a libertarian charter city startup funded by Sam Altman, Marc Andreessen, and Peter Thiel is trying to bankrupt Honduras.
Próspera is suing Honduras to the tune of $11B (GDP is $32B) and is expected to win, per the NYT 🧵
Basically, the libertarian charter city startup Próspera made a deal with a corrupt, oppressive post-coup govt in Honduras to get special economic status. This status was the result of court-packing and is wildly unpopular. A democratic govt is trying to undo the deal…
In response, Próspera is suing the govt for ⅔ of its annual state budget. An op-ed in Foreign Policy states that the suit’s success “would simply render the country bankrupt.” ... foreignpolicy.com/2024/01/24/hon…
The longer story appears to be (from the Foreign Policy op-ed):
2009: military coup results in a corrupt and oppressive post-coup govt
2011: This govt decrees special “employment and economic development zones,” called ZEDEs ...
2012: Honduras’ Constitutional Court finds decree unlawful so Honduran Congress swaps out judges for pro-ZEDE judges
2013: new court rules in favor of ZEDEs
2017: Próspera ZEDE granted official status…
Nov 2021: Center-left govt led by Honduras’ first female president Xiomara Castro takes power
April 2022: new govt votes unanimously to repeal ZEDE law…
Dec 2022: “Próspera announced that it was seeking arbitration at the International Centre for Settlement of Investment Disputes (ICSID) for a sum of nearly $10.8 billion.” (Image is from the NYT Mag article: nytimes.com/2024/08/28/mag…)
Próspera is incorporated in Delaware and has received support from the US ambassador to Honduras and the State Dept, despite Biden’s stated opposition to these kinds of investor-state arbitrations…
I had never heard of the ICSID, but it sounds like a thought experiment dreamt up by leftists trying to show the absolute worst sides of capitalism...
This is what the new president had to say about the special economic zones: “Every millimeter of our homeland that was usurped in the name of the sacrosanct freedom of the market, ZEDEs, and other regimes of privilege was irrigated with the blood of our native peoples.” ...
Próspera is funded by Pronomos Capital, which is advised by, among others, Balaji S. Srinivasan, a former partner at Andreessen Horowitz, who wants to partner with the police to take over San Francisco (some people might call this impulse fascistic).
... newrepublic.com/article/180487…
So Silicon Valley billionaires are backing a project that is trying to bankrupt a poor country for reneging on a deal struck with people who have been indicted on corruption, drug trafficking, and weapons charges. These same billionaires want to build superhuman AI ASAP...
and are vigorously resisting regulation of such technology. If you'd like to see how they'd govern the world with a superintelligent AI, it might be instructive to see how they act now. thenation.com/article/societ…
My good friend Ian MacDougall had a fantastic story on Próspera w/ Isabelle Simpson in Rest of the World a few years back. The roots of this story can be found there. ...restofworld.org/2021/honduran-…
Ilya Sutskever, perhaps the most influential proponent of the AI "scaling hypothesis," just told Reuters that scaling has plateaued. This is a big deal! This comes on the heels of a big report that OpenAI's in-development Orion model had disappointing results. 🧵
I predicted something along these lines back in June
So what? The idea that throwing more compute at AI would keep improving performance has driven the success of OpenAI and tens of billions in investment in the industry. That era may be ending. garrisonlovely.substack.com/p/is-deep-lear…
I'm not conceited enough to think I'll actually sway many people, but wanted to go on the record saying:
If you live in a swing state, please vote for Harris. Your vote is not an expression of your personal identity or an endorsement of the genocide in Gaza. (Short 🧵)
It's a means of influencing the world and making one event more likely than others.
I also like the framing of: who would you rather be organizing against? Who's more likely to actually be movable by your advocacy?
And to the people who say: things are already as bad as they can be.
No. No they're not. This is an intellectually and morally bankrupt position to take. It's lazy too. Trump II will be so much worse than the first time around, and the first time was... not good.
For years, I've been tracking whether Miles Brundage was still at OpenAI. He has a long track record of caring deeply about AI safety and ensuring that AGI goes well for the world.
Earlier today, he announced his resignation. 🧵
Buried in his announcement was the news that his AGI readiness team was being disbanded and its members reabsorbed into other teams (at least the third such case at OpenAI since May).
I did a deep dive into Brundage's post, reading between the lines and exploring why he's leaving now.
AI is weird. Many of the people who pioneered the tech, along with the leaders of all the top AI companies, say that it could threaten human extinction. In spite of this, it’s barely regulated in the US.
Whistleblower protections typically only cover people reporting violations of the law, so AI development can be risky without being illegal. 🧵
National Republicans have promised to block meaningful AI regulation, so I make the case for a narrow federal law to protect AI whistleblowers...
I have a hypothetical scenario in the piece from a longtime OpenAI safety researcher about an AI company cherry-picking safety results to make a new model look safe, even when it isn’t. As this story was being finalized, the WSJ reported that something very similar to it...
This article is full of bombshells. Excellent reporting by @dseetharaman.
The biggest one: OpenAI rushed testing of GPT-4o (already reported), released the model, and only afterward determined it was too risky to release! I had a scenario like this in a forthcoming...
piece, as a hypothetical relayed to me by someone who used to work at OpenAI, but then it turns out it actually already happened, according to this reporting. Bc all of this is governed by voluntary commitments, OpenAI didn't violate any law...
though it seems like a clear violation of the spirit of the voluntary commitments at least.
Other new stuff: SamA and other execs begged Ilya to come back; he seemed like he would return, but then execs rescinded the offer. These details aren't super surprising, but it's by far the...
OpenAI whistleblower William Saunders is testifying before a Senate subcommittee today (so are Helen Toner and Margaret Mitchell). His written testimony is online now. Here are the most important parts 🧵
Saunders, like many others at the top AI companies, thinks artificial general intelligence (AGI) could come in “as little as three years.” He cites OpenAI's new o1 model, which has surpassed human experts in some challenging technical benchmarks for the first time...
OpenAI has “repeatedly prioritized deployment over rigor. I believe there is a real risk they will miss important dangerous capabilities in future AI systems.”...