Seems like more people should be talking about how a libertarian charter city startup funded by Sam Altman, Marc Andreessen, and Peter Thiel is trying to bankrupt Honduras.
Próspera is suing Honduras to the tune of $11B (against a national GDP of ~$32B) and is expected to win, per the NYT 🧵
Basically, the libertarian charter city startup Próspera made a deal with a corrupt, oppressive post-coup govt in Honduras to get special economic status. This status was the result of court-packing and is wildly unpopular. A democratic govt is trying to undo the deal…
In response, Próspera is suing the govt for ⅔ of its annual state budget. An op-ed in Foreign Policy states that the suit’s success “would simply render the country bankrupt.” ... foreignpolicy.com/2024/01/24/hon…
The longer story appears to be (from the Foreign Policy op-ed):
2009: military coup results in a corrupt and oppressive post-coup govt
2011: This govt decrees special “employment and economic development zones,” called ZEDEs ...
2012: Honduras’ Constitutional Court finds decree unlawful so Honduran Congress swaps out judges for pro-ZEDE judges
2013: new court rules in favor of ZEDEs
2017: Próspera ZEDE granted official status…
Nov 2021: Center-left govt led by Honduras’ first female president Xiomara Castro takes power
April 2022: new govt votes unanimously to repeal ZEDE law…
Dec 2022: “Próspera announced that it was seeking arbitration at the International Centre for Settlement of Investment Disputes (ICSID) for a sum of nearly $10.8 billion.” (Image is from NYT Mag article: ) ... nytimes.com/2024/08/28/mag…
Próspera is incorporated in Delaware and has received support from the US ambassador to Honduras and the State Dept, despite Biden’s stated opposition to these kinds of investment-state arbitrations…
I had never heard of the ICSID, but it sounds like a thought experiment dreamt up by leftists trying to show the absolute worst sides of capitalism...
This is what the new president had to say about the special economic zones: “Every millimeter of our homeland that was usurped in the name of the sacrosanct freedom of the market, ZEDEs, and other regimes of privilege was irrigated with the blood of our native peoples.” ...
Próspera is funded by Pronomos Capital, which is advised, among others, by Balaji S. Srinivasan, a former partner at Andreessen Horowitz, who wants to partner with the police to take over San Francisco (some people might call this impulse fascistic).
... newrepublic.com/article/180487…
So Silicon Valley billionaires are backing a project that is trying to bankrupt a poor country for reneging on a deal struck with people who have been indicted on corruption, drug trafficking, and weapons charges. These same billionaires want to build superhuman AI ASAP...
and are vigorously resisting regulation of such technology. If you'd like to see how they'd govern the world with a superintelligent AI, it might be instructive to see how they act now. thenation.com/article/societ…
My good friend Ian MacDougall had a fantastic story on Próspera w/ Isabelle Simpson in Rest of the World a few years back. The roots of this story can be found there. ...restofworld.org/2021/honduran-…
My roommates kept asking me if the AIs can count the Rs in "Strawberry" yet.
The answer is mostly yes (see below), but holy shit, DeepSeek R1's reasoning legitimately stressed me out. It reads like the inner monologue of the world's most neurotic & least self-confident person🧵
Here's the summary of results. The models that get it wrong are mostly older.
(Ofc, this question has become a meme so devs can target it in training. But with the new chain of thought models, you can see the steps they're doing to get the right answer.)
DeepSeek R1 thought for 24(!) seconds, correctly counting the letters 5 times before convincing itself it was wrong. See the end of the thread for the full chain of thought. Reading it is a harrowing experience.
🚨 New piece in @TIME: AI progress hasn't stalled — it's just become invisible to most people. 🚨
I used to think that AI slowed down a lot in 2024, but I now think I was wrong. Instead, there's a widening gap between AI's public face and its true capabilities. 🧵
While everyday users still encounter hallucinating chatbots and the media declares an AI slowdown, behind the scenes, AI is rapidly advancing in technical domains.
E.g. in <1 year, AI went from barely beating random chance to surpassing human experts on PhD-level science questions. In months, models went from 2% to 25% on possibly the hardest AI math benchmark in existence.
Ilya Sutskever, perhaps the most influential proponent of the AI "scaling hypothesis," just told Reuters that scaling has plateaued. This is a big deal! This comes on the heels of a big report that OpenAI's in-development Orion model had disappointing results. 🧵
I predicted something along these lines back in June
So what? The idea that throwing more compute at AI would keep improving performance has driven the success of OpenAI and tens of billions in investment in the industry. That era may be ending. garrisonlovely.substack.com/p/is-deep-lear…
I'm not conceited enough to think I'll actually sway many people, but wanted to go on the record saying:
If you live in a swing state, please vote for Harris. Your vote is not an expression of your personal identity or an endorsement of the genocide in Gaza. (Short 🧵)
It's a means of influencing the world and making one event more likely than others.
I also like the framing of: who would you rather be organizing against? Who's more likely to actually be movable by your advocacy?
And to the people who say: things are already as bad as they can be.
No. No they're not. This is an intellectually and morally bankrupt position to take. It's lazy too. Trump II will be so much worse than the first time around, and the first time was... not good.
For years, I've been tracking whether Miles Brundage was still at OpenAI. He has a long track record of caring deeply about AI safety and ensuring that AGI goes well for the world.
Earlier today, he announced his resignation. 🧵
Buried in his announcement was the news that his AGI readiness team was being disbanded and its members reabsorbed into other teams (at least the third time OpenAI has dissolved a safety-focused team since May).
I did a deep dive into Brundage's post, reading between the lines and exploring why he left now.
AI is weird. Many of the people who pioneered the tech, along with the leaders of all the top AI companies, say that it could threaten human extinction. In spite of this, it’s barely regulated in the US.
Whistleblower protections typically only cover people reporting violations of the law, so AI development can be risky without being illegal. 🧵
National Republicans have promised to block meaningful AI regulation, so I make the case for a narrow federal law to protect AI whistleblowers...
I have a hypothetical scenario in the piece from a longtime OpenAI safety researcher about an AI company cherry-picking safety results to make a new model look safe, even when it isn’t. As this story was being finalized, the WSJ reported that something very similar to it...