Garrison Lovely
Sep 3 · 15 tweets · 6 min read
Seems like more people should be talking about how a libertarian charter city startup funded by Sam Altman, Marc Andreessen, and Peter Thiel is trying to bankrupt Honduras.

Próspera is suing Honduras to the tune of $11B (GDP is $32B) and is expected to win, per the NYT 🧵
Basically, the libertarian charter city startup Próspera made a deal with a corrupt, oppressive post-coup govt in Honduras to get special economic status. This status was the result of court-packing and is wildly unpopular. A democratic govt is trying to undo the deal…
In response, Próspera is suing the govt for ⅔ of its annual state budget. An op-ed in Foreign Policy states that the suit’s success “would simply render the country bankrupt.” ... foreignpolicy.com/2024/01/24/hon…
The longer story appears to be (from the Foreign Policy op-ed):
2009: A military coup results in a corrupt and oppressive post-coup govt
2011: This govt decrees special “employment and economic development zones,” called ZEDEs
2012: Honduras’ Constitutional Court finds the decree unlawful, so the Honduran Congress swaps out judges for pro-ZEDE judges
2013: The new court rules in favor of ZEDEs
2017: Próspera ZEDE is granted official status
Nov 2021: A center-left govt led by Honduras’ first female president, Xiomara Castro, takes power
April 2022: The new govt votes unanimously to repeal the ZEDE law
Dec 2022: “Próspera announced that it was seeking arbitration at the International Centre for Settlement of Investment Disputes (ICSID) for a sum of nearly $10.8 billion.” (NYT Mag: nytimes.com/2024/08/28/mag…)
Próspera is incorporated in Delaware and has received support from the US ambassador to Honduras and the State Dept, despite Biden’s stated opposition to these kinds of investor-state arbitrations…
I had never heard of the ICSID, but it sounds like a thought experiment dreamt up by leftists trying to show the absolute worst sides of capitalism...
This is what the new president had to say about the special economic zones: “Every millimeter of our homeland that was usurped in the name of the sacrosanct freedom of the market, ZEDEs, and other regimes of privilege was irrigated with the blood of our native peoples.”
Próspera is funded by Pronomos Capital, which is advised by, among others, Balaji S. Srinivasan, a former partner at Andreessen Horowitz who wants to partner with the police to take over San Francisco (some people might call this impulse fascistic).
... newrepublic.com/article/180487…

So Silicon Valley billionaires are backing a project that is trying to bankrupt a poor country for reneging on a deal struck with people who have been indicted on corruption, drug trafficking, and weapons charges. These same billionaires want to build superhuman AI ASAP...
and are vigorously resisting regulation of such technology. If you'd like to see how they'd govern the world with a superintelligent AI, it might be instructive to see how they act now. thenation.com/article/societ…
My good friend Ian MacDougall had a fantastic story on Próspera w/ Isabelle Simpson in Rest of the World a few years back. The roots of this story can be found there. ...restofworld.org/2021/honduran-…
If you enjoyed this, check out my Substack: garrisonlovely.substack.com

More from @GarrisonLovely

Nov 13
Ilya Sutskever, perhaps the most influential proponent of the AI "scaling hypothesis," just told Reuters that scaling has plateaued. This is a big deal! This comes on the heels of a big report that OpenAI's in-development Orion model had disappointing results. 🧵
I predicted something along these lines back in June

(Full piece is here: garrisonlovely.substack.com/p/is-deep-lear…) x.com/370323535/stat…
So what? The idea that throwing more compute at AI would keep improving performance has driven the success of OpenAI and tens of billions in investment in the industry. That era may be ending.
garrisonlovely.substack.com/p/is-deep-lear…
Oct 28
I'm not conceited enough to think I'll actually sway many people, but wanted to go on the record saying:

If you live in a swing state, please vote for Harris. Your vote is not an expression of your personal identity or an endorsement of the genocide in Gaza. (Short 🧵)
It's a means of influencing the world and making one event more likely than others.

I also like the framing of: who would you rather be organizing against? Who's more likely to actually be movable by your advocacy?
And to the people who say: things are already as bad as they can be.

No. No they're not. This is an intellectually and morally bankrupt position to take. It's lazy too. Trump II will be so much worse than the first time around, and the first time was... not good.
Oct 23
For years, I've been tracking whether Miles Brundage was still at OpenAI. He has a long track record of caring deeply about AI safety and ensuring that AGI goes well for the world.

Earlier today, he announced his resignation. 🧵
Buried in his announcement was the news that his AGI readiness team was being disbanded and absorbed into other teams (at least OpenAI's third such case since May).

I did a deep dive into Brundage's post, reading between the lines and exploring the question: why now?

garrisonlovely.substack.com/p/end-of-an-er…
So does the disbanding mean that OpenAI is "ready for AGI"?

No! No one is, says Brundage.
Sep 29
🚨I’m in the New York Times!!🚨

AI is weird. Many of the people who pioneered the tech, along with the leaders of all the top AI companies, say that it could threaten human extinction. In spite of this, it’s barely regulated in the US.

Whistleblower protections typically only cover people reporting violations of the law, so AI development can be risky without being illegal. 🧵

National Republicans have promised to block meaningful AI regulation, so I make the case for a narrow federal law to protect AI whistleblowers...

nytimes.com/2024/09/29/opi…
I have a hypothetical scenario in the piece from a longtime OpenAI safety researcher about an AI company cherry-picking safety results to make a new model look safe, even when it isn’t. As this story was being finalized, the WSJ reported that something very similar to it...
Sep 27
This article is full of bombshells. Excellent reporting by @dseetharaman.

The biggest one: OpenAI rushed testing of GPT-4o (already reported), released the model, and then determined it was too risky to release! I had a scenario like this in a forthcoming...
piece, as a hypothetical relayed to me by someone who used to work at OpenAI, but it turns out it had actually already happened, according to this reporting. Because all of this is governed by voluntary commitments, OpenAI didn't violate any law...

wsj.com/tech/ai/open-a…
though it seems like a clear violation of the spirit of the voluntary commitments at least.

Other new stuff: SamA and other execs begged Ilya to come back; he seemed like he would, then the execs rescinded the offer. These details aren't super surprising, but it's by far the...
Sep 17
OpenAI whistleblower William Saunders is testifying before a Senate subcommittee today (so are Helen Toner and Margaret Mitchell). His written testimony is online now. Here are the most important parts 🧵
Saunders, like many others at the top AI companies, thinks artificial general intelligence (AGI) could come in “as little as three years.” He cites OpenAI's new o1 model, which has surpassed human experts in some challenging technical benchmarks for the first time...
OpenAI has “repeatedly prioritized deployment over rigor. I believe there is a real risk they will miss important dangerous capabilities in future AI systems.”...

Written testimony: judiciary.senate.gov/imo/media/doc/…
Live video: judiciary.senate.gov/committee-acti…
