My sister @JMollick produced #TheDropout - the new Hulu series on Elizabeth Holmes. In addition to being entertaining, it shows some of the drivers of success in entrepreneurship.
So: a 🧵 of research on Theranos, and what honest investors & founders can learn from the lies. 1/
Much of the fraud was explained by "Symbolic Action." In a classic paper, Zott & Huy find that founders who skillfully use symbols do better, since folks view the symbols as indicators of real ability. They identify four categories of symbolic action, all exploited by Holmes. 2/
The first category is showing personal capability, and the paper describes multiple ways of doing this: you can look the part of the entrepreneur; you can conspicuously show connections to top schools; or you can show that you are personally "all in." Elizabeth did all three. 3/
The second category is showing that you operate like a real, professional organization. Common ways for entrepreneurs to signal this are professional office space and the other trappings people expect to see in a real firm. Elizabeth was very aware of this. 4/
The third category is signaling organizational achievement: showing that your business can reach its goals. The three classic ways to do this are showing off half-working prototypes, winning industry awards, and pointing to money from prestigious funders. 5/
Finally, there is showcasing the quality of your stakeholders, because if important people back your company, it must be good, right? See the Theranos Board! (Of course, this didn’t convince real biotech VCs, who would have wanted to see stakeholders from the medical field.) 6/
Holmes was also very good at pitching. This paper shows how she pitched Theranos using powerful techniques:
🖼Framing: Why the world needs improvement
💉Filling: Vivid images of how she would solve it
👥Connecting: Showing others trusted her
💪Committing: Showing dedication
7/
Unethical startups are more likely to raise 💰 but also tend to waste it, hurting overall innovation. By comparing two sets of books, this paper identifies Chinese startups that got grants via fraud. The frauds were less likely to hire quality people & to conduct significant innovation.
“GPT-4.5, Give me a secret history ala Borges. Tie together the steel at Scapa Flow, the return of Napoleon from exile, betamax versus VHS, and the fact that Kafka wanted his manuscripts burned. There should be deep meanings and connections”
“Make it better” a few times…
It should have integrated the scuttling of the High Seas Fleet better, but it knocked the Betamax thing out of the park.
🚨Our Generative AI Lab at Wharton is releasing its first Prompt Engineering Report, empirically testing prompting approaches. This time we find: 1) Prompting “tricks” like saying “please” do not help consistently or predictably 2) How you measure against benchmarks matters a lot
Using social science methodologies for measuring prompting results helped give us some useful insights, I think. Here’s the report, the first of hopefully many to come. papers.ssrn.com/sol3/papers.cf…
This is what complicates things. Making a polite request ("please") had huge positive effects in some cases and negative ones in others. Similarly, being rude ("I order you") helped in some cases and hurt in others.
There was no clear way to predict in advance which would work when.
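To make the measurement point concrete, here is a minimal sketch of that kind of repeated-trial comparison. This is not the report's actual harness: `ask_model` is a hypothetical stand-in for a real LLM API call, and the success probabilities are simulated, not real results.

```python
import random
import statistics

def ask_model(prompt: str) -> bool:
    """Hypothetical stand-in for a real LLM call; simulates a noisy,
    prompt-dependent chance of answering a benchmark question correctly."""
    base = 0.70
    if "please" in prompt.lower():
        base += random.uniform(-0.05, 0.05)  # politeness: sometimes helps, sometimes hurts
    if "i order you" in prompt.lower():
        base += random.uniform(-0.05, 0.05)  # rudeness is similarly unpredictable
    return random.random() < base

def score_variant(template: str, questions: list[str], trials: int = 20) -> list[float]:
    """Accuracy of one prompt variant over repeated full passes of the
    benchmark, yielding a distribution of scores rather than a single number."""
    runs = []
    for _ in range(trials):
        correct = sum(ask_model(template.format(q=q)) for q in questions)
        runs.append(correct / len(questions))
    return runs

questions = [f"benchmark question {i}" for i in range(50)]
for template in ("Answer this: {q}",
                 "Please answer this: {q}",
                 "I order you to answer this: {q}"):
    runs = score_variant(template, questions)
    print(f"{template!r}: mean={statistics.mean(runs):.3f}, sd={statistics.stdev(runs):.3f}")
```

Comparing score distributions across many runs, rather than single benchmark passes, is what exposes the unpredictability: a single pass can make a "trick" look helpful or harmful purely by chance.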
The significance of Grok 3, outside of the X drama, is that it is the first full model release that we definitely know is at least an order of magnitude larger than GPT-4 class models in training compute, so it will help us understand whether the first scaling law (pre-training) holds up.
It is possible that Gemini 2.0 Pro is a RonnaFLOP* model, but we are only seeing the Pro version, not the full Ultra.
* AI trained on 10^27 FLOPs of compute, an order of magnitude more than the GPT-4 level (I have been calling these Gen3 models because it is easier)
And I should also note that everyone now hides the FLOPs used for training (except for Meta), so things are not completely clear.
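To make the footnote's arithmetic explicit, here is the order-of-magnitude comparison implied above. These are illustrative round numbers only; as just noted, actual training FLOPs are undisclosed.

```python
# Illustrative round numbers implied by the thread, not reported figures:
# labs (other than Meta) no longer disclose training compute.
GPT4_LEVEL_FLOPS = 1e26  # "GPT-4 level" as used above
RONNAFLOP_MODEL  = 1e27  # ronna- is the SI prefix for 10^27
print(RONNAFLOP_MODEL / GPT4_LEVEL_FLOPS)  # 10.0 -> one order of magnitude
```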
There is a lot of important stuff in this new paper by Anthropic that shows how people are actually using Claude: 1) The tasks people are asking AI to do are some of the highest-value (& often intellectually challenging) ones 2) Adoption is uneven, but already high in many fields
This is based only on Claude usage, which is why the adoption-by-field numbers matter less (Claude is popular in different fields than ChatGPT) than the task-level breakdowns, which show what people are actually willing to let AI do for them.
Thoughts on this post: 1) It echoes what we have been hearing from multiple labs about their confidence in scaling up to AGI quickly 2) There is no clear vision of what that world looks like 3) The labs are placing the burden on policymakers to decide what to do with what they make
I wish more AI lab leaders would spell out a vision for the world, one that is clear about what they think life will actually be like for humans living in a world of AGI.
Faster science & productivity are good, but what is a day in the life like in the world they want?
To be clear, it is completely possible to paint a very positive vision of the future of humans and AI (heck, just steal from The Culture or The Long Way to a Small, Angry Planet or something), and I think that would actually be a really useful exercise, showing where the labs hope we all go.