Stories are persistent: this paper traces fairy tales back across languages & cultures to common ancestors, arguing that the oldest date back at least 6,000 years. One of them became the myth of Sisyphus & Thanatos in ancient Greece. 1/
That may be just the start: this paper argues some stories may go back 100,000 years. Many cultures, including Aboriginal Australian & Ancient Greek, tell stories of the Pleiades, the seven sisters star cluster, having a lost star - and this was true 100k years ago! 2/ dropbox.com/s/np0n4v72bdl3…
Stories share similar arcs: Analyzing 1.6k novels, this paper argues there are only 6 basic ones:
1 Rags to Riches (rise)
2 Riches to Rags (fall)
3 Man in a Hole (fall rise)
4 Icarus (rise fall)
5 Cinderella (rise fall rise)
6 Oedipus (fall rise fall) 3/ epjdatascience.springeropen.com/articles/10.11…
Stories have links to cultural values. You can make predictions about economic factors from the stories people tell, as this cool paper 👇 shows 4/
The stories organizations tell matter, too: When firms share stories in which their executives were clever but sneaky, the result is less helping & more deviance! Firms that share stories about low-level people upholding values have increased helping & decreased deviance. 5/
Firms also transmit learning through stories. This paper shows stories of failure work best: they are more easily applied than stories of success, especially if the story is interesting & you believe it is important to learn from mistakes. But make sure it is a true story. 6/
Entrepreneurs especially rely on stories, as all you have initially is your pitch - a story about your startup. You have to use that story to get people to give you resources, buy your product, join your company, etc. Here is a thread on how to do that: 7/
One of the most fascinating examples of the power of stories in startups is how the Theranos fraud relied on Elizabeth Holmes’s ability to tell a compelling story, which involved her tapping into the archetypes of what we expect an entrepreneur to be (black turtleneck & all) 👇
So, OpenAI Deep Research can connect directly to Dropbox, SharePoint, etc.
Early experiments only, but it feels like what every "talk to our documents" RAG system has been aiming for, now with o3 smarts and ease of use. I haven't done robust testing yet, but it is very impressive so far.
I think it is going to be a shock to the market, since "talk to our documents" is one of the most popular implementations of AI in large organizations, and this version seems to work quite well and costs very little.
I am sure the other Deep Research products will be able to do the same soon, and, while there are surely hallucinations (I haven't spotted any yet), this seems like an example of how the LLM makers can sometimes move upstream into the application space and take a market.
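To be concrete about the pattern I mean by "talk to our documents," here is a minimal sketch of the standard retrieval-augmented approach - not how Deep Research itself works, and the toy corpus, model names, and prompt wording are all just placeholder assumptions:

```python
# Minimal "talk to our documents" sketch: embed documents, retrieve the closest
# ones for a question, and let a model answer from them. Corpus, model names,
# and prompt wording are illustrative assumptions, not any product's internals.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = [
    "Q3 revenue grew 12% year over year, driven by the enterprise segment.",
    "The onboarding guide asks new hires to finish security training in week one.",
    "Our travel policy reimburses economy airfare and up to $75/day for meals.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question, k=2):
    """Retrieve the k most similar documents and answer using only them."""
    q_vec = embed([question])[0]
    scores = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("What does the travel policy cover?"))
```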
Very big impact: The final version of a randomized, controlled World Bank study finds that using a GPT-4 tutor with teacher guidance in a six-week after-school program in Nigeria had "more than twice the effect of some of the most effective interventions in education" at very low cost
Microsoft keeps launching Copilot tools that seem interesting but which I can never seem to locate. I can't find them in my institution's enterprise account, nor in my personal account, nor in the many Copilot apps, copilots for apps, or Agents for copilots.
Each has its own UI. 🤷‍♂️
For a while in 2023, Microsoft, with its GPT-4-powered Bing, was the absolute leader in making LLMs accessible and easy to use.
Even Amazon made Nova accessible through a simple URL.
Make your products easy to experiment with and people will discover use cases. Make them impossible to try without some sort of elaborate IT intervention and nobody will notice them; people will just go back to ChatGPT or Gemini.
As someone who has spent a lot of time thinking and building in AI education, and sees huge potential, I have been shown this headline a lot
I am sure Alpha School is doing interesting things, but there is no deployed AI tutor yet that drives up test scores like this implies.
I am not doubting their test results, but I would want to learn more about the role AI is playing, and what they mean by AI tutor, before attributing their success to AI as opposed to the other dials they are turning.
Google has been doing a lot of work on fine-tuning Gemini for learning, and you can see a good overview of the issues and approaches in their paper (which also tests some of our work on tutor prompts). arxiv.org/abs/2412.16429
I suspect that a lot of "AI training" in companies and schools has become obsolete in the last few months
As models get larger, the prompting tricks that used to be useful no longer help; reasoners don't play well with Chain-of-Thought prompts; hallucination rates have dropped; and so on.
I think caution is warranted when teaching prompting approaches for individual use, or when training tries to draw clear lines about tasks where AI is bad/good. Those areas are changing very rapidly.
None of this is the fault of trainers - I have taught my students how to do Chain-of-Thought, etc. But we need to start thinking about how to teach people to use AI in a world that is changing quite rapidly, focusing on exploration and use rather than a set of defined rules.
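For illustration of the shift (not a claim about any particular product): roughly the contrast is the hand-crafted Chain-of-Thought prompt we used to teach versus simply asking a reasoning model. The model names below are placeholders, and the OpenAI Python client is just one possible interface:

```python
# Rough illustration: old-style Chain-of-Thought prompting vs. just asking a
# reasoning model directly. Model names are placeholder assumptions.
from openai import OpenAI

client = OpenAI()
question = "A train leaves at 2:40pm and arrives at 5:15pm. How long is the trip?"

# Old habit: explicitly ask a general-purpose model to reason step by step.
cot = client.chat.completions.create(
    model="gpt-4o",  # placeholder for an older, non-reasoning model
    messages=[{
        "role": "user",
        "content": f"{question}\nLet's think step by step before giving the answer.",
    }],
)

# With a reasoning model, that scaffolding is unnecessary (and can even get in
# the way of the model's own internal reasoning) - just ask the question.
reasoned = client.chat.completions.create(
    model="o3-mini",  # placeholder for a reasoning model
    messages=[{"role": "user", "content": question}],
)

print(cot.choices[0].message.content)
print(reasoned.choices[0].message.content)
```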
“GPT-4.5, Give me a secret history ala Borges. Tie together the steel at Scapa Flow, the return of Napoleon from exile, betamax versus VHS, and the fact that Kafka wanted his manuscripts burned. There should be deep meanings and connections”
“Make it better” a few times…
It should have integrated the scuttling of the High Seas Fleet better, but it knocked the Betamax thing out of the park