For May 4th, a lesson on how to sell new technology from Star Wars (& Edison). The key to the look of Star Wars ships is greebles: glued-on bits from off-the-shelf model kits of WWII tanks, planes, etc. They make a connection with current tech, making Star Wars feel familiar. 1/
To sell electricity, Edison used the same technique as the Star Wars greebles, using skeuomorphs (a design throwback to an earlier use) to connect his new, scary tech to a familiar one: gas. Gas lights gave off light equal to a 12 watt 💡, so Edison limited his 💡 to 13 watts. 2/
As another example, lampshades weren't needed for electric lights, since they were originally used to keep gas lamps from sputtering. But Edison added them anyhow. While not required, they were comforting and, again, made a greeble-like connection to the older technology. 3/
He also developed the electric meter as a way of charging customers (because gas was metered) and insisted on burying electric wires (because gas lines ran underground).
This was a trade-off: it made the technology more expensive and less powerful, but also more acceptable. 4/
The process Edison used, called "robust design," helps make new technologies easier for consumers to adopt. The classic article by Douglas & @andrewhargadon is very readable, and explains a lot about how design helps new technologies get adopted. 5/ psychologytoday.com/sites/default/…
Ironically, while Tesla the person never learned this lesson from Edison, Tesla the company has. Electric cars could have plugs anywhere, so why does charging a Tesla feel like putting gas in a regular car? It's skeuomorphic, linking the old to the new! 6/
The lesson is useful for anyone creating new technologies. Steve Jobs famously insisted on skeuomorphic design in the original iPhone to make a series of complex apps easier to understand & work with at a glance. The designs might look "outdated," but they served a purpose! 7/
Kinda amazing: the mystery model "summit" with the prompt "create something I can paste into p5js that will startle me with its cleverness in creating something that invokes the control panel of a starship in the distant future" & "make it better"
This is through LMArena, where you are given random models to test. You will likely get a chance to use "Summit" fairly often (it came up three times in my six attempts): lmarena.ai
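For reference, here is a minimal, hand-written p5.js sketch of the kind of thing such a prompt tends to produce (my own illustrative sketch, not Summit's actual output); you can paste it into the p5.js web editor and run it:

```javascript
// A minimal, hand-written p5.js sketch (not the model's actual output)
// suggesting a starship control panel: blinking indicator lights,
// a sweeping radar display, and flickering status gauges.

let lights = [];

function setup() {
  createCanvas(600, 400);
  // A bank of indicator lights along the top, each with a random phase.
  for (let i = 0; i < 12; i++) {
    lights.push({ x: 60 + i * 45, y: 40, phase: random(TWO_PI) });
  }
}

function draw() {
  background(10, 12, 24); // deep console blue-black

  // Blinking indicator lights, each on its own sine-wave cycle.
  noStroke();
  for (const l of lights) {
    const glow = map(sin(frameCount * 0.05 + l.phase), -1, 1, 40, 255);
    fill(0, glow, glow * 0.8);
    ellipse(l.x, l.y, 18, 18);
  }

  // Sweeping radar display.
  const cx = 150, cy = 250, r = 90;
  stroke(0, 180, 120);
  noFill();
  ellipse(cx, cy, r * 2, r * 2);
  const sweep = frameCount * 0.03;
  line(cx, cy, cx + r * cos(sweep), cy + r * sin(sweep));

  // Flickering status gauges driven by Perlin noise.
  noStroke();
  for (let i = 0; i < 4; i++) {
    const level = noise(i * 10, frameCount * 0.01) * 150;
    fill(200, 120, 40);
    rect(350 + i * 55, 340 - level, 30, level);
  }

  fill(0, 255, 180);
  textSize(12);
  text("NAV // SHIELDS // DRIVE CORE", 330, 370);
}
```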
People from technical backgrounds are often far worse at getting AI to do things than those with a liberal arts or social science bent. LLMs are built from the vast corpus of human expression, and knowing the history & obscure corners of human works lets you do far more with AI.
These are systems that respond to human writing and (often) to techniques that apply to human psychology.
Everyone now has a machine that makes words, images, video, and sound, where the limit is often your own ability to imagine something new (or to invoke old ideas others do not know).
The Math Olympiad is great, coding is important, accelerating science has tremendous value.
But LLMs give a chance for both cultures to contribute in ways that have not been possible for a long time.
X (and other social media sites) make our 1990s optimism about the Information Age seem silly.
Even with all of the world's information a click away (& a free AI that can help explain that information in a personalized way), half-mangled anecdotes with no source win every time.
It really is not what most people who were working on building the early web in the late 1990s were expecting. Universal access to information was going to transform everything, creating widespread learning and bridging divides.
It really is shocking how much that didn't happen.
The fact that people use the internet mostly for entertainment isn't weird or surprising.
But you also have access to courses on every topic by experts, every major out-of-copyright book, can talk to people from anywhere, etc. The impact of that is smaller than I once expected.
So, OpenAI Deep Research can connect directly to Dropbox, SharePoint, etc.
Early experiments only, but it feels like what every "talk to our documents" RAG system has been aiming for, now with o3 smarts and easy use. I haven't done robust testing yet, but it is very impressive so far.
I think it is going to be a shock to the market, since "talk to our documents" is one of the most popular implementations of AI in large organizations, and this version seems to work quite well and costs very little.
I am sure the other Deep Research products will be able to do the same soon, and, while there are surely hallucinations (I haven't spotted any yet), this seems like an example of how the LLM makers can sometimes move upstream into the application space and take a market.
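For context, "talk to our documents" systems follow the retrieval-augmented generation (RAG) pattern: split documents into chunks, find the chunks most relevant to the question, and pass them to the model as context. The toy sketch below substitutes naive keyword overlap for real vector embeddings and uses a hypothetical askLLM stub; it illustrates the shape of such a system, not how OpenAI's connectors actually work:

```javascript
// Toy "talk to our documents" RAG pipeline (illustrative only).
// Real systems use vector embeddings and an actual LLM call; here,
// keyword overlap stands in for retrieval and askLLM is a stub.

const documents = [
  "Q3 revenue grew 12% driven by the enterprise segment.",
  "The vacation policy allows 20 days of paid leave per year.",
  "Server maintenance is scheduled every Sunday at 02:00 UTC.",
];

// Score a chunk by how many of the question's words it shares.
function score(question, chunk) {
  const qWords = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  return chunk.toLowerCase().split(/\W+/)
    .filter((w) => qWords.has(w)).length;
}

// Retrieve the top-k chunks most relevant to the question.
function retrieve(question, k = 2) {
  return [...documents]
    .sort((a, b) => score(question, b) - score(question, a))
    .slice(0, k);
}

// Placeholder for the actual model call (hypothetical function).
function askLLM(prompt) {
  return `[LLM answer grounded in: ${prompt}]`;
}

function talkToDocuments(question) {
  const context = retrieve(question).join("\n");
  // Retrieved chunks are prepended so the model answers from the
  // documents instead of from memory alone.
  return askLLM(`Context:\n${context}\n\nQuestion: ${question}`);
}

console.log(talkToDocuments("How many vacation days do employees get?"));
```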
Very big impact: The final version of a randomized, controlled World Bank study finds that using a GPT-4 tutor with teacher guidance in a six-week after-school program in Nigeria had "more than twice the effect of some of the most effective interventions in education" at very low cost.
Microsoft keeps launching Copilot tools that seem interesting but which I can never seem to locate. I can't find them in my institution's enterprise account, nor in my personal account, nor in the many Copilot apps, or copilots for apps, or Agents for Copilots.
Each has its own UI. 🤷‍♂️
For a while in 2023, Microsoft, with its GPT-4-powered Bing, was the absolute leader in making LLMs accessible and easy to use.
Even Amazon made Nova accessible through a simple URL.
Make your products easy to experiment with and people will discover use cases. Make them impossible to use without some sort of elaborate IT intervention and nobody will notice them; they will just go back to ChatGPT or Gemini.