Ethan Mollick
Sep 14, 2020 · 8 tweets
Tesla gets a lot of credit today, but this paper shows Edison mastered the psychology of new technology. To get people to use scary electricity, he made it feel the same as the gas they knew. Gas lights gave off light equal to a 12-watt 💡, so Edison limited his 💡 to 13 watts. 1/5
As another example, lampshades weren't needed for an electric light. They were originally used to keep gas lamps from sputtering. Edison used them as a skeuomorph (a design throwback to an earlier use) by putting them on electric lights. Not required, but comforting to have. 2/5
He also developed the electric meter as a way of charging (because gas was metered) and insisted on burying electric wires (because gas was underground).

The fascinating thing was the trade-off: it made the technology more expensive and less powerful, but more acceptable. 3/5
Interestingly, Tesla (the company) learned the lessons Tesla the person did not. Electric cars could have plugs anywhere, so why does charging a Tesla feel like putting gas in a regular car? It’s skeuomorphic, linking the old to the new! 4/5
The process Edison used, called "robust design," helps make new technologies palatable. The classic article by Douglas & @andrewhargadon is extremely readable, and explains a lot about how design helps new technologies get adopted. 5/5 psychologytoday.com/sites/default/…
The lesson is worthwhile for anyone creating new technologies. Apple famously used skeuomorphic design in the original iPhone to make a series of complex apps easier to understand & work with at a glance. medium.com/@akhov/apples-…
One final note on Edison (for now). He was such a superhero to the public that there were contemporary science fiction novels about him teaming up with Lord Kelvin to conquer Mars. https://t.co/v5oknREU0b
Edison was a genius in making people feel comfortable with new tech, but the danger was that users were likely to default to out-of-date behaviors. As an illustration, here is a sign from Hotel del Coronado, the 1st electrified hotel (the work was overseen by Edison himself).

• • •

More from @emollick

Mar 8
“GPT-4.5, Give me a secret history ala Borges. Tie together the steel at Scapa Flow, the return of Napoleon from exile, betamax versus VHS, and the fact that Kafka wanted his manuscripts burned. There should be deep meanings and connections”

“Make it better” a few times…
It should have integrated the scuttling of the High Seas Fleet better, but it knocked the Betamax thing out of the park.
Dang, Claude. This is just half the thing.

Full story here: docs.google.com/document/d/1-h…
Mar 4
🚨Our Generative AI Lab at Wharton is releasing its first Prompt Engineering Report, empirically testing prompting approaches. This time we find:
1) Prompting “tricks” like saying “please” do not help consistently or predictably
2) How you measure against benchmarks matters a lot
Using social science methodologies for measuring prompting results helped give us some useful insights, I think. Here’s the report, the first of hopefully many to come. papers.ssrn.com/sol3/papers.cf…
This is what complicates things. Making a polite request ("please") had huge positive effects in some cases and negative ones in others. Similarly being rude ("I order you") helped in some cases and not others.

There was no clear way to predict in advance which would work when.
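To make the measurement point concrete, here is a minimal sketch, not taken from the report, of the kind of comparison this unpredictability forces: score each prompt variant over many repeated runs and compare distributions, since a single run can flip the conclusion. `ask_model` is a hypothetical placeholder for a real LLM call, and the benchmark items are toys.

```python
import random
import statistics

def ask_model(prompt_prefix: str, question: str) -> str:
    """Hypothetical stand-in for a real LLM API call; here it guesses
    randomly so the sketch runs end to end."""
    return random.choice(["A", "B", "C", "D"])

def score_variant(prefix: str, benchmark: list[tuple[str, str]], trials: int = 20) -> list[float]:
    """Accuracy of one prompt variant across repeated trials
    (model outputs are stochastic, so one run can mislead)."""
    runs = []
    for _ in range(trials):
        correct = sum(ask_model(prefix, q) == a for q, a in benchmark)
        runs.append(correct / len(benchmark))
    return runs

# Toy benchmark items; a real benchmark needs hundreds of scored questions.
benchmark = [("Pick A, B, C, or D.", "A"), ("Pick A, B, C, or D.", "B")]
for name, prefix in {"plain": "", "polite": "Please. ", "commanding": "I order you. "}.items():
    runs = score_variant(prefix, benchmark)
    print(f"{name}: mean={statistics.mean(runs):.2f}, sd={statistics.stdev(runs):.2f}")
```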
Feb 25
The lack of benchmarks for writing, telling stories, persuasion, creativity, emotional intelligence, perceived empathy, and doing office work is...

(1) holding back AI advances, (2) hiding big differences between models & (3) obscuring how good these models are for real work
If you want to influence the future, now is the time to release a really good benchmark.
We are getting AIs optimized for coding, doing graduate level math, multiple choice exams, and also counting the r's in strawberry.
Feb 16
The significance of Grok 3, outside of X drama, is that it is the first full model release that we definitely know is at least an order of magnitude larger than GPT-4 class models in training compute, so it will help us understand whether the 1st scaling law (pre-training) holds up.
It is possible that Gemini 2.0 Pro is a RonnaFLOP* model, but we are only seeing the Pro version, not the full Ultra.

* AI trained on 10^27 FLOPs of compute, an order of magnitude more than the GPT-4 level (I have been calling them Gen3 models because it is easier)
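For a sense of scale, a back-of-envelope sketch: the GPT-4 compute figure below is an unconfirmed public estimate, and the 6·N·D rule is the usual scaling-law approximation, so treat every number as an assumption.

```python
import math

# Assumptions, not disclosed figures: ~2e25 FLOPs is a common public
# estimate for GPT-4's training compute; 1e27 is the "RonnaFLOP" level.
GPT4_FLOPS_EST = 2e25
RONNA_FLOPS = 1e27

print(f"{RONNA_FLOPS / GPT4_FLOPS_EST:.0f}x GPT-4-level compute")             # 50x
print(f"{math.log10(RONNA_FLOPS / GPT4_FLOPS_EST):.1f} orders of magnitude")  # 1.7

# Scaling-law rule of thumb: C ~= 6 * N * D
# (C = training FLOPs, N = parameters, D = training tokens). Illustrative only:
N = 2e12  # a hypothetical 2-trillion-parameter model
print(f"tokens implied at 1e27 FLOPs: {RONNA_FLOPS / (6 * N):.1e}")  # ~8.3e13
```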
And I should also note that everyone now hides their FLOPs used for training (except for Meta) so things are not completely clear.
Feb 11
There is a lot of important stuff in this new paper by Anthropic that shows how people are actually using Claude.
1) The tasks that people are asking AI to do are some of the highest-value (& often intellectually challenging)
2) Adoption is uneven, but many fields are already high
This is just based on Claude usage, which is why adoption by field is less of a big deal (Claude is popular in different fields than ChatGPT) than the breakdowns at the task level, because they represent what people are willing to let AI do for them.
Feb 10
Thoughts on this post:
1) It echoes what we have been hearing from multiple labs about the confidence of scaling up to AGI quickly
2) There is no clear vision of what that world looks like
3) The labs are placing the burden on policymakers to decide what to do with what they make
I wish more AI lab leaders would spell out a vision for the world, one that is clear about what they think life will actually be like for humans living in a world of AGI

Faster science & productivity, good - but what is the experience of a day in the life in the world they want?
To be clear, it is completely possible to tell a very positive vision of the future of humans and AI (heck, just steal from The Culture or The Long Way to a Small, Angry Planet or something), and I think that would actually be a really useful exercise, showing where the labs hope we all go.