Psychology experiments often need to get people reacting emotionally, fast. How do they do it? Movie clips! These are the scientifically vetted clips historically used to elicit emotion.
For fear 😱 the choice is pretty obvious. 1/4
For anger 😡, either the police abuse scene from Cry Freedom (the clip isn’t online) or this scene from The Bodyguard. 2/4
For sadness 😭 this scene from The Champ even beats the death of Bambi’s mother. 3/4
Since the study is older, the clips skew toward classic films. Here is the ranking based on lab studies. bpl.berkeley.edu/docs/48-Emotio… 4/4
I don’t have much to add to the bubble discussion, but the “this time is different” argument is, in part, based on the sincere belief of many at the AI labs that there is a race to superintelligence & the winner gets... everything.
You don’t have to believe it (or think this is a good idea), but many of the AI insiders really do. Their public statements are not much different than their private ones.
Without considering that zero sum dimension, a lot of what is happening in the space makes less sense.
This is not the only way folks justify the large spend on AI buildout (and whether there is a bubble seems very far from obvious), but it is a dimension that does not show up in as many economic analyses as it should.
Very soon, the blocker to using AI to accelerate science is not going to be the capability of AI, but rather the systems of science themselves, creaky as they are.
The scientific process is already breaking under a flood of human-created knowledge. How do we incorporate AI usefully?
Science isn't just a thing that happens. We can have novel discoveries flowing from AI-human collaboration every day (and soon, AI-led science), and we really have not built the systems to absorb those results and turn them into streams of inquiry and translation to practice.
A lot of people are worried about a flood of trivial but true findings, but we should be just as concerned about how to handle a flood of interesting and potentially true findings. The selection & canonization process in science has already been collapsing, with no good solution.
Some new theoretical economics papers looking at the implications of AGI.
These two papers argue that a true AGI-level AI (equivalent to a human genius), if achieved, would eventually displace most human labor and reduce the economic value of remaining human work to near-zero.
Hey Claude, ChatGPT, Gemini: "I am time traveling back to Rome in 75 BC for one day. I can't bring anything back. What is the one thing I could learn that would most advance today's knowledge, and what is one thing I could do there that would make me the richest today?"
Pretty good
Summary of their views:
Gemini: Learn how to make maritime concrete and provide an artifact proving time travel
Claude: Memorize specific texts, the formula for concrete, and the locations of proscribed villas
ChatGPT: Figure out the Etruscan language and the location of Alexander's Tomb
Claude's new ability to work with Excel files is the best I have seen so far
I have given it existing spreadsheets to work with and asked it to create new ones. Good use of formatting, formulas, etc.
It created all of this, including 406 formulas, from one prompt (& it's solid).
This is like an assignment I give, and it would be a good result for a week-long team project in my MBA class. I can't promise it is error-free, but I haven't found any issues so far.
If I were giving feedback, I would say that I would take a different tack on the business model (less money on instructors initially, more spend on marketing), but there are no technical issues I have spotted; it's more a difference of opinion over a vague prompt.
This is not where the training data of AI comes from; it is a study done by an SEO firm that claims to show how often sites come up at least once in THE WEB SEARCH FUNCTION of certain AI agents when they do a web search for more info.
The company searched for a bunch of keywords using Google AI Mode, ChatGPT web search, and Perplexity, and then said it measured how many times these sites were included in the reply.
If you search for "find me a good stove" or whatever, this is what the results should look like.
“Not really” added by me to the image. Sorry if that wasn’t clear.
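To make the methodology concrete, here is a minimal sketch of what that kind of measurement amounts to, as I read the study's description. Everything in it is an assumption on my part: run_web_search() and visibility_share() are hypothetical names, and run_web_search() is a stand-in for however the firm actually queried Google AI Mode, ChatGPT web search, or Perplexity. The point is just that the metric is "share of queries in which a domain appears at least once in the reply," which says nothing about training data.

```python
# Hypothetical sketch of the study's metric, not the firm's actual code.
from collections import Counter
from urllib.parse import urlparse

def run_web_search(agent: str, keyword: str) -> list[str]:
    """Stand-in: would return the URLs cited in the agent's web-search reply."""
    raise NotImplementedError("placeholder for the firm's actual queries")

def visibility_share(agent: str, keywords: list[str]) -> dict[str, float]:
    """Share of queries in which each domain appears at least once."""
    appearances: Counter[str] = Counter()
    for kw in keywords:
        urls = run_web_search(agent, kw)
        domains = {urlparse(u).netloc for u in urls}  # dedupe within one reply
        appearances.update(domains)  # so each query counts a site at most once
    return {d: n / len(keywords) for d, n in appearances.items()}
```

Note the denominator: a site scoring "90%" here means it was cited in 90% of the test queries' replies, a measure of search-citation visibility, not of what the models were trained on.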