Psychology experiments often need to get people to react emotionally, and quickly. How do they do it? Movie clips! These are the scientifically vetted clips that researchers have historically used to elicit emotion.
For fear 😱, the choice is pretty obvious. 1/4
For anger 😡, either the police abuse scene from Cry Freedom (the clip isn’t online) or else this scene from The Bodyguard 2/4
For sadness 😭, this scene from The Champ even beats the death of Bambi’s mother. 3/4
Since the study is older, the clips skew toward classic films. Here is the ranking based on lab studies. bpl.berkeley.edu/docs/48-Emotio… 4/4
Probably the most consequential technology that should have been “obvious” but wasn’t:
🌾The moldboard plow. As this excerpt from Mann's 1491 shows, it was a simple idea which China had for nearly 2k years before Europe! It was basically a prerequisite for the Enlightenment.
The invention of the moldboard plow in Europe was at least a millennium closer to the invention of the iPhone than it was to the invention of the moldboard plow in China!
Plus:
🚲The wheel was invented surprisingly late & maybe only once (as anything other than a toy). It came after sailboats & harps, and was not used at all in the Americas
🐴And the horse collar, a simple invention that sped up plowing by 50%, wasn't common in Europe until around the year 1000
I asked the Devin AI agent to go on Reddit and start a thread where it would take website-building requests.
It did that, solving numerous problems along the way. It apparently decided to charge for its work. Going to take it down before it fools anyone... reddit.com/r/forhire/comm…
Agents are going to open a whole bunch of cans of worms.
It was actively monitoring the thread to take offers.
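For a sense of what "actively monitoring the thread" involves mechanically, here is a minimal sketch of a polling loop using the PRAW library. The credentials, thread id, and polling interval are placeholders, and this is an illustrative assumption, not Devin's actual implementation:

```python
import time
import praw

# Placeholder credentials -- a real script needs a registered Reddit app.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="thread-monitor-demo/0.1",
)

seen = set()  # comment ids already handled

while True:
    # Re-fetch the submission each pass so the comment tree is fresh.
    submission = reddit.submission(id="abc123")  # hypothetical thread id
    submission.comments.replace_more(limit=0)    # flatten "load more" stubs
    for comment in submission.comments.list():
        if comment.id not in seen:
            seen.add(comment.id)
            print(f"New reply from u/{comment.author}: {comment.body[:80]}")
    time.sleep(60)  # poll once a minute
```

An agent wraps a loop like this with a language model that reads each new reply and decides whether, and how, to respond.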
One thing business analysts miss is that many of the people at the AI labs are true believers that they are building AGI, and soon.
You don't have to think that they can do it, but, if you don't take their sincere beliefs into account, a lot of their strategy doesn't make sense.
The race for bigger models at the expense of improving existing models, the interlocking alliance deals where companies are funding and cooperating with competitors, the willingness to release models without extensive testing & just take the reputational risk in the short term...
The "its all sales hype" doesn't make a lot of sense upon consideration. Models are pretty fungible, GPT-4 class models prompt in similar ways. Convincing people you are building amazing future models doesn't generate lock-in for current ones & increases risks you don't deliver.
The modern economy rests on a single road in Spruce Pine, North Carolina. The road runs to the two mines that are, together, the sole supplier of the quartz required to make the crucibles needed to refine silicon wafers.
There are no alternative sources known. From Conway’s Material World:
How can knowing something hurt you? Information can sometimes cause harm (think of the annoyance of seeing spoilers as a tiny example). This paper on information hazards was prescient about many of the issues we face today.
So, a 🧵 on some of the hazards of knowledge... 1/
Ideological hazards: Most people have only a little knowledge of what their ideological belief system (whether religious or political) really encompasses. On the web, you can learn that your chosen belief system also includes hazardous elements that you feel you need to adopt. 2/
Evocation hazards: there may be particular information that, when people encounter it, triggers them. This is not just in the common sense of triggering past trauma, but that some conspiracy theories or memes might be unusually tempting to people in particular mental states. 3/