"You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here." (VQGAN+CLIP seems somewhat obsessed with the mailbox.)
"You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar."
"You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great anti..."
"This is an art gallery. Most of the paintings have been stolen by vandals with exceptional taste. The vandals left through either the north or west exits. Fortunately, there is still one chance for you to be a vandal, for on the far wall is a painting of unparalleled beauty."
"This is a small room with passages to the east and south and a forbidding hole leading west. Bloodstains and deep scratches (perhaps made by an axe) mar the walls. A nasty-looking troll, brandishing a bloody axe, blocks all passages out of the room."
"You have entered the Land of the Living Dead. Thousands of lost souls can be heard weeping and moaning... Lying in one corner of the room is a beautifully carved crystal skull. It appears to be grinning at you rather nastily."
"You are standing on the top of the Flood Control Dam #3, which was quite a tourist attraction in times far distant... The sluice gates on the dam are closed. Behind the dam, there can be seen a wide reservoir. Water is pouring over the top of the now abandoned dam."
The recent discussion triggered by @tylercowen about what investment strategies AI doomers ought to pursue (he thinks they should short the market; everybody else disagrees) got me thinking about my own investment approach.
It is mostly based on uncertainty. Empirically we know people are pretty bad at long-term predictions.
I also suspect I am a bit of a naïf. But here goes:
My approach to the future is to roughly assume 1/3 chance of a great future (AI, transhumanist singularity), 1/3 chance of a normal future (what most people consider reasonable), and 1/3 chance of a disastrous future. It makes sense to hedge between these futures.
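A toy way to see why the even split acts as a hedge: score candidate allocations by expected log wealth across the three scenarios. This is a minimal sketch, not investment advice; the asset classes and payoff numbers are hypothetical illustrations, chosen only to show the mechanics.

```python
import math

# Toy sketch of hedging across three equally likely futures. The asset
# classes and scenario payoffs are hypothetical, not claims about
# actual returns.
SCENARIOS = {  # each future gets probability 1/3, per the thread
    "great":    {"tech": 10.0, "index": 2.0, "hard_assets": 1.0},
    "normal":   {"tech": 1.5,  "index": 1.5, "hard_assets": 1.1},
    "disaster": {"tech": 0.01, "index": 0.2, "hard_assets": 0.8},
}

def expected_log_wealth(weights):
    """Average log wealth over the scenarios (log utility punishes ruin)."""
    total = 0.0
    for payoffs in SCENARIOS.values():
        wealth = sum(w * payoffs[asset] for asset, w in weights.items())
        total += math.log(wealth) / len(SCENARIOS)
    return total

all_in_tech = {"tech": 1.0, "index": 0.0, "hard_assets": 0.0}
hedged      = {"tech": 0.34, "index": 0.33, "hard_assets": 0.33}

print(expected_log_wealth(all_in_tech))  # ~ -0.63: the disaster branch dominates
print(expected_log_wealth(hedged))       # ~ +0.23: hedging caps the downside
```

The point is just that under log utility an even-ish split beats any all-in bet once a ruin scenario carries non-trivial probability.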
Yesterday, in our very friendly Biennale discussion, Professor Camil Ungureanu made a series of criticisms of how transhumanist visions could go wrong, practically and morally. I despaired of responding to each of them, and then realized that there is a solution in going meta.
(No, this was not a Gish gallop - all points were relevant issues worth talking about! But time was limited...) en.wikipedia.org/wiki/Gish_gall…
Basically, "tech X could lead to A, tech Y seems to lead to B, Z..." is all about whether technologies have downsides. Which of course they do. But they also may be worth it.
My main takeaway is that the richest do not dominate the economy as much as many assume, nor is there any post-WWII trend towards them becoming much more powerful relative to the state.
Two days ago I (like most people) had never heard of SVB. That we tend to discover systemically essential points of failure only when they fail is deeply disturbing, especially if we want to make a more resilient world.
In 1993 the world discovered the hard way that a speciality resin used to affix integrated circuits was mostly produced in a single chemical plant, when an explosion there destroyed both production and stockpiles. apnews.com/article/8fe292…
Something similar happened in January when the main US supplier of potassium permanganate had a fire, causing big problems for water treatment plants. cen.acs.org/environment/wa…
Saying LLMs are just glorified autocomplete forgets that most human thought, action and speech is also glorified autocomplete.
Maybe the fraction that isn't autocomplete is the truly valuable part, but I suspect it is not a major fraction of everyday useful action.
I am pretty serious about this: the basal ganglia action pattern generators look like they are selected by a softmax influenced by cortical stimulus fit. They get made/updated by RL rather than backprop, but it is very much a stimulus-response chain. med.libretexts.org/Bookshelves/Ph…
This extends to cognitive action selection: much of our thinking is also generating tokens in sequence, affected by other active tokens in the frontal-parietal working memory/attention system.
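As a toy illustration of the selection mechanism claimed above (a softmax over stimulus-fit scores, updated by reward rather than backprop), here is a minimal sketch. It is not a neuroscience model: the ActionSelector class, the stimulus, the action names, and the reward scheme are all hypothetical, chosen only to show the softmax-plus-RL loop.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Turn fit scores into selection probabilities."""
    exps = [math.exp(s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

class ActionSelector:
    """Toy stimulus-response selector: softmax pick, reward-driven update."""

    def __init__(self, actions, lr=0.1):
        self.actions = actions
        self.lr = lr
        self.value = {}  # value[stimulus] -> learned fit score per action

    def select(self, stimulus, temperature=0.5):
        scores = self.value.setdefault(stimulus, [0.0] * len(self.actions))
        probs = softmax(scores, temperature)
        return random.choices(range(len(self.actions)), weights=probs)[0]

    def update(self, stimulus, action_idx, reward):
        # RL-style update: nudge the chosen action's score toward the
        # received reward (a prediction-error step, no backprop).
        scores = self.value[stimulus]
        scores[action_idx] += self.lr * (reward - scores[action_idx])

# Hypothetical usage: the stimulus "see food" should come to select "grasp".
sel = ActionSelector(["grasp", "flee", "groom"])
for _ in range(200):
    a = sel.select("see food")
    sel.update("see food", a, reward=1.0 if sel.actions[a] == "grasp" else 0.0)
print(sel.value["see food"])  # "grasp" score approaches 1.0
```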
Forwarding this on behalf of Nick Bostrom since he’s not active on Twitter. He wants to explain his views and publicly apologize for an old email that somebody might put out to damage him. nickbostrom.com/oldemail.pdf
The context is a thread on a mailing list in 1995 talking about offensive communication styles where he did the classic freshman philosophy student trick of writing something deliberately offensive to make a point. It didn’t turn out well (it never does): he promptly apologized.
I can see why he would want to retrospectively apologize for the whole thing. It actually does not represent his views and behavior as I have seen them over the 25 years I have known him.