"You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here." (VQGAN+CLIP seems somewhat obsessed with the mailbox.)
"You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar."
"You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great anti..."
"This is an art gallery. Most of the paintings have been stolen by vandals with exceptional taste. The vandals left through either the north or west exits. Fortunately, there is still one chance for you to be a vandal, for on the far wall is a painting of unparalleled beauty."
"This is a small room with passages to the east and south and a forbidding hole leading west. Bloodstains and deep scratches (perhaps made by an axe) mar the walls. A nasty-looking troll, brandishing a bloody axe, blocks all passages out of the room."
"You have entered the Land of the Living Dead. Thousands of lost souls can be heard weeping and moaning... Lying in one corner of the room is a beautifully carved crystal skull. It appears to be grinning at you rather nastily."
"You are standing on the top of the Flood Control Dam #3, which was quite a tourist attraction in times far distant... The sluice gates on the dam are closed. Behind the dam, there can be seen a wide reservoir. Water is pouring over the top of the now abandoned dam."
At an AI safety workshop in 2011 we considered whom we would trust most with AGI, and had the awkward realization: "The military and intelligence communities actually think about security and safety, unlike us academics, companies and politicians." What we missed was the political leadership of the military.
The problem with the Hegseth situation is that it seems to be more about extracting the right kind of obeisance than about thinking through what the tech actually does. It also hints that the leadership does not care much about crucial constitutional, legal and ethical red lines.
I am profoundly civilian. Yet the professional military people I have met have always struck me as having their heads screwed on right with regard to dangerous weapons, tools that can turn on them, and the need for clear chains of command that cannot be suborned.
OK, color me officially impressed: Nano Banana Pro can make good diagrams based on papers. This one can go straight into my presentations.
If we want to really quibble, the doubled labels are a bit infelicitous. The arrows from the sun to elements and replicator factories are unlabeled; presumably they indicate energy flow. But I just asked for a diagram showing the process in the paper, nothing more.
It did much better when there was a straightforward process to depict; its graphical abstracts were somewhat handwavy and missed the papers' key points.
Yesterday at @archipelacon I presented "Grand Futures: the Interplay Between Science Fiction and Planning Really Far Ahead" - how the science and science fiction of megascale engineering and longtermism have interacted with each other.
I ended up making a big diagram of who seems to have influenced whom/what. Here is a current version (soon to be superseded) as PDF: tinyurl.com/megascalemap
It made me feel mildly like a conspiracy theorist linking stuff with yarn, but this is how the history of ideas actually works: smart people read each other and respond to their cultural milieu. The challenge is to find the cool places where people have *not* interacted.
I think exposure to generative AI makes many people aware of the Library of Babel in a way they are not ready for. It is one thing to know there is a near-infinite space of possibilities, it is another thing to play around and be led by the latent manifold into the Library.
The thing that makes Borges's story so good is how it erodes meaning by infinity: everything is in there, yet nothing can be found in the noise. AI allows nearly anything to be found, but we become aware that it could have been different.
I meet people who think that getting different answers from an LLM to the same question is profoundly wrong: they assume there can be only one response to the same question. And sure, for some questions the spread of right answers is narrow.
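This non-determinism is by design: the model assigns a probability distribution over possible next tokens, and the sampler draws from it with a temperature. A minimal sketch of that mechanism (the tokens and logits below are invented for illustration, not any particular model's output):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Draw one index from a softmax over logits, rescaled by temperature.

    Higher temperature flattens the distribution, so repeated calls
    with the same logits give more varied picks; near zero it
    collapses onto the single highest-scoring token.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = random.random() * sum(weights)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1

# Four hypothetical candidate answers to one question.
answers = ["A", "B", "C", "D"]
logits = [2.0, 1.5, 0.5, 0.1]
print([answers[sample_with_temperature(logits, 0.8)] for _ in range(10)])
```

Run it twice and you get different sequences from identical inputs; only in the near-zero-temperature regime does "same question, same answer" actually hold.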
Prediction: people will say o3 scoring 25.2% on FrontierMath is nothing, "after all, it is not perfect," conveniently forgetting that these problems are ridiculously hard (the same model scored 87.7% on GPQA Diamond). And some will keep on talking about stochastic parrots... like parrots.
The key lesson from LLMs has been that sufficiently capable token prediction can do more or less anything, and simulated reasoning carries it further still. A bit like how higher-level pattern generators can make already-general biological neural networks do useful tasks.
The recent discussion triggered by @tylercowen about what investment strategies AI doomers ought to exhibit (he thinks they should short the market; everybody else disagrees) got me thinking about my own investment approach.
It is mostly based on uncertainty. Empirically we know people are pretty bad at long-term predictions.
I also suspect I am a bit of a naïf. But here goes:
My approach to the future is to roughly assume 1/3 chance of a great future (AI, transhumanist singularity), 1/3 chance of a normal future (what most people consider reasonable), and 1/3 chance of a disastrous future. It makes sense to hedge between these futures.
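A toy way to see why hedging wins here (all numbers below are invented for illustration; this is not investment advice, nor my actual allocation): score portfolios by expected log growth, Kelly-style, so that any future in which you are wiped out counts as infinitely bad.

```python
import math

# Three equally likely futures, as in the 1/3-1/3-1/3 assumption above.
scenarios = {"great": 1/3, "normal": 1/3, "disastrous": 1/3}

# Hypothetical growth multipliers for three asset types per scenario.
payoffs = {
    "ai_growth":   {"great": 10.0, "normal": 1.2, "disastrous": 0.0},
    "broad_index": {"great": 2.0,  "normal": 1.5, "disastrous": 0.3},
    "safe_assets": {"great": 1.0,  "normal": 1.1, "disastrous": 0.8},
}

def growth(allocation, scenario):
    """Portfolio growth multiplier in one scenario."""
    return sum(w * payoffs[a][scenario] for a, w in allocation.items())

def expected_log_growth(allocation):
    """Kelly-style criterion: a zero-growth scenario drives this to
    -infinity, which is why betting everything on one future loses
    even when that future has the highest mean payoff."""
    total = 0.0
    for s, p in scenarios.items():
        g = growth(allocation, s)
        if g <= 0:
            return float("-inf")
        total += p * math.log(g)
    return total

all_in = {"ai_growth": 1.0}
hedged = {"ai_growth": 1/3, "broad_index": 1/3, "safe_assets": 1/3}

print(expected_log_growth(all_in))   # -inf: wiped out in the disastrous branch
print(expected_log_growth(hedged))   # ~0.23: positive in expectation, nonzero in every branch
```

On plain expected value the all-in bet can look better; the case for hedging is about never landing in the branch where you lose everything.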