VQGAN+CLIP: "Moominvalley by Tove Jansson" (+trending on ArtStation)
+"rendered in Maya", +"Line art"
+"Tom of Finland" (the ultimate Finnish LGBTQ collaboration?) I really love the trees in the background.
That one had a fun instability, shifting from low-saturation line art to fabulous jungle. Likely a slow drift along the image manifold from the constrained initial state to the high-dimensional colourful image subspace.
+"Simon Stålenhag", +"M.C. Escher" (Again, the ultra-Scandinavianness of @simonstalenhag meshes well with Tove Jansson - note the background detail.)
+Anders Zorn, +Carl Larsson (Yes, the background lakes and birch tree continue. And Mrs Fillyjonk is perfectly at home in Carl Larsson's house.)
+John Bauer. Here I was disappointed: it just went for watercolours rather than any moody deep forests. Still, a definite troll feeling.
• • •
At an AI safety workshop in 2011 we considered whom we would trust most with AGI, and had the awkward realization: "The military and intelligence services actually think about security and safety, unlike us academics, companies and politicians." What we missed was the political leadership of the military.
The problem with the Hegseth situation is that it seems to be more about demanding the right kind of obeisance than about understanding what the tech actually is. It also hints that the leadership does not care much about crucial constitutional, legal and ethical red lines.
I am profoundly civilian. Yet professional military people I have met have always struck me as having their head screwed on right in regard to dangerous weapons, tools that can turn on them, and the need for clear chains of command that cannot be suborned.
OK, color me officially impressed: Nano Banana Pro can make good diagrams based on papers. This one can go straight into my presentations.
If we want to really quibble, the doubled labels are a bit infelicitous. The arrows from the sun to elements and replicator factories are unlabeled; presumably they indicate energy flow. But I just asked for a diagram showing the process in the paper, nothing more.
It succeeded much better when there was a straightforward process to be depicted; graphical abstracts were somewhat handwavy and missed the points made.
Yesterday at @archipelacon I presented "Grand Futures: the Interplay Between Science Fiction and Planning Really Far Ahead" - how the science and science fiction of megascale engineering and longtermism have interacted with each other.
I ended up making a big diagram of who seems to have influenced who/what. Here is a current version (soon to be superseded) as PDF: tinyurl.com/megascalemap
It made me feel mildly like a conspiracy theorist linking stuff with yarn, but this is how the history of ideas actually works: smart people read each other and respond to their cultural milieu. The challenge is to find the cool places where people have *not* interacted.
I think exposure to generative AI makes many people aware of the Library of Babel in a way they are not ready for. It is one thing to know there is a near-infinite space of possibilities, it is another thing to play around and be led by the latent manifold into the Library.
The thing that makes Borges's story so good is how it erodes meaning by infinity: everything is in there, yet nothing can be found in the noise. AI allows nearly anything to be found, but we become aware that it could have been different.
I meet people who think that getting different answers from an LLM to the same question is profoundly wrong: they assume there can only be one response to the same question. And sure, for some questions the spread of right answers is narrow.
Prediction: people will say o3 scoring 25.2% on FrontierMath is nothing, "after all it is not perfect." Conveniently forgetting that these problems are ridiculously hard (did 87.7% on GPQA Diamond). And some will keep on talking about stochastic parrots... like parrots.
The key lesson from LLMs has been that sufficiently capable token prediction can more or less do anything, and simulated reasoning allows it to carry further. A bit like how higher level pattern generators can make already general biological neural networks do useful tasks.
The recent discussion triggered by @tylercowen about what investment strategies AI doomers ought to exhibit (he thinks they should short the market, everybody else disagrees) got me thinking about my own investment approach.
It is mostly based on uncertainty. Empirically we know people are pretty bad at long-term predictions.
I also suspect I am a bit of a naïf. But here goes:
My approach to the future is to roughly assume 1/3 chance of a great future (AI, transhumanist singularity), 1/3 chance of a normal future (what most people consider reasonable), and 1/3 chance of a disastrous future. It makes sense to hedge between these futures.
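The equal-thirds heuristic above can be sketched as a scenario-weighted expected-payoff calculation. The scenario probabilities come from the thread; the asset classes and payoff multipliers below are purely illustrative assumptions, not anything the author specifies:

```python
# Sketch of hedging across three futures, per the 1/3-1/3-1/3 heuristic.
# The asset classes and payoff multipliers are made-up illustrative numbers.
scenarios = {"great": 1/3, "normal": 1/3, "disastrous": 1/3}

# Hypothetical payoff multiplier of each asset class under each scenario.
payoffs = {
    "equities":  {"great": 3.0, "normal": 1.5, "disastrous": 0.2},
    "cash":      {"great": 1.0, "normal": 1.0, "disastrous": 0.8},
    "live_well": {"great": 1.0, "normal": 1.0, "disastrous": 1.0},
}

def expected_payoff(weights: dict[str, float]) -> float:
    """Scenario-probability-weighted payoff of a portfolio allocation."""
    return sum(
        p * sum(w * payoffs[asset][s] for asset, w in weights.items())
        for s, p in scenarios.items()
    )

def worst_case(weights: dict[str, float]) -> float:
    """Payoff in the least favourable scenario for this allocation."""
    return min(
        sum(w * payoffs[asset][s] for asset, w in weights.items())
        for s in scenarios
    )

even = {"equities": 1/3, "cash": 1/3, "live_well": 1/3}
all_in = {"equities": 1.0, "cash": 0.0, "live_well": 0.0}
print(expected_payoff(even), worst_case(even))      # even split
print(expected_payoff(all_in), worst_case(all_in))  # all-in bet
```

With these made-up numbers the all-in equity bet has the higher expected payoff, but it collapses to 0.2 in the disastrous scenario while the even split never drops below 2/3 - that worst-case cushioning, not expected value, is what makes hedging between futures sensible under deep uncertainty.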