VQGAN+CLIP: "Moominvalley by Tove Jansson" (+trending on ArtStation)
+"rendered in Maya", +"Line art"
+"Tom of Finland" (the ultimate Finnish LGBTQ collaboration?) I really love the trees in the background.
That one had a fun instability, shifting from low-saturation line art to a fabulous jungle. Likely a slow drift along the image manifold, from the constrained initial state toward the high-dimensional, colourful image subspace.
+"Simon Stålenhag", +"M.C. Escher" (Again, the ultra-Scandinavianness of @simonstalenhag meshes well with Tove Jansson - note the background detail.)
+"Anders Zorn", +"Carl Larsson" (Yes, the background lakes and birch trees continue. And Mrs Fillyjonk is perfectly at home in Carl Larsson's house.)
+"John Bauer". Here I was disappointed: it just went for watercolours rather than any moody deep forests. Still, a definite troll feeling.
The recent discussion triggered by @tylercowen about what investment strategies AI doomers ought to follow (he thinks they should short the market; everybody else disagrees) got me thinking about my own investment approach.
It is mostly based on uncertainty. Empirically we know people are pretty bad at long-term predictions.
I also suspect I am a bit of a naïf. But here goes:
My approach to the future is to roughly assume 1/3 chance of a great future (AI, transhumanist singularity), 1/3 chance of a normal future (what most people consider reasonable), and 1/3 chance of a disastrous future. It makes sense to hedge between these futures.
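The hedging logic above can be sketched as a small expected-value calculation. A minimal sketch, assuming three equally weighted futures; all asset names, weights, and payoff multipliers below are hypothetical illustrations, not actual forecasts or allocations:

```python
# Sketch: expected-value hedging across three equally weighted futures.
# All payoff numbers are hypothetical illustrations, not real forecasts.

scenarios = {"great": 1 / 3, "normal": 1 / 3, "disaster": 1 / 3}

# Hypothetical payoff multipliers of each asset class in each future.
payoffs = {
    "tech_equities": {"great": 10.0, "normal": 1.5, "disaster": 0.0},
    "broad_index":   {"great": 2.0,  "normal": 1.3, "disaster": 0.2},
    "real_assets":   {"great": 1.0,  "normal": 1.1, "disaster": 0.6},
}

# A hypothetical hedged allocation summing to 1.
portfolio = {"tech_equities": 0.3, "broad_index": 0.4, "real_assets": 0.3}

# Expected payoff = sum over assets and scenarios of
# weight * scenario probability * payoff multiplier.
expected = sum(
    weight * scenarios[s] * payoffs[asset][s]
    for asset, weight in portfolio.items()
    for s in scenarios
)
print(f"Expected payoff multiplier: {expected:.2f}")
```

The point is not the specific numbers but the shape of the decision: an allocation that looks mediocre conditional on any single future can dominate once you average over all three.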
Yesterday in our very friendly Biennale discussion professor Camil Ungureanu made a series of criticisms of how transhumanist visions could go wrong practically and morally. I despaired at responding to each of them, and then realized that there is a solution in going meta.
(No, this was not a Gish gallop - all points were relevant issues worth talking about! But time was limited...) en.wikipedia.org/wiki/Gish_gall…
Basically, "tech X could lead to A, tech Y seems to lead to B, Z..." is all about whether technologies have downsides. Which of course they do. But they also may be worth it.
My main takeaway is that the richest do not dominate the economy as much as many assume, nor is there any post WWII trend towards them becoming much more powerful relative to the state.
Two days ago I (like most people) had never heard of SVB. That we tend to discover systemically essential points of failure by them failing is deeply disturbing, especially if we want to make a more resilient world.
In 1993 the world discovered the hard way that a speciality resin used to affix integrated circuits was mostly produced in a single chemical plant. An explosion at that plant destroyed both production and stockpiles. apnews.com/article/8fe292…
Something similar happened in January when the main US supplier of potassium permanganate had a fire, causing big problems for water treatment plants. cen.acs.org/environment/wa…
Saying LLMs are just glorified autocomplete forgets that most human thought, action and speech is also glorified autocomplete.
Maybe the fraction that isn't autocomplete is the truly valuable part, but I suspect it is not a major fraction of everyday useful action.
I am pretty serious about this: the basal ganglia's action pattern generators appear to be selected by a softmax-like process weighted by cortical stimulus fit. They are built and updated by reinforcement learning rather than backprop, but it is very much a stimulus-response chain. med.libretexts.org/Bookshelves/Ph…
This extends to cognitive action selection: much of our thinking is also the generation of tokens in sequence, shaped by other active tokens in the frontoparietal working-memory/attention system.
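A minimal sketch of the selection scheme described above, assuming a textbook softmax over action values with a simple reinforcement-style value update (the action names, values, and learning rate are illustrative, not a model of actual basal ganglia circuitry):

```python
import math
import random

# Sketch: softmax action selection. Action "values" (standing in for
# cortical stimulus fit) are turned into selection probabilities, and the
# chosen action's value is nudged toward a reward signal (an RL-style
# update, not backprop). All numbers here are illustrative only.

def softmax(values, temperature=1.0):
    # Subtract the max for numerical stability before exponentiating.
    m = max(values)
    exps = [math.exp((v - m) / temperature) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def select_action(values, rng):
    # Sample an action index in proportion to its softmax probability.
    probs = softmax(values)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

def rl_update(values, action, reward, lr=0.1):
    # Move the selected action's value toward the received reward.
    values[action] += lr * (reward - values[action])

values = [0.2, 0.5, 0.1]      # fit of three candidate action patterns
rng = random.Random(0)
chosen = select_action(values, rng)
rl_update(values, chosen, reward=1.0)
```

The temperature parameter controls how sharply selection favours the best-fitting pattern: low temperature approaches winner-take-all, high temperature approaches random exploration.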
Forwarding this on behalf of Nick Bostrom since he’s not active on Twitter. He wants to explain his views and publicly apologize for an old email that somebody might put out to damage him. nickbostrom.com/oldemail.pdf
The context is a thread on a mailing list in 1995 talking about offensive communication styles where he did the classic freshman philosophy student trick of writing something deliberately offensive to make a point. It didn’t turn out well (it never does): he promptly apologized.
I can see why he would want to retrospectively apologize for the whole thing. It actually does not represent his views and behavior as I have seen them over the 25 years I have known him.