Got the idea of using sentences from @qntm's "Fine Structure" qntm.org/structure as input to VQGAN+CLIP. "It's like billion-voice music. The cities here are woven from constantly singing superstrings."
"A skyscraper whose ground floor is a human being but every other floor is filled with oozing alien organs and weird multidimensional sensors and wriggling feely things scraping against the metaphorical glass."
"'Oul' is the closest approximation in human language of the name of a cosmic eighty-plus-six-dimensional hyperweapon which fell out of the control of its creators."
"An automated network of space stations distributed over an oblate hemihyperspheroid of 4-space centred on Earth +1, eight light years in diameter and fourteen universes tall."
"There are pan-stellar civilisations. There are pan-universal civilisations. There are uplifted humanities crawling up the pillars of the Structure towards Upsilon layer, for whom Multiverse One was just the cradle."
Prediction: people will say o3 scoring 25.2% on FrontierMath is nothing, "after all it is not perfect." Conveniently forgetting that these problems are ridiculously hard (did 87.7% on GPQA Diamond). And some will keep on talking about stochastic parrots... like parrots.
The key lesson from LLMs has been that sufficiently capable token prediction can more or less do anything, and simulated reasoning allows it to carry further. A bit like how higher level pattern generators can make already general biological neural networks do useful tasks.
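To make "sufficiently capable token prediction" concrete, here is a toy autoregressive sampler. Everything in it (the corpus, the bigram count table) is a made-up illustration; a real LLM replaces the count table with a learned neural network, but the loop of predicting one token and feeding it back in as context is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": bigram counts from a tiny corpus.
# Illustrative only -- real models learn these statistics at vast scale.
corpus = "the cat sat on the mat the cat ate the rat".split()
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def next_token(token):
    """Sample the next token in proportion to observed bigram counts."""
    options = counts.get(token)
    if not options:
        return None
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

# Autoregressive generation: each prediction becomes the next context.
seq = ["the"]
for _ in range(5):
    nxt = next_token(seq[-1])
    if nxt is None:
        break
    seq.append(nxt)
print(" ".join(seq))
```

"Simulated reasoning" in this picture is just more of the same loop: the model emits intermediate tokens (a chain of thought) that then condition its later predictions.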
The recent discussion triggered by @tylercowen about what investment strategies AI doomers ought to exhibit (he thinks they should short the market, everybody else disagrees) got me thinking about my own investment approach.
It is mostly based on uncertainty. Empirically we know people are pretty bad at long-term predictions.
I also suspect I am a bit of a naïf. But here goes:
My approach to the future is to roughly assume 1/3 chance of a great future (AI, transhumanist singularity), 1/3 chance of a normal future (what most people consider reasonable), and 1/3 chance of a disastrous future. It makes sense to hedge between these futures.
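The hedging arithmetic can be sketched numerically. The asset buckets and scenario payoffs below are entirely hypothetical placeholders (my invention, not from the thread), just to show how an equal-weight mix trades off performance across the three assumed futures; nothing here is investment advice.

```python
# Three futures, each weighted 1/3, as in the thread.
scenarios = {"great": 1/3, "normal": 1/3, "disaster": 1/3}

# Hypothetical multiplicative payoffs of each asset bucket per scenario.
payoffs = {
    "growth_equities": {"great": 3.0, "normal": 1.2, "disaster": 0.2},
    "broad_index":     {"great": 1.5, "normal": 1.1, "disaster": 0.5},
    "hard_assets":     {"great": 1.0, "normal": 1.0, "disaster": 0.9},
}

def expected_payoff(weights):
    """Probability-weighted payoff of a portfolio mix across scenarios."""
    return sum(
        p * sum(w * payoffs[asset][s] for asset, w in weights.items())
        for s, p in scenarios.items()
    )

# An equal split hedges: no scenario wipes you out, none is fully captured.
mixed = {"growth_equities": 1/3, "broad_index": 1/3, "hard_assets": 1/3}
print(round(expected_payoff(mixed), 3))
```

The point of the hedge is visible in the numbers: the mixed portfolio gives up some upside in the great future in exchange for not collapsing in the disastrous one.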
Yesterday in our very friendly Biennale discussion professor Camil Ungureanu made a series of criticisms of how transhumanist visions could go wrong practically and morally. I despaired at responding to each of them, and then realized that there is a solution in going meta.
(No, this was not a Gish gallop - all points were relevant issues worth talking about! But time was limited...) en.wikipedia.org/wiki/Gish_gall…
Basically, "tech X could lead to A, tech Y seems to lead to B, Z..." is all about whether technologies have downsides. Which of course they do. But they also may be worth it.
My main takeaway is that the richest do not dominate the economy as much as many assume, nor has there been any post-WWII trend towards them becoming much more powerful relative to the state.
Two days ago I (like most people) had never heard of SVB. That we tend to discover systemically essential points of failure by them failing is deeply disturbing, especially if we want to make a more resilient world.
In 1993 the world discovered the hard way that a speciality resin used to affix integrated circuits was mostly produced in a single chemical plant, when an explosion there destroyed both production and stockpiles. apnews.com/article/8fe292…
Something similar happened in January when the main US supplier of potassium permanganate had a fire, causing big problems for water treatment plants. cen.acs.org/environment/wa…
Saying LLMs are just glorified autocomplete forgets that most human thought, action and speech is also glorified autocomplete.
Maybe the fraction that isn't autocomplete is the truly valuable part, but I suspect it is not a major fraction of everyday useful action.
I am pretty serious about this: the basal ganglia action pattern generators look like they are selected by softmax influenced by cortical stimulus fit. They get made/updated by RL rather than backprop, but very much a stimulus-response chain. med.libretexts.org/Bookshelves/Ph…
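The softmax-selection idea can be sketched as a toy: competing action patterns get fit scores (the numbers below are invented stand-ins for cortical input), and softmax turns those scores into selection probabilities, with a temperature knob controlling how winner-take-all the competition is.

```python
import math
import random

def softmax(scores, temperature=1.0):
    """Turn raw fit scores into selection probabilities."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical competing action patterns and their cortical fit scores.
actions = ["reach", "withdraw", "wait"]
fit = [2.0, 0.5, 1.0]

probs = softmax(fit)
choice = random.choices(actions, weights=probs)[0]
print(dict(zip(actions, (round(p, 3) for p in probs))), choice)
```

Lower temperature makes the best-fitting pattern dominate almost deterministically; higher temperature keeps the competition soft, which is one way reinforcement learning can still explore alternatives.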
This involves cognitive action selection: much of our thinking is also generating tokens in sequence, affected by other active tokens in the frontal-parietal working memory/attention system.