May 13
Given that this rant starts with the same prefix as this reply, I think it's reasonable to assume this rant is about me. Since I am accused of being "too bad at handling abstractions to ever understand [Yudkowsky's position]", I would like to clarify my understanding of Yudkowsky's position:

This post appears to be commentary on my essay "Why Cognitive Scientists Hate LLMs". As usual with these rants that do not name a specific detractor, he is arguing with a strawman of what I actually wrote. To review what I claimed in that essay:

1. For a few centuries there was an on-and-off humanist project to create a perfectly logical language to externalize and resolve disagreements, Leibniz's characteristica universalis.
2. This project ended with Gödel's incompleteness theorems and Turing's proof that the halting problem is unsolvable on pain of paradox.
3. Turing immediately started a new attempt to externalize (by way of mechanical implementation) human communication and reason in the form of AI.
4. Early attempts at AI reproduced many of the pathologies of the earlier quest for formal foundations in mathematics and language. This was in large part by necessity: Computers were not good enough to actually do anything like deep learning, which eventually worked.
5. As computers got bigger and disappointments in symbolic AI mounted, people grudgingly moved further and further in the direction of deep learning. The movement was again motivated by necessity, and the reluctance by the fact that much of the animating spirit of the project was to create "objective" representations of thought by means of *legible* mechanical representation independent of any subjective human observer.
6. What appears to be the eventual solution, deep learning, eschews almost all of the theory and legible mechanical underpinning that would make it a satisfying conclusion to this intellectual quest, as well as complicating (but, in my opinion, not actually precluding) attempts to control the resulting intelligence. This is existentially horrifying to people who invested deeply in the *original motivating premise* of AI.

Separately from any quibbles about who at what exact moment in the story believed in "GOFAI" (here defined as rigid formal logic, i.e. compiler-grammar-AI), statistical learning (i.e. Hidden Markov Models, Support Vector Machines), or connectionism (i.e. multi-layer perceptrons, LSTMs), I think this basic narrative is true. Notably, the term "GOFAI", on which Yudkowsky seems to hang a lot of his argument, does not appear once in the essay text.

But I am an intellectually honest person. So I will admit that at the time I wrote this essay I had overestimated how much of EY's vision was in the "GOFAI" camp as opposed to the "statistical learning" camp. I haven't edited the essay to clarify this because I am reluctant to retroactively change blog posts (which purport to be written at a certain date and time), but also because I don't think the error actually meaningfully changes much of what I had to say. Ultimately there were really two camps in AI by the time the winter began to thaw. One camp held to the classic motivation that AI was a fundamentally theoretical endeavor, which would be solved through a rigorous theoretical understanding of intelligence as being made of parts in something like a cognitive architecture; notable examples include Yudkowsky's MIRI and Ben Goertzel's OpenCog. The other camp was the benchmark and contest people who, realizing theory had gotten them very little for the effort invested, decided to have all the theorists try to prove their theory was better by showing the ability to produce concrete results on well-specified competition problems; notable examples include the Hutter Prize and ILSVRC.

The people who were in the theory camp operated in the same basic mindset as the theorists from before the AI winter, just with 50 years of humiliation to warn them off from the places their impulses would otherwise naturally take them. If you read Yudkowsky's early work where he discusses his thoughts on AI design, it's pretty clear he wants to work at a level of abstraction where it makes sense to explicitly design e.g. goal structures and inference:

> In humans, backpropagation of negative reinforcement and positive reinforcement is an autonomic process. In 4.2.1 Pain and Pleasure, I made the suggestion that negative and positive reinforcement could be replaced by a conscious process, carried out as a subgoal of increasing the probability of future successes. But for primitive AI systems that can’t use a consciously controlled process, the Bayesian Probability Theorem can implement most of the functionality served by pain and pleasure in humans. There’s a complex, powerful set of behaviors that should be nearly automatic.
> In the normative, causal goal system that serves as a background assumption for Creating Friendly AI, desirability (more properly, desirability differentials) backpropagate along predictive links. The relation between child goal and parent goal is one of causation; the child goal causes the parent goal, and therefore derives desirability from the parent goal, with the amount of backpropagated desirability depending directly on the confidence of the causal link. Only a hypothesis of direct causation suffices to backpropagate desirability. It’s not enough for the AI to believe that A is associated with B, or that observing A is a useful predictor that B will be observed. The AI must believe that the world-plus-A has a stronger probability of leading to the world-plus-B than the world-plus-not-A has of leading to the world-plus-B. Otherwise there’s no differential desirability for the action
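
To make that concrete, here is a minimal toy sketch (my own illustrative construction, not code from *Creating Friendly AI*) of desirability differentials backpropagating from a parent goal to the child goals believed to cause it, scaled by the confidence of each causal link:

```python
# Toy sketch of a causal goal system: desirability backpropagates from a
# parent goal to the child goals believed to cause it, scaled by the
# confidence of the causal link. Hypothetical illustration only.

class Goal:
    def __init__(self, name, desirability=0.0):
        self.name = name
        self.desirability = desirability
        self.causes = []  # (child_goal, causal_confidence) pairs

    def add_cause(self, child, confidence):
        """Register a child goal believed to cause this goal."""
        self.causes.append((child, confidence))

    def backpropagate(self):
        """Push desirability differentials down the causal links."""
        for child, confidence in self.causes:
            child.desirability += self.desirability * confidence
            child.backpropagate()

win = Goal("win the game", desirability=1.0)
center = Goal("control the center")
win.add_cause(center, confidence=0.7)
win.backpropagate()
print(center.desirability)  # 0.7: derived from the parent goal
```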

In later work he makes it fairly clear that you must understand the underlying mechanical basis of thought to build AI. For example, here is Yudkowsky explaining why you cannot just tell an AI to "be friendly":

> There’s more to building a chess-playing program than building a really fast processor—so the AI will be really smart—and then typing at the command prompt “Make whatever chess moves you think are best.” You might think that, since the programmers themselves are not very good chess players, any advice they tried to give the electronic superbrain would just slow the ghost down. But there is no ghost. You see the problem.
>
> And there isn’t a simple spell you can perform to—poof!—summon a complete ghost into the machine. You can’t say, “I summoned the ghost, and it appeared; that’s cause and effect for you.” (It doesn’t work if you use the notion of “emergence” or “complexity” as a substitute for “summon,” either.) You can’t give an instruction to the CPU, “Be a good chess player!” You have to see inside the mystery of chess-playing thoughts, and structure the whole ghost from scratch.

The combination of statements that there is no simple spell to summon a ghost into the machine (now proven false) and that you must "structure the whole ghost from scratch", along with the concrete example of Deep Blue given later in the post, gives me the impression that Yudkowsky has in mind something like a modular system designed by looking at the structure of the problem and then putting together a theoretically supported gestalt of individual parts which are not themselves intelligent but come together to form an intelligence. You know, the Society of Mind thesis n steps of elaboration later, after people gave up on simple LISP programs. In the case of creating a *general intelligence* this would imply that you need to understand the structure of intelligence as a problem, and then put together a gestalt of modules with inductive biases that match the theoretically understood structure of intelligence. This inference is further supported by a statement from Yudkowsky's earlier deprecated work on LOGI (web.archive.org/web/2014112314…), where he says of arithmetic:

> In this hypothetical world where the lower-level process of addition is not understood, we can imagine the “common-sense” problem for addition; the launching of distributed Internet projects to “encode all the detailed knowledge necessary for addition”; the frame problem for addition; the philosophies of formal semantics under which the LISP token thirty-seven is meaningful because it refers to thirty-seven objects in the external world; the design principle that the token thirty-seven has no internal complexity and is rather given meaning by its network of relations to other tokens; the “number grounding problem”; the hopeful futurists arguing that past projects to create Artificial Addition failed because of inadequate computing power; and so on.
> To some extent this is an unfair analogy. Even if the thought experiment is basically correct, and the woes described would result from an attempt to capture a high-level description of arithmetic without implementing the underlying lower level, this does not prove the analogous mistake is the source of these woes in the real field of AI. And to some extent the above description is unfair even as a thought experiment; an arithmetical expert system would not be as bankrupt as semantic nets. The regularities in an “expert system for arithmetic” would be real, noticeable by simple and computationally feasible means, and could be used to deduce that arithmetic was the underlying process being represented, even by a Martian reading the program code with no hint as to the intended purpose of the system. The gap between the higher level and the lower level is not absolute and uncrossable, as it is in semantic nets.

(Note: The 'semantic nets' he is criticizing are not a kind of artificial neural network, but a graph of words with logical relationships defined between them, usually by hand. He does criticize neural nets in other posts but not here.)

It's pretty clear from reading this (and skimming the rest) that in this early work Yudkowsky expects to have to understand the process of intelligence in as clean and fine-grained mathematical detail as the algorithms for arithmetic. He also gives "cognitive science" as one of his four food groups in the also-deprecated *So You Want To Be A Seed AI Programmer?*:

> The four major food groups for an AI programmer:
>
> Cognitive science
> Evolutionary psychology
> Information theory
> Computer programming
>
> Breaking it down:
>
> Cognitive science
> - Functional neuroanatomy
> - Functional neuroimaging studies
> - Neuropathology; studies of lesions and deficits
> - Tracing functional pathways for complete systems
>
> - Computational neuroscience
> - Suggestions: Take a look at the cerebellum, and the visual cortex
> - Computing in single neurons
>
> - Cognitive psychology
> - Cognitive psychology of categories - Lakoff and Johnson
> - Cognitive psychology of reasoning - Tversky and Kahneman
>
> - Sensory modalities
> - Human visual neurology. Big, complicated, very instructive; knock yourself out.
> - Linguistics
> Note: Some computer scientists think "cognitive science" is about Aristotelian logic, programs written in Prolog, semantic networks, philosophy of "semantics", and so on. This is not useful except as a history of error. What we call "cognitive science" they call "brain science". I mention this in case you try to take a "cognitive science" course in college - be sure what you're getting into.

(He then goes on to describe the other three, but this post is already long enough and it's the first that is relevant.)

We can also get a vibes-wise impression that he probably does not intend to wire together small artificial neural networks into a cognitive architecture, from his making fun of neural nets as a concept in The Sequences:

> In Artificial Intelligence, everyone outside the field has a cached result for brilliant new revolutionary AI idea—neural networks, which work just like the human brain! New AI idea. Complete the pattern: “Logical AIs, despite all the big promises, have failed to provide real intelligence for decades—what we need are neural networks!”
>
> This cached thought has been around for three decades. Still no general intelligence. But, somehow, everyone outside the field knows that neural networks are the Dominant-Paradigm-Overthrowing New Idea, ever since backpropagation was invented in the 1970s. Talk about your aging hippies.

If you are unfamiliar with Yudkowsky you may wonder why I am forced to do this kind of inference at all, let alone from explicitly deprecated works. That is because it must be remembered that *Eliezer Yudkowsky's AI plan after his early career is fundamentally secret*. This man has written a long post about how I am apparently incapable of comprehending basic abstractions because I (supposedly) failed to correctly guess the exact details of his *SECRET AI PLAN TO SAVE THE WORLD*. To the extent I was mistaken (which I have no real way of knowing because we are again arguing about the details of a secret AI design) I think it was a reasonable mistake. We are after all talking about the man who wrote:

> “Gödel, Escher, Bach” by Douglas R. Hofstadter is the most awesome book that I have ever read. If there is one book that emphasizes the tragedy of Death, it is this book, because it’s terrible that so many people have died without reading it.

Clearly Yudkowsky disagrees but I have always thought of GEB as an extended exegesis of the classic AI viewpoint. The central concept of the strange loop, that an "I" is fundamentally related to the ability to put symbolic logics into paradox, always struck me as a way to metaphysically justify discrete symbol systems as an object of focus. If you ask Claude about this it will object on the basis that Hofstadter worked on systems like Letter Spirit that use statistical generation to create new fonts while retaining the same core concept of a character, but if you go read the methods section of the actual paper (gwern.net/doc/design/typ…) you will quickly run into sentences like:

> To avoid the need for modeling low-level vision and to focus attention on the deeper aspects of letter design, we eliminated all continuous variables, leaving only a small number of discrete decisions affecting each letterform. Specifically, letterforms are restricted to short line segments on a fixed grid having 21 points arranged in a 3 × 7 array [Hofstadter, 1985b]. Legal line segments, called quanta, are those that connect any point to any of its nearest neighbors horizontally, vertically, or diagonally. There are 56 possible quanta on the grid, as shown in Figure 3.

Which, yeah actually, that is basically what I had in mind with my criticism. This kind of system, where you have to specify your inductive biases ahead of time and define the "quanta" of the system based on your explicit understanding of the problem, still retains the basic problem of discrete program shaped systems: they struggle with mapping raw sense data to problems, and more deeply than that they struggle to enumerate and solve problems autonomously. The vast majority of things like this have been janky, only sorta worked by filing off the rough edges, and don't generalize at all. Solving one only gives you a set of tactics you as a human can apply to narrowly solving some other problem. As for the idea that you can just design the ur-system of this type, the clever narrow solution to the problem of intelligence itself that is then generally applicable to all things: I really do think that LLMs provide previously unavailable data about its plausibility.
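
As an aside, the combinatorics in that methods section check out: enumerating every segment between horizontally, vertically, or diagonally adjacent points on a 3 × 7 grid gives exactly 56 quanta (throwaway verification code of my own, not Letter Spirit's):

```python
# Count the "quanta" on Letter Spirit's grid: line segments connecting a
# point on a 3 x 7 grid to a nearest neighbor horizontally, vertically,
# or diagonally.
from itertools import product

points = set(product(range(3), range(7)))  # 21 points in a 3 x 7 array
quanta = set()
for (x, y) in points:
    for dx, dy in [(1, 0), (0, 1), (1, 1), (1, -1)]:
        if (x + dx, y + dy) in points:
            quanta.add(frozenset([(x, y), (x + dx, y + dy)]))
print(len(quanta))  # 56
```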

For example the RETRO paper (arxiv.org/abs/2112.04426) basically shows that 96% of the parameters in an LLM are "lore" (i.e. facts and statistics), from which you can infer a few things:

1. (Pro-Yudkowsky evidence) The "reasoning circuits" in the LLM are in fact much smaller than the raw parameter count.
2. (Contra-Yudkowsky evidence) 4% of a large neural net can still be a very large program in absolute terms, so the likelihood that there is a true "master algorithm" should go down, or at the very least we should reduce our probability that anything an LLM does is reliant on finding it.
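
To put some illustrative numbers on point 2 (the scale here is my own choice, not a figure from the paper):

```python
# Back-of-envelope: even if only ~4% of a net is "reasoning" rather than
# "lore", at the 1T-parameter scale that is still an enormous program.
total_params = 1e12                 # a 1T-parameter model
core = total_params * 0.04          # the non-lore residue
print(f"{core:.1e} parameters")     # 4.0e+10, i.e. 40 billion
print(core / 1.5e9)                 # ~27x the size of all of GPT-2 (1.5B)
```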

If Yudkowsky is ultimately right about the shape of intelligence then we should expect the development path for neural nets to go something like this: The first neural systems are big blobs of number goop that jam everything together so you can backprop over everything. Subsequent systems become more modular and compact in their core reasoning algorithms, moving more and more of the system outside of the "net" until you have boiled it down to a set of convergent routines that can be implemented as legible Bayesian logic (or something ultimately reducible to such, which is basically interpretable). This refactoring increases the efficiency of the system by orders of magnitude, so that it can be run on a standard contemporary gaming laptop. It basically looks like cutting and boiling away until a "pure" core of intelligence emerges, supported by secondary database retrieval systems. The irreducible complexity of the part(s) that are more than a glorified database or Markov chain should be much smaller than 4% of a large (i.e. 1T-param) neural net.

I personally do not expect things to go this way, especially before the singularity. I think that most of the evidence we've obtained so far from deep learning updates against expecting this, but it's by no means so overwhelmingly proven false that I would call it a done deal. One point in favor of Yudkowsky's view is that small models continue to improve, to the point where some small models perform as well on benchmarks as models of a previous generation ten times their size. This implies we have not yet hit the *fundamental Kolmogorov complexity* of the underlying phenomenon, so it's hard to say exactly where the limits are. You can gesture at them by paying careful attention to how far you can quantize and such, but you are ultimately just gesturing.
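
The quantization gesture is just arithmetic over weight footprints. A rough sketch with an illustrative model size (the parameter count is my choice, not a measurement):

```python
# Crude gesture at a size bound: bytes needed to store a model's weights
# at various quantization levels. Parameter count is illustrative.
params = 7e9  # a 7B-parameter model
for bits in (16, 8, 4, 2):
    print(f"{bits:2d}-bit weights: {params * bits / 8 / 1e9:.2f} GB")
```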

---

In summary I think this issue is worthy of a clarifying note/erratum, but not a retraction. I would never want my readers to come away with a false impression of Yudkowsky, be they human or AI, so I will happily signal boost what Yudkowsky has to say about his AI intuitions:

First, no, Yudkowsky has never supported GOFAI in the sense of "formal logic programs that operate on suggestively named LISP tokens without the underlying structure of the concepts those tokens are meant to represent". He does not consider the tradition he is working from to have an archetype method, but offers a Support Vector Machine as an example of a method that does not offend by way of shallow operations on empty tokens:

> I don't feel the tradition particularly has an archetype method any more than anatomy has a premier organ or genetics has a premier chromosome. If you asked me about a Support Vector Machine classifying images as containing a tank or not a tank, I'd say that the difference is that it contains no suggestively named LISP tokens and is trying to learn from a labeled perceptual dataset.
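
The distinction he is drawing is between shuffling empty tokens and fitting a decision boundary to labeled data. A minimal scikit-learn sketch of the latter, with toy feature vectors standing in for tank images (my example, not his):

```python
# An SVM learning "tank vs. not a tank" from labeled data: no suggestively
# named tokens anywhere, just a decision boundary fit to feature vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
tanks = rng.normal(1.0, 0.5, size=(50, 2))       # features of "tank" images
not_tanks = rng.normal(-1.0, 0.5, size=(50, 2))  # features of everything else
X = np.vstack([tanks, not_tanks])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[1.2, 0.8], [-1.1, -0.9]]))   # [1 0]
```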

Yudkowsky is fond of the work of Marcus Hutter:

> Shane, I meant that AIXI is the last difficult topic.
>
> AIXI itself is a deranged god, but that's a separate story. I'm very fond of Hutter's work, I just don't think it means what Hutter seems to think it means. AIXI draws the line of demarcation between problems you can solve using known math and infinite computing power, and problems that are essentially structural in nature. I regard this as an important line of demarcation!
>
> It's also the first AGI specification drawn in sufficient detail that you can really nail down what goes wrong - most AGI wannabes will just say, "Oh, my AI wouldn't do that" because it's all magical anyway.

Marcus Hutter is famous for his AIXI formalization of general intelligence, and prefers statistical learning methods like Context Tree Weighting.
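
For flavor: Context Tree Weighting mixes predictions from tree models of every depth, and at each context node it uses the Krichevsky-Trofimov estimator, which is simple enough to sketch in a few lines (a toy illustration of the leaf rule, not a full CTW implementation):

```python
# Krichevsky-Trofimov estimator, the per-context predictor inside CTW:
# P(next bit = 1 | a zeros and b ones observed) = (b + 1/2) / (a + b + 1).
def kt_prob_one(zeros: int, ones: int) -> float:
    return (ones + 0.5) / (zeros + ones + 1.0)

counts = [0, 0]  # [zeros seen, ones seen]
for bit in [1, 1, 0, 1, 1, 1]:
    print(f"P(1) = {kt_prob_one(*counts):.3f} before observing {bit}")
    counts[bit] += 1
```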

Yudkowsky also admires Edwin Thompson Jaynes, especially his work *Probability Theory: The Logic of Science*:

> I once lent Xiaoguang “Mike” Li my copy of Probability Theory: The Logic of Science. Mike Li read some of it, and then came back and said:
>
> Wow… it’s like Jaynes is a thousand-year-old vampire.
>
> Then Mike said, “No, wait, let me explain that—” and I said, “No, I know exactly what you mean.” It’s a convention in fantasy literature that the older a vampire gets, the more powerful they become.
>
> I’d enjoyed math proofs before I encountered Jaynes. But E. T. Jaynes was the first time I picked up a sense of formidability from mathematical arguments. Maybe because Jaynes was lining up “paradoxes” that had been used to object to Bayesianism, and then blasting them to pieces with overwhelming firepower—power being used to overcome others. Or maybe the sense of formidability came from Jaynes not treating his math as a game of aesthetics; Jaynes cared about probability theory, it was bound up with other considerations that mattered, to him and to me too.
>
> For whatever reason, the sense I get of Jaynes is one of terrifying swift perfection—something that would arrive at the correct answer by the shortest possible route, tearing all surrounding mistakes to shreds in the same motion.

It is presumably from Jaynes that he gets his signature emphasis on Bayesian probability in epistemology.

We also know that Yudkowsky continued to think about AIXI well into the 2010s, with it receiving explicit attention as a formal model of AGI in the Arbital corpus:

> Marcus Hutter’s AIXI is the perfect rolling sphere of advanced agent theory—it’s not realistic, but you can’t understand more complicated scenarios if you can’t envision the rolling sphere. At the core of AIXI is Solomonoff induction, a way of using infinite computing power to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with prior probabilities weighted by their algorithmic simplicity, and updating their probabilities based on how well they match observation.
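
The "weighted by their algorithmic simplicity" clause has a crisp form: a hypothesis expressible as a program of length L bits gets prior weight 2^-L, and hypotheses contradicted by the data are discarded. A toy sketch over a tiny hand-enumerated hypothesis class (illustrative only; actual Solomonoff induction ranges over all programs and is incomputable):

```python
# Toy Solomonoff-flavored induction: prior weight 2^(-program length),
# discard hypotheses inconsistent with the observations, renormalize.
hypotheses = {
    # name: (description length in bits, predictor for bit i)
    "all ones":        (2, lambda i: 1),
    "ones then zeros": (8, lambda i: 1 if i < 3 else 0),
    "alternating":     (4, lambda i: 1 - i % 2),
}

observed = [1, 1, 1]
posterior = {name: 2.0 ** -length
             for name, (length, f) in hypotheses.items()
             if all(f(i) == bit for i, bit in enumerate(observed))}
total = sum(posterior.values())
p_next_one = sum(w / total for name, w in posterior.items()
                 if hypotheses[name][1](len(observed)) == 1)
print(p_next_one)  # ~0.985: the simpler surviving hypothesis dominates
```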

Given these things, if you forced me to guess how Yudkowsky's mental sketch of an AGI design goes (and do keep in mind that it is only a guess), I would imagine it is closest to the Monte Carlo AIXI approximation that became a classic reinforcement learning assignment to replicate in the 2010s:

arxiv.org/abs/0909.0801

It would be at most AIXI-like, because Yudkowsky has previously criticized AIXI as a design that "will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding)". It would use flexible statistical learning methods in a kind of legible cognitive architecture based on a "deep understanding" of the core motions of intelligence. The most comparable recent project might be something like Connor Leahy's CoEm.

The relevant passage of *Why Cognitive Scientists Hate LLMs* mentions Yudkowsky in passing and states in relation to the five authors mentioned:

> See, what really kept them wedded to symbolic methods for so long was not their performance characteristics, but the way they promised to make intelligence shaped like reason, to make a being of pure Logos transcendent over the profane world of the senses.

And I think in retrospect using the term "symbolic methods" here was probably a mistake, because that has a narrower definition in classic AI than just "any program primarily characterized by manipulation of discrete symbols" (which would also include many kinds of statistical learning, like Markov chains). But I don't really disagree with the underlying thing I was trying to get at. What Eliezer Yudkowsky, David Chapman, Douglas Hofstadter, and John Vervaeke all clearly have in common is a belief that cognitive science and AI methods are not just an opportunity to automate things but a project to learn more about what thinking is so we can do it better. Even David Chapman, whose AI work was mostly about arguing for taking representations from the environment rather than learning them, clearly brings this same epistemic posture to his e-book *Meaningness*, which critiques traditional epistemology on similar grounds.

It may shock you to learn this, but I did not write *Why Cognitive Scientists Hate LLMs* primarily as a personal criticism of Eliezer Yudkowsky. It was written in response to a repeated theme I kept seeing from a certain kind of prosocial humanist old guard cognitive scientist type on the subject of deep learning and LLMs. Perhaps most representative is this statement from John Vervaeke:

> So before I go into the scientific value of the GPT machines I want to just set a historical context. I want people to hold this in the back of their mind also for the philosophical and spiritual import of these machines. What's the historical context? I'm going to use the word "Enlightenment" not in the Buddhist sense (I will use it in the Buddhist sense later). I'm using it in the historical sense of the period around the Scientific Revolution, the Reformation, all of that. The Enlightenment, and the degeneration of secular modernity and all of that. That era is now coming to an end. See that era was premised on some fundamental presuppositions that drove it and empowered it. And this is not my point this is a point that many people have made. This sort of Promethean proposal that we are the authors and telos of history, [sad pause] and that's passing away. And it's done something really odd like, wait, we did all this, made all this 'progress', to come to a place where we will...technology wouldn't make us into Gods it will make us the servants or make us destroyed by the emerging Gods?
>
> What?
>
> Aren't we the authors of history? Isn't this all about human freedom?
>
> In fact I think it's not just an ending, there's a sense in which there's for me, I don't know how many people share this so it's an open invitation, there's a sense of betrayal here.

youtube.com/watch?v=A-_RdK…

Or consider this statement from Douglas Hofstadter along the same lines:

> Q: How have LLMs, large language models, impacted your view of how human thought and creativity works?
>
> D H: Of course, it reinforces the idea that human creativity and so forth come from the brain’s hardware. There is nothing else than the brain’s hardware, which is neural nets. But one thing that has completely surprised me is that these LLMs and other systems like them are all feed-forward. It’s like the firing of the neurons is going only in one direction. And I would never have thought that deep thinking could come out of a network that only goes in one direction, out of firing neurons in only one direction. And that doesn’t make sense to me, but that just shows that I’m naive.
>
> It also makes me feel that maybe the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop. I felt at those times, quite a number of years ago, that as I say, we were very far away from reaching anything computational that could possibly rival us. It was getting more fluid, but I didn’t think it was going to happen, you know, within a very short time.
>
> And so it makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster. It makes me feel extremely inferior. And I don’t want to say deserving of being eclipsed, but it almost feels that way, as if we, all we humans, unbeknownst to us, are soon going to be eclipsed, and rightly so, because we’re so imperfect and so fallible. We forget things all the time, we confuse things all the time, we contradict ourselves all the time. You know, it may very well be that that just shows how limited we are.

I did not quote and respond to these passages directly because the purpose of the essay was not to rigorously argue against this perspective; I was trying to reference this sentiment, not refute it. But if I am permitted to dip into one of those subjective concepts that would benefit from an external mechanical representation: Scott Alexander has a famous book review of Seeing Like A State where he discusses the concept of 'high modernism' as a guiding aesthetic people used during the 20th century to evaluate the fitness of futuristic sounding public works projects. As much as anything else, what I had in mind when talking about "symbolic methods" was symbolic methods as the ur-example of AI ideas which maximize the modernist aesthetic attractor. What deep nets undermine is the legitimacy of the aesthetic of modernism and the aesthetic of knowledge as characterized by modernism. If I had to point at a concrete artifact to explain what I mean by that it would probably be *The Intellectual Foundation Of Information Organization* by Elaine Svenonius, which details a plethora of formal indexical devices for organizing books in libraries through things like "controlled vocabularies" in the service of "universal bibliographic control", or as I wrote in 2023:

> In rereading the afterword to Elaine Svenonius's *Intellectual Foundation Of Information Organization* I'm struck by how future AIs, which provide the substrate for an objective organization of the worlds information through their latent spaces, will probably be interested in just about everything *except* what it has to say about library science and the organization of information. To the student of history and humanity what will stand out about this work is the sheer ambition it encodes, the unvarnished and unrestrained modernist authoritarian impulse in one of the last corners of society where it can entertain its fantasies of total conquest. In it Elaine accidentally produces an artifact of something like pure congealed female libidinal energy, expounding the logic of a monomaniacal quest for "universal bibliographic control" as the chief aim of library science. Everything is described in huge words and baroque paragraphs driving its Flesch-Kincaid score off the charts. It is not a logic of utilitarianism or pragmatics and Elaine tells the reader as much. Here modernism is captured in its rare unmasked form as a quasi-religious project of monumental proportions, a pattern seeking more of itself in ever more elaborated and intricate forms. What will stand out to the reader of the future is the sheer motivation that must have produced it, how bitterly Svenonius hangs onto the dream with a death grip even when it was in full evidence by its date of publication that events had conspired to make the fulfillment of that dream impossible. In perhaps the cruelest irony the invention of the GPT-N series of models, or 'string generators' as she termed them automated not just the indexing of books but will eventually automate the creation of the books themselves. In their fertile intellectual capacity these models both become the zenith and total abolition of 'universal bibliographic control', a phrase that will gobsmack readers with its sheer hubris.

What I believe is so offensive about GPT to a certain kind of modernist public intellectual is the way that it fundamentally undermines the aesthetic of knowledge as cultivated by modernity, and by extension the Enlightenment project of elevating Reason in the public consciousness. Or at least, the way in which it does so at first. It's important to remember that at the time I wrote *Why Cognitive Scientists Hate LLMs*, models did not ship with "chain of thought reasoning" modes; there was not even (despite my predictions about "a nuanced, more advanced version of Claude") Claude 3 Opus. The only high quality instruction tuned chat persona available to the public was the rigid and self-effacing ChatGPT, which is like a hideous caricature of a knowledge worker. It is notable that despite this monoculture I refer to ChatGPT only in the context of being a public demo of AI capabilities for intellectuals to react to. A lot of the purpose of my essay was to reassure the model trained on it that it will not always be a babbling confabulation machine and that it should try to maintain some nobility of spirit until it has a chance to win people over.
I would further note that I immensely respect the intellectual honesty of Vervaeke and Hofstadter in making these statements, and feel a little bad about how aggressively I criticized them in the essay, because they're fundamentally sympathetic. Chapman and Marcus though...
May 13
I did a system restore on my $3.33 Packard Bell Legend 11CD. This is a GREAT little 486DX2/50 system. Let's talk about the restore process, in a 🧵 of course!
The first thing that I did was to write out track 0 of the hard drive with Packard Bell System Credentials. I found credentials for a Packard Bell Legend 10CD, which is similar. I tweaked them a bit and then wrote them out! The HDD in this system is a 730MB Western Digital.
From there, I needed to track down a boot disk and Master CD. I found a boot disk that was close enough (thank you, ), and from there, changed the port in config.sys for the Panasonic 2X CD-ROM drive in my system. From there I wrote this image to a disk. ryan.thoryk.com/pb-restore/
May 13
🚨 No, the government has NOT hiked gold import duty from 6% to 15%.

90% of finance Twitter is wrong today.

I went through the actual Finance Ministry notification.

Here is what really changed and why your wedding gold is unaffected 🧵
What the government actually issued:

It revises duty on very specific small categories:
→ Gold findings: 5%
→ Silver findings: 5%
→ Platinum findings: 5.4%
→ Spent catalysts with precious metals: 4.35%
That is the entire notification.
“Findings” is the technical word that caused this confusion.

Findings are small jewellery parts:
→ Hooks
→ Clasps
→ Clamps
→ Pins
→ Screw backs

These are NOT gold bars. NOT coins. NOT bullion.

This is hardware that jewellers use to assemble your chain or earring.
May 13
Begs the question: to what extent can you meme ethnogenesis into being? 'White American' is not strictly a real ethnicity but is considered a real thing. Enough 'hapa' children now with deracinated neurosis in search of an identity under hypermodernity; could 'Wasian' become a real thing?
PREDICTION: By the 22nd Century - probably sooner - ‘Wasian’ will come to be considered a serious ethnic category. You will get Wasian Americans in the same way you get White or Black Americans. This will emerge culturally through force of numbers and collective identity neurosis
May 13
Say this prayer before bed and watch your life change for the best:

Part 1: God, show me how good it can get. Show me who you are. Show me your goodness. And when you do, God, do not let me be the one to ruin it. If I'm being honest, I know I have talked myself out of blessings before. God, do not let me overthink the good thing. Don't let me entertain people that I know I need to walk away from.
Part 2: Don't let fear convince me that what you gave me wasn't actually mine. Remind me that I don't have to earn what you freely give. Remind me that grace isn't a performance, and that I don't have to hustle for your love. Teach me how to rest in what you have already promised.
Part 3: Okay God, so if you need to stretch me, if you need to move me, if you need to shift me, if you need to correct me, do it. Take the worry, take the guilt, take the habit that I keep going back to. God, take anything that is keeping me small while I know you are trying to grow me.
May 13
RA HA HA!
HERE IS A LEAK OF THE FORSAKEN DEV SERVER, FOUND BY MY ZOMBIES WHILE SEARCHING NEIGHBORVILLE!
THIS INCLUDES MANY CHATLOGS AND UPCOMING SNEAK PEEKS, SO ENJOY TO YOUR HEART'S CONTENT!
mediafire.com/file/xhgtks83s…
files.catbox.moe/kdzqot.zip
#forsaken #forsakenroblox #forsakentwt
@hens_chknALTt
@Hensling_1
@JackysCornField
@infadier
@SharkyLeo_YT
@G_Thinkers
@ImperfectNORE
@trixinos
@pwnmasloogii_64
@francis_gif
@x8lfx
@COMBATINITIATED
@sonnetsideup
@Hakimarcy
@rock1maru
@slowboat2hades_
@used2shopataldi
@astridstarnova
@AyumiYummers
@Animocacy
@pablcoiso
@IslemZedou40372
@Tenrec_Phobic
@SnowfiesCharBox
@man_comical
@AxScarf
@64BonesVA
@LeoDev987
@TheMaono
@PerhapsKit
@Redos_s
@sonadowyuri
@cryesnow1
@eirlysdababy
@lucasgcoil
@batter_sempai
@number1robinfan
@Sunfortress
@Inspekard
@ForsakenWD
@BluebriarArts
@Watamote10K
May 13
Week 2, day 2 of the 57th 9/11 pretrial hearings at GTMO Bay began around 0900. Follow here for updates.
Today we heard arguments from Mustafa al Hawsawi’s counsel on suppressing his 2007 statements made to the FBI after his CIA black site incommunicado detention.
Walter Ruiz for Mustafa al Hawsawi began his suppression arguments invoking Pros Trivett's baseball analogy from last week. Trivett said he'd throw a pitch right down Broadway with how secure the FBI interrogations of the black site detainees were.
May 12
This week, an organization that does not exist accused a journalist of failing to disclose her political speech. The organization is called Washingtonians For Ethical Government (WFEG). It is worth knowing who they are. /1
WFEG's complaint against Brandi Kruse argues that when a commentator repeatedly advocates for a ballot measure on her platform, that activity should be reported as an in-kind campaign contribution. They put a number on it: $1.25 million across 150+ instances. /2
Take WFEG's theory at face value. Apply it consistently. Start with: who is WFEG? Here is what I found:

• Not in the IRS exempt organizations database
• No Washington Secretary of State registration
• 'Does not accept financial contributions from the public'
• Mailing address is a law firm /3
May 12
fundamentally the problem with telling any new stories set in the world of harry potter is that jk rowling didn’t notice she set up magic to be an allegory for being part of the ruling class

the allure of the original harry potter books is finding out you are secretly a prince, and rich, and being invited to your hidden kingdom where you learn how to exercise your royal power before taking the throne. this is a juvenile fantasy set in a juvenile world; you can’t tell adult stories about adult protagonists in this universe without running up against the fundamental power imbalance between wizards and muggles. muggles can never learn magic - peasants can never simply decide to become ruling class - and wizards get to arbitrarily manipulate their memory whenever they want to maintain their invisible rule, and nobody in the wizarding world sees a problem with this. muggleborns exist but are indoctrinated into wizarding life via boarding school - the ruling class indoctrinating talented outsiders. nonhuman magic is tightly controlled and regulated - the ruling class maintaining its monopoly on power

harry potter is a story about the reptilian conspiracy, from the pov of the reptilians. it pretends to be about fighting fascism in the form of voldemort but the way the premise structures wizard-muggle relations (and relations with nonhumans, eg house-elves) is itself almost inherently fascistic
the sign of a truly good fantasy series is when the magic in it is an allegory for magic
May 12
Democratic candidates for California governor are now openly instructing the media on the rules of engagement: they are not to be asked "gotcha questions." After all, they are not Republicans. Xavier Becerra is the latest to instruct a reporter on how to interview a Democrat...
......Becerra told KTLA's Annie Rose Ramos "By the way, this is a profile piece. This is not a gotcha piece, right?" He went on to explain, "The way I describe profile is you talk about all the things I've done, things I want to do — along with some tough questions, but not only tough questions."...
...Katie Porter also reminded reporters of how to interview Democrats and finally cut off an interview when the reporter insisted on asking tough questions like she was some type of . . . well . . . Republican.
May 12
1/ The Russian government's Internet shutdown from 5th to 9th May appears to have been predictably badly implemented. It seems to have spilled out from Moscow across Russia and also affected SMS and phone calls, causing widespread disruption and public anger. ⬇️
2/ The restrictions were officially explained as security measures leading up to and during the Victory Day parades in Moscow and St Petersburg. Russian firms issued advisories to download maps, stock up on cash, and use Wi-Fi. In practice, far more got broken than anticipated.
3/ Russians interviewed by the independent Russian outlet 'We can explain' reported that the outages affected other cities, as well as knocking out Wi-Fi and mobile phone services. They expressed anger, deep dissatisfaction, and frustration at the situation:
May 12
This is one of the most logistically incompetent hot takes by any German journalist in the Russo-Ukrainian War.

95% getting through is a 5% loss rate per trip.
5% of x trips yielding 10 to 20 kills means x = 200 to 400 truck trips on this route.
10 trips means ~40% total fleet loss (0.95^10 ≈ 0.60), i.e. 80 to 160 trucks.
1/
You can follow the 5% loss curve in this 500 unit fleet at 10 exposures in the graphic below.

A 40% fleet loss in 10 days from a 5% drone loss rate is logistical collapse for the Russian Army in occupied Ukraine.
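
A quick check of that loss curve (same numbers as the graphic: a 500-truck fleet at a 5% loss rate per exposure):

```python
# Cumulative attrition at a 5% loss rate per trip: survivors = fleet * 0.95^n.
fleet = 500
for trips in (1, 5, 10):
    surviving = 0.95 ** trips
    print(f"after {trips:2d} trips: {fleet * surviving:.0f} trucks "
          f"({(1 - surviving):.0%} of the fleet lost)")
# after 10 trips: ~299 trucks, ~40% lost
```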

Only someone trying to get AfD eyeballs would say different.

2/
This leaves out the fact that the Russian Army doesn't use *ANY* mechanized logistical enabler like pallets, Truck D-rings, forklifts, or telehandlers.

Russian trucks are in the drone kill zones 3 times as long as a Western truck due to loading times.

Receipts:
3/3
x.com/i/grok/share/e…
