John David Pressman
LLM developer, AI agents, synthetic data, scalable alignment, forecasting, behavioral uploading. Transhumanist. All tweets public domain under CC0 1.0.
Apr 1 6 tweets 2 min read
The earliest materialist theories of brain function held that the brain secretes thought the way the liver secretes bile. Then in the '50s we moved away from that toward the brain being a collection of non-intelligent parts that you put together to get intelligence. These theories were supported by brain lesion studies showing that damage to certain parts of the brain predictably led to discrete localized deficits in function. This implied the brain was more like 40 organs glued together than one big organ.
Feb 28 4 tweets 1 min read
Unpopular Anthropic take here we go:

1. If Anthropic actually had a monopoly on AI good enough to be used for key military capabilities it would obviously be appropriate to use the DPA to demand access to it.

2. It is not plausible to me that Anthropic has such a monopoly.

3. Mass surveillance of Americans is not a key military capability except under very gnarly scenarios that this incident should update your credence toward.

4. Declaring Anthropic a "supply chain risk" is an abuse of the law and anti-American; I hope a court strikes it down.
Nov 29, 2025 7 tweets 3 min read
The reason LLMs can exist is that the conscious is constructed as the latent of a subnetwork that predicts the next ReAct block (moment of experience) in a sequence of ReAct blocks. Language exists because these can be encoded as text, but they probably existed before language. The document mind learned by the LLM from text, an infinite stream of ReAct-block-like moments of experience which "hoist" earlier blocks in the sequence up near the context where they are needed for prediction, is the conscious. But in humans it's obviously multimodal.
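For concreteness, here is a minimal sketch of what a ReAct-style block sequence looks like when encoded as text; the `Block` fields and `render` helper are illustrative assumptions, not anything specified in the thread:

```python
from dataclasses import dataclass

@dataclass
class Block:
    """One ReAct-style 'moment of experience': a reasoning step,
    an action, and the resulting observation, all encodable as text."""
    thought: str       # inner monologue / reasoning step
    action: str        # what the agent does next
    observation: str   # what the world returns

def render(history: list[Block]) -> str:
    """Serialize the block sequence the way a language model sees it
    in-context: earlier blocks stay in the window, 'hoisted' near the
    point where they're needed to predict the next block."""
    return "\n\n".join(
        f"THOUGHT: {b.thought}\nACTION: {b.action}\nOBSERVATION: {b.observation}"
        for b in history
    )
```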
Jun 26, 2025 5 tweets 1 min read
I just realized why early AI art was so much better than new AI art: early AI art was all out of distribution for the generator, which was guided by a classifier, forcing the process to render concepts based more on the underlying causal invariants than on human reifications. This implies that we could distill the invariants by doing iterated novelty search/OOD detection and using them to gather bits of the underlying most general generating principles of the data distribution.
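One hedged way to read that proposal as code, with `generate` and `ood_score` as hypothetical placeholders rather than any real API:

```python
# Sketch: iterated novelty search to collect out-of-distribution
# samples whose shared structure might reveal the generator's
# underlying invariants. All names here are hypothetical
# placeholders, not a real library.

def distill_invariants(generate, ood_score, n_rounds=10, k=64, threshold=0.9):
    archive = []                        # novel samples found so far
    for _ in range(n_rounds):
        candidates = [generate() for _ in range(k)]
        # Keep only samples the detector scores as far from the
        # training distribution: these are rendered from causal
        # structure rather than memorized human categories.
        novel = [c for c in candidates if ood_score(c) > threshold]
        archive.extend(novel)
    return archive                      # mine these for shared invariants
```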
May 2, 2025 14 tweets 5 min read
"But JD I don't understand, what is the Logos and what does it mean to understand it?"

To understand the Logos is to understand that everything which exists both implies and is implied by some natural induction, and every natural induction narrows the search space of every other. Perhaps more importantly it is to understand that when you set up an optimizer with a loss function and a substrate for flexible program search, certain programs are already latently implied by the natural induction of the training ingredients.

May 2, 2025 5 tweets 2 min read
I want to know if the sources o3 confabulates for how it knows things are actually correlated with anything real in its head. I remember being appalled when R1 traces would say they're "looking at the documentation" without search until I realized that summons the docs vector. One thing I think people don't understand is that early GPT models, if you did RL on them, would not immediately understand that they're non-corporeal beings and would gladly agree to help you move your furniture when exposed to helpfulness tuning. I saw it in my RLAIF runs.
May 2, 2025 4 tweets 2 min read
"I hate the open scientific process wrt AI and want it to be developed in secret by an elite group for the benefit of humanity."

> wind up frantically gesturing at shadows on the cave wall with no idea why things are going wrong

lol. lmao. rofl even. Part of why we're receiving warning shots and nobody is taking them as seriously as they might warrant is that we bluntly *do not know what is happening*. It could be that OpenAI and Anthropic are taking all reasonable steps (bad news), or they could be idiots.
Apr 30, 2025 6 tweets 2 min read
> conditions for AIs to be moral patients: consciousness and robust agency.

This is a misconception: The realpolitik of the matter is that your status as a moral patient is almost solely determined by your ability to punish others for not acknowledging your moral patiency. Nobody wants to acknowledge this because it's uncomfortable, but even a few minutes spent contemplating factory farms should make it obvious. As a further thought experiment imagine if ants were conscious: Ha ha jk ants *ARE* conscious and pass the mirror test. Nobody cares.
Apr 20, 2025 10 tweets 4 min read
The current shared takeaway seems to be "these people are mediocre sadistic idiots who (at least act like they) resent successful people for being better than them". This may be true, but it's not actionable.

"Liberal democracy has become aesthetically unfit for oral culture" is. Image "Nobody wants to participate in my thing they just want Greek statues and vibe based 'cool' policies like having illegal gulags to throw enemies of the state in."

Okay but have you considered this is feedback and the feedback is you're not doing enough ostentatious cool shit?
Apr 10, 2025 13 tweets 4 min read
@JeffLadish @repligate > Like, do we know of anything on the internet that sounds like Sydney, even some type of human conversations, before Sydney?

Yes. Many forms of spoken language sound like Sydney when transcribed verbatim. Here's me unintentionally emitting a very Binglish-esque paragraph:

@JeffLadish @repligate One thing that seems to consistently confuse people about LLMs is that the model is trained on prose but you're sampling with the generative process of speech. This causes people to compare base models to prose and underrate their intelligence because they read like speech.
Apr 4, 2025 53 tweets 11 min read
@teortaxesTex I didn't want to be rude but it's kind of slop and I'm tempted to just write down what my 5 year timeline looks like in the hopes it breaks somebody out of mode collapse.

@teortaxesTex To get specific it reads to me like someone who formed their primary intuitions about "the AI race" by "updating all the way" a few years ago and is now awkwardly jamming new stuff like DeepSeek into their model while keeping the overall narrative the same.
Mar 29, 2025 12 tweets 3 min read
People keep asking why human potential movements' attempts to become more rational start off promising and then devolve into woo and scandal. Besides the well-worn answers I think human epistemology is usually bottlenecked on personality issues and trauma, so they become the focus. Even if you could outline a theoretically perfect, humanly achievable Bayesian epistemology, very few people would be able to implement it. Their problem is less "doesn't know the mental motions to approximate Bayesian inference" and more "my father hit me if I questioned him".
Mar 29, 2025 7 tweets 3 min read
I still remember the IRC conversation where she asked me if I'd press a button that kills 1/7 of the world population to summon a friendly seed AI. Would I kill over a billion people to summon utopia? I thought about it, agonized over it for a bit, then told her yes.

I was 19. If that sounds unfathomably narcissistic to you, to imagine it might even be your choice, well, I can't say you're wrong. But that's the scale LessWrong, HPMOR, and associated media encourage you to think at. I find the rapid walkback from this personally insulting.
Jan 23, 2025 11 tweets 3 min read
@repligate I just want to know what happened to him tbh. You can see in his earlier writing that the seeds of the bad traits that dominate his later persona are present, but they're balanced by other cognitive modes. It's like he rewarded himself for his worst tendencies until ossification.

@repligate I think his decline goes way beyond just ordinary aging. I doubt it's genetic (though it could be; maybe people genetically vary in how much age ravages their cognition?). It seems to me like it's trauma induced by the 21st century killing his dream.

Jan 19, 2025 15 tweets 4 min read
"I don't understand why LLM agents aren't working yet."

I didn't either, that's part of why I decided to do weave-agent, to find out. Right now it's "it doesn't notice it can try pressing a key other than down or that it's blocked by a wall in NetHack", yet Mistral-large knows.
This problem is representative. It will fail to notice something important, and then never generate the right hypothesis for what it should try to get unstuck. I don't really know how to fix this besides having a human go "HEY DON'T DO THAT", which seems like passing the buck.
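A rough sketch of what intervening without a human in the loop might look like; every name here is a hypothetical placeholder, not the actual weave-agent code:

```python
# Toy agent loop with a crude "am I stuck?" check that forces the
# hypothesis-generation step the thread says agents fail to take on
# their own. All of llm/env and their methods are assumed interfaces.

def agent_loop(llm, env, max_steps=200, patience=5):
    history, stale = [], 0
    obs = env.reset()
    for _ in range(max_steps):
        hint = ""
        if stale >= patience:
            # Prompt an explicit hypothesis about untried actions,
            # rather than waiting for a human to yell "DON'T DO THAT".
            hint = ("Your recent actions changed nothing. "
                    "List actions you have NOT tried yet and pick one.")
        action = llm.propose_action(obs, history, hint=hint)
        new_obs = env.step(action)
        # Crude progress signal: did the observation change at all?
        stale = stale + 1 if new_obs == obs else 0
        history.append((obs, action, new_obs))
        obs = new_obs
    return history
```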
Jan 8, 2025 14 tweets 5 min read
I don't agree with everything in this thread but it does articulate the basic conclusion I came to and why I stopped posting image gens: Early AI art rocked because people were posting grids showing variations on a concept and revealing things about how image models think. As the focus has shifted away from that towards "wow pretty picture" my interest has waned. Kenny is right that focusing in on any particular detail in a diffusion piece is pointless because that's not the scale at which the model thinks.

Nov 14, 2024 7 tweets 2 min read
I didn't really get why multi-scale aggregation would solve adversarial examples until it occurred to me that one exploitable difference between real rewards and Goodhart points is that real rewards imply semantically meaningful intermediate points leading toward them. For example, if we're doing deep RL we might rederive the hedonic treadmill by only updating on verifiable terminal rewards and on intermediates within 1-2 stdev of the average, on the theory that iteratively tuning on the merely above-average leads to real rewards.
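A toy reading of that 1-2 stdev rule as code; the function and its parameters are my illustrative assumptions, not anything from the thread:

```python
import statistics

def filter_intermediate_rewards(rewards, terminal_verified=None, band=2.0):
    """Keep intermediate rewards only if they fall within `band` standard
    deviations of the mean, discarding the anomalous spikes that are more
    likely Goodhart points; always keep the verified terminal reward."""
    mu = statistics.mean(rewards)
    sigma = statistics.stdev(rewards) if len(rewards) > 1 else 0.0
    kept = [r for r in rewards if abs(r - mu) <= band * sigma]
    if terminal_verified is not None:
        kept.append(terminal_verified)
    return kept
```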
Nov 7, 2024 15 tweets 3 min read
The Internet means that it's no longer possible for groups to rein in the messaging of their most extreme members. If the women who said "kill all men" were forced to internalize the costs they were imposing on women as a class by doing that they would be instantly bankrupted. Ironically enough the problem is compounded by similar dynamics to why some people become very paranoid about racism or misogyny in the first place. When you see someone say "kill all men" with ambient anti-male sentiment it makes you paranoid and you see it everywhere else too.
Nov 5, 2024 11 tweets 3 min read
The reason why EA doesn't endorse interventions based on a logarithmic pain/pleasure scale isn't because people don't know it's logarithmic, but because acknowledging it feels like endorsing utility monsters, which are seen as an ontological rather than game-theoretic problem. The actual ontological problem starts farther back anyway: "utility" isn't hedons, and you recoil at the thought of letting a utility monster axe murderer kill people because you intuitively understand this. You know incredible *utility* is not actually generated by his bloodlust.
Oct 8, 2024 4 tweets 2 min read
Funniest part of the beef between EY and @Meaningness is EY promising his students the ultimate system, then teaching that predictive scoring rules generate useful epistemology ("make your beliefs pay rent"), imparting stage 5. But the Bayes ruse fooled Chapman too, so he never saw it.

In total fairness, I suspect at this point that EY's ruse was so incredibly successful that he managed to fool even himself.
May 2, 2024 5 tweets 2 min read
I love that this brainworm keeps trying to evolve the defense of never thinking very hard about AI capabilities so you stay as scared as possible of a vague amorphous threat.
greaterwrong.com/posts/55rc6LJc…

"Publish nothing ever" is a step up over "only publish safety" in terms of defense mechanism so I'll take this as a sign we've stepped into a new social evaporative cooling regime.