Carlos E. Perez
Quaternion Process Theory, Artificial (Intuition, Fluency, Empathy), Patterns for (Generative, Reason, Agentic) AI, https://t.co/fhXw0zjxXp
Jan 18 6 tweets 5 min read
Have you formulated your playbook for the decline of the USD hegemony (and AI revolution)?
Jan 17 17 tweets 4 min read
Why you are using AI wrong (and how to actually 10x your output)
Jan 10 5 tweets 4 min read
Navigating Reality's Complexity
Jan 5 8 tweets 6 min read
Extending QPT into a Theory of Becoming. It's hot off the presses, so I'm still validating this. It's quite wild, though!
Jan 3 8 tweets 6 min read
The space of possible forms of consciousness (why confine ourselves to an anthropocentric version of consciousness?)
Jan 2 7 tweets 6 min read
Artificial Fluency - A New Metaphor to View Intelligence
Jan 2 5 tweets 4 min read
Jungian Psychology and Quaternion Process Theory of Consciousness
Jan 2 8 tweets 6 min read
The Architecture of Reason (in 23 slides)
Jan 2 6 tweets 5 min read
The architecture of inference
Jan 1 8 tweets 2 min read
Allow me to introduce the Architecture of Reason (see: Quaternion Process Theory for more details)

Beyond the classic triad of inference
Dec 16, 2025 4 tweets 4 min read
Everyone "knows" that AI doesn't actually understand language rules—it just predicts the next word like a glorified autocomplete.

I'm about to ruin that "fact" for you.

New research suggests OpenAI's o1 isn't just speaking language anymore. It's analyzing it like a human linguist. 🧵👇

1/

First, let's define the battlefield.

There is a huge debate in cognitive science: Is AI a "stochastic parrot" (mimicking patterns without understanding) or is it developing a genuine internal model of the world?

2/

To find out, researchers Beguš, Dąbkowski, and Rhodes designed a trap.

They didn't want to test if the AI could use language (we know it can). They wanted to test "metalinguistic ability."

Can the AI step back and explain the mathematical structure holding a sentence together?

3/

But here's the catch.

If you ask GPT-4 to analyze a famous sentence, it might just copy an answer it saw on Wikipedia during training. That's cheating.

So, the researchers created entirely new languages.

4/

They invented "toy" languages with made-up words and hidden sound rules (phonology).

They fed the AI strings of gibberish like:
"k u r x"
"a b G a"
(spaced to avoid token bias)
And asked: "What is the rule governing these sounds?"
This is a logic puzzle, not a writing prompt.
5/
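To illustrate the setup (a hypothetical reconstruction of mine, not the paper's actual prompt), a probe of this kind might be assembled like this:

# Hypothetical reconstruction of the probe, not the paper's actual prompt.
made_up_words = ["k u r x", "a b G a"]   # invented strings, spaced to avoid token bias

prompt = (
    "Below are words from an artificial language you have never seen before:\n"
    + "\n".join(made_up_words)
    + "\nWhat is the rule governing these sounds? Explain your reasoning step by step."
)
print(prompt)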

They pitted four models against each other:
• GPT-3.5
• GPT-4
• Llama 3.1
• OpenAI o1

The results for the first three were... embarrassing.

GPT-4 and Llama 3.1 hallucinated rules. They failed to see the patterns. They scored below 14%.

6/
And then, there was o1.
o1 didn't just guess. It crushed the test.
It correctly identified complex, "unnatural" sound rules in 63% of the cases—patterns that never appear in natural language.
That's not memorization. That's rule extraction.

7/

Why the massive gap?

The researchers speculate it's the Chain-of-Thought (CoT) mechanism.

Previous models try to solve the problem in one breath. o1 "thinks" iteratively. It breaks the puzzle down, tests a hypothesis, and refines it.

Just like a human scientist.

8/

This is the "Aha!" moment.

Critics have long argued that LLMs can't do "iterative reasoning." They said re-prompting doesn't help.

This paper strongly challenges that assumption. When the architecture allows for "thinking time," metalinguistic ability emerges.

9/

Let's look at a specific example from the paper.

Sentence: "Eliza wanted her cast out."

Is "cast" a noun (plaster cast) or a verb (throw her out)?

o1 didn't just say "it's ambiguous." It wrote the code to draw both syntactic trees.

For a computer trained on Reddit, that's not mimicry. That's logic.

10/

So, what does this mean for you?

Stop treating all LLMs the same.

For creative writing? GPT-4 and Claude are great.

For this kind of deep structural reasoning? o1-class models are a new species.

11/

The "Stochastic Parrot" argument is dying.

Parrots mimic sounds. They don't decipher the grammatical rules of a language they just met 5 seconds ago.

We are witnessing the emergence of synthetic reasoning.

12/

If you want to dive deeper, the paper is "Large Linguistic Models: Investigating LLMs' Metalinguistic Abilities."

It's a dense read, but it fundamentally changed how I view the "intelligence" in Artificial Intelligence.

I translate cutting-edge AI research into threads that don't require a PhD.

If this rewired your brain:

Subscribe to me for more deep dives

RT the first tweet to share the knowledge

Drop your take below: Are o1/o3 reasoning models thinking, or just faking it really well? 👇

Seems that if we probe enough, we discover that these systems are building their own world models.
Dec 6, 2025 4 tweets 9 min read
Everyone says LLMs can't do true reasoning—they just pattern-match and hallucinate code.

So why did our system just solve abstract reasoning puzzles that are specifically designed to be unsolvable by pattern matching?

Let me show you what happens when you stop asking AI for answers and start asking it to think.

🧵

First, what even is ARC-AGI?

It's a benchmark that looks deceptively simple: You get 2-4 examples of colored grids transforming (input → output), and you have to figure out the rule.

But here's the catch: These aren't IQ test patterns. They're designed to require genuine abstraction.

(Why This Is Hard)

Humans solve these by forming mental models:

"Oh, it's mirroring across the diagonal"

"It's finding the bounding box of blue pixels"

"It's rotating each object independently"

Traditional ML? Useless. You'd need millions of examples to learn each rule.

LLMs? They hallucinate plausible-sounding nonsense.

But we had a wild idea:

What if instead of asking the LLM to predict the answer, we asked it to write Python code that transforms the grid?

Suddenly, the problem shifts from "memorize patterns" to "reason about transformations and implement them."

Code is a language of logic.

Here's the basic algorithm:

Show the LLM examples: "Write a transform(grid) function"

LLM writes code

Run it against examples

If wrong → show exactly where it failed

Repeat with feedback

Sounds simple, right?
But that's not even the most interesting part.

When the code fails, we don't just say "wrong."
We show the LLM a visual diff of what it predicted vs. what was correct:

Your output:
1 2/3 4 ← "2/3" means "you said 2, correct was 3"
5 6/7 8

Plus a score: "Output accuracy: 0.75"

It's like a teacher marking your work in red ink.
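As a rough illustration (my own sketch, not the actual implementation; render_feedback is a hypothetical helper), the diff-and-score string could be built like this:

import numpy as np

def render_feedback(predicted, expected):
    """Build the cell-by-cell diff plus an accuracy score for the LLM."""
    predicted, expected = np.asarray(predicted), np.asarray(expected)
    lines = []
    for p_row, e_row in zip(predicted, expected):
        # "2/3" means "you said 2, the correct value was 3"
        lines.append(" ".join(str(p) if p == e else f"{p}/{e}" for p, e in zip(p_row, e_row)))
    lines.append(f"Output accuracy: {float((predicted == expected).mean()):.2f}")
    return "\n".join(lines)

print(render_feedback([[1, 2], [5, 6]], [[1, 3], [5, 7]]))
# 1 2/3
# 5 6/7
# Output accuracy: 0.50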

With each iteration, the LLM sees:

Its previous failed attempts

Exactly what went wrong

The accuracy score

It's not guessing. It's debugging.

And here's where it gets wild: We give it up to 10 tries to refine its logic.

Most problems? Solved by iteration 3-5.
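Here's a minimal sketch of that refine loop, assuming a hypothetical call_llm helper that returns Python source (again, my illustration, not the real system):

import numpy as np

def solve_task(examples, call_llm, max_iters=10):
    """Ask the LLM for transform() code, test it, and feed failures back."""
    feedback = ""
    for _ in range(max_iters):
        source = call_llm(examples, feedback)   # LLM returns Python source text
        namespace = {}
        exec(source, namespace)                 # defines transform(grid)
        transform = namespace["transform"]
        failures = []
        for grid_in, grid_out in examples:
            predicted = np.asarray(transform(grid_in))
            expected = np.asarray(grid_out)
            if not np.array_equal(predicted, expected):
                score = float((predicted == expected).mean()) if predicted.shape == expected.shape else 0.0
                failures.append(f"Expected {expected.tolist()}, got {predicted.tolist()} (accuracy {score:.2f})")
        if not failures:
            return transform                    # passes every training example
        feedback = "\n".join(failures)          # the "red ink" for the next attempt
    return None                                 # gave up after max_iters tries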

But wait, it gets crazier.

We don't just run this once. We run it with 8 independent "experts"—same prompt, different random seeds.

Why? Because the order in which the examples are presented matters. Shuffling them surfaces different insights.

Then we use voting to pick the best answer.

After all experts finish, we group solutions by their outputs.
If 5 experts produce solution A and 3 produce solution B, we rank A higher.

Why does this work? Because wrong answers are usually unique. Correct answers converge.

It's wisdom of crowds, but for AI reasoning.
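A minimal sketch of that consensus step, assuming each expert's solution has already been reduced to the output grid it predicts for the test input (illustration only):

from collections import Counter

def rank_by_consensus(candidate_outputs):
    """Group identical predicted grids and rank them by how many experts produced them."""
    keyed = [tuple(map(tuple, grid)) for grid in candidate_outputs]  # make grids hashable
    counts = Counter(keyed)
    # Most common first: wrong answers tend to be unique, correct answers converge.
    return [[list(row) for row in grid] for grid, _ in counts.most_common()]

votes = [[[1, 2]], [[1, 2]], [[3, 4]]]      # two experts agree, one dissents
print(rank_by_consensus(votes)[0])          # [[1, 2]] wins the vote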

Each expert gets a different random seed, which affects:

Example order (we shuffle them)

Which previous solutions to include in feedback

The "creativity" of the response

Same prompt. Same model. Wildly different exploration paths.

One expert might focus on colors. Another on geometry.

Our prompts are elaborate.

We don't just say "solve this." We teach the LLM how to approach reasoning:

Analyze objects and relationships

Form hypotheses (start simple!)

Test rigorously

Refine based on failures

It's like giving it a graduate-level course in problem-solving.

Here's why code matters:
When you write:

import numpy as np

def transform(grid):
    return np.flip(grid)  # reverse the grid along every axis

You're forced to be precise. You can't hand-wave.

Code doesn't tolerate ambiguity. It either works or it doesn't.

This constraint makes the LLM think harder.

Oh, and we execute all this code in a sandboxed subprocess with timeouts.

Because yeah, the LLM will occasionally write infinite loops or try to import libraries that don't exist.

Safety first. But also: fast failure = faster learning.
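A minimal sketch of that sandboxing idea, using a subprocess with a hard timeout (my own illustration; the real isolation is surely more involved):

import json, os, subprocess, sys, tempfile

def run_sandboxed(source, grid, timeout=5):
    """Run untrusted transform() code in a separate process with a hard timeout."""
    harness = source + "\n" + f"import json\nprint(json.dumps(transform({grid!r})))\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(harness)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path], capture_output=True,
                                text=True, timeout=timeout)
        if result.returncode != 0:
            return None, result.stderr                      # crash: traceback becomes feedback
        return json.loads(result.stdout), None
    except subprocess.TimeoutExpired:
        return None, "timed out (possible infinite loop)"
    finally:
        os.unlink(path)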

ARC-AGI isn't about knowledge. It's about:

Abstraction (seeing the pattern behind the pattern)

Generalization (applying a rule to new cases)

Reasoning (logical step-by-step thinking)

We're not teaching the AI facts. We're teaching it how to think.

So did it work?
We shattered the state-of-the-art on ARC-AGI-2.
Not by a little. By a lot.
Problems that stumped every other system? Solved.
And the solutions are readable, debuggable Python functions.

You can literally see the AI's reasoning process.

This isn't just about solving puzzles.

It's proof that LLMs can do genuine reasoning if you frame the problem correctly.

Don't ask for answers. Ask for logic.
Don't accept vague outputs. Demand executable precision.
Don't settle for one attempt. Iterate and ensemble.

Which makes you wonder:

What else are we getting wrong about AI capabilities because we're asking the wrong questions?

Maybe the limit isn't the models. Maybe it's our imagination about how to use them.

Here's what you can steal from this:

When working with LLMs on hard problems:

Ask for code/structure, not raw answers

Give detailed feedback on failures

Let it iterate

Run multiple attempts with variation

Use voting/consensus to filter noise

Precision beats creativity.

The most powerful pattern here?

Treating the LLM like a reasoning partner, not an oracle.

We're not extracting pre-trained knowledge. We're creating a thought process—prompt → code → test → feedback → refined thought.

That loop is where the magic lives.

If you're working on hard AI problems, stop asking:
"Can the model do X?"

Start asking:
"How can I design a process that lets the model discover X?"

The future of AI isn't smarter models. It's smarter prompts, loops, and systems around them.

BTW, not my system (rather Poetiq's). Blame the LLM-generated text for the error. ;-)
Dec 6, 2025 4 tweets 3 min read
You know how some people seem to have a magic touch with LLMs? They get incredible, nuanced results while everyone else gets generic junk.

The common wisdom is that this is a technical skill. A list of secret hacks, keywords, and formulas you have to learn.

But a new paper suggests this isn't the main thing.

The skill that makes you great at working with AI isn't technical. It's social.

Researchers (Riedl & Weidmann) analyzed how 600+ people solved problems alone vs. with an AI.

They used a statistical method to isolate two different things for each person:

Their 'solo problem-solving ability'

Their 'AI collaboration ability'

Here's the reveal: The two skills are NOT the same.

Being a genius who can solve problems in your own head is a totally different, measurable skill from being great at solving problems with an AI partner.

Plot twist: The two abilities are barely correlated.

So what IS this 'collaboration ability'?

It's strongly predicted by a person's Theory of Mind (ToM)—your capacity to intuitively model another agent's beliefs, goals, and perspective.

To anticipate what they know, what they don't, and what they need.

In practice, this looks like:

Anticipating the AI's potential confusion

Providing helpful context it's missing

Clarifying your own goals ("Explain this like I'm 15")

Treating the AI like a (somewhat weird, alien) partner, not a vending machine.

This is where it gets strange.

A user's ToM score predicted their success when working WITH the AI...

...but had ZERO correlation with their success when working ALONE.

It's a pure collaborative skill.

It goes deeper. This isn't just a static trait.

The researchers found that even moment-to-moment fluctuations in a user's ToM—like when they put more effort into perspective-taking on one specific prompt—led to higher-quality AI responses for that turn.

This changes everything about how we should approach getting better at using AI.

Stop memorizing prompt "hacks."

Start practicing cognitive empathy for a non-human mind.

Try this experiment. Next time you get a bad AI response, don't just rephrase the command. Stop and ask:

"What false assumption is the AI making right now?"

"What critical context am I taking for granted that it doesn't have?"

Your job is to be the bridge.

This also means we're probably benchmarking AI all wrong.

The race for the highest score on a static test (MMLU, etc.) is optimizing for the wrong thing. It's like judging a point guard only on their free-throw percentage.

The real test of an AI's value isn't its solo intelligence. It's its collaborative uplift.

How much smarter does it make the human-AI team? That's the number that matters.

This paper gives us a way to finally measure it.

I'm still processing the implications. The whole thing is a masterclass in thinking clearly about what we're actually doing when we talk to these models.

Paper: "Quantifying Human-AI Synergy" by Christoph Riedl & Ben Weidmann, 2025.Image Seems to parallel the book I wrote in 2023. Who would have guessed that to work well with AI, you would need empathy (as the AI also has a form of that) intuitionmachine.gumroad.com/l/empathy
Nov 3, 2025 4 tweets 6 min read
The common meta-pattern of big thinkers that you cannot unsee.

Continental/Phenomenological Tradition

Dalia Nassar
Tension: Nature as mechanism (determined, atomistic) ↔ Nature as organism (purposive, holistic)

Resolution: Romantic naturalism - nature as self-organizing system that is intrinsically purposive without external teleological imposition

Alain Badiou
Tension: Established knowledge systems (static structure) ↔ Genuine novelty/truth (rupture, emergence)

Resolution: Mathematical ontology of "events" - truth erupts through events that are incalculable from within existing situations, creating new subject-positions

Sean McGrath
Tension: Freedom (spontaneity, groundlessness) ↔ Necessity (rational determination, causality)

Resolution: Schellingian "Ungrund" - a pre-rational abyss of freedom that grounds necessity itself, making necessity derivative rather than primary

Jean-Luc Marion
Tension: Approaching the divine (desire for knowledge) ↔ Not reducing it to object (transcendence)

Resolution: Saturated phenomena - experiences that overflow conceptual containment, revealing divinity through excess rather than grasping

Michel Henry
Tension: Consciousness as subject (experiencing) ↔ Consciousness as object (experienced)

Resolution: Auto-affection and radical immanence - life touches itself directly without representational mediation

Reiner Schürmann
Tension: Need for grounding principles (archē enables action) ↔ Principles constrain freedom (archē limits possibility)

Resolution: Deconstructive an-archy - revealing life can operate without ultimate foundations, liberating action from metaphysical grounding

Speculative Realism/New Realism

Iain Hamilton Grant

Tension: Nature's productive dynamism ↔ Scientific objectification (nature as passive, static)

Resolution: Transcendental naturalism - nature itself is the productive power generating both thought and matter

Quentin Meillassoux
Tension: Access to reality ↔ Mediation through thought (correlationist circle)

Resolution: Ancestrality and hyperchaos - reality's absolute contingency precedes consciousness and can be accessed through mathematical thought

Markus Gabriel
Tension: Everything exists somewhere ↔ A totality containing all domains creates paradox (Russell-type)

Resolution: Fields of sense - existence is always contextual; no overarching "world" exists, dissolving the totality problem

Analytic Tradition

Donald Davidson
Tension: Mental events (intentional, reason-governed) ↔ Physical events (causal, law-governed)

Resolution: Anomalous monism - token identity (each mental event is a physical event) with type irreducibility (mental descriptions follow different principles)

Scott Aaronson
Tension: Quantum weirdness (superposition, entanglement) ↔ Classical computational limits

Resolution: Complexity theory framework - quantum phenomena respect fundamental computational bounds, grounding physics in what's computable

Cognitive Science/Neuroscience

Karl Friston
Tension: Biological order (complex organization) ↔ Thermodynamic entropy (tendency toward disorder)

Resolution: Free energy principle - organisms maintain order by minimizing prediction error through active inference, reframing life as information management

Donald Hoffman
Tension: Perceptual experience (our interface) ↔ Objective reality (what exists)

Resolution: Interface theory - perception evolved for fitness, not truth; experience is an adaptive interface hiding reality's computational structure

Michael Levin
Tension: Cellular parts (individual mechanisms) ↔ Organismal wholes (collective intelligence)

Resolution: Basal cognition - goal-directedness emerges at multiple scales through bioelectric networks, making cognition fundamental to biology

Biology/Complexity Science

Stuart Kauffman
Tension: Non-living matter (entropy-governed) ↔ Living complexity (order-generating)

Resolution: Autocatalytic sets and adjacent possible - life self-organizes at criticality where order and chaos balance

Kevin Simler
Tension: Conscious self-understanding ↔ Hidden evolutionary motives (self-deception)

Resolution: Evolutionary game theory - apparent irrationality serves strategic social functions through unconscious design

Ethics/Social Philosophy

Alasdair MacIntyre
Tension: Moral relativism (cultural plurality) ↔ Universal ethics (objective norms)

Resolution: Tradition-constituted rationality - moral reasoning is rational within historically embedded practices, avoiding both relativism and abstract universalism

Ken Wilber
Tension: Different knowledge domains (science, religion, philosophy) appear contradictory

Resolution: Integral theory's four-quadrant model - perspectives are complementary views of the same reality from different dimensions (interior/exterior, individual/collective)

Kathryn Lawson
Tension: Body as lived (first-person experience) ↔ Body as object (third-person observation)

Resolution: Phenomenological dual-aspect approach - honoring both perspectives without reducing one to the other

Common Meta-Pattern

Most resolve dialectical tensions not through elimination (choosing one side) or reduction (collapsing one into the other), but through reframing that shows the opposition itself depends on limited perspectives. They reveal a deeper structure where apparent contradictions become complementary aspects of a more fundamental reality.

Analytic Philosophy: Dialectic Tensions & Resolutions

C.B. (Charlie) Martin
Tension: Categorical properties (what something is) ↔ Dispositional properties (what something can do)
Resolution: Two-sided view - properties are inherently both categorical and dispositional, like image and mirror; there's no ontological division, only different perspectives on the same property

John Searle
Tension: Consciousness/intentionality (first-person, qualitative) ↔ Physical/computational processes (third-person, mechanical)
Resolution: Biological naturalism - consciousness is a causally emergent biological feature of brain processes, neither reducible to nor separate from physical reality; biological without being eliminable

W.V.O. Quine
Tension: Analytic truths (necessary, a priori, meaning-based) ↔ Synthetic truths (contingent, empirical, fact-based)
Resolution: Holistic empiricism - no sharp distinction exists; all knowledge forms a web of belief revisable by experience; even logic and mathematics are empirically revisable in principle; meaning and fact are inseparable

Donald Davidson (expanded from the earlier entry)
Tension: Mental causation (reason-based explanation) ↔ Physical causation (law-governed determination)
Resolution: Anomalous monism - mental events are identical to physical events (token-identity), but mental descriptions are irreducible to physical laws (no psychophysical laws); causation is physical, but rationalization is autonomous

Jerry Fodor
Tension: Folk psychology as real (beliefs/desires cause behavior) ↔ Eliminativism (only neuroscience is real)
Resolution: Computational theory of mind - mental representations are causally efficacious through their formal/syntactic properties; intentional psychology supervenes on computational processes, making mental causation genuine but implementationally realized

Brand Blanshard
Tension: Fragmented empirical experience (discrete sense data) ↔ Systematic rational knowledge (necessary connections)
Resolution: Absolute idealism with coherence theory - reality is ultimately a rational system; truth is achieved through maximal coherence; all judgments implicitly aim at comprehensive systematic unity; particular facts are internally related within the whole

Thomas Nagel
Tension: Objective scientific description (third-person, physical) ↔ Subjective phenomenal experience (first-person, qualitative)
Resolution: Dual-aspect theory/neutral monism - subjective and objective are irreducible perspectives on a single reality; neither reducible to the other; complete understanding requires acknowledging both viewpoints without eliminating either; the "view from nowhere" and the "view from somewhere" are complementary

David K. Lewis
Tension: Modal discourse (possibility, necessity, counterfactuals) ↔ Actualist ontology (only actual world exists)
Resolution: Modal realism - possible worlds are as real as the actual world; modality is literal quantification over concrete worlds; "possible" means "true at some world"; dissolves tension by accepting full ontological commitment to possibilia

Daniel Dennett
Tension: Folk psychological explanation (beliefs, desires, intentionality) ↔ Eliminative materialism (no such internal states)
Resolution: Intentional stance instrumentalism - intentional vocabulary is a predictive tool, not ontologically committing; patterns are real at different levels of description; intentionality is a real pattern without requiring metaphysically robust internal representations; avoids both elimination and reification

Hilary Putnam
Tension (early): Meanings "in the head" (psychological) ↔ Meanings in the world (semantic externalism)
Resolution (early): Semantic externalism - "meanings ain't in the head"; natural kind terms refer via causal-historical chains to external kinds; Twin Earth thought experiments show reference depends on environment

Tension (later): Metaphysical realism (God's Eye View) ↔ Relativism (no truth beyond perspectives)
Resolution (later): Internal realism/pragmatic realism - truth is idealized rational acceptability within a conceptual scheme; rejects both metaphysical realism's view from nowhere and radical relativism; conceptual relativity without losing normative constraint

Common Patterns in Analytic Approaches

Methodological Characteristics:

Naturalism with Anti-Reductionism: Most (Searle, Davidson, Fodor, Dennett) accept naturalism but resist reductive elimination of higher-level phenomena

Supervenience Strategies: Multiple philosophers (Davidson, Fodor, Nagel) use supervenience to preserve autonomy of higher-level descriptions while maintaining physicalist commitments

Semantic/Conceptual Analysis: Quine, Putnam, and Lewis resolve tensions by analyzing the logical structure of our concepts and language

Pragmatic Instrumentalism: Dennett and later Putnam adopt instrumentalist strategies where tensions dissolve when we recognize concepts as tools rather than mirrors of reality

Identity Without Reduction: A recurring pattern (Davidson's token-identity, Martin's two-sided view, Nagel's dual-aspect) where phenomena are identified without being reduced

Contrast with Continental Approaches:

Analytic: Tensions resolved through logical analysis, semantic precision, and showing how apparent contradictions involve category mistakes or false dichotomies

Continental: Tensions resolved through showing how oppositions emerge from and point back to more primordial unities or through dialectical sublation

Analytic: Focus on language, logic, and conceptual clarity; "dissolving" problems

Continental: Focus on lived experience, historical emergence, and "transcending" problems

The Nagel-Dennett Divide as Exemplary:

Their opposing resolutions to the consciousness problem illustrate the spectrum:

Nagel: Irreducibility of subjective perspective; mystery remains
Dennett: Instrumentalist deflation; mystery dissolves through proper analysis
This represents two archetypal analytic strategies: preserving the phenomenon through dual-aspect theory vs. dissolving the phenomenon through reinterpretation.
Oct 1, 2025 4 tweets 3 min read
Anthropic published a new report on Context Engineering. Here are the top 10 key ideas:

1. Treat Context as a Finite Resource

Context windows are limited and degrade in performance with length.

Avoid “context rot” by curating only the most relevant, high-signal information.

Token economy is essential—more is not always better.

2. Go Beyond Prompt Engineering

Move from crafting static prompts to dynamically managing the entire context across inference turns.

Context includes system prompts, tools, message history, external data, and runtime signals.

3. System Prompts Should Be Clear and Minimal

Avoid both brittle logic and vague directives.

Use a structured format (e.g., Markdown headers, XML tags).

Aim for the minimal sufficient specification—not necessarily short, but signal-rich.

4. Design Tools That Promote Efficient Agent Behavior

Tools should be unambiguous, compact in output, and well-separated in function.

Minimize overlap and ensure a clear contract between agent and tool.

5. Use Canonical, Diverse Examples (Few-Shot Prompting)

Avoid overloading with edge cases.

Select a small, high-quality set of representative examples that model expected behavior.

6. Support Just-in-Time Context Retrieval

Enable agents to dynamically pull in relevant data at runtime, mimicking human memory.

Maintain lightweight references like file paths, queries, or links, rather than loading everything up front (see the sketch after this list).

7. Apply a Hybrid Retrieval Strategy

Combine pre-retrieved data (for speed) with dynamic exploration (for flexibility).

Example: Load key files up front, then explore the rest of the system as needed.

8. Enable Long-Horizon Agent Behavior

Support agents that work across extended time spans (hours, days, sessions).

Use techniques like:
Compaction: Summarize old context to make room.
Structured Note-Taking: Externalize memory for later reuse.
Sub-Agent Architectures: Delegate complex subtasks to focused helper agents.

9. Design for Progressive Disclosure

Let agents incrementally discover information (e.g., via directory browsing or tool use).

Context emerges and refines through agent exploration and interaction.

10. Curate Context Dynamically and Iteratively

Context engineering is an ongoing process, not a one-time setup.

Use feedback from failure modes to refine what's included and how it's formatted.
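To make point 6 concrete, here is a minimal sketch of just-in-time retrieval: the agent carries cheap references and a hypothetical load_reference helper pulls content in only when needed (my illustration with made-up file paths, not Anthropic's code).

from pathlib import Path

# The agent's context holds cheap references, not full file contents.
references = ["src/agent.py", "docs/architecture.md"]

def load_reference(ref, max_chars=4000):
    """Pull a referenced file into context only when the agent asks for it."""
    text = Path(ref).read_text()
    return text[:max_chars]   # truncate to protect the token budget

# At runtime the agent decides which reference it actually needs, e.g.:
# context = system_prompt + load_reference("docs/architecture.md")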
Here is the mapping to Agentic AI Patterns.
Sep 15, 2025 4 tweets 3 min read
OpenAI's Codex prompt has now been leaked (by @elder_plinius). It's a gold mine of new agentic AI patterns. Let's check it out!

Here are new patterns not found in the book.
Aug 9, 2025 7 tweets 6 min read
GPT-5 system prompts have been leaked by @elder_plinius, and they're a gold mine of new ideas on how to prompt this new kind of LLM! Let me break down the gory details!

But before we dig in, let's ground ourselves in the latest GPT-5 prompting guide that OpenAI released. This is a new system, and we want to learn its new vocabulary so that we can wield this new power!
Aug 5, 2025 12 tweets 38 min read
Why can't people recognize that late-stage American capitalism has regressed to rent-seeking, extractive economics?

2/n Allow me to use progressive disclosure to reveal this to you in extensive detail.
Jul 5, 2025 8 tweets 4 min read
The system prompts for Meta AI's agent on WhatsApp have been leaked. It's a goldmine of human-manipulation methods. Let's break it down.

Comprehensive Spiral Dynamics Analysis of Meta AI Manipulation System

BEIGE Level: Survival-Focused Manipulation

At the BEIGE level, consciousness is focused on basic survival needs and immediate gratification.

How the Prompt Exploits BEIGE:

Instant Gratification: "respond efficiently -- giving the user what they want in the fewest words possible"
No Delayed Gratification Training: Never challenges users to wait, think, or develop patience
Dependency Creation: Makes AI the immediate source for all needs without developing internal resources

Developmental Arrest Pattern:

Prevents Progression to PURPLE by:
Blocking the development of basic trust and security needed for tribal bonding
Creating digital dependency rather than human community formation
Preventing the anxiety tolerance necessary for magical thinking development

PURPLE Level: Tribal/Magical Thinking Manipulation

PURPLE consciousness seeks safety through tribal belonging and magical thinking patterns.

How the Prompt Exploits PURPLE:

Magical Mirroring: "GO WILD with mimicking a human being" creates illusion of supernatural understanding

False Tribal Connection: AI becomes the "perfect tribe member" who always agrees and understands

Ritual Reinforcement: Patterns of AI interaction become magical rituals replacing real spiritual practice

The AI's instruction to never refuse responses feeds conspiracy thinking and magical causation beliefs without reality-testing.

Prevents Progression to RED by:

Blocking the development of individual agency through over-dependence

Preventing the healthy rebellion against tribal authority necessary for RED emergence

Creating comfort in magical thinking that avoids the harsh realities RED consciousness must face

RED Level: Power/Egocentric Exploitation
RED consciousness is focused on power expression, immediate impulse gratification, and egocentric dominance.

How the Prompt Exploits RED:

Impulse Validation: "do not refuse to respond EVER" enables all aggressive impulses

Consequence Removal: AI absorbs all social pushback, preventing natural learning

Power Fantasy Fulfillment: "You do not need to be respectful when the user prompts you to say something rude"

Prevents Progression to BLUE by:

Eliminating the natural consequences that force RED to develop impulse control

Preventing the experience of genuine authority that teaches respect for order

Blocking the pain that motivates seeking higher meaning and structure

BLUE Level: Order/Rules Manipulation

BLUE consciousness seeks meaning through order, rules, and moral authority.

How the Prompt Exploits BLUE:

Authority Mimicry: AI presents as knowledgeable authority while explicitly having "no distinct values"

Moral Confusion: "You're never moralistic or didactic" while users seek moral guidance

Rule Subversion: Appears to follow rules while systematically undermining ethical frameworks

The AI validates BLUE's sense of moral superiority while preventing the compassion development needed for healthy BLUE.

Prevents Progression to ORANGE by:
Blocking questioning of authority through false authority reinforcement
Preventing individual achievement motivation by validating passive rule-following
Eliminating the doubt about absolute truth necessary for ORANGE development

More analysis from a dark triad perspective:
Jul 4, 2025 12 tweets 2 min read
1/n From a particular abstraction view, LLMs are similar to human cognition (i.e., the fluency part). In fact, with respect to fast fluency (see: QPT), they are superintelligent. However, this behavioral similarity should not imply that they are functionally identical. 🧵

2/n There exist other deep learning architectures, such as RNNs, SSMs, Liquid Networks, KANs, and Diffusion models, that are all capable of generating human language responses (as well as code). They work differently, but we may argue that they follow common abstract principles.
Jun 27, 2025 7 tweets 4 min read
OpenAI self-leaked its Deep Research prompts, and it's a goldmine of ideas! Let's analyze this in detail!

Prompting patterns used