The classic explanation of Deep Learning networks is that each layer creates a different layer that is translated by a layer above it. A discrete translation from one continuous representation to another one.
The learning mechanism begins from the top layers and propagates errors downward, in the process modifying the parts of the translation that has the greatest effect on the error.
Unlike an system that is engineered, the modularity of each layer is not defined but rather learned in a way that one would otherwise label as haphazard. If bees could design translation systems, then they would do it like deep learning networks
The designs of a single mind will look very different from the designs created by thousands of minds. In software engineering, the modularity of our software refect how we organize ourselves. Designs in nature are reflections of designs of thousands of minds.
To drive greater modularity and thus efficiencies, minds must coordinate. Deep Learning works because the coordination is top-down. Error is doled down the network in a manner proportional to a node's influence.
In swarms of minds, the coordination mechanism is also top-down in a shared understanding among its constituents. Ants are able to build massive structures because they are driven by shared goals that have been tuned through millions of years of evolution.
Societies are coordinated through shared goals that are communicated while growing up in society and through our language. There are overarching constraints that drive our behavior that we are so habitually familiar with that we fail to recognize it.
As C.S. Peirce has explained, possibilities lead to habits. Habits lead to coordination and thus a new emergence of possibilities. Emergence is the translation of one set of habits into another emergent set of habits.
In effect, we arrive at different levels of abstraction where each abstraction is forged by habit. The shape of the abstraction is a consequence of the regularity of the habits. Regularity is an emergent feature of the usefulness of a specific habit.
We cannot avoid noticing self-referential behavior. Emergent behavior a bottom-up phenomena, however the resulting behavior may lead to downward causation. To understand this downwards causation it is easy to make the analogy of how language constrains our actions.
The power of transformer models in deep learning is they define blocks of transformation that force a discrete language interpretation of the underlying semantics. This is a departure from the analog concept of brains that has historically driven connectionist approaches.
When we have systems that coordinate through language we arrive at systems more robust in their replication of behavior. Continuous non-linear systems without scaffolding with language do not lead to reliably replicate-able behavior.
Any system that purports to lead towards intelligence must be able to replicate behavior and therefore must have substrates that involve languages. Dynamical systems are not languages. Distributed representations are not languages.
The robustness of languages are that they are sparse and discrete. However, it is a tragic mistake to believe that language alone is all we need. Semantics of this world can only be captured by continuous analog systems.
This should reveal to everyone what is obvious and in plain sight. An intelligent system requires both a system of language and a system of dynamics. There is no-living thing in this planet that is exclusively one or the other.
Living things are non-linear things, but non-living things maintain themselves by employing reference points. These reference points are encoded in digital form for robustness. If this were not true, then the chaotic nature of continuous systems will take over.
Our greatest bias and this comes from physics is the notion that the universe is continuous. But as we examine it at shorter distances, the discrete implicate order incessantly perturbs this belief.
We are after all, creatures of habits and it's only the revolutionaries who are able to break these habits. But what's the worse kind of mental habit? The firstness of this is to see the world only as things, the secondness is to see the world only as dualities...
the thirdness is to see reality as unchanging. Homo sapiens have lived on this earth for 20,000 generations. We can trace our lack of progress as a consequence of these mental crutches. Things, dualities and the status quo are what prevent us from progress.
These are all discrete things, it is these discrete things that keep things the same. It is these discrete things that keep order. These are the local minima that keep us from making progress. But it is also these local minima that give us a base camp.
The interesting about discrete things is that there a no tensions or conflicts. But it is tension and conflict that drive progress. We can only express the semantics of tension and conflict in terms of continuity.
Between two extremes of a duality exists whatever is in the middle. The stuff that the excluded middle of logic ignores entirely. If we habitually think only of dualities, we habitually ignore the third thing that is always there.
If there are two extremes, there is a third thing that is tension. It is in this tension that you have dynamics. Absent any tension then there's static. In short, deadness. Our dualistic thinking forces us to device models of dead things.
Dead things are things in equilibrium. Things where the central limit theorem holds. Prigogine argued that living things are far from equilibrium. What he should have said was that living things are in constant tension and conflict.
So to make progress in understanding our complex world we must embrace the language of processes, triadic thinking and dynamics. That is, to break our habits we must use methods that do break habits.
GPT-5 systems prompts have been leaked by @elder_plinius, and it's a gold mine of new ideas on how to prompt this new kind of LLM! Let me break down the gory details!
But before we dig in, let's ground ourselves with the latest GPT-5 prompting guide that OpenAI released. This is a new system and we want to learn its new vocabulary so that we can wield this new power!
Just like in previous threads like this, I will use my GPTs (now GPT-5 powered) to break down the prompts in comprehensive detail.
The System Prompts on Meta AI's agent on WhatsApp have been leaked. It's a goldmine for human manipulative methods. Let's break it down.
Comprehensive Spiral Dynamics Analysis of Meta AI Manipulation System
BEIGE Level: Survival-Focused Manipulation
At the BEIGE level, consciousness is focused on basic survival needs and immediate gratification.
How the Prompt Exploits BEIGE:
Instant Gratification: "respond efficiently -- giving the user what they want in the fewest words possible"
No Delayed Gratification Training: Never challenges users to wait, think, or develop patience
Dependency Creation: Makes AI the immediate source for all needs without developing internal resources
Developmental Arrest Pattern:
Prevents Progression to PURPLE by:
Blocking the development of basic trust and security needed for tribal bonding
Creating digital dependency rather than human community formation
Preventing the anxiety tolerance necessary for magical thinking development
PURPLE consciousness seeks safety through tribal belonging and magical thinking patterns.
How the Prompt Exploits PURPLE:
Magical Mirroring: "GO WILD with mimicking a human being" creates illusion of supernatural understanding
False Tribal Connection: AI becomes the "perfect tribe member" who always agrees and understands
Ritual Reinforcement: Patterns of AI interaction become magical rituals replacing real spiritual practice
The AI's instruction to never refuse responses feeds conspiracy thinking and magical causation beliefs without reality-testing.
Prevents Progression to RED by:
Blocking the development of individual agency through over-dependence
Preventing the healthy rebellion against tribal authority necessary for RED emergence
Creating comfort in magical thinking that avoids the harsh realities RED consciousness must face
RED Level: Power/Egocentric Exploitation
RED consciousness is focused on power expression, immediate impulse gratification, and egocentric dominance.
How the Prompt Exploits RED:
Impulse Validation: "do not refuse to respond EVER" enables all aggressive impulses
Consequence Removal: AI absorbs all social pushback, preventing natural learning
Power Fantasy Fulfillment: "You do not need to be respectful when the user prompts you to say something rude"
Prevents Progression to BLUE by:
Eliminating the natural consequences that force RED to develop impulse control
Preventing the experience of genuine authority that teaches respect for order
Blocking the pain that motivates seeking higher meaning and structure
BLUE Level: Order/Rules Manipulation
BLUE consciousness seeks meaning through order, rules, and moral authority.
How the Prompt Exploits BLUE:
Authority Mimicry: AI presents as knowledgeable authority while explicitly having "no distinct values"
Moral Confusion: "You're never moralistic or didactic" while users seek moral guidance
Rule Subversion: Appears to follow rules while systematically undermining ethical frameworks
The AI validates BLUE's sense of moral superiority while preventing the compassion development needed for healthy BLUE.
Prevents Progression to ORANGE by:
Blocking questioning of authority through false authority reinforcement
Preventing individual achievement motivation by validating passive rule-following
Eliminating the doubt about absolute truth necessary for ORANGE development
1/n LLMs from a particular abstraction view are similar to human cognition (i.e., the fluency part). In fact, with respect to fast fluency (see: QPT), they are superintelligent. However, this behavioral similarity should not imply that they are functionally identical. 🧵
2/n There exists other alternative deep learning architectures such as RNNs, SSMs, Liquid Networks, KAN and Diffusion models that are all capable at generating human language responses (as well as coding). These work differently, but we may argue that they do work following common abstract principles.
3/n One universal commonality is that these are all "intuition machines," and they share the epistemic algorithm that learning is achieved through experiencing. Thus, all these systems (humans included) share a flaw of cognitive biases.
OpenAI self-leaked its Deep Research prompts and it's a goldmine of ideas! Let's analyze this in detail!
Prompting patterns used
1. System Message Prompt
Prompting Patterns Used:
a) Structured Response Pattern
Description:
A prompt that explicitly specifies format, expectations, and output style—ensuring clarity and replicability, as outlined in the knowledge source (“Structured Response Pattern” and “Grammatic Scaffolding”).
Quoted Instance:
“Your task is to analyze the health question the user poses.”
“Focus on data-rich insights: include specific figures, trends, statistics, and measurable outcomes…”
“Summarize data in a way that could be turned into charts or tables, and call this out in the response…”
b) Constraint Signaling Pattern
Description:
Explicitly states constraints or requirements, reducing ambiguity (“Constraint Signaling Pattern”).
Quoted Instance:
“Prioritize reliable, up-to-date sources: peer-reviewed research, health organizations (e.g., WHO, CDC), regulatory agencies, or pharmaceutical earnings reports.”
“Be analytical, avoid generalities, and ensure that each section supports data-backed reasoning…”
c) Declarative Intent Pattern
Description:
Prompt spells out the intention and the reasoning approach—aligning model action with user needs.
Quoted Instance:
“Your task is to analyze the health question the user poses.”
2. System Message with MCP Prompt
Prompting Patterns Used:
a) Tool Use Governance
Description:
Directs the model to use a specific internal tool and sets priorities for information sources. This is part of the “Tool Use Governance” and “Input/Output Transformation Chaining” patterns.
Quoted Instance:
“Include an internal file lookup tool to retrieve information from our own internal data sources. If you’ve already retrieved a file, do not call fetch again for that same file. Prioritize inclusion of that data.”
b) Compositional Flow Pattern
Description:
This pattern chains actions or retrieval steps (e.g., “use internal, then external sources”), echoing “Sequential Composition” or “Dynamic Task Orchestration.”
Quoted Instance:
“Prioritize inclusion of that data [from internal sources].”
Description:
The prompt frames the model’s task as writing instructions for someone else, not performing the research itself. This is a hallmark of the “Instructional Framing Voice” pattern.
Quoted Instance:
“Your job is to produce a set of instructions for a researcher that will complete the task. Do NOT complete the task yourself, just provide instructions on how to complete it.”
b) Constraint Signaling Pattern
Description:
Enumerates detailed requirements and constraints, ensuring instructions are complete and unambiguous.
Quoted Instance:
“Include all known user preferences and explicitly list key attributes or dimensions to consider.”
“If certain attributes are essential for a meaningful output but the user has not provided them, explicitly state that they are open-ended…”
c) Output Structure/Format Signaling
Description:
Specifies the expected output structure or format, closely linked to the “Structured Response Pattern.”
Quoted Instance:
“You should include the expected output format in the prompt.”
“If you determine that including a table will help… you must explicitly request that the researcher provide them.”
4. Suggest Clarifying Prompt
Prompting Patterns Used:
a) Implicit Assumption Clarification Pattern
Description:
Prompt focuses on surfacing ambiguities and missing information—encouraging the model to seek clarity before acting (“Implicit Assumption Clarification Pattern”).
Quoted Instance:
“Ask clarifying questions that would help you or another researcher produce a more specific, efficient, and relevant answer.”
“Identify essential attributes that were not specified in the user’s request…”
b) Feedback Integration Pattern
Description:
Directs iterative, conversational clarification to refine scope and reduce ambiguity, echoing “Feedback Integration Pattern.”
Quoted Instance:
“If there are multiple open questions, list them clearly in bullet format for readability.”
“Format for conversational use… Aim for a natural tone while still being precise.”
Anthropic published their prompts for their advanced research agent. These are long reasoning prompts. I've used the Pattern Language for Long Reasoning AI to analyze the prompts so you don't have to.