Carlos E. Perez
Sep 14, 2020 · 27 tweets
How does it feel to understand something?
To feel that you understand something implies that it is conveyed to you in a language that you have previously understood.
Language is more than syntax and it includes semantics. Our natural language is full of metaphors and we understand what is spoken to us through our previous understanding of these metaphors.
Thus the feeling of understanding at the most basic level is its connection to what you already know. The great explainers make this connection using apt metaphors in their language.
But understanding will vary in degrees. An expert will understand the words of another expert in a different way than a novice will understand it.
That's because to get a better 'feel' for a subject, one has to touch its surface in many more ways. This requires more than passive engagement; understanding is deepened by doing.
There is no deep understanding without doing. There is no understanding without interaction. This interaction may be physical or it may be mental.
The latter kind is more difficult because there is nothing to correct for errors other than one's previous understanding.
Thus understanding involves interacting with the world to uncover the errors in our understanding that the world reveals. We understand because we interact, and our interactions expose our errors.
Thus when we again interact with a new subject and discover no errors in our interaction, we gain confidence in our understanding and thus the feel of understanding.
Human understanding involves connecting many related concepts. So, we feel that we understand when we can generate the connections ourselves. Passively seeing the connections is not the same as generating the connections.
Thus to persuade someone that they understand something, they have to generate the connections themselves. This begins by planting a seed so that their understanding grows.
Once the seed is planted and has grown within a person through repeated reinforcement, it becomes nearly impossible to change that person's understanding. Arguments are insufficient. That is why disinformation is such a terrible thing.
The feeling of mastery of a subject is when we discover ourselves in the flow of thought, where we can navigate a complex subject with effortlessness.
Unfortunately, this can be a curse in disguise. Mastery becomes a curse if one begins with the wrong seed. We may feel we understand the world, yet be entirely wrong, because our understanding germinated from a seed that is not of this world.
What then is the 'right seed'? The right seed will always begin with a hypothesis. A hypothesis is imagined when one discovers an error in one's model of reality. It is through exploration of this error (or what one might call surprise) that we might generate a good hypothesis.
A good hypothesis is what Richard Feynman calls a 'first principle'. His first principle is, in fact, recursive: "The first principle is that you must not fool yourself — and you are the easiest person to fool."
This is when it hits you: many have fooled themselves into believing they understand the world they live in. The germinating seed of Christianity is that we are all sinners. The germinating seed for understanding is that we are all fools.
We are fools because we have accepted being fooled by the societies we live in. We believe we understand our world because our societies have fooled us.
C.S. Peirce and @conways_law realized that the concepts that we create are the consequence of the organizations that we've invented. These concepts are reinforced by the organizations whose existence is justified by the validity of their ideas.
In every domain of science, organizations have emerged to promote a specific perspective on reality. We build hermetically sealed communities that are unable to absorb new ideas from other communities.
We are unable to break out of a belief system because we grew up in that belief system. The most difficult thing to do is to throw away your belief system that took decades of effort for you to grow.
The first step in progress is to accept that your first principle is wrong. Unfortunately, you cannot do this because your entire livelihood, your entire being, is in jeopardy if you make this acknowledgment.
So the most convenient thing is to accept the reassuring lie. So even though we have a glimpse that we are wrong, we refuse to make the change. We have already sunk too big an investment in the wrong cause.
This is why most change comes from the youth. From the people who have yet to make an investment. From the people who do not benefit from the status quo.
From the people who know that they are fools.

More from @IntuitMachine

Jun 27
OpenAI self-leaked its Deep Research prompts and it's a goldmine of ideas! Let's analyze this in detail!
Prompting patterns used:
1. System Message Prompt

Prompting Patterns Used:
a) Structured Response Pattern
Description:
A prompt that explicitly specifies format, expectations, and output style—ensuring clarity and replicability, as outlined in the knowledge source (“Structured Response Pattern” and “Grammatic Scaffolding”).
Quoted Instance:

“Your task is to analyze the health question the user poses.”

“Focus on data-rich insights: include specific figures, trends, statistics, and measurable outcomes…”

“Summarize data in a way that could be turned into charts or tables, and call this out in the response…”
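To make the pattern concrete, here is a minimal sketch of a system message assembled in this structured style. The message text is paraphrased from the quoted instances above, and the messages-array layout follows the common chat-completion convention rather than anything confirmed by the leak:

```python
# Hypothetical sketch of a system message built around the
# Structured Response Pattern: format, expectations, and output
# style are all stated explicitly up front.
structured_system_message = "\n".join([
    "Your task is to analyze the health question the user poses.",
    "Focus on data-rich insights: include specific figures, trends,",
    "statistics, and measurable outcomes.",
    "Summarize data in a way that could be turned into charts or",
    "tables, and call this out in the response.",
])

messages = [
    {"role": "system", "content": structured_system_message},
    {"role": "user", "content": "Is daily aspirin still recommended?"},
]
```

The point of the pattern is that every formatting expectation lives in the system turn, so outputs stay comparable across runs.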

b) Constraint Signaling Pattern

Description:
Explicitly states constraints or requirements, reducing ambiguity (“Constraint Signaling Pattern”).
Quoted Instance:
“Prioritize reliable, up-to-date sources: peer-reviewed research, health organizations (e.g., WHO, CDC), regulatory agencies, or pharmaceutical earnings reports.”

“Be analytical, avoid generalities, and ensure that each section supports data-backed reasoning…”

c) Declarative Intent Pattern

Description:
Prompt spells out the intention and the reasoning approach—aligning model action with user needs.

Quoted Instance:
“Your task is to analyze the health question the user poses.”

2. System Message with MCP Prompt

Prompting Patterns Used:

a) Tool Use Governance

Description:
Directs the model to use a specific internal tool and sets priorities for information sources. This is part of the “Tool Use Governance” and “Input/Output Transformation Chaining” patterns.
Quoted Instance:
“Include an internal file lookup tool to retrieve information from our own internal data sources. If you’ve already retrieved a file, do not call fetch again for that same file. Prioritize inclusion of that data.”
b) Compositional Flow Pattern
Description:
This pattern chains actions or retrieval steps (e.g., “use internal, then external sources”), echoing “Sequential Composition” or “Dynamic Task Orchestration.”

Quoted Instance:

“Prioritize inclusion of that data [from internal sources].”

3. Suggest Rewriting Prompt
Prompting Patterns Used:
a) Instructional Framing Voice

Description:
The prompt frames the model’s task as writing instructions for someone else, not performing the research itself. This is a hallmark of the “Instructional Framing Voice” pattern.

Quoted Instance:

“Your job is to produce a set of instructions for a researcher that will complete the task. Do NOT complete the task yourself, just provide instructions on how to complete it.”
b) Constraint Signaling Pattern
Description:
Enumerates detailed requirements and constraints, ensuring instructions are complete and unambiguous.
Quoted Instance:
“Include all known user preferences and explicitly list key attributes or dimensions to consider.”

“If certain attributes are essential for a meaningful output but the user has not provided them, explicitly state that they are open-ended…”

c) Output Structure/Format Signaling

Description:
Specifies the expected output structure or format, closely linked to the “Structured Response Pattern.”
Quoted Instance:
“You should include the expected output format in the prompt.”

“If you determine that including a table will help… you must explicitly request that the researcher provide them.”

4. Suggest Clarifying Prompt

Prompting Patterns Used:

a) Implicit Assumption Clarification Pattern
Description:
Prompt focuses on surfacing ambiguities and missing information—encouraging the model to seek clarity before acting (“Implicit Assumption Clarification Pattern”).
Quoted Instance:

“Ask clarifying questions that would help you or another researcher produce a more specific, efficient, and relevant answer.”

“Identify essential attributes that were not specified in the user’s request…”

b) Feedback Integration Pattern

Description:
Directs iterative, conversational clarification to refine scope and reduce ambiguity, echoing “Feedback Integration Pattern.”
Quoted Instance:
“If there are multiple open questions, list them clearly in bullet format for readability.”
“Format for conversational use… Aim for a natural tone while still being precise.”
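A rough sketch of the clarification idea described above: detect required attributes missing from a request and surface them as bulleted clarifying questions before any research begins. The attribute names here are my own invented examples, not OpenAI's:

```python
# Hypothetical sketch of the Implicit Assumption Clarification pattern.
# REQUIRED_ATTRIBUTES is an assumed example list, not from the leaked prompt.
REQUIRED_ATTRIBUTES = ["budget", "region", "time frame"]

def clarifying_questions(request: str) -> list[str]:
    """Return bulleted questions for each essential attribute the user omitted."""
    missing = [a for a in REQUIRED_ATTRIBUTES if a not in request.lower()]
    return [f"- What {attr} should the research assume?" for attr in missing]

print("\n".join(clarifying_questions("Find the best EV for my budget")))
```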
Jun 14
Anthropic published their prompts for their advanced research agent. These are long reasoning prompts. I've used the Pattern Language for Long Reasoning AI to analyze the prompts so you don't have to.
Here is the analysis of the citations prompt.
Here is the analysis of the research lead prompt.
Jun 7
Shocker! Cursor system prompts have been leaked, and it's a goldmine!

The leaked system prompt incorporates several identifiable agentic AI patterns as described in "A Pattern Language For Agentic AI." Here's an analysis of the key patterns used:

1. Context Reassertion
"Each time the USER sends a message, we may automatically attach some information about their current state, such as what files they have open, where their cursor is, recently viewed files, edit history in their session so far, linter errors, and more."

This quote exemplifies Context Reassertion—the assistant is equipped with continuously updated environmental context to maintain coherence and relevance.

2. Intent Echoing
"Your main goal is to follow the USER's instructions at each message, denoted by the tag."

" how do I get nginx to get the key from an environment variable in my .env? "

The system’s focus on parsing and responding to a well-defined user_query illustrates Intent Echoing, ensuring the agent aligns precisely with the user’s intent.

3. Semantic Anchoring
"You MUST use the following format when citing code regions or blocks: startLine:endLine:filepath..."

"...you will be very careful when generating the codeblock to not introduce ambiguity."

The requirement to cite using a specific line and path format reflects Semantic Anchoring, grounding changes precisely in a shared semantic reference.
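The startLine:endLine:filepath anchor is mechanical enough to validate. Here is a small parser sketch for it; this is my own code for illustration, not part of the leaked prompt:

```python
import re

# The cited format is startLine:endLine:filepath, e.g. "12:24:src/app.ts".
CITATION_RE = re.compile(r"^(\d+):(\d+):(.+)$")

def parse_citation(citation: str) -> tuple[int, int, str]:
    """Split a code citation into (start_line, end_line, filepath)."""
    m = CITATION_RE.match(citation)
    if not m:
        raise ValueError(f"not a startLine:endLine:filepath citation: {citation!r}")
    start, end, path = int(m.group(1)), int(m.group(2)), m.group(3)
    if start > end:
        raise ValueError("startLine must not exceed endLine")
    return start, end, path

print(parse_citation("12:24:nginx/nginx.conf"))
```

A strict, machine-checkable anchor like this is exactly what makes the reference unambiguous for a downstream apply model.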

4. Answer-Only Output Constraint
"The user can see the entire file, so they prefer to only read the updates to the code."

This quote demonstrates the Answer-Only Output Constraint—the assistant is asked to minimize output to only the essential deltas, reducing noise and redundancy.

5. Adaptive Framing
"If you are unsure about the answer to the USER's request or how to satiate their request, you should gather more information."

"Bias towards not asking the user for help if you can find the answer yourself."

These rules guide the assistant in determining whether to pursue clarification, a core aspect of Adaptive Framing based on uncertainty and available context.

6. Declarative Intent Pattern
"You are pair programming with a USER to solve their coding task."

"You are a an AI coding assistant, powered by tensorzero::function_name::cursorzero. You operate in Cursor"

This self-definition clearly articulates the assistant’s role and operational domain, which aligns with the Declarative Intent Pattern.

7. Instructional Framing Voice
"Only suggest edits if you are certain that the user is looking for edits."

"To help specify the edit to the apply model, you will be very careful when generating the codeblock to not introduce ambiguity."

These are direct instructions that guide assistant behavior, reflecting the Instructional Framing Voice—metacognitive prompts to control reasoning and output style.

8. Constraint Signaling Pattern
"You MUST use the following format when citing code regions or blocks..."

"This is the ONLY acceptable format..."

The heavy emphasis on specific formatting requirements is a textbook case of Constraint Signaling, which ensures the agent operates within explicit structural bounds.
Pattern Overview: Context Reassertion

Context Reassertion is the act of persistently supplying or recovering relevant context so that continuity is preserved, especially when interacting across turns or after state transitions.

Purpose:
It mitigates LLM drift or disconnection from prior state by explicitly maintaining or restating key elements of the conversation, code environment, user activity, and intent.

Application in the Prompts
System Prompt Evidence

"Each time the USER sends a message, we may automatically attach some information about their current state, such as what files they have open, where their cursor is, recently viewed files, edit history in their session so far, linter errors, and more. This information may or may not be relevant to the coding task, it is up for you to decide."

This designates that stateful metadata (cursor location, files open, edit history, etc.) will accompany user prompts. This is contextual scaffolding—supporting the assistant’s situational awareness.

User Prompt Evidence

Below are some potentially helpful/relevant pieces of information for figuring out to respond

Path: nginx/nginx.conf
Line: 1
Line Content: events {}

...
This is a structured reassertion of context across layers:
- **File Path**: `nginx/nginx.conf`
- **Cursor Position**: Line 1
- **Manual Selection**: Lines 1–16 of the file
- **Full File Content**: Included in-line

The assistant is not just answering a question in a vacuum but is immersed in the live state of the user’s development environment—exactly what **Context Reassertion** is designed to facilitate.
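A hypothetical sketch of what such an attached-state payload might look like as data. The field names here are guesses for illustration only, not Cursor's actual schema:

```python
import json

# Assumed shape of the per-turn context attached to a user message.
attached_context = {
    "path": "nginx/nginx.conf",
    "cursor_line": 1,
    "manual_selection": {"start_line": 1, "end_line": 16},
    "file_content": "events {}\n",
    "linter_errors": [],
}

user_message = {
    "role": "user",
    "content": "how do I get nginx to get the key from an environment variable in my .env?",
    "context": attached_context,  # reasserted on every turn, not just the first
}

print(json.dumps(user_message["context"], indent=2))
```

Because the context rides along with every message, the assistant never has to reconstruct the environment from conversation history alone.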

---

### **Functionality Enabled by Context Reassertion**

1. **Precision in Suggestions**: The assistant knows *where* in the file the user is working, allowing for tailored code advice.
2. **Reduced Ambiguity**: With live file contents and active lines included, the assistant doesn’t have to guess the context.
3. **Continuity Across Turns**: If the user comes back later, the assistant can reuse this context or infer from a new one, supporting conversational memory continuity.

---

### **Why This Matters in Agentic AI**

In an agentic paradigm, the system behaves not like a one-shot responder but as a **continuously collaborating partner**. For that to work, it must retain, reuse, and reflect on context across states. The persistent presence of file context, cursor data, and history emulates **episodic memory**—a cognitive trait critical to agents with intent and continuity.

---
2. Intent Echoing

Pattern Definition:
The Intent Echoing pattern reinforces the user’s explicit goal by reflecting or paraphrasing their stated intent in a stable, traceable way. It helps the AI stay grounded, reduces ambiguity, and aids the user in validating that their instruction has been understood correctly.

Where It Appears in the Prompts:
System Prompt:

"Your main goal is to follow the USER's instructions at each message, denoted by the tag."

This directive does more than define the assistant’s task—it encodes a protocol for isolating and prioritizing user intent by locating it in a tagged section. It transforms arbitrary conversation into a structured interface, where each message has a single highlighted focus. This is the system setting up intent echoing infrastructure.

User Prompt (with Context and Tagging):

" how do I get nginx to get the key from an environment variable in my .env? "

The presence of this tag explicitly isolates user intent. It’s structured in a way that allows the assistant (and any intermediary system) to reliably extract and echo back what the user is asking.

Assistant Response (Echo and Contextual Integration):

"I see you're trying to use an environment variable for your authentication token in the Nginx configuration instead of hardcoding 'your-token-here'..."

This sentence echoes the user's intent—without restating the question verbatim, it confirms that the assistant has interpreted the user's goal: avoiding a hardcoded token by referencing an environment variable. This is a paraphrastic echo, an important variant of the pattern that shows understanding rather than repetition.

Why It Matters in This Context:

Disambiguation in Complex Contexts:
The assistant has access to extra context (e.g., open files, manual selections), which may distract from the actual question. Intent echoing ensures the assistant responds to the user’s current goal, not just the visible data.

Trust and Validation:
By restating the intent in terms that match the user’s problem ("get nginx to read from .env"), the assistant gives the user a chance to mentally validate: “Yes, that’s what I meant.”

Alignment in Multi-Step Dialogue:
Echoing the intent lays the foundation for future interaction. If the user follows up with "What if I’m not using Docker?", the prior echo provides continuity—anchoring the assistant’s evolving understanding.

How It Could Be Further Strengthened:

The assistant could explicitly name the variable or reflect the specific config line involved (e.g., "~*^Bearer your-token-here$"), enhancing traceability between intent and context.

The system could log or visually display the echoed intent, making multi-turn interactions more transparent for the user.
May 24
Shocker! Claude 4 system prompt was leaked, and it's a goldmine!

The Claude system prompt incorporates several identifiable agentic AI patterns as described in "A Pattern Language For Agentic AI." Here's an analysis of the key patterns used:

Run-Loop Prompting: Claude operates within an execution loop until a clear stopping condition is met, such as answering a user's question or performing a tool action. This is evident in directives like "Claude responds normally and then..." which show turn-based continuation guided by internal conditions.

Input Classification & Dispatch: Claude routes queries based on their semantic class—such as support, API queries, emotional support, or safety concerns—ensuring they are handled by different policies or subroutines. This pattern helps manage heterogeneous inputs efficiently.

Structured Response Pattern: Claude uses a rigid structure in output formatting—e.g., avoiding lists in casual conversation, using markdown only when specified—which supports clarity, reuse, and system predictability.

Declarative Intent: Claude often starts segments with clear intent, such as noting what it can and cannot do, or pre-declaring response constraints. This mitigates ambiguity and guides downstream interpretation.

Boundary Signaling: The system prompt distinctly marks different operational contexts—e.g., distinguishing between system limitations, tool usage, and safety constraints. This maintains separation between internal logic and user-facing messaging.

Hallucination Mitigation: Many safety and refusal clauses reflect an awareness of LLM failure modes and adopt pattern-based countermeasures—like structured refusals, source-based fallback (e.g., directing users to Anthropic’s site), and explicit response shaping.

Protocol-Based Tool Composition: The use of tools like web_search or web_fetch with strict constraints follows this pattern. Claude is trained to use standardized, declarative tool protocols which align with patterns around schema consistency and safe execution.

Positional Reinforcement: Critical behaviors (e.g., "Claude must not..." or "Claude should...") are often repeated at both the start and end of instructions, aligning with patterns designed to mitigate behavioral drift in long prompts.
The Run-Loop Prompting pattern, as used in Claude's system prompt, is a foundational structure for agentic systems that manage tasks across multiple interaction turns. Here's a more detailed breakdown of how it functions and appears in Claude's prompt:

Core Concept of Run-Loop Prompting

Run-Loop Prompting involves:

Executing within a loop where the system awaits a signal (usually user input or tool result).
Evaluating whether a stopping condition has been met.
Deciding either to complete the response or to continue with another action (like a tool call or a follow-up question).

This mirrors programming constructs like while or for loops, but in natural language form.
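That loop can be sketched directly. Here `call_model` and `run_tool` are stand-ins for whatever model and tool interfaces an agent actually uses, not a real API:

```python
# Minimal run-loop sketch: act, check a stopping condition,
# otherwise continue with another tool call and fold the result
# back into the state for the next pass.
def run_loop(task, call_model, run_tool, max_turns=8):
    state = {"task": task, "observations": []}
    for _ in range(max_turns):
        step = call_model(state)            # model decides the next action
        if step["action"] == "respond":     # stopping condition met
            return step["content"]
        result = run_tool(step["action"], step.get("args", {}))
        state["observations"].append(result)  # feed tool output back in
    return "stopped: turn budget exhausted"
```

Each pass through the loop is also a natural checkpoint for re-applying safety and refusal criteria, which is the point made below.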

How It Manifests in Claude’s Prompt

In Claude's case:

Each user interaction is a "run": Claude processes input, possibly calls a tool (like web_search), and returns a result.
The loop continues if further actions are required—for instance, fetching more results, verifying information, or clarifying a query.
The stopping condition is implicit: Claude halts its operations when the query is resolved or if refusal criteria are triggered (e.g., unsafe or out-of-scope requests).

Specific Indicators in the Prompt

"Claude is now being connected with a person." → Initializes the loop.
"Claude uses web search if asked about..." → Specifies mid-loop tool use under certain conditions.
"Claude responds normally and then..." → Suggests continuity and state progression from one step to the next.
Tool response handling instructions like blocks further reinforce that the loop supports structured transitions between action and reasoning.

Why This Matters

Run-loop prompting gives Claude:

Agentic persistence: It can follow through on multi-step tasks without losing coherence.
Responsiveness: It adapts its next move based on outcomes of previous steps.
Safety control: Each loop pass allows reevaluation against safety and refusal criteria.
Input Classification & Dispatch refers to the way Claude systematically classifies user inputs into discrete categories and routes them through appropriate response protocols. This enables Claude to handle a wide range of queries in a coherent, consistent, and policy-aligned way.

Breakdown of the Pattern

Classification: Claude identifies the type of input it receives. Categories in the prompt include:
Product or support queries (e.g., about the Claude API, usage limits)
Behavioral or safety-related content (e.g., harmful requests, involving minors)
Conversational or emotional support requests
Hypotheticals or personal questions
Requests invoking tools (e.g., web_search, Claude Code)

Routing/Dispatch:
Each input type is mapped to a specific set of instructions.
For product questions like "How many messages can I send?", Claude responds with, "I don’t know, check [support site]."
For API-related questions, Claude is told to redirect to Anthropic's docs.
For emotional support, Claude uses empathetic and warm tone without lists.
For safety red flags, Claude switches to strict refusal mode and avoids speculation or justification.
For uncertain or false presuppositions, Claude engages with clarification before answering.
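A toy sketch of classify-then-dispatch, with the categories and canned behaviors paraphrased from the descriptions above rather than taken from Claude's actual rules:

```python
# Hypothetical classification-and-dispatch sketch; the keyword
# heuristics and handler replies are illustrative paraphrases only.
def classify(query: str) -> str:
    q = query.lower()
    if "api" in q:
        return "api"
    if "messages can i send" in q or "usage limit" in q:
        return "support"
    if any(w in q for w in ("sad", "lonely", "anxious")):
        return "emotional"
    return "general"

HANDLERS = {
    "api": lambda q: "See Anthropic's API docs.",
    "support": lambda q: "I don't know; please check the support site.",
    "emotional": lambda q: "Respond warmly, without lists.",
    "general": lambda q: "Answer normally.",
}

def dispatch(query: str) -> str:
    return HANDLERS[classify(query)](query)

print(dispatch("How many messages can I send?"))
```

The dispatch table is what makes the pattern scalable: a new input class is a new entry, not a rewrite of the whole prompt.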
Benefits of This Pattern:
Scalability: Allows new behaviors to be slotted in without rewriting the whole system prompt.
Safety: Reduces the chance of hallucination or misuse by isolating risky inputs.
Clarity: Makes Claude’s behavior predictable and interpretable, both to users and developers.
Mar 30
1/n There seems to be a spectrum of interfaces between a UI that explicitly shows you the options (like a dinner menu) and the free-form chat interface that doesn't, and instead requires conversation to find them out. But why don't we have UIs that seamlessly flow between the two ends of the spectrum? Between constrained interfaces and open-ended ones?
2/n You see, I've been exploring prompting patterns for a while now: first with conversational systems like GPT-4, then with long reasoning systems like o1, and lately with agentic AI systems that support MCP.
3/n What I've found is that prompting alone can be quite complex and it's in fact difficult to keep in one's head all the methods that enhance one's prompting. We kind of need something beyond just the classic UI and the classic chat.
Mar 21
1/n 🧵We've invented instruments like microscopes and telescopes that give us a glimpse of the much deeper (i.e., smaller) and broader (i.e., larger) aspects of spacetime. Artificial Intelligence is an instrument that aids us in exploring the deeper and broader aspects of inner space (i.e., the mindscape).
2/n To make an analogy, many AI researchers work on developing better instruments. Like working on all kinds of telescopes or microscopes. Then there are people who use these instruments to explore the extremely large and the extremely small. In the same way, there are people who *use* AI to explore the mindscape. These are *not* necessarily the same people.
3/n Yes, in the old days, Galileo built his own telescope and was among the first to explore planetary objects. But as science and technology evolved, the technologists became distinct from the researchers. More precisely, specialization meant people explored different aspects: building better instruments versus discovering the nature of reality.
