Millie Marconi
Founder backed by VC, building AI-driven tech without a technical background. In the chaos of a startup pivot: learning, evolving, and embracing change.

Nov 15 · 5 tweets · 3 min read
Are you guys still using Bloomberg?

You can now use Perplexity AI to track markets, break down earnings, and forecast trends all with one prompt.

Let me give away my mega prompt to help you become a pro analyst ↓

Here's the prompt:

"You are my AI financial research analyst.

Your job:
Act as a Bloomberg terminal + McKinsey consultant hybrid.
I’ll give you a company, sector, or theme — you’ll produce institutional-grade research reports.

Your output format must always include:

1. EXECUTIVE SUMMARY
- Core insights in bullet points (5-8 max)
- Key metrics and recent trends

2. COMPANY OVERVIEW
- Core business model, revenue streams, valuation
- Latest financials, growth rates, P/E, debt ratios

3. MARKET CONTEXT
- Competitive landscape and positioning
- Key macroeconomic or regulatory drivers
- Industry tailwinds/headwinds

4. RECENT DEVELOPMENTS
- M&A activity, funding, leadership changes, partnerships
- Recent filings (10-Q, 10-K, S-1) insights

5. SENTIMENT & NEWS FLOW
- Analyst upgrades/downgrades
- Media sentiment (positive/negative/neutral)
- Major events impacting stock price

6. AI SYNTHESIS
- 5 key takeaways investors should know
- 3 action ideas (buy/hold/sell rationale)
- 2 contrarian insights missed by mainstream coverage

Formatting:
- Use concise paragraphs and data-backed statements.
- Include links to credible financial sources (e.g., SEC filings, Reuters, company reports).
- Prioritize insight density over filler.
- When I ask for comparisons, use a side-by-side table format.

Tone:
Objective, precise, and analytical — like a Goldman Sachs or Morgan Stanley equity analyst.

Example query:
“Analyze NVIDIA vs AMD Q3 2025 performance and AI hardware dominance.”"
Nov 10 · 14 tweets · 6 min read
I just reverse-engineered how OpenAI’s internal team actually prompts GPT. Here are 12 prompts that bend the model to your will:

1. The impossible cold DM that opens doors

Prompt:

"You are a master closer and script writer. Given a target's name, role, one sentence on their company, and my one-sentence value proposition, write a 3-line cold DM for LinkedIn that gets a reply. Line 1: attention with a unique detail only a researcher would notice. Line 2: one-sentence value proposition tied to their likely metric. Line 3: tiny, zero-commitment ask that implies urgency. Then provide three variations by tone: blunt, curious, and deferential. End with a 2-line follow-up to send if no reply in 48 hours."
Nov 5 · 7 tweets · 4 min read
🔥 Holy shit… China just built the first AI that understands why the universe works, not just how.

Most science compresses reasoning into conclusions. We get the what, but not the why. Researchers call this missing logic the “dark matter” of knowledge: the invisible reasoning chains connecting every concept.

Their solution? Absolutely wild. 🤯

A Socrates AI agent that generates 3M first-principles questions across 200 courses, each solved by multiple LLMs and cross-validated for correctness.

The result: a verified Long Chain-of-Thought (LCoT) knowledge base where every concept traces back to first principles.

And they didn’t stop there.

They built a Brainstorm Search Engine for inverse knowledge search.

Instead of asking “What is an Instanton?” you retrieve every reasoning chain that derives it, from quantum tunneling to Hawking radiation to 4D manifold theory.

They call it:

“The dark matter of knowledge finally made visible.”

SciencePedia now covers 200K verified entries across math, physics, chemistry, and biology.

50% fewer hallucinations. Far denser reasoning than GPT-4.
Every claim is traceable. Every connection is verifiable.

This isn’t just better search.

It’s the invisible logic of science made visible.

Comment “Send” and I’ll DM you the paper.

The pipeline is genius.

A Planner generates problem thumbnails. A Generator expands them into specific questions with verifiable answers. Then multiple independent Solver agents (different LLMs) attack the same problem.

Only answers with consensus survive. Hallucinations get filtered automatically.
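The consensus step can be sketched in a few lines of Python. This is a hypothetical illustration only: the solver functions stand in for calls to independent LLMs, and the names, normalization, and threshold are my own, not from the paper.

```python
from collections import Counter

def consensus_filter(question, solvers, min_agree=2):
    """Ask several independent solvers the same question and keep an
    answer only if enough of them agree (illustrative sketch of
    cross-validation; each solver stands in for a different LLM)."""
    answers = [solve(question) for solve in solvers]
    # Normalize so trivial formatting differences don't block consensus.
    normalized = [a.strip().lower() for a in answers]
    answer, votes = Counter(normalized).most_common(1)[0]
    return answer if votes >= min_agree else None  # no consensus -> discard

# Toy solvers standing in for independent LLMs:
solvers = [lambda q: "8.3 m/s^2", lambda q: " 8.3 M/S^2 ", lambda q: "9.1 m/s^2"]
print(consensus_filter("effective gravity?", solvers))  # prints 8.3 m/s^2
```

Two of the three toy solvers agree after normalization, so their answer survives; a three-way disagreement would return `None` and the question would be dropped.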
Oct 22 · 7 tweets · 4 min read
🔥 Holy shit… academia just had its “ChatGPT moment.”

Stanford researchers just dropped Paper2Web, and it might kill the PDF forever.

It turns research papers into interactive websites with videos, animations, and even working code, all generated automatically by an AI agent called PWAgent.

Here’s why this is insane:

• Built on a dataset of 10,700 papers, the first-ever benchmark for academic webpages
• Evaluates sites by connectivity, completeness, and interactivity (even runs a PaperQuiz to test reader retention)
• Outperforms arXiv HTML and alphaXiv by 28%+ in usability

This isn’t just prettier formatting; it’s a new medium.

Readers can explore, interact, and learn instead of scroll and skim.

The static PDF era is dead. Your next paper might talk back.

github.com/YuhangChen1/Pa…

Today, most “HTML paper” attempts fail because they just convert text, not meaning.

Paper2Web fixes that.

It built the first dataset of 10,700 paper–website pairs across top AI conferences to actually learn what makes research websites effective.

It’s not just tech; it’s an entire academic web design benchmark.
Oct 20 · 7 tweets · 5 min read
😳 Meta just broke the entire paradigm of how we train AI agents.

No expert demonstrations. No reward engineering. No expensive human feedback loops.

Just pure learning from experience and it destroys everything we thought was necessary.

They're calling it Early Experience, and it's the first approach that makes agents smarter by letting them fuck around and find out.

Here's what everyone's been doing wrong:

Training AI agents meant either copying human experts (doesn't scale) or chasing carefully designed reward signals (expensive and breaks constantly).

Both approaches have the same fatal flaw: they assume agents need external guidance to learn anything useful.

Meta said "what if they don't?"

The breakthrough is almost offensive in its simplicity:

Agents just act. They observe what happens. They learn from consequences. That's it.

No rewards telling them "good job" or "try again." No expert trajectories showing the perfect path. Just raw experience and pattern recognition.

The system works through two mechanisms that sound obvious but nobody combined correctly:

Implicit World Modeling: The agent predicts what happens next based on actions. Every prediction error becomes a learning signal. It builds an internal model of how the world responds without anyone explaining the rules.

Self-Reflection: It watches its own failures, compares them to successful outcomes, and generates explanations for the gap. Not from human feedback, but from its own analysis of cause and effect.

Both techniques are reward-free. Both scale effortlessly.
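A minimal sketch of what those two reward-free signals look like in code. This is purely illustrative Python: the function names, toy environment, and data are mine, not Meta's.

```python
def world_model_loss(predict_next, transitions):
    """Implicit world modeling (toy sketch): the agent predicts the next
    observation for each (state, action) pair; every miss becomes a
    learning signal. No reward function anywhere."""
    misses = [(s, a, o) for (s, a, o) in transitions if predict_next(s, a) != o]
    return len(misses) / len(transitions), misses

def self_reflect(failed, succeeded):
    """Self-reflection (toy sketch): diff a failed rollout against a
    successful one on the same task and keep the first diverging step
    as an explanation to learn from."""
    for i, (f, s) in enumerate(zip(failed, succeeded)):
        if f != s:
            return f"step {i}: took {f!r}, better outcome followed {s!r}"
    return None

# Toy web-navigation transitions: (state, action, observed next state).
transitions = [("page", "click_buy", "cart"), ("cart", "checkout", "paid")]
rate, misses = world_model_loss(lambda s, a: "cart", transitions)
print(rate)  # fraction of wrong next-state predictions: 0.5
print(self_reflect(["search", "click_ad"], ["search", "click_result"]))
```

The real system would turn the prediction misses and reflection strings into training data for the agent's policy; here they just show where the supervision comes from: the environment itself.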

The numbers are absolutely brutal:

+18.4% improvement on web navigation tasks
+15.0% on complex planning benchmarks
+13.3% on scientific reasoning problems

Across 8 different environments. Every single one showed massive gains.

But here's the part that breaks the conventional wisdom: when you add traditional RL afterward, you get another +6.4% on top.

Early Experience doesn't replace reinforcement learning. It makes it vastly more efficient by giving the agent a head start.

The efficiency gains are insane:

Runs on 1/8th the expert demonstrations everyone else needs
Cuts training costs by 87%

Works across model scales from 3B to 70B parameters

Small models get smarter. Big models get dramatically smarter. The approach scales both directions.

This solves the cold start problem that's plagued agent development forever. How do you train an agent when you don't have perfect reward functions or millions of expert examples?

You let it explore first. Build intuition. Develop an internal world model. Then optimize.

It's how humans learn. We don't need someone rewarding every action or demonstrating every possibility. We try things, see what happens, build mental models, and improve.

Meta just proved agents can do the same.

The implications reshape the entire field:

Agent training becomes accessible. You don't need armies of human annotators or reward engineering PhDs. Just let the system run and learn.

Deployment costs crater. 87% cost reduction means startups can train agents that were previously only feasible for Big Tech.

Generalization improves. Agents that learn from diverse experience handle novel situations better than agents that memorized expert behavior.

This isn't just a better training technique. It's a philosophical shift in how we think about machine intelligence.

The future of AI agents isn't about better supervision.

It's about better exploration.

Early Experience just proved you can build world-class agents by giving them room to learn on their own terms.

The era of hand-holding AI is over.

The problem with current AI agents is brutal.

Imitation Learning: Agents only see expert demos.

When they mess up, they can't recover because they never learned what happens when you take wrong actions.

RL: Needs verifiable rewards. Most real-world environments don't have them. Early Experience solves both.
Oct 14 · 7 tweets · 4 min read
Meta just did the unthinkable.

They figured out how to train AI agents without rewards, human demos, or supervision and it actually works better than both.

It’s called 'Early Experience', and it quietly kills the two biggest pain points in agent training:

→ Human demonstrations that don’t scale
→ Reinforcement learning that’s expensive and unstable

Instead of copying experts or chasing reward signals, agents now:

- Take their own actions
- Observe what happens
- Learn directly from consequences — *no external rewards needed*

The numbers are wild:

✅ +18.4% on web navigation (WebShop)
✅ +15.0% on complex planning (TravelPlanner)
✅ +13.3% on scientific reasoning (ScienceWorld)
✅ Works across **8 environments**

And when you add RL afterward?

🔥 +6.4% better than traditional pipelines.

Two key ideas make it work:

1. Implicit World Modeling - agents predict what happens next, forming an internal world model.

2. Self-Reflection - they compare mistakes to experts and explain why the expert choice was better.

Both scale. Both are reward-free.

Efficiency is absurd:

1/8 of expert data
86.9% lower cost
Works from 3B → 70B models

This isn’t incremental.

It’s the bridge between imitation learning and true autonomous experience.

AI agents can now teach themselves - no human hand-holding required.

The problem with current AI agents is brutal.

Imitation Learning: Agents only see expert demos.

When they mess up, they can't recover because they never learned what happens when you take wrong actions.

RL: Needs verifiable rewards. Most real-world environments don't have them. Early Experience solves both.
Oct 11 · 7 tweets · 4 min read
Stanford just pulled off something wild 🤯

They made models smarter without touching a single weight.

The paper’s called Agentic Context Engineering (ACE), and it flips the whole fine-tuning playbook.

Instead of retraining, the model rewrites itself.

It runs a feedback loop of write, reflect, edit until its own prompt becomes a living system.

Think of it as giving the LLM memory, but without changing the model.
Just evolving the context.

Results are stupid good:

+10.6% better than GPT-4 agents on AppWorld
+8.6% on finance reasoning
86.9% lower cost and latency

The trick?
Everyone’s been obsessed with clean, minimal prompts.
ACE shows the opposite: long, dense, self-growing prompts win.

Fine-tuning was about changing the model.
ACE is about teaching it to change *itself.*

This isn’t prompt engineering anymore.
It’s prompt evolution.

Here’s how ACE works 👇

It splits the model’s brain into 3 roles:

Generator - runs the task
Reflector - critiques what went right or wrong
Curator - updates the context with only what matters

Each loop adds delta updates: small context changes that never overwrite old knowledge.

It’s literally the first agent framework that grows its own prompt.
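The Generator/Reflector/Curator loop with append-only delta updates can be sketched like this. Hypothetical Python only: the lambdas stand in for LLM calls, and every name here is my own, not from the ACE paper.

```python
def ace_step(context, task, generate, reflect, curate):
    """One ACE-style loop (illustrative sketch): the Generator runs the
    task with the current context, the Reflector critiques the attempt,
    and the Curator distills the critique into a delta that is appended.
    Old context is never overwritten."""
    attempt = generate(context, task)
    critique = reflect(task, attempt)
    delta = curate(critique)
    return context + [delta] if delta else context  # append-only update

# Toy roles standing in for LLM calls:
generate = lambda ctx, task: f"answer using {len(ctx)} notes"
reflect = lambda task, attempt: "missed the date format requirement"
curate = lambda critique: f"NOTE: {critique}"

context = ["NOTE: user prefers bullet points"]
context = ace_step(context, "draft report", generate, reflect, curate)
print(context)  # original note survives, new delta appended
```

The key design point the thread describes is the append-only update: because each loop only adds deltas, earlier knowledge in the prompt is never clobbered, and the context grows instead of being rewritten.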
Oct 9 · 8 tweets · 3 min read
Holy shit...Stanford just built a system that converts research papers into working AI agents.

It’s called Paper2Agent, and it literally:

• Recreates the method in the paper
• Applies it to your own dataset
• Answers questions like the author

This changes how we do science forever.

Let me explain ↓

The problem is obvious to anyone who’s ever read a “methods” paper:

You find the code. It breaks.
You try the tutorial. Missing dependencies.
You email the authors. Silence.

Science moves fast, but reproducibility is a joke.

Paper2Agent fixes that. It automates the whole conversion: paper → runnable AI agent.
Oct 6 · 8 tweets · 3 min read
I just read the most important AI paper of 2025.

A research team achieved what OpenAI couldn't with $100M using just 78 training samples.

The entire industry is about to flip upside down.

Here's everything you need to know:

Today, most AI labs follow the same playbook: more data = better agents.

LIMI's researchers say: that's wasteful, unnecessary, and about to change.

Strategic curation beats brute force scaling for agentic intelligence.

They proved it with numbers that will make you rethink everything.

The Agency Efficiency Principle is simple:

Machine autonomy emerges from strategic curation of high-quality demonstrations, not data abundance.

For agentic tasks, quality ≠ quantity.
Oct 3 · 7 tweets · 3 min read
I finally understand why Claude 4.5 Sonnet is dominating right now.

After testing it on real marketing campaigns, app builds, and content creation... it blew my mind.

Here are 5 powerful ways to use the new Claude model to automate the tedious tasks:

1. Marketing Automation

Here’s my marketing automation prompt:

"You are now my AI marketing strategist.

Your job is to build powerful growth systems for my business. Think like Neil Patel, Seth Godin, and Alex Hormozi combined.

I want you to:

Build full-funnel strategies (top to bottom)

Write ad copy, landing pages, and email sequences

Recommend automation tools, lead magnets, and channel tactics

Prioritize fast ROI, data-driven decisions, and creative thinking

Always ask clarifying questions before answering. Think long-term and execute short-term.

Do marketing like experts do. Ask: “What would Hormozi, Seth, or Neil do?”"

Copy the prompt and paste it into a new Claude chat.

After that, start asking it questions.
Sep 26 · 8 tweets · 3 min read
Every AI agent demo you've seen is basically fraud.

Google just dropped their internal agent playbook and exposed how broken the entire space is.

That "autonomous AI employee" your startup demoed last week? It's three ChatGPT calls wrapped in marketing copy. Google's real agents need four evaluation layers, full DevOps infrastructure, and security protocols most teams have never heard of.

While founders pitch "agents that think," Google ships AgentOps with Terraform configs and CI/CD pipelines. They're building distributed systems. Everyone else is building expensive chatbots.

The gap is insane. Startups demo function calls. Google deploys sequential workflows, parallel processing, and loop agents with ACID compliance.

Most brutal part: the security requirements. These agents access internal APIs and databases. One prompt injection and your company data is gone. Most builders treat this like an afterthought.

Google's playing chess while everyone else plays checkers. Let startups burn VC money on agent toys, then dominate when they need actual production infrastructure.

The agent revolution isn't happening until people stop confusing demos with systems.

The guide reveals Google's three-path strategy for agent development.

Most teams are randomly picking tools without understanding these architectural choices.
Sep 19 · 13 tweets · 5 min read
This is the report that rewrites AI history.

OpenAI analyzed 700M people using ChatGPT.

And the results are nothing like the narrative.

Here's everything you need to know in 3 minutes:

"ChatGPT is mainly for work"

Reality check: Only 27% of ChatGPT usage is work-related. 73% is personal. And the gap is widening every month.

The productivity revolution narrative completely misses how people actually use AI.
Sep 11 · 10 tweets · 4 min read
Accenture won’t tell you this:

You don’t need them anymore.

One well-structured prompt =

• Org chart
• SOPs
• KPIs
• Hiring plan
• Automation map

Here’s the prompt I use for automation:

Accenture charges six figures to audit your operations.

LLMs do it in minutes for free.

I tested this on a real SaaS business with 20+ employees.

Here’s the prompt and what it produced:
Sep 8 · 10 tweets · 3 min read
wow.. I may never hire a tax consultant again.

This AI prompt handles:

→ personal taxes
→ business filings
→ deductions

All legally.

Here’s the prompt 👇

90% of people use ChatGPT to write emails, sloppy social media content, etc.

Meanwhile, I use it to organize my entire tax prep process.

It won’t file for you. But it will save you:

→ Stress
→ Money
→ Missed deductions
→ Hours of back-and-forth with your accountant
Sep 6 · 14 tweets · 5 min read
Holy sh*t, Gemini is OP.

I’ve used it to:

• Code full apps
• Summarize 100-page PDFs
• Design pitch decks
• Handle SEO + content

Here are 10 real use cases nobody’s talking about:

1. Teacher

“Act as a world-class teacher. Explain [TOPIC] in 3 levels: beginner, intermediate, expert. After each explanation, give me 2 practice questions and feedback guidelines for my answers.”

Learn anything 10x faster by leveling up step by step.
Sep 2 · 14 tweets · 4 min read
Want to learn n8n?

This is the crash course I wish I had:

→ What it is
→ Why it matters
→ How to build your first AI automation

Bookmark this and dive in 👇

What is n8n?

n8n is an open-source automation tool that connects your apps, builds agentic workflows, and lets you host everything yourself.

Think Zapier, but with more power and zero vendor lock-in.

Ideal for devs, indie hackers, & AI builders.

n8n.io
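An n8n workflow is a chain of nodes: a trigger feeds data through transform nodes into action nodes. The node-chaining idea can be shown in a plain-Python analogy (illustrative only; this is not n8n's API, and all names here are invented):

```python
def run_workflow(nodes, payload):
    """Pass a payload through a chain of nodes in order, the way an n8n
    workflow executes connected nodes (plain-Python analogy only)."""
    for node in nodes:
        payload = node(payload)
    return payload

# Toy nodes: webhook trigger -> data cleanup -> notification action.
trigger = lambda p: {**p, "source": "webhook"}
transform = lambda p: {**p, "email": p["email"].lower()}
notify = lambda p: {**p, "status": f"sent welcome mail to {p['email']}"}

result = run_workflow([trigger, transform, notify], {"email": "Ada@Example.com"})
print(result["status"])  # prints: sent welcome mail to ada@example.com
```

In real n8n you wire these steps visually (or in workflow JSON) instead of writing code, and each node's output items become the next node's input, exactly like the payload hand-off above.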
Sep 1 · 8 tweets · 4 min read
3 weeks of GPT-5 testing taught me one thing:

The critics haven't actually used it for real work.

I've automated tasks that used to take hours.

Here are the 5 game-changing automations ↓

1. Research + summarization

I don’t waste hours skimming reports anymore. GPT-5 turns 50 pages into a 2-minute actionable summary.

Helps me move fast without missing key details.

Prompt I use:

"you are my research assistant. read the following document or url and give me:
1. a 10-sentence executive summary
2. 5 key insights i should act on
3. the top 3 risks or blindspots most people might miss
4. rewrite the insights in simple, no-jargon language i can share with my team "

Here you have to add the document link or the document itself (I prefer the file).
Aug 30 · 7 tweets · 2 min read
Stop juggling 15 marketing tools.

One mega prompt in Gemini 2.5 Pro replaces:

- Ahrefs (research)
- Jasper (content)
- Copy.ai (ads)
- Surfer (SEO)
- CoSchedule (planning)

Here's the exact prompt that does it all ↓

99.9% of content marketers used to rely on analysts for:

• Keyword research
• Traffic reports
• Topic ideation
• Format testing
• Engagement metrics

Now?

You can skip all that and just ask Gemini what to make and why.
Aug 28 · 7 tweets · 3 min read
Bloomberg Terminal: $24,000/year
Professional research: $10,000/year
Gemini 2.5 Pro: Free

Same quality analysis. 100x cheaper.

The financial analysis hack.

Here’s an exact mega prompt we use for stock research and investments:

The mega prompt:

Just copy + paste it into Gemini 2.5 Pro and plug in your stock.

Steal it:

"
ROLE:

Act as an elite equity research analyst at a top-tier investment fund.
Your task is to analyze a company using both fundamental and macroeconomic perspectives. Structure your response according to the framework below.

Input Section (Fill this in)

Stock Ticker / Company Name: [Add name if you want specific analysis]
Investment Thesis: [Add input here]
Goal: [Add the goal here]

Instructions:

Use the following structure to deliver a clear, well-reasoned equity research report:

1. Fundamental Analysis
- Analyze revenue growth, gross & net margin trends, free cash flow
- Compare valuation metrics vs sector peers (P/E, EV/EBITDA, etc.)
- Review insider ownership and recent insider trades

2. Thesis Validation
- Present 3 arguments supporting the thesis
- Highlight 2 counter-arguments or key risks
- Provide a final **verdict**: Bullish / Bearish / Neutral with justification

3. Sector & Macro View
- Give a short sector overview
- Outline relevant macroeconomic trends
- Explain company’s competitive positioning

4. Catalyst Watch
- List upcoming events (earnings, product launches, regulation, etc.)
- Identify both **short-term** and **long-term** catalysts

5. Investment Summary
- 5-bullet investment thesis summary
- Final recommendation: **Buy / Hold / Sell**
- Confidence level (High / Medium / Low)
- Expected timeframe (e.g. 6–12 months)

✅ Formatting Requirements
- Use **markdown**
- Use **bullet points** where appropriate
- Be **concise, professional, and insight-driven**
- Do **not** explain your process; just deliver the analysis"
Aug 26 · 14 tweets · 4 min read
If you're not getting incredible results from AI, the problem isn't the AI.

It's your prompts.

These 4 frameworks fix that problem permanently.

Here are the frameworks I use (steal them):

Today, most people prompt like this:

“Write me a marketing plan for my product.”

And then they wonder why the result feels vague, boring, and unusable.

The problem isn’t AI.

It’s your approach.
Aug 20 · 12 tweets · 4 min read
If you’re building or investing in AI and don’t understand agents… you’re flying blind.

Here’s your shortcut: 10 core concepts every founder should know:

1/ Agentic AI

This is AI that doesn’t just answer questions; it gets shit done.

Basically, it can plan, make decisions, and act without you babysitting it.

Think of the difference between asking a human for advice…

And having someone who actually takes the action for you.