Aakash Gupta
Apr 16, 2023 · 13 tweets
Replying to replies won’t boost your post 75x.

99% of people who copied my thread botched it. I've actually worked on algorithms for 15 years at places like Google.

And I studied the latest changes to the algorithm just yesterday. Here's how to ATTACK it:
1. The algorithm constructs the feed INDIVIDUALLY for each user

Your tweet does NOT have individual stats.

It has stats on a "per-reader level."

The full 75x boost only applies IF the system predicts a 100% probability the SPECIFIC reader will reply AND the author replies to that reply.
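Mechanically, this works out to a weighted sum: for each reader, the ranker predicts a probability for each engagement type and multiplies it by a fixed weight. Here's a minimal sketch using the engagement weights widely reported from the open-sourced repo — the probability numbers below are invented purely for illustration:

```python
# Illustrative per-reader scoring: predict a probability for each engagement
# type for THIS reader, multiply by a fixed weight, sum everything up.
# Weights match the widely reported values from the open-sourced repo;
# the probabilities are made up for this example.
WEIGHTS = {
    "favorite": 0.5,
    "retweet": 1.0,
    "reply": 13.5,
    "reply_engaged_by_author": 75.0,
}

def score_for_reader(predicted_probs: dict) -> float:
    """Weighted sum of predicted engagement probabilities for one reader."""
    return sum(WEIGHTS[action] * p for action, p in predicted_probs.items())

# A reader the model thinks is likely to reply and get a reply back:
engaged = {"favorite": 0.2, "reply": 0.1, "reply_engaged_by_author": 0.1}
# A reader the model thinks will just scroll past:
passive = {"favorite": 0.01}

print(score_for_reader(engaged))  # ~8.95: the 75x term dominates
print(score_for_reader(passive))  # ~0.005
```

The same tweet gets a very different score for each reader, which is exactly why "per-reader level" stats are the thing to optimize.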
2. So here's your point of LEVERAGE:

Increase the probability replies are replied to.

Case 1: The probability a person replies to a specific tweet is 10%, but there's a 0% probability you reply back.
→ 10% * 0% * 75x = 0x

Case 2: Same 10%, but a 100% probability you reply back.
→ 10% * 100% * 75x = 7.5x
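The arithmetic above can be checked directly. The 75.0 weight is the one the thread cites; both events have to happen for the boost to pay out, so the expected boost is the product of the two probabilities times the weight:

```python
# The "reply engaged by author" boost needs BOTH events: the reader replies
# AND the author replies back. Expected boost = product of both probabilities
# times the 75x weight.
def expected_reply_boost(p_reader_replies: float,
                         p_author_replies_back: float,
                         weight: float = 75.0) -> float:
    return p_reader_replies * p_author_replies_back * weight

# Case 1: reader might reply (10%), but you never reply back.
print(expected_reply_boost(0.10, 0.0))  # 0.0
# Case 2: same reader, but you always reply back.
print(expected_reply_boost(0.10, 1.0))  # ~7.5
```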
3. There is NO direct link penalty

The old de-boost that penalized links just for existing is GONE.

But, there are still ways links are penalized:

1. Users interact with them less, so their predicted scores are low
2. They reduce your likelihood of getting the 2-min bonus
4. The algorithm cares DEEPLY about time spent

As the probability increases that a user spends 2 minutes, you get up to a 10x boost.

At a typical reading speed, it takes about 400 words to drive 2 minutes of reading.

So, either:

· Write threads
· Write long tweets
· Or spark interesting replies
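The 400-word rule implies an assumed reading speed of roughly 200 words per minute, a common adult-average estimate. A tiny sketch of that arithmetic:

```python
# The 400-words-for-2-minutes rule implies ~200 words per minute,
# a common adult-average reading speed (assumption, not from the repo).
AVG_WORDS_PER_MINUTE = 200

def words_needed(minutes: float, wpm: int = AVG_WORDS_PER_MINUTE) -> int:
    """Words of content needed to hold a reader for `minutes` of dwell time."""
    return round(minutes * wpm)

print(words_needed(2))  # 400 words to hold a reader for 2 minutes
```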
5. Nothing beats QUALITY

The way you really bend the trajectory of your tweet is by getting people who don't normally like or reply to do so.

That raises the predicted probability of those boosts for other readers who are stingy with their love.
6. Retweets are still HUGE

The retweet 1x boost shouldn't fool you. It doesn't mean retweets aren't valuable.

They are very valuable.

Yes, you only get a 1x boost if the system predicts a user will retweet. But the retweet broadcasts your tweet to many more users.
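A simplified, illustrative model of why the 1x weight understates retweet value: the score bump is small, but each predicted retweet fans the tweet out to that retweeter's followers, each of whom then gets scored in turn. The follower count below is hypothetical:

```python
# Simplified fan-out model (illustrative, not the actual pipeline): a retweet
# adds only 1x to your score for that one reader, but it also puts the tweet
# in front of the retweeter's followers.
def expected_extra_impressions(p_retweet: float,
                               retweeter_followers: int) -> float:
    """Expected additional impressions from one potential retweeter."""
    return p_retweet * retweeter_followers

# A 5% retweet chance from someone with 20,000 followers is worth
# ~1,000 expected extra impressions -- far more than the 1x score bump.
print(expected_extra_impressions(0.05, 20_000))
```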
7. The algorithm does like videos people WATCH

The 0.005x boost seems insignificant.

But it can add up over time and on the margin.

It's one of the few "characteristics" that you can easily control to push up your numbers.
8. You CAN'T report someone out of the algorithm

The -74x and -369x penalties look ominous.

But if you piss off someone who does lots of reporting, that won't affect the predicted probabilities for other readers.

The penalty only really bites if you piss off people who aren't easily pissed off.
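In other words, the -369x report weight is also applied per reader, scaled by that reader's predicted report probability. A sketch under that assumption:

```python
# The -369x report weight is per-reader: it only dents your score for readers
# the model predicts might report you. One serial reporter doesn't change the
# predicted report probability for everyone else.
REPORT_WEIGHT = -369.0

def expected_report_penalty(p_report: float) -> float:
    """Expected score hit from one reader, given their predicted report prob."""
    return p_report * REPORT_WEIGHT

print(expected_report_penalty(0.9))    # ~-332.1 for the one angry reporter
print(expected_report_penalty(0.001))  # ~-0.369 for a typical reader
```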
The algorithm is CONSTANTLY changing. Just take a look at these commits.

No one else on Twitter is following the GitHub repo as closely as me. Or producing guides as easy to read.

Follow me for weekly updates: @aakashg0
Here's my original thread that went viral (the one 99% of copycats botched):
And here's last week's update that 90% of those people never even saw, so they were operating on old information.

You can expect a similar update next week:
How to ATTACK the latest Twitter algorithm:
