Principal Component Analysis (PCA) is the gold standard in dimensionality reduction.

But almost every beginner struggles to understand how it works (and why to use it).

In 3 minutes, I'll demolish your confusion:
1. What is PCA?

PCA is a statistical technique used in data analysis, mainly for dimensionality reduction. It's especially useful for large datasets with many variables: it simplifies the data's complexity while retaining as much of the variability as possible.
2. How PCA Works:

PCA has 5 steps: standardization, covariance matrix computation, eigenvalue and eigenvector calculation, choosing principal components, and transforming the data.
3. Standardization:

The first step in PCA is to standardize the data. Since the scale of the data influences PCA, standardizing (rescaling each feature to a mean of 0 and a variance of 1) ensures the analysis isn't biased toward variables with greater magnitude.
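
A minimal NumPy sketch of this step (toy data, made up for illustration):

```python
import numpy as np

# Toy data: 5 samples, 3 features on very different scales
X = np.array([[170.0, 65.0, 0.30],
              [160.0, 58.0, 0.45],
              [180.0, 75.0, 0.25],
              [175.0, 70.0, 0.35],
              [165.0, 62.0, 0.40]])

# Standardize: rescale each column to mean 0 and variance 1
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```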
4. Covariance Matrix Computation:

PCA looks at the variance and the covariance of the data. Variance is a measure of the variability of a single feature, and covariance is a measure of how much two features change together. The covariance matrix is a table where each element represents the covariance between two features.
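
A quick sketch of this step (toy random data; names like X_std are my own, carried over from the previous snippet):

```python
import numpy as np

# Toy data, standardized as in the previous step
X = np.random.default_rng(0).normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Covariance matrix (rowvar=False: columns are features)
cov = np.cov(X_std, rowvar=False)   # shape (3, 3)
# cov[i, j] is the covariance between features i and j;
# the diagonal holds each feature's variance (~1 after standardization)
```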
5. Eigenvalue and Eigenvector Calculation:

From the covariance matrix, eigenvalues and eigenvectors are calculated. Eigenvectors are the directions of the axes where there is the most variance (i.e., the principal components), and eigenvalues are coefficients attached to eigenvectors that give the amount of variance carried in each principal component.
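
A sketch, continuing the toy example (np.linalg.eigh fits here because a covariance matrix is always symmetric):

```python
import numpy as np

# Toy setup from the previous steps
X = np.random.default_rng(0).normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(X_std, rowvar=False)

# eigh returns eigenvalues in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(cov)
# eigenvectors[:, i] is a principal axis;
# eigenvalues[i] is the variance of the data along that axis
```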
6. Principal Components:

The eigenvectors are sorted by their eigenvalues in descending order. This gives the components in order of significance. Here, you decide how many principal components to keep. This choice is often based on the cumulative explained variance ratio: the total share of variance the selected components explain together.
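
A sketch of the sorting and selection, assuming a 95% variance target (the threshold is my choice for illustration, not a rule):

```python
import numpy as np

# Toy setup + eigendecomposition from the previous steps
X = np.random.default_rng(0).normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(X_std, rowvar=False))

# Sort descending by eigenvalue (most significant component first)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Keep the smallest k whose cumulative explained variance >= 95%
explained_ratio = eigenvalues / eigenvalues.sum()
k = int(np.searchsorted(np.cumsum(explained_ratio), 0.95)) + 1
```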
7. Transforming Data:

Finally, the original data is projected onto the principal components (eigenvectors) to transform the data into a new space. This results in a new dataset where the variables are uncorrelated and where the first few variables retain most of the variability of the original data.
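
The projection itself is one matrix multiply (sketch, continuing the toy example; k=2 is arbitrary here):

```python
import numpy as np

# Toy setup with eigenvectors sorted as in the previous step
X = np.random.default_rng(0).normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(X_std, rowvar=False))
eigenvectors = eigenvectors[:, np.argsort(eigenvalues)[::-1]]

k = 2                     # keep the top 2 components (arbitrary for the demo)
W = eigenvectors[:, :k]   # projection matrix, shape (3, 2)
X_pca = X_std @ W         # transformed data, shape (100, 2)
# Columns of X_pca are uncorrelated; the first carries the most variance
```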
8. Evaluation:

Each principal component accounts for a share of the total variance in a dataset. The cumulative proportion of variance explained is the running sum of each component's explained variance. This is often shown on a scree plot of the top N components.
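
In practice, scikit-learn does all five steps for you. A minimal sketch (assuming scikit-learn is installed; data is made up):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(100, 5))   # toy data

pca = PCA().fit(StandardScaler().fit_transform(X))

print(pca.explained_variance_ratio_)             # per-component share
print(np.cumsum(pca.explained_variance_ratio_))  # cumulative share
# Plotting explained_variance_ratio_ by component number gives the scree plot
```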
9. EVERY DATA SCIENTIST NEEDS TO LEARN AI IN 2025.

99% of data scientists are overlooking AI.

I want to help.
On Wednesday, August 6th, I'm sharing one of my best AI Projects: Customer Segmentation Agent with AI

👉 Register here (500 seats): learn.business-science.io/ai-register
That's a wrap! Over the next 24 days, I'm sharing the 24 concepts that helped me become an AI data scientist.

If you enjoyed this thread:

1. Follow me @mdancho84 for more of these
2. RT the tweet below to share this thread with your audience
