Matt Dancho (Business Science) Profile picture
Feb 17, 2024
Principal Component Analysis (PCA) is the gold standard in dimensionality reduction, with wide-ranging applications in business. In 5 minutes, I'll teach you what took me 5 weeks. Let's go!

1. What is PCA?: PCA is a statistical technique used in data analysis, mainly for dimensionality reduction. It's especially useful for large datasets with many variables: it simplifies the data while retaining as much of the variability as possible.

2. How PCA Works: PCA has 5 steps: Standardization, Covariance Matrix Computation, Eigenvalue and Eigenvector Calculation, Choosing Principal Components, and Transforming the Data.

3. Standardization: The first step in PCA is to standardize the data. Since the scale of the data influences PCA, standardizing each variable (giving it a mean of 0 and a variance of 1) ensures the analysis is not biased toward variables with greater magnitude.
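The standardization step can be sketched in NumPy (the array `X` below is made-up toy data, not from the thread):

```python
import numpy as np

# Toy data: 5 observations of 2 features on very different scales
X = np.array([[1.0, 100.0],
              [2.0, 220.0],
              [3.0, 290.0],
              [4.0, 410.0],
              [5.0, 500.0]])

# Standardize: subtract each column's mean, divide by its standard deviation
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))  # approximately [0, 0]
print(X_std.std(axis=0))   # [1, 1]
```

Without this step, the second feature (hundreds) would dominate the first (single digits) purely because of its units.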

4. Covariance Matrix Computation: PCA looks at the variance and the covariance of the data. Variance is a measure of the variability of a single feature, and covariance is a measure of how much two features change together. The covariance matrix is a table where each element represents the covariance between two features.
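Continuing the sketch with random toy data (using the population form, dividing by n, to match the standardization above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 samples, 3 features
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Covariance matrix of the standardized data
cov = (X_std.T @ X_std) / X_std.shape[0]

# Diagonal entries = variance of each feature (1 after standardization);
# off-diagonal entries = covariance between each pair of features
print(cov.shape)  # (3, 3)
```

For standardized data this matrix is also the correlation matrix, which is why its diagonal is all ones.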

5. Eigenvalue and Eigenvector Calculation: From the covariance matrix, eigenvalues and eigenvectors are calculated. Eigenvectors are the directions of the axes where there is the most variance (i.e., the principal components), and eigenvalues are coefficients attached to eigenvectors that give the amount of variance carried in each Principal Component.
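In NumPy, the decomposition is one call (again on toy data; `eigh` is the routine for symmetric matrices like a covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
cov = (X_std.T @ X_std) / X_std.shape[0]

# eigh returns eigenvalues in ascending order for a symmetric matrix
eigvals, eigvecs = np.linalg.eigh(cov)

# Each column of eigvecs is a unit-length direction (a principal axis);
# its eigenvalue is the variance the data carries along that direction.
# The eigenvalues sum to the total variance (3 here: one per standardized feature).
print(eigvals)
```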

6. Choosing Principal Components: The eigenvectors are sorted by their eigenvalues in descending order. This gives the components in order of significance. Here, you decide how many principal components to keep. This is often based on the cumulative explained variance ratio, which is the amount of variance explained by each of the selected components.
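The sorting and selection step, sketched on the same toy setup (the 90% threshold is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
cov = (X_std.T @ X_std) / X_std.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort by eigenvalue, largest first
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Explained variance ratio per component, and its cumulative sum
evr = eigvals / eigvals.sum()
cum = np.cumsum(evr)

# Keep enough components to explain, say, 90% of the variance
k = int(np.searchsorted(cum, 0.90)) + 1
```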

7. Transforming Data: Finally, the original data is projected onto the principal components (eigenvectors) to transform the data into a new space. This results in a new dataset where the variables are uncorrelated and where the first few variables retain most of the variability of the original data.
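The projection itself is a single matrix multiply. A sketch that also checks the key property (uncorrelated output variables):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
cov = (X_std.T @ X_std) / X_std.shape[0]
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project the standardized data onto the principal axes
X_pca = X_std @ eigvecs

# The covariance of the transformed data is diagonal: the new variables
# are uncorrelated, and each variance equals its eigenvalue
cov_pca = (X_pca.T @ X_pca) / X_pca.shape[0]
```

To reduce dimensionality, you would keep only the first k columns: `X_std @ eigvecs[:, :k]`.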

8. Evaluation: Each principal component accounts for a certain amount of the total variance in the dataset. The cumulative proportion of variance explained is the running sum of each component's individual proportion; it tells you how much of the original information a given number of components retains.
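A tiny worked example of the arithmetic, with made-up eigenvalues:

```python
import numpy as np

# Suppose a 4-feature PCA produced these eigenvalues (made-up numbers)
eigvals = np.array([2.5, 1.0, 0.4, 0.1])   # total variance = 4.0

prop = eigvals / eigvals.sum()   # proportion of variance per component
cum = np.cumsum(prop)            # cumulative proportion

print(prop)  # [0.625 0.25  0.1   0.025]
print(cum)   # [0.625 0.875 0.975 1.   ]
```

Here the first two components already explain 87.5% of the variance, so dropping the last two costs little information.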

===

PCA is a powerful tool. But, there’s a lot more to learning Data Science for Business.

I’d like to help.

I put together a free on-demand workshop that covers the 10 skills that helped me make the transition to Data Scientist:

And if you'd like to speed it up, I have a live workshop where I'll share how to use ChatGPT for Data Science:

If you like this post, please reshare ♻️ it so others can get value.

learn.business-science.io/free-rtrack-ma…
learn.business-science.io/registration-c…
