Matt Dancho (Business Science) Profile picture
Feb 17, 2024 · 1 tweet · 3 min read
Principal Component Analysis (PCA) is the gold standard in dimensionality reduction, with many applications in business. In 5 minutes, I'll teach you what took me 5 weeks. Let's go!

1. What is PCA?: PCA is a statistical technique used in data analysis, mainly for dimensionality reduction. It's beneficial when dealing with large datasets with many variables, and it helps simplify the data's complexity while retaining as much variability as possible.

2. How PCA Works: PCA has 5 steps: standardization, covariance matrix computation, eigenvalue and eigenvector calculation, choosing principal components, and transforming the data.

3. Standardization: The first step in PCA is to standardize the data. Since the scale of the data influences PCA, standardizing the data (giving each variable a mean of 0 and a variance of 1) ensures that the analysis is not biased towards variables with greater magnitude.
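As a quick sketch, here's the standardization step in NumPy (the toy dataset `X` is made up purely for illustration):

```python
import numpy as np

# Toy dataset: 5 observations, 3 variables on very different scales
X = np.array([
    [1.0, 200.0, 0.001],
    [2.0, 180.0, 0.004],
    [3.0, 240.0, 0.002],
    [4.0, 210.0, 0.003],
    [5.0, 260.0, 0.005],
])

# Standardize: subtract each column's mean, divide by its standard deviation
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))  # each column now has mean ~0
print(X_std.std(axis=0))   # and standard deviation 1
```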

4. Covariance Matrix Computation: PCA looks at the variance and the covariance of the data. Variance is a measure of the variability of a single feature, and covariance is a measure of how much two features change together. The covariance matrix is a table where each element represents the covariance between two features.
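A minimal covariance-matrix computation on standardized data might look like this (the random data is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))      # 100 observations, 3 features
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Covariance matrix: element [i, j] is the covariance of features i and j;
# the diagonal holds each feature's variance
cov = np.cov(X_std, rowvar=False)

print(cov.shape)  # (3, 3)
```

Note that the covariance matrix is always symmetric, since cov(i, j) = cov(j, i).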

5. Eigenvalue and Eigenvector Calculation: From the covariance matrix, eigenvalues and eigenvectors are calculated. Eigenvectors are the directions of the axes where there is the most variance (i.e., the principal components), and eigenvalues are coefficients attached to eigenvectors that give the amount of variance carried in each Principal Component.
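Continuing the sketch, the eigendecomposition of the covariance matrix can be done with `np.linalg.eigh`, which is appropriate for symmetric matrices (again, random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(X_std, rowvar=False)

# eigh returns eigenvalues in ascending order for a symmetric matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Re-sort in descending order so the first component carries the most variance
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]
```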

6. Choosing Principal Components: The eigenvectors are sorted by their eigenvalues in descending order. This gives the components in order of significance. Here, you decide how many principal components to keep. This is often based on the cumulative explained variance ratio, which is the running total of the variance explained by the selected components.
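For example, picking the number of components from a set of hypothetical, already-sorted eigenvalues:

```python
import numpy as np

# Hypothetical eigenvalues, sorted in descending order
eigenvalues = np.array([4.2, 2.1, 0.5, 0.15, 0.05])

explained_variance_ratio = eigenvalues / eigenvalues.sum()
cumulative = np.cumsum(explained_variance_ratio)

# Keep the smallest number of components covering >= 95% of the variance
k = int(np.searchsorted(cumulative, 0.95) + 1)
print(k)  # 3 components cover 95% here
```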

7. Transforming Data: Finally, the original data is projected onto the principal components (eigenvectors) to transform the data into a new space. This results in a new dataset where the variables are uncorrelated and where the first few variables retain most of the variability of the original data.
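The projection step, continuing the same illustrative setup — note the check at the end that the new variables really are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 5))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

cov = np.cov(X_std, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
W = eigenvectors[:, order[:2]]   # keep the top 2 principal components

# Project the standardized data onto the new axes
X_pca = X_std @ W

print(X_pca.shape)                        # (150, 2)
print(np.corrcoef(X_pca.T)[0, 1])        # ~0: the new variables are uncorrelated
```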

8. Evaluation: Each PCA component accounts for a certain amount of the total variance in a dataset. The cumulative proportion of variance explained is just the cumulative sum of each component's proportion of explained variance.
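In practice, the whole pipeline above collapses to a few lines with scikit-learn (assuming scikit-learn is installed; the data here is random and purely illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 4))

# Standardize, then fit PCA with all components retained
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=4).fit(X_std)

# Proportion of variance explained by each component, and the running total
print(pca.explained_variance_ratio_)
print(np.cumsum(pca.explained_variance_ratio_))
```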

===

PCA is a powerful tool. But, there’s a lot more to learning Data Science for Business.

I’d like to help.

I put together a free on-demand workshop that covers the 10 skills that helped me make the transition to Data Scientist:

And if you'd like to speed it up, I have a live workshop where I'll share how to use ChatGPT for Data Science:

If you like this post, please reshare ♻️ it so others can get value.


