Principal Component Analysis (PCA) is the gold standard in dimensionality reduction.
But almost every beginner struggles to understand how it works (and why to use it).
In 3 minutes, I'll demolish your confusion:
1. What is PCA?
PCA is a statistical technique used in data analysis, mainly for dimensionality reduction. It's especially useful for large datasets with many variables, because it simplifies the data while retaining as much of the original variability as possible.
2. How PCA Works:
PCA has 5 steps: Standardization, Covariance Matrix Computation, Eigenvalue and Eigenvector Calculation, Choosing Principal Components, and Transforming the Data.
3. Standardization:
The first step in PCA is to standardize the data. Since PCA is sensitive to the scale of the data, standardizing (giving each variable a mean of 0 and a variance of 1) ensures the analysis is not biased towards variables with larger magnitudes.
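Here's a minimal NumPy sketch of this step, using toy data (the shapes and values are made up for illustration):

```python
# Standardize a feature matrix X: mean 0, variance 1 per column
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(loc=[10, 200, 0.5], scale=[2, 50, 0.1], size=(100, 3))  # 100 rows, 3 features

X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0).round(3))  # ~[0, 0, 0]
print(X_std.std(axis=0).round(3))   # ~[1, 1, 1]
```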
4. Covariance Matrix Computation:
PCA looks at the variance and the covariance of the data. Variance is a measure of the variability of a single feature, and covariance is a measure of how much two features change together. The covariance matrix is a table where each element represents the covariance between two features.
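Continuing the toy sketch, NumPy computes this in one call:

```python
# Covariance matrix of the standardized data
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Entry [i, j] is the covariance between feature i and feature j;
# the diagonal holds each feature's variance (~1 after standardization).
cov = np.cov(X_std, rowvar=False)
print(cov.round(2))  # 3x3 symmetric matrix
```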
5. Eigenvalue and Eigenvector Calculation:
From the covariance matrix, eigenvalues and eigenvectors are calculated. Eigenvectors are the directions of the axes where there is the most variance (i.e., the principal components), and eigenvalues are coefficients attached to eigenvectors that give the amount of variance carried in each Principal Component.
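In code (same toy data), this is a single eigendecomposition of the covariance matrix:

```python
# Eigenvalues and eigenvectors of the covariance matrix
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(X_std, rowvar=False)

# eigh is the right choice because the covariance matrix is symmetric
eigenvalues, eigenvectors = np.linalg.eigh(cov)
print(eigenvalues)         # variance carried along each direction
print(eigenvectors[:, -1]) # direction with the most variance (eigh sorts ascending)
```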
6. Principal Components:
The eigenvectors are sorted by their eigenvalues in descending order. This gives the components in order of significance. Here, you decide how many principal components to keep, often based on the cumulative explained variance ratio: the total share of variance explained by the selected components together.
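A sketch of the selection step, assuming a (common, but arbitrary) 95% threshold:

```python
# Sort components by eigenvalue and keep enough to reach ~95% explained variance
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(X_std, rowvar=False))

order = np.argsort(eigenvalues)[::-1]                # descending variance
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained_ratio = eigenvalues / eigenvalues.sum()
cumulative = np.cumsum(explained_ratio)
k = int(np.argmax(cumulative >= 0.95)) + 1           # smallest k reaching 95%
print(explained_ratio.round(3), "-> keep", k, "components")
```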
7. Transforming Data:
Finally, the original data is projected onto the principal components (eigenvectors) to transform the data into a new space. This results in a new dataset where the variables are uncorrelated and where the first few variables retain most of the variability of the original data.
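The projection itself is just a matrix multiplication. In practice you'd likely reach for scikit-learn's PCA, which wraps all of these steps; here's the from-scratch version of the toy sketch:

```python
# Project standardized data onto the top-k principal components
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(X_std, rowvar=False))
order = np.argsort(eigenvalues)[::-1]
eigenvectors = eigenvectors[:, order]

k = 2                                  # chosen in the previous step
X_pca = X_std @ eigenvectors[:, :k]    # new, uncorrelated coordinates
print(X_pca.shape)                     # (100, 2)
```

The scikit-learn equivalent is roughly `PCA(n_components=2).fit_transform(X_std)`.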
8. Evaluation:
Each principal component accounts for a certain share of the total variance in the dataset. The cumulative proportion of variance explained is the running sum of the individual components' explained variance. This is often shown as a scree plot of the top N principal components.
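A minimal scree-plot sketch with scikit-learn and matplotlib (toy data again):

```python
# Scree plot: per-component and cumulative explained variance
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))

pca = PCA().fit(StandardScaler().fit_transform(X))
ratios = pca.explained_variance_ratio_
components = range(1, len(ratios) + 1)

plt.bar(components, ratios, label="per component")
plt.step(components, np.cumsum(ratios), where="mid", label="cumulative")
plt.xlabel("Principal component")
plt.ylabel("Explained variance ratio")
plt.legend()
plt.show()
```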
9. EVERY DATA SCIENTIST NEEDS TO LEARN AI IN 2025.
99% of data scientists are overlooking AI.
I want to help.
On Wednesday, August 6th, I'm sharing one of my best AI Projects: Customer Segmentation Agent with AI
🚨BREAKING: New Python library for Bayesian Marketing Mix Modeling and Customer Lifetime Value
It's called PyMC Marketing.
This is what you need to know: 🧵
1. What is PyMC Marketing?
PyMC-Marketing is a state-of-the-art Bayesian modeling library that's designed for Marketing Mix Modeling (MMM) and Customer Lifetime Value (CLV) prediction.
2. Benefits
- Incorporate business logic into MMM and CLV models
- Model carry-over effects with adstock transformations (see the sketch after this list)
- Understand diminishing returns
- Incorporate time series and decay
- Support causal identification
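To build intuition for the adstock idea, here's a generic NumPy illustration of a geometric adstock (carry-over) transform. This is not PyMC-Marketing's API, just the underlying concept: spend in one period keeps having an effect in later periods, decaying at rate alpha.

```python
# Geometric adstock: adstock[t] = spend[t] + alpha * adstock[t-1]
import numpy as np

def geometric_adstock(spend: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Carry-over transform with decay rate 0 < alpha < 1 (illustrative only)."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + alpha * carry
        out[t] = carry
    return out

spend = np.array([100, 0, 0, 50, 0], dtype=float)  # hypothetical weekly spend
print(geometric_adstock(spend, alpha=0.5))         # [100. 50. 25. 62.5 31.25]
```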
In 3 minutes, I'll share 3 weeks of research on Random Forest.
Let's go:
1. What is a Random Forest?
Random Forest builds multiple decision trees and merges their predictions to get a more accurate and stable result. Each tree in the forest votes, and the majority vote (or the average, for regression) is taken as the final prediction.
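A minimal scikit-learn sketch (synthetic data, default settings apart from the tree count):

```python
# 200 decision trees vote on each test example
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

forest = RandomForestClassifier(n_estimators=200, random_state=42)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # accuracy of the majority-vote predictions
```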
2. Bagging (Bootstrap Aggregations):
Each tree is trained on a random sample of the data points, drawn with replacement, instead of the entire training dataset. This technique is called "bootstrap aggregating," or "bagging."
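Here's what one bootstrap sample looks like in NumPy (toy row count):

```python
# Each tree gets its own resample of the training rows
import numpy as np

rng = np.random.default_rng(0)
n_rows = 10
indices = rng.choice(n_rows, size=n_rows, replace=True)  # sampling WITH replacement
print(indices)                  # some rows repeat, others are left out ("out-of-bag")
print(np.unique(indices).size)  # typically ~63% of the original rows appear
```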
Bayes' Theorem is a fundamental concept in data science.
But it took me 2 years to understand its importance.
In 2 minutes, I'll share my best findings over the last 2 years exploring Bayesian Statistics. Let's go.
1. Background:
"An Essay towards solving a Problem in the Doctrine of Chances," was published in 1763, two years after Bayes' death. In this essay, Bayes addressed the problem of inverse probability, which is the basis of what is now known as Bayesian probability.
2. Bayes' Theorem:
Bayes' Theorem provides a mathematical formula to update the probability of a hypothesis as more evidence or information becomes available: P(H|E) = P(E|H) × P(H) / P(E). It describes how to revise existing predictions or theories in light of new evidence, a process known as Bayesian inference.
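A toy worked example in Python, with made-up numbers, to show the update in action:

```python
# Update P(H) to P(H|E) after observing evidence E
p_h = 0.01           # prior: 1% base rate for the hypothesis (hypothetical)
p_e_given_h = 0.95   # likelihood of the evidence if H is true
p_e_given_not_h = 0.05

# Total probability of seeing the evidence at all
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

posterior = p_e_given_h * p_h / p_e
print(round(posterior, 3))  # ~0.161: the evidence lifts 1% to about 16%
```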
Understanding P-Values is essential for improving regression models.
In 2 minutes, I'll crush your confusion.
Let's go:
1. The p-value:
A p-value is the probability of observing results at least as extreme as the ones in your data, assuming the null hypothesis is true. It measures the strength of the evidence against the null hypothesis: the smaller the p-value, the stronger the evidence.
2. Null Hypothesis (H₀):
The null hypothesis is the default position that there is no relationship between two measured phenomena or no association among groups. In a regression, H₀ says a regressor's coefficient is zero, i.e., it has no effect on the outcome.
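A minimal statsmodels sketch on synthetic data, where one regressor truly matters and one is pure noise:

```python
# OLS regression: p-values test H0 that each coefficient is zero
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)            # noise, unrelated to y
y = 2.0 * x1 + rng.normal(size=200)

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()
print(model.pvalues)  # near-zero p-value for x1; large p-value for x2 (fail to reject H0)
```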
🚨BREAKING: New Python library for agentic data processing and ETL with AI
Introducing DocETL.
Here's what you need to know:
1. What is DocETL?
It's a tool for creating and executing data processing pipelines, especially suited for complex document processing tasks.
It offers:
- An interactive UI playground
- A Python package for running production pipelines
2. DocWrangler
DocWrangler helps you iteratively develop your pipeline:
- Experiment with different prompts and see results in real-time
- Build your pipeline step by step
- Export your finalized pipeline configuration for production use