The 10 clustering algorithms that all data scientists need to know.
Let's dive in:
1. K-Means Clustering:
This is a centroid-based algorithm, where the goal is to minimize the sum of distances between points and their respective cluster centroid.
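A minimal scikit-learn sketch on synthetic blobs (the three-cluster setup is an assumption for the toy data):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated synthetic blobs (toy data for illustration)
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
labels = km.labels_
# inertia_ is exactly the quantity K-Means minimizes:
# the sum of squared distances from each point to its centroid
print(km.cluster_centers_.shape, km.inertia_)
```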
2. Hierarchical Clustering:
This method creates a tree of clusters. It is subdivided into Agglomerative (bottom-up approach) and Divisive (top-down approach).
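A sketch of the bottom-up (agglomerative) variant with scikit-learn; the divisive variant has no scikit-learn implementation:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# Bottom-up: every point starts as its own cluster, and the two
# closest clusters (by Ward's criterion here) are merged repeatedly
agg = AgglomerativeClustering(n_clusters=3, linkage="ward").fit(X)
print(agg.labels_[:10])
```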
3. DBSCAN (Density-Based Spatial Clustering of Applications with Noise):
This algorithm defines clusters as areas of high density separated by areas of low density.
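A sketch with scikit-learn on the classic two-moons data, where centroid methods fail but density-based clustering succeeds (the `eps` value is tuned to this toy data):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaved half-moons: non-convex shapes that K-Means cannot separate
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)
# Points in low-density regions get the noise label -1
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(n_clusters)
```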
4. Mean Shift Clustering:
It is a centroid-based algorithm, which updates candidates for centroids to be the mean of points within a given region.
5. Gaussian Mixture Models (GMM):
This method uses a probabilistic model to represent subpopulations within an overall population, without requiring each data point to be hard-assigned to a single cluster.
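A scikit-learn sketch showing the soft assignments (synthetic blobs; `n_components=3` is an assumption for the toy data):

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=7)

gmm = GaussianMixture(n_components=3, random_state=7).fit(X)
# Soft assignment: each row holds the probability that the point
# belongs to each Gaussian component, and each row sums to 1
probs = gmm.predict_proba(X)
print(probs[0].round(3))
```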
6. Spectral Clustering:
It uses the eigenvalues of a similarity matrix to reduce dimensionality before applying a clustering algorithm, typically K-means.
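A scikit-learn sketch on concentric circles, a case where the spectral embedding makes an otherwise hard problem easy for K-Means:

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_circles

# Concentric circles are not linearly separable, but the eigenvectors of
# the similarity graph embed them so that plain K-Means can split them
X, _ = make_circles(n_samples=300, factor=0.5, noise=0.05, random_state=0)

sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        random_state=0).fit(X)
print(set(sc.labels_))
```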
7. OPTICS (Ordering Points To Identify the Clustering Structure):
Similar to DBSCAN, but creates a reachability plot to determine clustering structure.
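A scikit-learn sketch; the `min_samples` value and the blob data are assumptions for illustration:

```python
from sklearn.cluster import OPTICS
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=3)

opt = OPTICS(min_samples=10).fit(X)
# reachability_[ordering_] is the reachability plot: valleys are clusters
reachability = opt.reachability_[opt.ordering_]
n_clusters = len(set(opt.labels_)) - (1 if -1 in opt.labels_ else 0)
print(n_clusters)
```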
8. Affinity Propagation:
It sends messages between pairs of samples until a set of exemplars and corresponding clusters gradually emerges.
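A scikit-learn sketch; note that with default settings the algorithm may find more exemplars than the true number of blobs:

```python
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=4)

# No n_clusters parameter: exemplars emerge from the message passing;
# the `preference` parameter nudges how many exemplars survive
ap = AffinityPropagation(random_state=4).fit(X)
print(len(ap.cluster_centers_indices_))
```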
9. BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies):
Designed for large datasets, it incrementally and dynamically clusters incoming multi-dimensional metric data points.
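A scikit-learn sketch of the incremental usage; splitting the data into two batches simulates a stream:

```python
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=3, random_state=5)

# Birch compresses points into a CF-tree, so data can arrive in batches
birch = Birch(n_clusters=3, threshold=0.5)
for batch in (X[:500], X[500:]):   # streaming-style incremental fits
    birch.partial_fit(batch)
labels = birch.predict(X)
print(len(set(labels)))
```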
10. CURE (Clustering Using Representatives):
It identifies clusters by shrinking each cluster to a certain number of representative points rather than the centroid.
🚨 BREAKING: IBM launches a free Python library that converts ANY document to data
Introducing Docling. Here's what you need to know: 🧵
1. What is Docling?
Docling is a Python library that simplifies document processing: it parses diverse formats, including advanced PDF understanding, and integrates seamlessly with the generative AI ecosystem.
2. Document Conversion Architecture
For each document format, the document converter selects the format-specific backend used to parse the document and the pipeline that orchestrates execution, along with any relevant options.
Type 1 and Type 2 errors are confusing. In 3 minutes, I'll demolish your confusion. Let's dive in. 🧵
1. Type 1 Error (False Positive):
This occurs when a pregnancy test tells Tom, a man, that he is pregnant. Obviously, Tom cannot be pregnant, so this result is a false alarm. In statistical terms, it's detecting an effect (in this case, pregnancy) when it actually doesn't exist.
2. Type 2 Error (False Negative):
This happens when Lisa, who is actually pregnant, takes the test, and it tells her that she's not pregnant. The test failed to detect the real condition of pregnancy. In statistical terms, it's failing to detect a real effect (pregnancy) that is there.
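Both error rates can also be measured directly by simulation; a sketch with NumPy and SciPy (the effect size 0.2 and sample size 100 are arbitrary choices for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n, trials = 0.05, 100, 10_000

# Null hypothesis true (mean really is 0): any rejection is a Type 1 error
null_samples = rng.normal(0.0, 1.0, size=(trials, n))
# Null hypothesis false (true mean is 0.2): any non-rejection is a Type 2 error
alt_samples = rng.normal(0.2, 1.0, size=(trials, n))

_, p_null = stats.ttest_1samp(null_samples, 0.0, axis=1)
_, p_alt = stats.ttest_1samp(alt_samples, 0.0, axis=1)

type1_rate = (p_null < alpha).mean()    # false-positive rate, close to alpha
type2_rate = (p_alt >= alpha).mean()    # false-negative rate
print(type1_rate, type2_rate)
```

The Type 1 rate lands near the chosen alpha by construction; the Type 2 rate depends on the effect size and sample size.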
Boxplots are one of the most useful tools in my Data Science arsenal.
In 6 minutes, I'll eviscerate your confusion.
Let's dive in.
1. What is a boxplot?
A boxplot is a standardized way of displaying the distribution of data based on a five-number summary: minimum, first quartile (Q1), median, third quartile (Q3), and maximum.
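The five-number summary behind a boxplot can be computed directly; a NumPy sketch with made-up data:

```python
import numpy as np

data = np.array([2, 4, 4, 5, 6, 7, 8, 9, 12, 15, 40])

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1
# Tukey's rule: points beyond 1.5 * IQR from the box are flagged as outliers
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower_fence) | (data > upper_fence)]

print(data.min(), q1, median, q3, data.max())  # five-number summary
print(outliers)
```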
2. Invention:
The boxplot was invented in 1969 by John Tukey as part of his pioneering work in data visualization. Tukey's exploratory data analysis (EDA) emphasized using simple graphical and numerical methods to start understanding data before making assumptions about its underlying distribution or applying complex statistical models. The boxplot emerged from this philosophy as a quick, easy way to visualize the distribution of data.