R-squared is one of the most commonly used metrics to measure performance.
But it took me 2 years to figure out the mistakes that were killing my regression models.
In 2 minutes, I'll share how I fixed 2 years of mistakes (and built models 50% more accurate than my peers'). Let's go:
1. R-squared (R2):
R2 is a statistical measure used in regression models. It tells you how well the model replicates the observed outcomes, based on the proportion of the total variation in the outcomes that the model explains.
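To make that concrete, here's a minimal Python sketch of the calculation behind R2 (the data is made up for illustration):

```python
import numpy as np

# Toy data: actual outcomes and a model's predictions (illustrative values)
y_true = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
y_pred = np.array([2.8, 5.3, 6.9, 9.4, 10.6])

# Residual sum of squares: the variation the model fails to explain
ss_res = np.sum((y_true - y_pred) ** 2)

# Total sum of squares: the total variation around the mean
ss_tot = np.sum((y_true - y_true.mean()) ** 2)

# R-squared: proportion of total variation explained by the model
r2 = 1 - ss_res / ss_tot
print(f"R2 = {r2:.3f}")
```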
2. Range (0 to 1):
R2 typically ranges from 0 to 1, and a higher value indicates a better fit between the predictions and the actual data. (It can dip below 0 when a model fits worse than simply predicting the mean.) For example, an R2 of 0.70 suggests that 70% of the variance in the dependent variable is predictable from the independent variable(s).
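If you use scikit-learn, r2_score computes the same quantity; a quick usage sketch with the toy arrays from above:

```python
from sklearn.metrics import r2_score

# Matches the manual 1 - ss_res / ss_tot calculation above
print(r2_score([3.0, 5.0, 7.0, 9.0, 11.0], [2.8, 5.3, 6.9, 9.4, 10.6]))
```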
Understanding probability is essential in data science.
In 4 minutes, I'll demolish your confusion.
Let's go!
1. Statistical Distributions:
There are hundreds of distributions to choose from when modeling data, and the choices can seem endless. Use this thread as a guide to simplify the choice.
2. Discrete Distributions:
Discrete distributions are used when the data can take on only specific, distinct values. These values are often integers, like the number of sales calls made or the number of customers that converted.
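For example, here's a short SciPy sketch modeling conversions with a Binomial distribution (the call volume and conversion rate are assumptions for the example):

```python
from scipy.stats import binom

# Assumed scenario: 50 sales calls, each converting with probability 0.10
n_calls, p_convert = 50, 0.10

# Probability of exactly 5 conversions (a specific, distinct count)
print(binom.pmf(5, n_calls, p_convert))

# Probability of 5 or fewer conversions
print(binom.cdf(5, n_calls, p_convert))

# Expected number of conversions: n * p
print(binom.mean(n_calls, p_convert))  # 5.0
```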
K-means is an essential algorithm for Data Science.
But it's confusing for beginners.
Let me demolish your confusion:
1. K-Means
K-means is a popular unsupervised machine learning algorithm used for clustering. It's a core algorithm used for customer segmentation, inventory categorization, market segmentation, and even anomaly detection.
2. Unsupervised:
K-means is an unsupervised algorithm used on data with no labels or predefined outcomes. The goal is not to predict a target output, but to explore the structure of the data by identifying patterns, clusters, or relationships within the dataset.
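Here's a minimal scikit-learn sketch of that idea: synthetic, unlabeled points get grouped into clusters with no target output anywhere (the data and the choice of k=3 are assumptions for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic, unlabeled 2-D data: three loose blobs (no label column anywhere)
rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

# Fit K-means with k=3; the algorithm only ever sees X, never labels
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.cluster_centers_)  # learned cluster centers
print(kmeans.labels_[:10])      # cluster assignments for the first 10 points
```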
These 7 statistical analysis concepts have helped me as an AI Data Scientist.
Let's go: 🧵
1. Learn Descriptive Statistics
Mean, median, mode, variance, standard deviation. Used to summarize data and spot variability. These are key for any data scientist trying to understand what's actually in a data set.
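A quick sketch of all five with NumPy and SciPy (toy numbers, made up for the example):

```python
import numpy as np
from scipy import stats

# Toy sample: daily sales counts, with one outlier (illustrative values)
data = np.array([12, 15, 15, 18, 20, 22, 22, 22, 25, 90])

print("mean:", np.mean(data))             # sensitive to the outlier (90)
print("median:", np.median(data))         # robust middle value
print("mode:", stats.mode(data).mode)     # most frequent value (22)
print("variance:", np.var(data, ddof=1))  # sample variance
print("std dev:", np.std(data, ddof=1))   # sample standard deviation
```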
2. Learn Probability
Know your distributions (Normal, Binomial) and Bayes' Theorem; they're the backbone of modeling and reasoning under uncertainty. The Central Limit Theorem is a must, too.
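To ground the Central Limit Theorem, here's a small simulation sketch: means of samples drawn from a skewed (exponential) distribution still land approximately Normal around the true mean (the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 10,000 samples of size 50 from a skewed exponential distribution
sample_means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)

# CLT: sample means cluster around the true mean (1.0) with std ~ 1/sqrt(50)
print("mean of sample means:", sample_means.mean())  # ~1.0
print("std of sample means:", sample_means.std())    # ~0.141
print("theory predicts:", 1.0 / np.sqrt(50))         # 0.1414...
```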