Today we'll focus on the Simple Linear Regression Cost Function! 👇🏻
0️⃣ RECAP
In Simple Linear Regression, we use one independent variable to predict a dependent one.
It fits a line defined by a slope (A) and an intercept (B).
We need:
- A dependent and an independent variable.
- A linear dependency between them.
1️⃣ WHAT IS A COST FUNCTION?
A cost function measures how far our predictions are from the actual values.
By minimizing it, we work out the optimal values for A and B in our predictor.
2️⃣ HOW DO WE OBTAIN IT MATHEMATICALLY?
In linear regression, this cost function is the Mean Squared Error (MSE).
It is the average of the squared errors.
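As a minimal sketch in plain Python (the data points and candidate lines below are made up for illustration), the MSE of a line y = A·x + B can be computed like this:

```python
# Minimal sketch of the MSE cost function for a line y = A*x + B.
# The data points are made up for illustration.

def mse(a, b, xs, ys):
    """Mean Squared Error of predictions a*x + b against the true ys."""
    errors = [(a * x + b - y) ** 2 for x, y in zip(xs, ys)]
    return sum(errors) / len(errors)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]

print(mse(2.0, 0.0, xs, ys))  # a line close to the data -> small MSE
print(mse(0.0, 0.0, xs, ys))  # a flat line far from the data -> large MSE
```

The better the line fits the data, the smaller its MSE, which is exactly what makes it usable as a cost function.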
➕ BONUS
To find our optimal solution, we use gradient descent.
It is an optimization algorithm that minimizes the cost function step by step.
To reach the optimal solution, we need to reduce the MSE across all data points.
With each iteration, we get closer to it.
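The idea can be sketched in a few lines of plain Python. The learning rate, iteration count, and data below are illustrative choices, not prescriptions:

```python
# Sketch of gradient descent on the MSE cost for y = a*x + b.
# Learning rate and iteration count are illustrative choices.

def fit_gd(xs, ys, lr=0.01, steps=5000):
    a, b = 0.0, 0.0  # start from an arbitrary line
    n = len(xs)
    for _ in range(steps):
        # Gradients of the MSE with respect to a and b.
        grad_a = (2 / n) * sum((a * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2 / n) * sum((a * x + b - y) for x, y in zip(xs, ys))
        a -= lr * grad_a  # step downhill on the cost surface
        b -= lr * grad_b
    return a, b

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x

a, b = fit_gd(xs, ys)
print(a, b)  # should end up close to slope 2, intercept 0
```

Each step moves A and B a little in the direction that reduces the MSE, which is the "iteratively closer" behavior described above.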
3️⃣ EVALUATION
The most commonly used metrics are:
- Coefficient of Determination or R-Squared (R²)
- Root Mean Squared Error (RMSE)
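Both metrics are easy to compute by hand. A small sketch, using made-up actual vs. predicted values:

```python
import math

# Sketch of the two evaluation metrics, with made-up actual vs predicted values.

def rmse(y_true, y_pred):
    """Root Mean Squared Error: the typical size of a prediction error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Share of the variance in y_true explained by the predictions."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [3.0, 5.0, 7.0, 9.0]
y_pred = [2.8, 5.1, 7.2, 8.9]

print(rmse(y_true, y_pred))        # small -> predictions are close
print(r_squared(y_true, y_pred))   # close to 1 -> good fit
```

RMSE is in the same units as the target, while R² is unitless, which is why the two are usually reported together.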
4️⃣ ASSUMPTIONS TO APPLY IT
Linear Regression isn't just about drawing lines.
It assumes certain conditions like linearity, independence, and normal distribution of residuals.
Ensuring these conditions hold makes our model more reliable.
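One quick way to spot a violated linearity assumption is to look at the residuals. In this sketch we deliberately fit a line to curved data (y = x², made up for illustration) to see what the warning sign looks like:

```python
# Sketch: a quick residual check for the linearity assumption.
# We deliberately fit a line to curved data (y = x^2) to see a violation.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [x ** 2 for x in xs]  # clearly non-linear

# Ordinary least squares fit (closed form).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx

residuals = [y - (a * x + b) for x, y in zip(xs, ys)]
print(residuals)  # positive at both ends, negative in the middle: a U-shape

# A systematic pattern like this suggests the linearity assumption is violated;
# residuals from a well-specified linear model should look like random noise.
```

In practice you would plot the residuals against the fitted values, but even this tiny check exposes the curvature the line cannot capture.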
And this is all for now... I'll be posting the whole theory part next Sunday, so stay tuned!
Linear Regression is more than just a statistical method.
It's the simplest tool that helps us predict and understand our world better.
And that's all for now!
If you liked this thread, I share Data Science and AI content regularly.
So don't forget to follow me to get more content like this! (@rfeers)
Simple Linear Regression exemplified for dummies 👇🏻
(Don't forget to bookmark for later!)
1️⃣ DATA GATHERING PHASE
We're using height and weight: a classic duo often assumed to have a linear relationship.
But assumptions in data science? No way!
Let's find out:
- Do height and weight really share a linear bond?
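A quick way to answer this is the Pearson correlation coefficient. The sketch below uses made-up height (cm) and weight (kg) pairs purely for illustration:

```python
import math

# Sketch: a quick linearity check via Pearson correlation.
# The height (cm) / weight (kg) pairs below are made up for illustration.

heights = [150, 155, 160, 165, 170, 175, 180, 185]
weights = [50.0, 53.5, 57.0, 61.0, 64.5, 68.0, 72.0, 75.5]

def pearson(xs, ys):
    """Pearson correlation: +1/-1 means perfectly linear, 0 means no linear link."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(heights, weights)
print(r)  # close to 1 or -1 suggests a strong linear relationship
```

Correlation only detects *linear* association, so in practice you would also eyeball a scatter plot before committing to a linear model.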
Do you like this post?
Then join my DataBites newsletter to get all my content right to your mail every Sunday!
Linear regression is the simplest statistical regression method used for predictive analysis.
It can be performed with multiple variables... but today we'll focus on a single one.
Also known as Simple Linear Regression.
1️⃣ SIMPLE LINEAR REGRESSION
In Simple Linear Regression, we use one independent variable to predict a dependent one.
The main goal?
Finding a line of best fit.
It's simple yet powerful, revealing hidden trends in data.