In Time-Series Analysis, we have to figure out which model fits the nature of the data at hand.
Two types of Models are:
🔸Additive Models
🔸Multiplicative Models
Let's discuss in brief 👇
ADDITIVE MODELS
🔹Essentially, it is a model in which the effects of the individual components are separated out and added together to model the data.
It can be represented by:
𝘆(𝘁) = 𝗟𝗲𝘃𝗲𝗹 + 𝗧𝗿𝗲𝗻𝗱 + 𝗦𝗲𝗮𝘀𝗼𝗻𝗮𝗹𝗶𝘁𝘆 + 𝗡𝗼𝗶𝘀𝗲
🔹An additive model is an option in decomposition procedures and in Winters' method.
🔹An additive model is also an option in two-way ANOVA procedures, where choosing it omits the interaction term from the model.
MULTIPLICATIVE MODEL
🔹In this model, the level, trend, seasonality, and noise components are multiplied together.
🔹It is not linear; the trend can be exponential or quadratic.
𝙮(𝙩) = 𝙇𝙚𝙫𝙚𝙡 * 𝙏𝙧𝙚𝙣𝙙 * 𝙎𝙚𝙖𝙨𝙤𝙣𝙖𝙡𝙞𝙩𝙮 * 𝙉𝙤𝙞𝙨𝙚
🔹This model assumes that as the data increase, so does the seasonal pattern. Most time series plots exhibit such a pattern.
How to choose?
Choose the multiplicative model when the magnitude of the seasonal pattern in the data depends on the magnitude of the data.
In other words, the magnitude of the seasonal pattern increases as the data values increase and decreases as the data values decrease.
Choose the additive model when the magnitude of the seasonal pattern in the data does not depend on the magnitude of the data.
In other words, the magnitude of the seasonal pattern does not change as the series goes up or down.
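If you want to see the difference in code, here's a minimal sketch (my own toy example, not from the original thread) using statsmodels' seasonal_decompose, where the model argument switches between the two:

```python
# A toy sketch, not the author's code: decomposing a synthetic monthly series
# once as additive and once as multiplicative with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

np.random.seed(0)

# Synthetic 4-year monthly series: an upward trend with a seasonal swing
# whose size grows with the level -- the multiplicative case.
idx = pd.date_range("2018-01-01", periods=48, freq="MS")
level = np.linspace(100, 200, 48)
season = 1 + 0.2 * np.sin(2 * np.pi * np.arange(48) / 12)
noise = np.random.normal(1.0, 0.02, 48)
y = pd.Series(level * season * noise, index=idx)

additive = seasonal_decompose(y, model="additive", period=12)
multiplicative = seasonal_decompose(y, model="multiplicative", period=12)

# Additive gives seasonal offsets in the data's units;
# multiplicative gives seasonal factors centred around 1.
print(additive.seasonal[:12].round(1))
print(multiplicative.seasonal[:12].round(2))
```

Plotting the residuals of both (e.g. with additive.plot()) is a quick visual check of which choice leaves less structure behind.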
Choosing a model is one of the very first steps, so we have to make sure we do it right!
Hope this helps!👍
I've had a lot of trouble understanding different convolutions
What do different convolutions do anyway❓
Without the correct intuition, I found defining any CNN architecture very unenjoyable.
So, here's my little understanding (with pictures)🖼👇
The Number associated with the Convolution signifies two things:
🔸The number of directions the filter moves in, and
🔸The dimensions of the output
Each convolution expects a different shape of input and produces an output whose dimensionality matches the number of directions it allows the filter to move in.
In 1⃣D-Conv, the kernel moves along a single axis.
It is generally applied over inputs that also vary along a single dimension, e.g., an electrical signal.
The input could be a 1D array, and a small 1D kernel can be applied over it to get another 1D array as output (see the sketch below).
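Here's a tiny sketch of that idea (my own toy example; the numbers and the smoothing kernel are just made up):

```python
# A 1D convolution in its simplest form: slide a small 1D kernel along a
# 1D signal and you get another 1D array back.
import numpy as np

signal = np.array([0., 1., 2., 3., 4., 5., 4., 3., 2., 1.])  # e.g. a sampled electrical signal
kernel = np.array([0.25, 0.5, 0.25])                         # small 1D (smoothing) filter

out = np.convolve(signal, kernel, mode="valid")  # the filter moves along one axis only
print(out.shape)  # (8,) -- still 1D, length = 10 - 3 + 1
```

In a CNN framework the idea is the same, just with learned kernel weights and extra batch/channel dimensions (e.g. PyTorch's nn.Conv1d).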
If you just focus on the left side, it seems to make sense.
The training loss going down, the validation loss going up.
Clearly, it seems to be an overfitting problem. Right?
But the graphs on the right don't seem to make sense in terms of overfitting.
The training accuracy is high, which is fine, but why is the validation accuracy going up if the validation loss is getting worse? Shouldn't it go down too?
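One common explanation (illustrated below with my own made-up numbers, not the thread's actual plots): the model keeps getting more validation examples right, but it becomes very confident on the ones it gets wrong, and cross-entropy punishes confident mistakes hard while accuracy ignores them.

```python
# Toy illustration: accuracy can rise while cross-entropy loss gets worse.
import numpy as np

def binary_cross_entropy(p, y):
    # mean cross-entropy for predicted probabilities p and 0/1 labels y
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

labels = np.array([1, 1, 1, 1, 0])

early = np.array([0.6, 0.6, 0.6, 0.4, 0.6])   # 3/5 correct, mildly confident
late  = np.array([0.7, 0.7, 0.7, 0.7, 0.99])  # 4/5 correct, one very confident mistake

for name, p in [("early", early), ("late", late)]:
    acc = np.mean((p > 0.5) == labels)
    print(name, "accuracy:", acc, "loss:", round(binary_cross_entropy(p, labels), 3))
# early accuracy: 0.6 loss: 0.673
# late accuracy: 0.8 loss: 1.206
```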
I had never seriously read a research paper 📃 before and I certainly didn't plan to write one, until I had to.
But I ended up finishing one that got accepted at a conference. It wasn't revolutionary, but I was glad that I decided to do it and was able to finish it.
Here's how:👇
I was lucky to get past the first barrier quickly, choosing a subject or topic of research.
I was exposed to an image processing problem during my internship, which I really liked, so I ended up pursuing the same problem for my research.
But if you're lost about the topic or what to choose, I suggest you check out the most recent papers, see what interests you, and move forward with that.
Are you looking to get into Machine Learning? You most certainly can.
Because I believe that if an above-average student like me was able to do it, you all certainly can as well
Here's how I went from knowing nothing about programming to someone working in Data Science👇
The path that I took wasn't the most optimal way to get a good grip on Machine Learning because...
when I started out, I knew nobody who worked in or had any knowledge of Data Science, which made me try all sorts of things that were not actually necessary.
I studied C programming as my first language during my freshman year in college. And before the start of my second year, I started learning Python, just because I knew C was not the way to go.
I learned it out of curiosity and I had no idea about Machine Learning at this point.
The learning rate is one of the most important hyperparameters in Machine Learning algorithms.📈
You must have seen learning rates like 0.01, 0.001, 0.0001...
In other words, always on a logarithmic scale. Why?
What happens if we just take random values between 0 and 1?
If we take random values uniformly between 0 and 1, we would have only a 10% probability of getting values between 0 and 0.1; the remaining 90% of the values would be between 0.1 and 1.
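Here's a quick sketch (my own, not from the original thread) of why sampling on a log scale fixes that:

```python
# Sampling learning rates: uniform vs. log scale.
import numpy as np

rng = np.random.default_rng(0)

uniform_lrs = rng.uniform(0, 1, 10_000)     # naive: uniform between 0 and 1
log_lrs = 10 ** rng.uniform(-4, 0, 10_000)  # log scale: spread evenly over 1e-4 .. 1

print((uniform_lrs < 0.1).mean())  # ~0.10 -- barely any draws below 0.1
print((log_lrs < 0.1).mean())      # ~0.75 -- small learning rates get a fair share
```

With the log-scale trick, each decade (0.0001–0.001, 0.001–0.01, 0.01–0.1, 0.1–1) gets roughly the same number of samples, which is what you want when the useful values differ by orders of magnitude.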
Here are the links to all the notes I made back in 2016 while taking the Andrew NG Machine Learning Course.
This was my first exposure to #MachineLearning. They helped me a lot, and I hope anyone who's just starting out and prefers handwritten notes can reference these 👇