Prashant
30 Mar, 13 tweets, 3 min read
I had never seriously read a research paper 📃 before and I certainly didn't plan to write one, until I had to.

But I ended up finishing one that got accepted at a conference. It wasn't revolutionary, but I was glad I decided to do it and was able to finish it.

Here's how:👇
I was lucky to get past the first barrier quickly: choosing a subject or topic of research.

I was exposed to an image processing problem during my internship, which I really liked, so I ended up pursuing it for my research.
But if you're unsure what topic to choose, I suggest checking out the most recent papers, seeing what interests you, and moving forward with that.

One good place to start is @paperswithcode
Initially, I feared that I wasn't actually good enough to understand or implement it.

I looked up how other people started with reading research papers and there was a lot of common advice that I ended up taking.
On the first read I,
🔹Skimmed through the paper
🔹Skipped the maths
🔹Tried to understand the problem statement

Re-read parts of it until I was able to understand the complete scenario
Then I tried to understand the mathematics behind it, though I didn't understand every bit.
Before starting my own research, I had to get a working base code.
I looked for implementations of the past versions of the same research
And I did find some skeleton code that people had put up for similar kinds of problems.
From there onwards, I started changing things as per the requirements of the latest paper.

I went through some of the past papers and references and checked what had already been done, just to make sure I didn't reinvent the wheel.
I built my code up in parts. Suppose the paper proposed a specific loss function and I wasn't able to implement it; I checked whether that function had been used by someone else in a different problem, and whether it could fit my scenario.
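As a purely illustrative sketch of that piece-by-piece approach (the function, weights, and data below are hypothetical, not from the thread), a borrowed loss idea — say, MSE with an added L1 penalty taken from another paper — might be dropped in like this:

```python
def combined_loss(pred, target, l1_weight=0.1):
    """Hypothetical loss: mean squared error plus a weighted L1 term
    borrowed from a different paper, tried out in a new scenario."""
    n = len(pred)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / n
    return mse + l1_weight * l1

# Quick sanity check on toy data before wiring it into the training loop
print(round(combined_loss([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]), 3))  # 0.367
```

Testing each borrowed piece in isolation like this makes it much easier to tell which part breaks when you plug it into the full pipeline.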
Now that I had established a working version of the latest research paper, the next step was to think about what I could change, maybe for better, maybe for worse.

In my case, I ended up changing the architecture of the neural network to be used in the problem.
If you're having trouble with this part, try mixing different research ideas. Suppose a new paper on a data augmentation technique came out recently; try adding that to your implementation.

Checking different combinations of things will give you more perspective.
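One way to keep those combinations organized is a small experiment loop. This is a minimal sketch with a placeholder in place of real training — the config names and scores below are made up for illustration:

```python
def run_experiment(config):
    """Placeholder for a real train-and-evaluate step.
    In an actual project this would train the model described by
    `config` and return its validation score; here the numbers are fake."""
    base = 0.80
    bonus = {"augmentation": 0.03, "new_architecture": 0.05}
    return base + sum(bonus[change] for change in config["changes"])

configs = [
    {"name": "baseline", "changes": []},
    {"name": "baseline + augmentation", "changes": ["augmentation"]},
    {"name": "new architecture + augmentation",
     "changes": ["new_architecture", "augmentation"]},
]

# Record every run so the comparative study writes itself later
results = {c["name"]: run_experiment(c) for c in configs}
for name, score in results.items():
    print(f"{name}: {score:.2f}")
```

Keeping every configuration and its result in one table like this is exactly what turns scattered experiments into the comparative study you submit.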
Once done with that, what was left was just compiling everything I had tried, writing the comparative study of the experiments, and putting it in a Word file for submission.

To my surprise, a conference did accept it and I got an opportunity to present as well.

You can try the same.
P.S. I had to do it for my final year in engineering and I had only a couple of months for this.

So I am sure this is not the best way for research.

But what'd you expect, I am an engineer.
I can't always tell you the ideal way, but I can tell you what works.
This worked for me and I hope it helps you in some way!

Thanks for reading till the end!
