Thread by Raffaele Abate, 11 tweets, 5 min read
@WalterReade 1/x - Seriously speaking, my recipe is probably very (too?) tailored to my needs, but here it is. Considering it takes me a long time to go seriously through a paper (understanding it and trying to replicate it), I select the ones to read and try very carefully (no more than 2-3 per *year*).
@WalterReade 2/x - I do skim through an awful lot, though (abstract, introduction, conclusion, methods). Keep in mind that within the ML field I'm mainly interested in NNs (deep NNs and SNNs) and in the broader field of "cortical algorithms" (more biologically plausible ones). I am familiar with and work with tools
@WalterReade 3/x like SVMs/RFTs and Bayesian methods / classical stats, but I don't follow the advancements in those topics. So my interests are pretty narrow, in the end. This being said, I use a piece of software called Zim-wiki. Every few days I go through arXiv/bioRxiv/Google Scholar/etc. to check
@WalterReade 4/x for new preprints or papers (sci-hub and libgen are very naughty websites that no honest citizen should ever use; spread the word!). In ML I go for these categories: ML, AI, NeuralX, etc.
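(As an aside, the "check arXiv every few days" step is easy to script. Below is a minimal sketch against the public arXiv Atom API; the category codes cs.LG/cs.AI/cs.NE are my guess at "ML, AI, NeuralX", and the function name and result count are illustrative, not the author's actual setup.)

# Minimal sketch: pull the latest preprints from a few arXiv categories.
# Needs the third-party "feedparser" package (pip install feedparser).
import feedparser

ARXIV_API = "http://export.arxiv.org/api/query"
CATEGORIES = ["cs.LG", "cs.AI", "cs.NE"]  # assumed mapping for "ML, AI, NeuralX"

def latest_preprints(max_results=10):
    # The arXiv API takes a boolean search query and returns an Atom feed,
    # here sorted newest-first by submission date.
    query = "+OR+".join(f"cat:{c}" for c in CATEGORIES)
    url = (f"{ARXIV_API}?search_query={query}"
           f"&sortBy=submittedDate&sortOrder=descending&max_results={max_results}")
    feed = feedparser.parse(url)
    return [(e.published, e.title, e.link) for e in feed.entries]

for published, title, link in latest_preprints():
    print(published, title, link, sep="  ")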
@WalterReade 5/x As a guilty pleasure I also read (but don't bother trying to replicate) papers in fintech: nine times out of ten they're non-reproducible or just plain absurd, at least in my experience. Mostly, though, I check for neuroscience material (dendrites, glia, columns, neocortex, isocortex, koniocortex, etc.). This being said, when I'm interested in something I just write
@WalterReade 6/x the URL into my Zim-wiki (using hashtags and an urgency marker) and quickly skim through the paper. Some days later I might decide what to do with it: read it a bit more carefully (about 30% of the time) or really go through it seriously (maybe 1% of the time). In the end, if it is really important,
@WalterReade 7/x at some point I'll hear about it again; I'll check the hashtag(s) in Zim, and if it's already there it moves up the urgency list.
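(For what it's worth, that triage step could look roughly like the sketch below. Zim stores its pages as plain text files, so a script can append tagged entries and bump an urgency counter when a paper resurfaces; the file path, entry format, and function name here are all hypothetical, not the author's actual setup.)

# Hypothetical sketch of the triage log: append a paper with hashtags and an
# urgency level, and bump the urgency when the same URL shows up again.
from pathlib import Path

LOG = Path("Notes/papers_inbox.txt")  # assumed location of the Zim page

def log_paper(url, tags, urgency=1):
    # Append a new entry, or bump the urgency if the URL is already logged.
    LOG.parent.mkdir(parents=True, exist_ok=True)
    lines = LOG.read_text().splitlines() if LOG.exists() else []
    for i, line in enumerate(lines):
        if url in line:
            # Paper resurfaced: move it up the urgency list.
            head, _, level = line.rpartition("urgency=")
            lines[i] = f"{head}urgency={int(level) + 1}"
            break
    else:
        hashtags = " ".join(f"#{t}" for t in tags)
        lines.append(f"{url} {hashtags} urgency={urgency}")
    LOG.write_text("\n".join(lines) + "\n")

log_paper("https://arxiv.org/abs/xxxx.xxxxx", ["dendrites", "neocortex"])
log_paper("https://arxiv.org/abs/xxxx.xxxxx", ["dendrites", "neocortex"])  # bumps to urgency=2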
Now a few considerations, though:
1) I'm no researcher, but I'm convinced that the amount of noise in ML research is intolerable. Every little
@WalterReade 8/x thing makes news. Every little incremental (spurious?) finding gets its own pretty preprint. I'm not sure this is helpful.
2) A very important practice in other scientific fields is to repeat and validate the results of previous work. I can't recall an example of this in ML.
@WalterReade 9/x 3) Imho, the greatest advancements in ML are not made in ML itself. They're made either in maths/statistics or in neurology/neuroscience/biology.
4) If one wants to build a good model for something in the real world (!= research, of course), Bishop's "Pattern Recognition and Machine Learning"
@WalterReade 10/x (published in 2006!) together with the "Deep Learning" book (Goodfellow, Bengio, and Courville) are still fine, imho.

Also, Twitter's ML community can produce an anxious fear of missing out. Reality is way more chillaxed: relevant concepts are scarce by definition.
@WalterReade 11/11 I get your feelings, though. (Sorry for the long thread; it exploded from a couple of sentences into a long rant, lol)