Discover and read the best of Twitter Threads about #DeepLearning


In this week's newsletter:

- iOS Pentesting | ChatGPT my Teacher | Recon -

(thread)
1. How I'm using ChatGPT as a virtual teacher. And of course, how you can use it too.
2. My greatest pentesting challenge for this week.
Read 5 tweets
We all think we're one of a kind.

But sometimes, we come across someone who looks just like us!

A @CellReports study tested the DNA of "fake twins".

Guess what:

They also share 🧬 DNA variants related to facial features & behavior 🤯

Surprised or not really? Let's dig in 🧵👇
First, let's see why this study might NOT surprise you.

Monozygotic twins share almost identical facial traits & the same DNA sequence. Therefore, look-alike strangers could follow a similar pattern.

Still, look-alike strangers are not twins! So we can't know for sure if:
a. they share more of their genome than random people

b. if yes to (a), how much they share, and what the functional role of the genes carrying such SNVs would be

c. how about #multiomics similarity, such as DNA methylation or the microbiome (which differ even in monozygotic twins)?
Read 19 tweets
๐™๐™ง๐™–๐™ฃ๐™จ๐™›๐™ค๐™ง๐™ข๐™ž๐™ฃ๐™œ ๐™€๐™€๐™‚ ๐˜ฝ๐™ง๐™–๐™ž๐™ฃ ๐™’๐™–๐™ซ๐™š๐™จ ๐™ž๐™ฃ๐™ฉ๐™ค ๐™Ž๐™ฅ๐™ค๐™ ๐™š๐™ฃ ๐™’๐™ค๐™ง๐™™๐™จ ๐™ช๐™จ๐™ž๐™ฃ๐™œ ๐˜ฟ๐™š๐™š๐™ฅ ๐™‡๐™š๐™–๐™ง๐™ฃ๐™ž๐™ฃ๐™œ

Machine Learning can change a life.

#artificialintelligence #deeplearning #machinelearning #datascience #eeg #neuroscience

Read more:
New research recently published from the University of California has given a paralyzed man the ability to communicate by converting his EEG brain signals into computer-generated writing.
This is a significant milestone toward restoring communication for people who have lost their ability to speak.
Read 4 tweets
Skip connections are one of the most important innovations in the history of deep learning.

You need to understand the problems they solve and how they help us build deep networks.

Let me break it down for you in this thread 👇🏽🧵

#deeplearning #machinelearning
Skip connections are a common feature in modern CNN architectures. They create an alternative path for the gradient to flow through, which can help the model learn faster.
In a neural network, the gradient measures how much a change in one part of the network affects the output. We use the gradient to update the network during training to recognize data patterns better.
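That identity path is easy to see in a toy NumPy sketch (illustrative only, not any particular architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def plain_block(x, W):
    # A plain layer: the signal only survives if the transformation does
    return relu(W @ x)

def residual_block(x, W):
    # A skip connection adds the input back to the layer's output,
    # so the gradient always has an identity path around the layer
    return x + relu(W @ x)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = np.zeros((4, 4))  # a "dead" layer that zeroes everything out

print(plain_block(x, W))                     # all zeros: the signal is gone
print(np.allclose(residual_block(x, W), x))  # True: the input passes through
```

With the dead layer, the plain block erases both the activations and the gradient, while the residual block still passes them through — which is why stacks of residual blocks remain trainable at depths where plain stacks are not.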
Read 8 tweets
The convolution operation is one of the most fundamental concepts in deep learning.

Most people know how to perform it.

But far fewer understand what it means.

These five 🖐🏽 resources will help you grok one of the most important concepts in deep learning 👇🏽

#deeplearning #machinelearning
1) Convolution vs cross-correlation

You must understand the difference between convolution and cross-correlation in order to understand backpropagation in CNNs.

This will help…
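The distinction is easy to check numerically with NumPy's built-ins (a minimal illustration): convolution flips the kernel before sliding it, cross-correlation does not.

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([1.0, 0.0, -1.0])

# Convolution flips the kernel before sliding it across the signal...
conv = np.convolve(signal, kernel, mode="valid")

# ...cross-correlation (what most "conv" layers actually compute) does not.
xcorr = np.correlate(signal, kernel, mode="valid")

print(conv)   # [2. 2.]
print(xcorr)  # [-2. -2.]

# Convolution equals cross-correlation with a flipped kernel
print(np.allclose(conv, np.correlate(signal, kernel[::-1], mode="valid")))  # True
```

For a symmetric kernel the two operations coincide, which is why the distinction is easy to miss until you derive backpropagation through a conv layer.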
2) Convolutions in image processing

This 30ish minute lecture is from the MIT course on computational thinking.

Oh, and it's taught by Grant from @3blue1brown

Read 7 tweets
Here's how I would study deep learning if I had to do it all over again.

#deeplearning #machinelearning
๐Ÿ‘‡๐Ÿฝ ๐Ÿงต
1) Skip the math

I'd ignore the math when first starting out.

Looking at equations will demotivate you.

Instead, look for applications of deep learning.

Clone the YOLOv5 repo and run the usage example on the command line.

See the magic happen.

Get inspired.
2) Go through @AndrewGlassner's DL crash course.

It's 3.5 hours long but will give you an intuition for how it all works under the hood.

Great return on time investment.

Read 12 tweets
Backpropagation is the secret sauce for training neural networks.

You need to understand how it works.

I break it down for you in this thread 👇🏽🧵

Here's the lowdown: the output of a neural network is calculated using the 🏋🏽 weights 🏋🏽 of the edges that connect the nodes in the network.

So, you gotta find the optimal values of the weights to minimize the final error on the training data examples.
1๏ธโƒฃ We start by assigning random values to all weights in the network

2๏ธโƒฃ Then, for every input sample, we perform a feedforward operation to calculate the final output and the prediction error.
Read 10 tweets
You shouldn't evaluate the performance of a deep learning model solely on accuracy.

You're missing the whole picture if that's all you're looking at.

There are 3 other factors you should consider when evaluating the performance of your model 👇🏽

#deeplearning #ai
FLOPs (floating-point operations)

This measures the amount of computation required to train and run a model.

More complex models often require more FLOPs, which can make them more expensive to use.

Parameters

The number of parameters in a model can also impact its performance.

Models with more parameters may fit the training data better, but they may also be more prone to overfitting and may not generalize well to new data.
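For intuition, both quantities can be estimated by hand for a small fully-connected network (the layer sizes below are made up for illustration):

```python
# Each Dense layer has (inputs * outputs) weights plus `outputs` biases.
layers = [(784, 256), (256, 64), (64, 10)]  # (inputs, outputs) per layer

params = sum(n_in * n_out + n_out for n_in, n_out in layers)

# A forward pass through a Dense layer costs roughly 2 * inputs * outputs
# FLOPs: one multiply and one add per weight.
flops = sum(2 * n_in * n_out for n_in, n_out in layers)

print(params)  # 218058
print(flops)   # 435456
```

Tools like profilers and model summaries automate this, but the back-of-the-envelope version already shows why a wider layer can dominate both the parameter count and the compute cost.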
Read 6 tweets
Learn SQL for Data Analysis with these free projects and add them to your portfolio 🥳

Projects are very important for your portfolio, and these will improve your skills.

A thread 🧵👇
Read 9 tweets
This is the story of an embodied multi-modal agent crafted over 4 papers and told in 4 posts

The embodied agent is able to perceive, manipulate the world, and react to human instructions in a 3D world
Work done by the Interactive Team at @deepmind between 2019 and 2022
Imitating Interactive Intelligence
The case for training the agent using Imitation Learning is outlined
The environment "The Playroom" is generated
The general multi-modal architecture is crafted
At the end, an auxiliary GAIL-like loss is crucial
Interactive Agents with IL & SSL
In the end it's all about scale and simplicity
The agent was hungry for data, so it was fed more
A simpler contrastive cross-modal loss replaced GAIL
A hierarchical 8-step action was introduced
New agent code name: MIA
Read 6 tweets
Thanks ICP for publishing this essay on #Ritamic Decision Policy. Into year 3 of this series, this post summarizes deliberation (manthan), research, and study of the Vedic origins of sustainable decision policy and strategy, with contemporary examples all can relate to.
The post focuses on sustainable policy for making not one, but a series of interconnected decisions over time.

Can ideas of Vedanta be applied here?
Short answer: yes.

The content is of interest to those making government or private policy, designers, startups, #Ganita/STEM students, and young parents.
The post is divided into 7 sections with links to each at the top. Those who simply want an easy-to-remember idea of Ritamic decision policy can go to the examples and read how Air India altered its DEL-SFO route in harmony with Ritam…
Read 38 tweets
ChatGPT for Robotics?
@DeepMind's latest work: A general AI agent that can perform any task from human instructions!

Or at least those allowed in "the playhouse"

The cherry on top of this agent is its RL fine-tuning from human feedback, or RLHF. As in ChatGPT
The base layer of the agent is trained with imitation learning and conditioned on language instructions

Initially, the agent had mediocre abilities

However, when it was fine-tuned with Reinforcement Learning and allowed to act independently, its abilities improved 🆙 significantly

The authors structured the RL problem by training a Reward Model on human feedback, and then using this RW model to optimize the agent with online RL

The RW model, also called Inter-temporal Bradley-Terry (IBT), is trained to predict human preferences between sub-trajectories
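The Bradley-Terry idea can be sketched generically (a minimal pairwise preference loss, not DeepMind's exact IBT formulation): the reward model scores two sub-trajectories, and the loss is the negative log-probability of the choice the human actually made.

```python
import numpy as np

def preference_loss(r_preferred, r_rejected):
    # Bradley-Terry: P(preferred beats rejected) = sigmoid(r_a - r_b).
    # The loss is the negative log-likelihood of the human's choice.
    p = 1.0 / (1.0 + np.exp(-(r_preferred - r_rejected)))
    return -np.log(p)

# A reward model that agrees with the human pays a small loss...
print(round(preference_loss(2.0, -1.0), 3))  # 0.049
# ...one that disagrees pays a large loss.
print(round(preference_loss(-1.0, 2.0), 3))  # 3.049
```

Minimizing this loss over many labeled pairs pushes the reward model to rank trajectories the way human raters do, and that learned reward is what the online RL stage then optimizes.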

Read 9 tweets
Machine learning and Python go hand in hand.

Ready to take the first step towards a rewarding career in machine learning?

These 4 resources will help you learn Python and get started 👇🏽🧵

#100DaysOfCode #66DaysOfData #DeepLearning
1/ Python Principles

I've never seen anything like this course.

This is a text-based course with an interactive coding environment that will teach you all the basics of Python.

There are lots of challenges and exercises, too.

This should take 2 weeks.
2/ CognitiveClass' Python for Data Science

Spend 1 hour a day and you'll be done in a week…
Read 7 tweets
The GPT of Robotics? RT-1

RT-1 is a two-year effort to bring the power of open-ended, task-agnostic training with a high-capacity architecture to the robotics world.

The magic sauce? A big and diverse robotic dataset + an efficient Transformer-based architecture
RT-1 learns to make decisions to complete a task via imitation, from a dataset of 130k episodes covering about 700 general tasks, acquired over the course of 17 months.
The architecture of RT-1 consists of:
- A Vision-Language CNN-based architecture that encodes the task instruction and image into 81 tokens
- A TokenLearner that attends over the 81 tokens and compresses them to 8
- A Decoder-only Transformer that predicts the next action
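The TokenLearner step can be sketched as softmax attention pooling in NumPy (a simplified stand-in for the real module, which learns its attention maps with small MLPs; the shapes match the 81 → 8 compression described above):

```python
import numpy as np

def token_learner(tokens, n_out=8, seed=0):
    # Learn n_out attention maps over the input tokens; each map produces
    # one output token as a weighted average of all input tokens.
    rng = np.random.default_rng(seed)
    n_in, dim = tokens.shape
    W = rng.standard_normal((dim, n_out))   # stands in for a learned MLP
    scores = tokens @ W                     # (n_in, n_out) attention logits
    scores -= scores.max(axis=0)            # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum(axis=0)  # softmax over tokens
    return attn.T @ tokens                  # (n_out, dim) compressed tokens

vision_tokens = np.random.default_rng(1).standard_normal((81, 512))
compressed = token_learner(vision_tokens)
print(compressed.shape)  # (8, 512)
```

Compressing 81 tokens to 8 before the Transformer cuts the attention cost roughly 100-fold, which is what makes the decoder fast enough for real-time robot control.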
Read 7 tweets
Last week @DeepMind's research on AlphaCode - a competitive programming system - was published in Science. AlphaCode has been able to beat 54% of humans in competitive coding challenges, putting it on par with many junior-level developers.

The original announcement from DeepMind came out in February, which in the fast-paced world of AI is already ancient history.

The explosive rise of generative AI over the past few months will almost certainly have a major impact, if it hasn't already, on future versions of AlphaCode and similar AI-enabled coding tools.

Read 4 tweets
PyTorch 2.0 is out! This major release brings many new features, but the main improvements are under the hood.

1/6
The three main principles behind PyTorch

1. High-Performance eager execution
2. Pythonic internals
3. Good abstractions for Distributed, Autodiff, Data loading, Accelerators, etc.

PyTorch 2.0 is fully backward compatible with the previous versions of PyTorch.

The main new feature is torch.compile, "a feature that pushes PyTorch performance to new heights and starts the move for parts of PyTorch from C++ back into Python."

Read 6 tweets
In today's exciting release 😎

Use @nixtlainc's StatsForecast to beat a WIDE variety of #DeepLearning models with:

- interpretable methods 📈
- in under 10 min ⏲️
- $0.5c in AWS 🔥
- and a few lines of #python code 🤯

🧵 1/5
Building upon the great work of @spyrosmakrid et al., we fitted an ensemble of simple statistical models:

AutoARIMA, Exponential Smoothing (@robjhyndman et al.),

Complex Exponential Smoothing (@iSvetunkov et al.),

and the DOT method (@fotpetr et al.).

The results were great! 🔥
The simple statistical ensemble:

- outperforms most #DeepLearning models 🔥

- is 25,000× faster ⚡️

- is only slightly less accurate than a #DeepLearning ensemble 🤔
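For a sense of how simple these baselines are, here is simple exponential smoothing — a close relative of the models named above — in a few lines (a toy sketch, not the StatsForecast implementation):

```python
import numpy as np

def ses_forecast(y, alpha=0.5):
    # Simple exponential smoothing: the forecast is a weighted average of
    # past observations whose weights decay exponentially with age.
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level  # the flat forecast for every future step

series = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])
print(ses_forecast(series))  # 13.0
```

A single pass over the series, one tunable parameter — which is why fitting thousands of these models costs minutes and cents rather than GPU-hours.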
Read 5 tweets
🤑📈 Top 15 Free Data Science Courses to Kick-Start your Data Science Journey!

Bookmark this thread

A thread 🧵👇
1. Introduction to AI and ML

"The AI revolution is here – are you prepared to integrate it into your skillset? How can you leverage it in your current role? What are the different facets of AI and ML?"…
2. Introduction to Python

Do you want to enter the field of Data Science? Are you intimidated by the coding you would need to learn? Are you looking to learn Python to switch to a data science career?…
Read 8 tweets
Stanford University is offering free online courses.

No application or fee is required.

Here are 5 FREE courses you don't want to miss:
1. Computer Science 101
2. Supervised Machine Learning
Read 7 tweets
Learn Data Science in 180 days 🤑📈 and start your data science career.

Bookmark this thread

A thread 🧵👇
First Month 🗓️
Day 1 to 15 - Learn Python for Data Science
Day 16 to 30 - Learn Statistics for Data Science
Second Month 🗓️
Day 31 to 45 - Explore Python Packages (NumPy, Pandas, Matplotlib, Seaborn, Scikit-Learn)
Day 46 to 60 - Implement EDA on real-world datasets.
Read 9 tweets
Pretty wild that there have been at least 4-5 antibody-specific structure prediction tools this year, all based on #deeplearning

1. DeepAb/IgFold (@jeffruffolo)
2. ABLooper/ImmuneBuilder (@brennanaba)
3. Equifold (@jaehyeon_lee_ml)
4. t-AbFold

What does this mean? 🧵 (1/6)
First, it's pretty crazy we even have antibody-specific tools, since #AlphaFold2, #ESMFold, #OmegaFold, all do a decent job at antibody modelling. However, antibody-specific tools have -some- feature that's necessary (e.g. being MSA-free) (2/6)
The demand is likely due to interest from pharma & biotech, but we don't have anywhere near the same level of interest for other polymorphic proteins like TCRs and MHCs (🤔). Regardless, with such interest, I think an antibody-specific CASP should be resurrected! (3/6)
Read 7 tweets
Time Series analysis and forecasting is a really valuable skill to have in #DataScience.

Here is WHY 🧵👇
1️⃣ All companies are interested in making money. Time series is really powerful in #finance! 📈📉

There will always be demand for someone who can analyse and forecast financial data. Plus it can bring you a lot of money if you can increase the profit of a company! 💰💰💰
2️⃣ There are multiple applications for Time Series: forecasting sales, unemployment rate, COVID cases, petrol price, temperatures...

There is a demand for Data Scientists with this skill everywhere! You are not restricted to a particular field 🔭💊🧬📡 or location 🇪🇺🇺🇸🇮🇳!
Read 6 tweets
Multilingual pre-training is really useful for improving the performance of deep networks on low-resource languages (i.e., those that lack sufficient training data). But whether multilingual pre-training is damaging for high-resource languages is currently unclear. 🧵 [1/5]
For BERT-style models like XLM-R (by @alex_conneau), models pre-trained over multilingual corpora (given proper tuning) can match the performance of monolingual models for high-resource languages like English/French on GLUE. [2/5]
Recent research on large language models (by @Fluke_Ellington), however, indicates that multilingual pre-training significantly damages the zero-shot generalization performance of LLMs in English. [3/5]
Read 5 tweets
#GraphNeuralNetworks are way too cool to be left unexplored!

In a nutshell, GNNs are an exciting merger between graph theory (math) & #DeepLearning (coding).

Here's my detailed resource stack of the best GNN theory explainers, videos & coding tutorials I used for my own learning.
1. This is a great place to start if you either: want to learn the basics, or enjoy reading about basic concepts explained in a well structured way.

It walks us through graphs in real world, what graphs & GNNs consist of, and how GNNs do prediction.
2. Further, this next tutorial walks us through graphs & GNNs in an intuitive manner, while also going quite deep into the specific mathematical terminology of the field.

I like this one a lot because it also includes hands-on PyTorch code at every step…
Read 15 tweets
