Discover and read the best of Twitter Threads about #machinelearning

Most recent (24)

To LiDAR or not to LiDAR - Thoughts? 🧠
LIDAR vs. Camera —
Which Is Best for Self-Driving Cars?

Link: is.gd/ku1Xsn

#technology #data #science #machinelearning #ml #artificialintelligence #ai #3D #autonomous #tesla #waymo #selfdrivingcars #lidar
Breakthrough #Lidar Technology
Gives @argoai the Edge
in Autonomous Delivery and Ride-Hail Services
With the introduction of Argo Lidar, the global self-driving technology company Argo AI has overcome the limitations preventing most competitors from commercializing autonomous delivery and ride-hail services.
Neuroscience-inspired perception-action in robotics:
applying active inference for state estimation,
control and self-perception

🔎📚 Paper: arxiv.org/pdf/2105.04261
👨‍🎓 👩🏾‍🎓 Credit: Pablo Lanillos, Marcel van Gerven
Unlike robots, humans learn, adapt and perceive their bodies by interacting with the world.
Discovering how the brain represents the body and generates actions is of major importance for robotics and artificial intelligence.
Design principles for
a hybrid intelligence decision support system
for business model validation

🔎📚 Paper: lnkd.in/dDKKvaw

#technology #data #machinelearning #ml #artificialintelligence #ai #startups #businessmodels #tensorflow #SQL #entrepreneurs
👨‍🎓 👩🏾‍🎓 Credit: Dominik Dellermann, Nikolaus Lipusch, Philipp Ebel, Jan Marco Leimeister
One of the most critical tasks for startups is to validate their business model.

Therefore, entrepreneurs try to collect information such as feedback from other actors to assess the validity of their assumptions and make decisions.
Hierarchical Graph Neural Networks

🔎📚 Paper: lnkd.in/ddWAgJF
👨‍🎓 👩🏾‍🎓 Credit: Stanislav Sobolevsky

#technology #data #machinelearning #ml #artificialintelligence #ai #GNN #CNN #NN #neuralnetworks #graphicneuralnetworks #deeplearning
Over recent years, Graph Neural Networks have become increasingly popular in network analytics and beyond.

With that, their architecture noticeably diverges from the classical multi-layered hierarchical organization of traditional neural networks.
At the same time, many conventional approaches in network science make efficient use of hierarchical methods to account for the hierarchical organization of networks, and recent works emphasize its critical importance.
Jerk Control of Floating Base Systems with
Contact-Stable Parametrised Force Feedback

🔎📚 Paper: lnkd.in/dDtzx-y

#technology #data #science #machinelearning #ml #artificialintelligence #ai #robotics #robots #computervision #cv #PatternRecognition #humanoidrobot
👨‍🎓 👩🏾‍🎓 Credit: Ahmad Gazar, Gabriele Nava, Francisco Javier Andrade Chavez, Daniele Pucci
Nonlinear controllers for floating base systems in contact with the environment are often framed as quadratic programming (#QP) optimization problems.
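A minimal scipy sketch (my illustration, not the paper's controller) of a toy QP of this kind: minimize a quadratic cost in the control input subject to a linear constraint. All matrices and bounds below are made-up placeholders.

import numpy as np
from scipy.optimize import minimize

H = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive-definite cost Hessian (placeholder)
g = np.array([-1.0, -1.0])               # linear cost term (placeholder)

def cost(u):
    # Quadratic objective: 0.5 * u^T H u + g^T u
    return 0.5 * u @ H @ u + g @ u

# One linear inequality, e.g. an actuation limit: u_0 + u_1 <= 1
constraints = ({"type": "ineq", "fun": lambda u: 1.0 - (u[0] + u[1])},)

result = minimize(cost, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x)   # optimal input of this toy QP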
Universities and Scholarly Narratives on Social Media

I presented these slides at the soft-launch event of @univ_indonesia's new website, in front of the UI academic community this morning.

Bonus: an analysis of BRIN and "Babi Ngepet" at the end of the slides (as I promised last week). 😅

>
Soft launch of UI's new website.
QUESTIONS

1/ Should universities and their academic communities actively build networks and narratives on social media about the fields of knowledge they focus on?
Data Scientists Are in Constant Demand!

If you're an enthusiast, or in the data sector looking for a path to specialize in, then feel free to grab a coffee or cappuccino, because this thread is for you! 🧵
#DataScience #MachineLearning #100DaysOfCode
Data science experts are needed in almost every field, from government to security. Millions of businesses also rely on big data to succeed and better serve their customers. Data science careers are in high demand, and this trend will not be slowing down any time soon, if ever.
Anyway, let's dive into the leading data science careers you can break into, the roles involved, and typical average salaries.
It's the weekend!

Maybe you'll have time to read my latest posts.

On the menu:
> Logistic regression
> Confusion matrix
> Binary trees: Gini vs. Entropy
> Transformers and self-attention
> Convolutional networks

Happy reading!

🔽🔽 Thread
[Logistic Regression]

See this algorithm differently and understand it all thanks to geometry

#datascience #machinelearning #ia

[Confusion Matrix]

Never be confused (!) by the confusion matrix again, thanks to this very simple trick

#datascience #machinelearning #iA

1. Hello. Today I'm tackling a big one

Transformers

in particular self-attention, which is the most interesting part

After reading this thread, I hope you'll understand the mechanisms at play

Ready?

#MachineLearning #DataScience
2. I'll describe how transformers work in the context of NLP, the field where the first paper was published in 2017 ("Attention Is All You Need")

Note that transformers are now tackling other domains too (vision, time series, ...)
3. First things first

Remember that in NLP, algorithms don't understand words "directly"

These words have to be turned into numbers.

That's the job of "word embedding" algorithms, which turn words into vectors of numbers
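A minimal numpy sketch (my illustration, not the author's code) of the scaled dot-product self-attention the thread builds up to; the word embeddings and weight matrices are random placeholders that a real model would learn.

import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model = 4, 8                    # 4 word vectors of dimension 8
X = rng.normal(size=(n_tokens, d_model))    # word embeddings from the step above

# Learned projections to queries, keys and values (random here)
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Attention weights: softmax over scaled dot products of queries and keys
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# Each output is a weighted mix of all value vectors: context-aware embeddings
output = weights @ V
print(output.shape)                         # (4, 8)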
1. Hello friends

Are you confused by the confusion matrix?

Can't manage to remember what the "precision", "recall" and "accuracy" indicators are?

I think this thread should help you.

🔽🔽 Thread

#datascience #MachineLearning #iA
2. Personally, it took me quite a while to remember these confusion-matrix notions once and for all.

And to remember it all, I have a great mnemonic that I'm going to share with you.

Ready?
3. First, what are we talking about?

We're talking about the results of a classification made by a Machine Learning model (logistic regression, SVM, RF, KNN, neural networks, Naive Bayes ... and more)
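A minimal scikit-learn sketch (my illustration, not the author's mnemonic) of the confusion matrix and the three indicators the thread covers; the labels below are made-up toy data.

from sklearn.metrics import confusion_matrix, precision_score, recall_score, accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions

# Rows = true class, columns = predicted class: [[TN, FP], [FN, TP]]
print(confusion_matrix(y_true, y_pred))

print(precision_score(y_true, y_pred))  # TP / (TP + FP): how many predicted positives are right
print(recall_score(y_true, y_pred))     # TP / (TP + FN): how many real positives are found
print(accuracy_score(y_true, y_pred))   # (TP + TN) / total: overall hit rate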
1. Hello friends.

Today we're going to talk about neural networks, and in particular convolutional neural networks.

We'll focus mostly on the convolution filters, which constitute the parameters of a #CNN

🔽🔽Thread

#datascience #machinelearning #ia
2. This tweet will be an opportunity to review the main principles behind such a network.

It's important to understand the machinery behind all of this.
3. To begin with, we can say that the "AI winter" ended thanks to the spectacular progress of the last decade, made possible by CNNs.

It's thanks to their performance that the world took an interest in these technologies again
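A minimal numpy sketch (my illustration, not the author's code) of what a convolution filter does: slide a small kernel over an image and take dot products. The kernel below is a hand-crafted vertical-edge detector; in a CNN, its values are exactly the parameters that training learns.

import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Element-wise product of the patch and the kernel, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                         # left half dark, right half bright
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # responds to vertical edges

print(conv2d(image, kernel))               # strong response where the edge is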
Hello,

to help you find your way around, I've collected here the tweets that link to the various threads I've published.

On the menu: plenty of things about #MachineLearning, #data, #datascience, #IA and #Python programming.

Thanks for your likes and RTs!
Logistic regression: another way to properly understand how it works.

1. Hi friends

Today we're going to talk about a hugely important model in Machine Learning: binary trees!

We'll see how they are built, and we'll also look at a geometric interpretation

#datascience #ia #MachineLearning
2. To start with, binary trees are as old as Machine Learning itself

It's a type of model that has constantly evolved, and it is the basis of today's flagship models

Such as #RandomForest and #GradientBoosting methods like #AdaBoost, #CatBoost, #XGBoost, ...
3. I promise, we'll look at each of these models in detail in dedicated posts
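A minimal sketch (my illustration, not the author's code) of the two impurity measures a binary decision tree can use to choose its splits, the "Gini vs. Entropy" comparison mentioned in the digest above.

import numpy as np

def gini(p):
    # Gini impurity of a node with class proportions p: 1 - sum(p_k^2)
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    # Shannon entropy of a node: -sum(p_k * log2(p_k)), ignoring zero classes
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A pure node has zero impurity; a 50/50 node is maximally impure.
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))   # 0.0 0.0
print(gini([0.5, 0.5]), entropy([0.5, 0.5]))   # 0.5 1.0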
Hi friends.

Today we're going to talk about logistic regression. An ML model that everyone knows.

But I'm going to take a rather original approach.

Ready?

🔽🔽Thread

#datascience #ia #MachineLearning
1/ Quick reminder: logistic regression is used to classify between 2 categories.

It's an effective model that is VERY, VERY widely used around the world.
2/ Example use cases (a minimal sketch follows the list):

> a bank grants a loan (or not)

> a doctor detects a disease (or not)

> an e-commerce site offers a product to a customer (or not)

> a customer unsubscribes from a service (or not)
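A minimal scikit-learn sketch (my illustration, not the author's geometric approach) of the first use case; the features, labels and new applicant are made-up toy data.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [income in k€, existing debt in k€]
X = np.array([[50, 5], [20, 15], [80, 2], [30, 20], [60, 10], [25, 18]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan granted, 0 = refused

model = LogisticRegression().fit(X, y)

# predict_proba returns P(class 0) and P(class 1) for each applicant
print(model.predict_proba([[40, 8]]))  # hypothetical new applicant
print(model.predict([[40, 8]]))        # hard decision: grant the loan or not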
Hi friends

I'm brand new on Twitter, and I created this account to tell you a bit about my discoveries in Data Science.

It feels quite strange ...

🔽🔽 Thread

#datascience #machinelearning #ia
1/ I'm well into my forties and I have a long background in IT.

My career in a few words: enterprise application developer, project manager, CIO, then a long spell in management consulting, where I helped my clients run their projects.
2/ In all that experience, not many Data Science topics.

And then, about 5 years ago, a small team of data scientists was set up in my company.
It's #TechWeek2021! Researchers @DIASAstronomy work with the advanced tech of the @LOFARIreland radio telescope, one of the world’s largest telescopes, stretching from Birr right across Europe. Learn more about the technology behind I-LOFAR here: lofar.ie/technology/
Solar physicists @DIASAstronomy analyse data from @LOFARIreland in order to study our Sun and how it impacts the Earth. They use I-LOFAR in combination with data from space-based observatories to help better understand solar storms and space weather #TechWeek2021
Our researchers are involved with software development for the @ESASolarOrbiter space mission, with Prof @petertgallagher and Dr @samaloney co-investigators of the @stix_so Spectrometer/Telescope for Imaging X-rays onboard. #TechWeek2021
@ThinkingAboutV @nanopore The applied omics market based on #NGS technology is still an incipient market if we compare it to more established #diagnostics markets. But we are not far away from a point in time where every newborn's genome is ...
@ThinkingAboutV @nanopore ... sequenced at high quality (long reads, maybe PCR-free, including epigenome marks) and kept as an #EHR in the health system for future use. From then onwards, there will be recurrent #LiquidBiopsy assays, maybe once a year, to screen for a multitude of conditions.
@ThinkingAboutV @nanopore From 40-45 yo onwards, mainly cancer screening of healthy individuals, based on a #MachineLearning cancer classifier such as shown already by @GrailBio / $GH and others, but also other classifiers will come soon, such as #epigenomic profiling of #Neurodegenerative conditions, ...
#Google
I am very excited to announce that @MohsenAlif & I have won the Google Research Award!
Our project entitled "Is Economics From Afar Domain Generalizable?" falls under the Machine Perception category.
We are the only winners from Pakistan!
research.google/outreach/resea…
It's a good day for #Pakistan and a fantastic one for us.
Both @MohsenAlif & I are at @ITUPunjab, which is where we got the chance to do this multidisciplinary & cutting-edge work. It's not every day that labs in computer vision & #Economics come together to create impact.
According to Google, the Research Scholar Program supports early-career faculty who are doing impactful research in fields relevant to Google and is intended to help develop new collaborations and encourage long-term relationships.
Let's learn about correlation, a fundamental statistical measure that is also a key tool in machine learning.

#MachineLearning #100DaysOfCode #Python

👇🧵
Correlation, as the name suggests, measures the relationship between two variables.

In statistical terms, it is the measure statisticians use to figure out how strongly two things are related.

Two things can be related to different extents and in different ways. For example:
1) If one variable increases, the other variable might increase (positive correlation), decrease (negative correlation), or remain unchanged / show no defined pattern (uncorrelated).

2) Also, this behavior can stay the same across all values (monotonic) or vary with the values.
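A minimal sketch (my illustration, not from the thread) computing the two most common coefficients: Pearson, which captures linear relationships, and Spearman, which captures monotonic ones.

import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.arange(1, 11, dtype=float)
y_linear = 2 * x + 1     # perfectly linear relationship
y_monotonic = x ** 3     # monotonic but curved relationship

print(pearsonr(x, y_linear))      # correlation ~ 1.0: linear fit is perfect
print(pearsonr(x, y_monotonic))   # high but below 1: relationship is not linear
print(spearmanr(x, y_monotonic))  # exactly 1.0: the ranks move together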
Today we will summarize Vision Transformer (ViT) from Google. Inspired by BERT, they apply essentially the same architecture to image classification tasks.

Link: arxiv.org/abs/2010.11929
Code: github.com/google-researc…

#MachineLearning #DeepLearning
The authors take the BERT architecture and apply it to images with minimal changes. Since compute grows with sequence length, instead of treating each pixel as a word, they propose splitting the image into N patches and taking each patch as a token.
So first take each patch, flatten it (into a vector of length P²·C, for a P×P patch with C channels), and project it linearly to dimension D. Then prepend a learnable D-dimensional embedding (the class token) at position 0, and add positional encodings to these embeddings.
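A minimal numpy sketch (my illustration, not the official code linked above) of exactly this input pipeline; the projection, class token and positional embeddings are random placeholders that training would learn.

import numpy as np

rng = np.random.default_rng(0)
H = W = 32; C = 3; P = 8; D = 64           # image size, channels, patch size, model dim
N = (H // P) * (W // P)                    # number of patches (16)

image = rng.normal(size=(H, W, C))

# Split into N patches and flatten each one to length P*P*C
patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(N, P * P * C)

E = rng.normal(size=(P * P * C, D))        # linear projection to dimension D
tokens = patches @ E                       # (N, D)

cls = rng.normal(size=(1, D))              # learnable class token
tokens = np.concatenate([cls, tokens], axis=0)   # (N + 1, D)

pos = rng.normal(size=(N + 1, D))          # learnable positional embeddings
tokens = tokens + pos                      # ready for the transformer encoder
print(tokens.shape)                        # (17, 64)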
Depending on the problem we are trying to solve, the loss function varies. Today we are going to learn about triplet losses. You must have heard about them while reading about Siamese networks.

#MachineLearning #DeepLearning #RepresentationLearning
Triplet loss is an important loss function for learning a good "representation". What's a representation, you ask? Finding the similarity (or difference) between two images is hard if you just use pixels.
So what do we do about it? Given three images cat1, cat2 and dog, we use a neural network to map the images to vectors f(cat1), f(cat2), and f(dog).
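A minimal numpy sketch (my illustration, not from the thread) of the triplet loss computed on such vectors: it pulls the anchor towards the positive and pushes it away from the negative, up to a margin. The 2-D embeddings below are made-up stand-ins for f(cat1), f(cat2) and f(dog).

import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # max(0, ||a - p||^2 - ||a - n||^2 + margin)
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance anchor-positive
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance anchor-negative
    return max(0.0, d_pos - d_neg + margin)

f_cat1 = np.array([0.9, 0.1])   # made-up stand-in for f(cat1), the anchor
f_cat2 = np.array([0.8, 0.2])   # made-up stand-in for f(cat2), the positive
f_dog  = np.array([0.1, 0.9])   # made-up stand-in for f(dog), the negative

# Zero loss: the cats are already closer to each other than to the dog by the margin
print(triplet_loss(f_cat1, f_cat2, f_dog))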
How would you interpret the situation if you train a model and see your graphs like these? 📈📉

#machinelearning
If you just focus on the left side, it seems to make sense:
the training loss is going down, the validation loss is going up.
Clearly an overfitting problem, right?
But the graphs on the right don't seem to make sense in terms of overfitting.

The training accuracy is high, which is fine, but why is the validation accuracy going up if the validation loss is getting worse? Shouldn't it go down too?

Is it still overfitting?

YES!
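A minimal numpy sketch (my illustration, not from the thread) of why this happens: accuracy only checks the argmax, while the cross-entropy loss punishes confidently wrong predictions without bound, so a few very confident mistakes can drive the loss up even as more samples are classified correctly.

import numpy as np

def metrics(p_true):
    # p_true: probability the model assigns to the true class of each sample.
    # In a binary problem, the argmax is correct exactly when p_true > 0.5.
    p_true = np.asarray(p_true)
    accuracy = (p_true > 0.5).mean()
    loss = -np.mean(np.log(p_true))   # cross-entropy on the true class
    return accuracy, loss

# Earlier epoch: a hesitant model, 2 of 4 validation samples correct
print(metrics([0.60, 0.60, 0.45, 0.45]))   # acc 0.50, loss ~0.66

# Later epoch: 3 of 4 correct, but one confidently wrong sample (0.01)
# dominates the loss -> accuracy AND loss both go up
print(metrics([0.95, 0.95, 0.55, 0.01]))   # acc 0.75, loss ~1.33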
"The most important step of all is the first step. Start Something."

And to start your journey, below are some of the best free resources for different technologies.

Bookmark this thread and retweet to help others as well.
Follow for more such content.

#100DaysOfCode

🧵 👇
For learning Python, a few of my favorite websites:

By Google (only for Python 2, though):
developers.google.com/edu/python

Guided Course:
sololearn.com/learning/1073
(^ has many other guided courses too)

For Python and its applications:
automatetheboringstuff.com
inventwithpython.com/invent4thed/
For AI basics:

Guided Course: udacity.com/course/intro-t…

Video lectures by MIT: ocw.mit.edu/courses/electr…

Book/Reading Material:
Part I: course.elementsofai.com
Part II: buildingai.elementsofai.com
(^two of my favorites)

Checkout CSE courses on NPTEL:
nptel.ac.in/course.html

#AI
