Discover and read the best of Twitter Threads about #classification


We added a new article on #URL #Database where the goal is to classify over 80 million #domains into their IAB #categories: alpha-quantum.com/blog/url-datab…
A set of interesting links about #URL #Classification linktr.ee/urlclassificat…
Read 6 tweets
Latest #AI articles from our company.

Introduction to website categorization, which can be used for #content #filtering, #adtech and other purposes, using a #textclassification #ML model.

alpha-quantum.com/blog/website-c…
#automated product #tagging is important for ecommerce websites as it leads to better #conversion, filtering and searchability.
More on this in alpha-quantum.com/blog/automated…
A fast way to use #URL #categorization data is to have it available as a #URLdatabase for direct integration into apps and services, e.g. content filtering. Introduction to URL database construction and use in alpha-quantum.com/blog/url-datab…
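For intuition, here is a minimal text-classification sketch in the spirit of these articles, using scikit-learn on a tiny hand-made corpus with IAB-style labels. All the page texts and categories below are made up for illustration; the linked articles describe the production approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: page text snippets with IAB-style category labels (illustrative only).
texts = [
    "buy cheap flights hotel booking travel deals",
    "breaking news politics election government report",
    "football match score league goal championship",
    "flight tickets vacation resort travel guide",
    "senate vote policy news government election",
    "basketball playoffs team season score sports",
]
labels = ["Travel", "News", "Sports", "Travel", "News", "Sports"]

# TF-IDF features feeding a linear classifier: the standard baseline
# for website/URL categorization.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["cheap hotel and flight deals for your vacation"])[0])
```

A real system would train on crawled page content for millions of domains and a full IAB taxonomy, but the pipeline shape stays the same.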
Read 6 tweets
New article on #websites #classification discussing possible #taxonomies that can be used (IAB, Google, Facebook, etc.) as well as #machinelearning models:
explainableaixai.github.io/websitesclassi…

list of useful resources: linktr.ee/airesearcher
a new Telegram channel where we will post about #explainableai (#XAI for short):
t.me/s/explainablea…
There are now many useful libraries available for #explainability of #AI models: SHAP, LIME, and partial dependence plots (PDP). And also the "classical" feature importance.
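The "classical" feature importance mentioned here takes only a few lines; below is a sketch on synthetic data where only the first feature determines the label, so it should dominate the importance scores. (SHAP and LIME add local, per-prediction explanations on top of this kind of global view.)

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 3 features, but only feature 0 decides the class.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global "classical" feature importance: feature 0 should dominate.
for i, imp in enumerate(clf.feature_importances_):
    print(f"feature {i}: {imp:.3f}")
```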
Our German blog on the topic of website #categorizations: kategorisierungen.substack.com
Read 6 tweets
It deals with a specific #machinelearning problem, namely how to classify a given website into specific categories, also called a #taxonomy.
The most common #taxonomies are those of IAB and Google Products Taxonomy, but there are others, e.g. one from Facebook for products.
Read 6 tweets
Brainstorming - this probably won't make sense.

Why categories are better than dimensions for personality disorder classification: dimensional traits are unitary constructs, sub-factors, or unique facets. Categories are multi-factorial. People are not simply entities

1/3
consisting of different amounts of a set of facets. They are complex, dynamic, and motivated beings wherein beliefs, desires, and actions don't always align. Ambivalence often reigns supreme. Behavior is multiply determined, and personality is best understood in terms

2/3
of theoretically coherent, interrelated domains such as motivation and interpersonal style, wherein content defines the domains, rather than the domains themselves being identical to the content.

3/3

#PersonalityDisorders #Diagnosis #Classification #DimensionalModels #Categories
Read 6 tweets
Going to break down how easy it is to use #autoML, and more specifically JADBio AutoML. If you need an account to try it out, head over to jadbio.com and grab a free Basic plan. Ready? #data #ML (1/16)
STEP 1: You start by creating a Project on JADBio and generating all your study #data. That could either be data that has been processed and normalized by a #Bioinformatician, or public data available in the known data repositories (2/16)
If you’re using software for #molecular diagnosis like our Partner’s @QIAGEN OmicSoft Lands platform, your data is ready to be uploaded on JADBio. #ML 3/16
Read 17 tweets
🤔 What could there be in this product that relates to "a weapon system"?
Or:
Why a (b)(4) classification, "... within US Weapon System"?

FDA.GOV - Summary Basis for Regulatory Action - COMIRNATY

drive.google.com/file/d/1-pzuVj…

Thread ⤵️
Active immunization to prevent coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in individuals 16 years of age and older

BioNTech Manufacturing GmbH (in partnership with Pfizer, Inc.)
18/05/2021
COVID-19 Vaccine, mRNA
In this document, we can see that all the redacted areas are coded "(b)(4)".

This is a code that both authorizes the withholding and indicates the reason why this information is not disclosed, whether in public releases or during declassification.
Read 6 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 253 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (70.0%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 253 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (56.1%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
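The percentages in these threads are simply the share of tweets a model labels "negative". A minimal sketch of that tally is below; the label list is synthetic, chosen to reproduce the 70.0% over 253 tweets reported above, while the linked repo does the actual classification with BERT/VADER over the Twitter API.

```python
from collections import Counter

# Synthetic labels standing in for per-tweet classifier output (illustrative only).
labels = ["negative"] * 177 + ["positive"] * 76  # 253 tweets total

counts = Counter(labels)
share_negative = counts["negative"] / len(labels)
print(f"A majority ({share_negative:.1%}) were classified as negative.")
```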
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 272 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (69.9%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 272 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (57.0%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 378 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (68.0%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 378 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (60.3%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 476 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (70.0%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 476 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (60.5%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 528 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (68.0%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 528 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (58.9%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 569 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (65.4%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 569 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (53.6%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 239 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (61.1%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 239 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (61.5%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
#WeekendLecture
#PregnancyAndStroke

#Pregnancy and the #puerperium confer an ⬆️ risk of ischemic as well as hemorrhagic stroke, with incidence rates 3-fold higher than in nonpregnant women

👉doi.org/10.1016/j.ncl.…

a short 🧵
#WeekendLecture
#PregnancyAndStroke

@DrRickSwartz @patrice_lindsay et al. meta-analysis
👉11 studies
👉>85 million #pregnancy and #postpartum admissions in various countries
👉overall incidence of #stroke was 30 per 100,000 hospitalizations👈
@IntJStroke

journals.sagepub.com/doi/10.1177/17…
#WeekendLecture
#PregnancyAndStroke
RFs:
☑️Age, >40yo OR3.1
☑️Race/ethnicity
☑️Migraine, thrombophilia, SLE, 🫀disease, HTN, thrombocytopenia, diabetes
☑️Pregnancy complications
✅For hemorrhagic stroke: aneurysms, AVM, Hypertensive Disorders of Pregnancy

journals.lww.com/greenjournal/F…
Read 12 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 288 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (65.6%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 288 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (62.8%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 412 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (67.5%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 412 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (59.0%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 499 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (63.3%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 499 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (58.9%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 517 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (62.9%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 517 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (58.0%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
How negative was my Twitter feed in the last few hours? In the replies are a few models that analyze the sentiment of my home timeline feed on Twitter for the last 24 hours using the Twitter API.
GitHub: github.com/ghadlich/Daily…
#NLP #Python
I analyzed the sentiment on the last 600 tweets from my home feed using a pretrained #BERT model from #huggingface. A majority (64.3%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
I analyzed the sentiment on the last 600 tweets from my home feed using a pretrained #VADER model from #NLTK. A majority (56.7%) were classified as negative.
#Python #NLP #Classification #Sentiment #GrantBot
Read 7 tweets
#ICLR2021 camera-ready II: "LiftPool: Bidirectional ConvNet Pooling" w/ Jiaojiao Zhao is now available: isis-data.science.uva.nl/cgmsnoek/pub/z… No more lossy down- and upsampling when pooling! 1/n
LiftPool adopts the philosophy of the classical #Lifting #Scheme from #signal #processing. LiftDownPool decomposes a feature map into various downsized sub-bands, each of which contains information with different frequencies. Because of its invertible properties, ... 2/n
by performing LiftDownPool backwards, a corresponding up-pooling layer, #LiftUpPool, is able to generate a refined upsampled feature map using the detail sub-bands, which is useful for #image-to-image #translation challenges. 3/n
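The invertibility that LiftPool exploits can be illustrated with the classical lifting scheme itself. Below is a 1D sketch: plain Haar-style split/predict/update steps, run backwards for perfect reconstruction. This is for intuition only, not the learned LiftDownPool/LiftUpPool from the paper.

```python
import numpy as np

def lift_down(x):
    """One lifting step: split into even/odd samples, predict, update.
    Returns (approx, detail) sub-bands, low and high frequency."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict step: high-frequency sub-band
    approx = even + detail / 2   # update step: low-frequency sub-band
    return approx, detail

def lift_up(approx, detail):
    """Invert the lifting steps in reverse order: perfect reconstruction."""
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 3.0])
approx, detail = lift_down(x)
print(np.allclose(lift_up(approx, detail), x))  # lossless round trip
```

Each lifting step is invertible by construction, which is why no information is lost during down- and upsampling.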
Read 4 tweets
