Discover and read the best of Twitter Threads about #federatedlearning

Most recent (3)

This is a thread about Artificial Intelligence (AI) and data protection!

Let's dive in! #KI #Datenschutz #Thread 🧵👇🧵
AI systems learn from large amounts of data in order to make decisions and predictions.

While this holds great potential, privacy concerns are an important issue we have to keep in mind. #KünstlicheIntelligenz #Datensicherheit
Collecting personal data is often necessary to train AI models.

In doing so, however, we have to make sure that the privacy of the people concerned is protected.

Anonymization and pseudonymization are possible approaches. #Datenschutz #Privatsphäre
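A minimal sketch of what pseudonymization can look like before data enters a training pipeline (Python; the key, field names, and records are illustrative assumptions, not from the thread):

```python
import hmac
import hashlib

# Secret key held by the data controller -- illustrative only; in practice
# it would come from a key-management system and never be hard-coded.
PSEUDONYM_KEY = b"replace-with-a-real-secret"

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash.

    Unlike full anonymization, the mapping is reproducible by whoever
    holds the key (recompute and compare), so the output is still
    pseudonymous personal data -- but the training pipeline never sees
    the raw identifier.
    """
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical records; only the pseudonymized ID reaches model training.
records = [
    {"user_id": "alice@example.com", "feature": 0.42},
    {"user_id": "bob@example.com", "feature": 0.17},
]
training_rows = [
    {"user_id": pseudonymize(r["user_id"]), "feature": r["feature"]}
    for r in records
]
print(training_rows)
```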
Read 10 tweets
Machine Learning Tools for Image-Based Glioma Grading and the Quality of Their Reporting: Challenges and Opportunities mdpi.com/1648502 #mdpicancers via @Cancers_MDPI @YaleRadiology
@MerkajSara @Ryan_Bahar are an outstanding team that wrote this exciting review on the application of #ML to the study of #glioma imaging. After reviewing over 12,000 articles and extracting information from over 80 articles, they published their systematic review in @FrontOncology.
But there was so much more to discuss, so @MerkajSara & @Ryan_Bahar wrote an expert review of the literature with a focus on future directions. @YaleMed #Neurosurgery This review is a must-read when starting out in #ML & #glioma imaging. #Radiomics overview
Read 8 tweets
[Thread on #MachineLearning from #streaming #data]

Announcing our (@matthewnokleby, @haroonraja86, and myself) paper on "Scaling-up Distributed Processing of Data Streams for Machine Learning", which has been accepted by @ProceedingsIEEE (Preprint: arxiv.org/abs/2005.08854).

1/
It focuses on training models from fast streaming data, where "fast" means that a single machine cannot process each data sample before the next one arrives. Distributed training can help with this, but how many nodes, and what minibatch size per node? (A rough sketch of the arithmetic follows below.)

2/
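To make the "fast" condition concrete, here is a rough, hypothetical calculation; the rates and costs are made-up numbers, and it ignores communication overhead, which a real analysis would also have to account for:

```python
import math

# Back-of-the-envelope check of the "fast stream" condition above.
# All numbers are illustrative, not taken from the paper.
arrival_rate = 10_000    # samples arriving per second
per_sample_cost = 5e-4   # seconds one machine needs per gradient evaluation

# A single machine keeps up only if it finishes one sample before the
# next arrives, i.e. per_sample_cost <= 1 / arrival_rate.
print("single machine keeps up:", per_sample_cost <= 1 / arrival_rate)  # False

# N nodes, each working through a minibatch of B samples per round, ingest
# N * B samples in roughly B * per_sample_cost seconds. Keeping pace with
# the stream requires N * B >= arrival_rate * B * per_sample_cost, i.e.
# N >= arrival_rate * per_sample_cost, independent of B in this toy model.
print("minimum nodes:", math.ceil(arrival_rate * per_sample_cost))  # 5
```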
The paper addresses these questions and shows that there is a regime for the number of nodes and the minibatch size per node in which training can be near-optimal in terms of excess risk. Outside this regime, distributed processing / minibatching slows down learning. (A toy simulation of the setup follows below.)

3/
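As a concrete picture of the distributed setup, here is a generic synchronous minibatch-SGD sketch on a synthetic stream; it is not the paper's algorithm, and the least-squares model, step size, and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_star = rng.normal(size=d)  # ground truth generating the stream

def stream_batch(n):
    """Draw n fresh samples: y = <w_star, x> + noise."""
    X = rng.normal(size=(n, d))
    return X, X @ w_star + 0.1 * rng.normal(size=n)

def distributed_sgd(num_nodes, batch_per_node, rounds=500, lr=0.05):
    """Each round, every node pulls its own minibatch off the stream and
    computes a local least-squares gradient; the averaged gradient updates
    the shared model (synchronous, parameter-server style)."""
    w = np.zeros(d)
    for _ in range(rounds):
        grads = []
        for _ in range(num_nodes):
            X, y = stream_batch(batch_per_node)
            grads.append(X.T @ (X @ w - y) / batch_per_node)
        w -= lr * np.mean(grads, axis=0)
    return float(np.linalg.norm(w - w_star))

# Same 64 samples per round, split across different node counts. With free
# communication these splits are statistically equivalent; the regimes the
# paper studies emerge once per-round compute and communication costs are
# taken into account.
for nodes, batch in [(1, 64), (8, 8), (64, 1)]:
    print(f"{nodes:>2} nodes x batch {batch:>2}: "
          f"error {distributed_sgd(nodes, batch):.4f}")
```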
Read 7 tweets
