Model serving = the process of operationalizing a machine learning model for production.

Or what most software developers simply call 'deployment'. Read more about it:

Thread⬇️
thesequence.substack.com/p/edge12-the-c…
Model serving goes a bit beyond deployment, given the unique nature of the lifecycle of ML programs.

ML models operate in a circular lifecycle, where phases such as training and optimization are continuously repeated.
2/⬇️
Some of the most important aspects of any model serving pipeline:
+API interface
+real-time vs. batch execution
+versioning
+A/B testing
+scalability
3/⬇️
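To make versioning and A/B testing concrete, here is a minimal sketch of a serving-side router (all names are illustrative, not from any specific serving framework; the "models" are plain callables standing in for loaded artifacts):

```python
import random

# Illustrative in-memory registry keyed by (model name, version).
# A real serving system would load serialized model artifacts instead.
MODEL_REGISTRY = {
    ("sentiment", "v1"): lambda text: {"label": "pos", "score": 0.70},
    ("sentiment", "v2"): lambda text: {"label": "pos", "score": 0.85},
}

# A/B split: fraction of traffic routed to the candidate version.
AB_SPLIT = {"sentiment": {"control": "v1", "candidate": "v2", "candidate_share": 0.1}}

def predict(model_name: str, payload, rng=random.random):
    """Route a request to a model version according to the A/B split."""
    split = AB_SPLIT[model_name]
    version = split["candidate"] if rng() < split["candidate_share"] else split["control"]
    model = MODEL_REGISTRY[(model_name, version)]
    result = model(payload)
    result["version"] = version  # versioning: every response reports the model used
    return result

# Forcing the routing decision makes the behavior deterministic for testing.
print(predict("sentiment", "great movie", rng=lambda: 0.0))   # routed to candidate v2
print(predict("sentiment", "great movie", rng=lambda: 0.99))  # routed to control v1
```

Tagging every response with the version that produced it is what makes offline comparison of the A/B arms possible later.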
Model serving solutions try to create a consistent framework that abstracts the core capabilities needed to run ML models in production.

For instance, the architecture for models executed in real time is fundamentally different from that of models executed in batch mode.
4/⬇️
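A rough illustration of that difference (hypothetical helper names, not a real framework): the real-time path answers one request synchronously with low latency, while the batch path scores an entire stored dataset offline in one pass.

```python
def score(model, features):
    # Stand-in for real inference; here the "model" is a plain function.
    return model(features)

def serve_realtime(model, request):
    """Online path: one request in, one low-latency response out."""
    return score(model, request)

def serve_batch(model, dataset):
    """Offline path: iterate a stored dataset, produce all scores at once."""
    return [score(model, row) for row in dataset]

double = lambda x: 2 * x
print(serve_realtime(double, 21))        # 42
print(serve_batch(double, [1, 2, 3]))    # [2, 4, 6]
```

The real systems differ far beyond this sketch (request queues and autoscaling on one side, schedulers and bulk storage on the other), but the call patterns above are the core of the distinction.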
"TensorFlow-Serving: Flexible, High-Performance ML Serving" by @JeremiahHarmsen, @FangweiLi, @sukritiramesh, Christopher Olston, Noah Fiedel, Kiril Gorovoy, Li Lao, Vinu Rajashekhar, Jordan Soyke

outlined the architecture of a serving pipeline for @TensorFlow models
5/⬇️
TensorFlow Serving = the first mainstream model serving architecture in machine learning frameworks

Link: arxiv.org/abs/1712.06139

6/⬇️
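As a hedged sketch of what querying a running TensorFlow Serving instance looks like: its REST predict endpoint accepts a JSON body with an "instances" list, one element per input example. Host, port, model name, and inputs below are placeholders.

```python
import json

def build_predict_request(model_name: str, instances, version=None):
    """Build the URL and JSON body for TensorFlow Serving's REST predict API."""
    version_part = f"/versions/{version}" if version is not None else ""
    url = f"http://localhost:8501/v1/models/{model_name}{version_part}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = build_predict_request("my_model", [[1.0, 2.0, 3.0]], version=2)
print(url)   # http://localhost:8501/v1/models/my_model/versions/2:predict
print(body)  # {"instances": [[1.0, 2.0, 3.0]]}
```

The request would then be sent with e.g. `requests.post(url, data=body)`; the response JSON carries a `predictions` field. Note how the URL itself can pin a model version, which is how TensorFlow Serving exposes the versioning discussed above.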
TheSequence Edge covers:
+ML concept you should learn
+Review of an impactful research paper
+New ML framework or platform and how you can use it
thesequence.substack.com/subscribe
7/7
