It presents Private Aggregation of Teacher Ensembles (PATE), a method for protecting the privacy of sensitive training data
Thread👇🏼 🔎
Imagine that two different models, trained on two different datasets, produce similar outputs
Then their decisions do not reveal information about any single training example
Which is another way of saying they preserve the privacy of the training data
2/⬇️
PATE structures the learning process around an ensemble of teacher models that transfer their knowledge, through a perturbed (noisy) aggregation step, to a student model
3/⬇️
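To make the teacher-to-student flow concrete, here is a minimal toy sketch of the pipeline: the private data is split into disjoint partitions, one teacher is trained per partition, and the teachers' plurality vote labels public data for the student. The `train_teacher` "model" is a stand-in assumption (a trivial majority-label lookup), not PATE's actual classifier, and noise is omitted here for clarity.

```python
from collections import Counter

def train_teacher(examples):
    """Toy 'model': memorize the majority label per feature value.
    Stands in for any real classifier trained on one private partition."""
    by_feature = {}
    for x, y in examples:
        by_feature.setdefault(x, []).append(y)
    return {x: Counter(ys).most_common(1)[0][0] for x, ys in by_feature.items()}

def pate_label(public_x, teachers):
    """Each teacher votes on an unlabeled public example; the plurality
    vote (un-noised in this sketch) becomes the student's training label."""
    votes = [t.get(public_x, 0) for t in teachers]
    return Counter(votes).most_common(1)[0][0]

# Disjoint partitions of the private data, one per teacher.
private = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]
partitions = [private[i::3] for i in range(3)]
teachers = [train_teacher(p) for p in partitions]
label = pate_label(1, teachers)  # the student trains on (1, label), never on private data
```

The student only ever sees teacher votes on public inputs, never the private partitions themselves.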
PATE fundamental components:
+Teacher Models
+Aggregation Mechanisms: combine the teachers' predictions and evaluate whether they reach consensus on an output
+Student Models
4/⬇️
PATE = one of the most influential methods in private machine learning, having been adopted by several frameworks and tools
TheSequence Edge covers:
+ML concept you should learn
+Review of an impactful research paper
+New ML framework or platform and how you can use it thesequence.substack.com/subscribe 6/6
❓AllenNLP:
+includes key building blocks for NLU
+offers state-of-the-art NLU methods
+facilitates the work of researchers thesequence.substack.com/p/-edge22-mach…
2/
AllenNLP is built on top of @PyTorch and designed with experimentation in mind
Key contribution = maintains implementations of new models:
+text generation
+question answering
+sentiment analysis
+& many others
3/