It presents the Private Aggregation of Teacher Ensembles (PATE) method for ensuring the privacy of training data
Thread👇🏼 🔎
Imagine that two different models, trained on two datasets that differ in only a single example, produce similar outputs
Then their decision does not reveal information about that example
And that is another way of saying the training data stays private
2/⬇️
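That intuition has a standard formalization, differential privacy (sketched here in the usual notation, not the paper's): a randomized mechanism $M$ is $(\varepsilon, \delta)$-differentially private if, for any two datasets $D$ and $D'$ differing in one example, and any set of outputs $S$,

$\Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S] + \delta$

The smaller $\varepsilon$ is, the less the output can depend on any individual training example.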
PATE structures the learning process around an ensemble of teacher models, each trained on a disjoint partition of the sensitive data; their votes are aggregated with added noise (the perturbation) and used to label the data a student model is trained on (sketch below)
3/⬇️
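A minimal sketch of that noisy-vote aggregation step, assuming a hypothetical list of already-trained `teachers` (each with a `.predict(x)` method returning a class index); this is an illustration, not the authors' implementation:

```python
import numpy as np

def noisy_aggregate(teachers, x, num_classes, gamma=0.05):
    """Return a privacy-preserving label for input x.

    Each teacher votes for a class; Laplace noise with scale 1/gamma is added
    to the vote counts before taking the argmax, so the output cannot be traced
    back to any single teacher's (and hence any single training example's) vote.
    """
    votes = np.zeros(num_classes)
    for teacher in teachers:
        votes[teacher.predict(x)] += 1
    noisy_votes = votes + np.random.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy_votes))

# The student is then trained only on public inputs paired with these noisy
# labels, never on the sensitive data the teachers saw, e.g.:
# student_labels = [noisy_aggregate(teachers, x, num_classes=10) for x in public_inputs]
```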
❓AllenNLP:
+includes key building blocks for NLU
+offers state-of-the-art NLU methods
+facilitates the work of researchers thesequence.substack.com/p/-edge22-mach…
2/
AllenNLP is built on top of @PyTorch and designed with experimentation in mind
Key contribution = maintains implementations of new models (usage sketch below):
+text generation
+question answering
+sentiment analysis
+& many others
3/
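A hedged example of loading one of those pretrained models through AllenNLP's Predictor API; the archive path is a placeholder, and the exact input/output keys depend on which predictor you load:

```python
from allennlp.predictors.predictor import Predictor

# Load a pretrained model archive (placeholder path, not a real URL).
predictor = Predictor.from_path("path/to/model.tar.gz")

# For a reading-comprehension (question-answering) predictor, the inputs are
# typically a passage and a question; other predictors expect different keys.
result = predictor.predict(
    passage="AllenNLP is an NLP research library built on top of PyTorch.",
    question="What is AllenNLP built on?",
)
print(result)  # a dict whose fields depend on the loaded model
```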