Wow - in 8 tweets I just learned (and un-learned) more about the mysteries of deep neural networks than I probably have in the last two years.
This is the start of something really, really big... and it opens a huge door for federated learning.
This technique really seems to get a foothold on managing the intelligence in an AI model. Imagine training 10,000 small models on 10,000 different topic areas and being able to decide exactly which collection of specialties a model should have.
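Roughly the kind of naive weight-space merge I'm picturing - just a sketch, assuming every specialist shares an architecture and a common base checkpoint (`merge_state_dicts` is my own hypothetical helper, not anything from the thread):

```python
def merge_state_dicts(state_dicts, weights=None):
    """Naively merge models by averaging their parameters.

    Assumes every model has an identical architecture (same keys,
    same shapes) and was fine-tuned from the same base checkpoint --
    the setting where weight-space merging tends to behave best.
    """
    if weights is None:
        # default: plain uniform average of all specialists
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

# e.g. blend three topic specialists into one model:
# model.load_state_dict(merge_state_dicts([sd_math, sd_law, sd_bio]))
```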
Also - the natural next question... can it be reversed? Can I un-merge two models that have been merged?
Example: if I trained a model on half the MNIST digits, could I then use it to remove knowledge of those digits from a full MNIST model?
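One concrete way that reversal might work is what the literature calls "task arithmetic": treat each fine-tune's delta from the base weights as a vector, and subtract the vector for the skill you want gone. A sketch under that assumption (shared base checkpoint; `forget_task` and `alpha` are hypothetical names of mine):

```python
def forget_task(full_sd, base_sd, task_sd, alpha=1.0):
    """Try to 'carve out' a capability by subtracting a task vector.

    task_sd is a model trained only on the data to forget (e.g. half
    the MNIST digits); its delta from the shared base approximates
    the knowledge to remove. alpha scales how hard we subtract.
    A sketch of task-vector negation, not a guaranteed unlearner.
    """
    edited = {}
    for key in full_sd:
        task_vector = task_sd[key] - base_sd[key]
        edited[key] = full_sd[key] - alpha * task_vector
    return edited
```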
So many exciting research ideas here. This is the spark that starts a fire. What an exciting discovery. Well done!
Also - if that unlearning technique works (mentioned above), that's a great boon for privacy! Maybe if you wanted just your data removed from a trained model, this could sort of... carve it out?
Lots of interesting questions - I'll be interested to see where this goes!
Wild idea: maybe AI models really will become a flexible data structure we can use for all sorts of stuff. Imagine if they had the usability and utility of something like JSON... letting you insert and remove concepts at will, like a dictionary. Wow, what a world!
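To make the "dictionary of concepts" idea concrete - a toy, entirely hypothetical interface where each concept is stored as a task vector (a delta from a shared base) you can insert and delete:

```python
class ConceptModel:
    """Toy 'model as a dictionary of concepts' illustration.

    Hypothetical interface: concepts are task vectors (parameter
    deltas from a shared PyTorch base state dict) composed into
    final weights on demand.
    """

    def __init__(self, base_sd):
        self.base = base_sd
        self.concepts = {}  # concept name -> {param name: delta tensor}

    def __setitem__(self, name, specialist_sd):
        # "insert" a concept: store the specialist's delta from the base
        self.concepts[name] = {
            k: specialist_sd[k] - self.base[k] for k in self.base
        }

    def __delitem__(self, name):
        # "remove" a concept, dictionary-style
        del self.concepts[name]

    def state_dict(self):
        # compose base weights plus every installed concept
        sd = {k: v.clone() for k, v in self.base.items()}
        for vec in self.concepts.values():
            for k in sd:
                sd[k] = sd[k] + vec[k]
        return sd

# model["digits_0_4"] = half_mnist_sd   # insert a specialty
# del model["digits_0_4"]               # carve it back out
```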
Thank you to @madhavajay for bringing this to my attention!