🤝Backward Compatibility makes the internal feature representations of different neural networks interchangeable. Stationary Representation keeps the spatial configuration of features fixed throughout training. How does Neural Collapse fit in?
(2/7)
💥Neural Collapse is the phenomenon in which feature representations converge to a simpler, highly symmetric structure during training. This fascinating phenomenon is closely related to both compatibility and stationary representations.
(3/7)
Neural Collapse shows that features and classifier prototypes tend to converge to a symmetric configuration known as a regular simplex (a high-dimensional analogue of a tetrahedron). Stationary Representation fixes the classifier prototypes to a regular simplex from the start, and
(4/7)
Backward Compatibility aligns multiple model representations to a common reference.
(5/7)
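For intuition, here is a minimal NumPy sketch of the geometry in question: it constructs the regular-simplex class prototypes that a fixed classifier would use. The function name and construction are my own illustration, not code from any of the papers mentioned.

```python
import numpy as np

def simplex_prototypes(num_classes: int) -> np.ndarray:
    """Return `num_classes` unit vectors (as rows) forming a regular
    simplex centered at the origin.

    Every pair of prototypes has cosine similarity -1/(num_classes - 1),
    the most spread-out arrangement of equiangular unit vectors.
    """
    C = num_classes
    M = np.eye(C) - np.ones((C, C)) / C   # basis vectors minus their centroid
    M *= np.sqrt(C / (C - 1))             # rescale each row to unit norm
    return M

# Fixed (non-learnable) classifier prototypes for a 4-class problem:
W = simplex_prototypes(4)
```

Training against these frozen prototypes, instead of learnable classifier weights, is what keeps the representation stationary: every model trained this way targets the same geometry.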
The implications are noteworthy: since deep neural networks naturally converge to a regular simplex, starting from this inherent structure could be beneficial. And if multiple models are consistently aligned to the same fixed structure, can that alignment achieve compatibility?
(6/7)