📌 It was introduced in the RetinaNet paper to address the foreground-background class imbalance encountered during training of dense detectors (one-stage detectors)
...
📌 It’s derived from the cross-entropy loss such that it down-weights the loss assigned to well-classified examples. It's used in the classification head.
📌 It’s used in many one-stage object detection models: EfficientDet, FCOS, VFNet, and many others
📌 It can also be used in two-stage object detection models: e.g. Sparse R-CNN
📌 It crushes the loss assigned to easy examples: for a confidence score of 0.9, the focal loss is 100 times smaller than the cross-entropy loss (see the figure above and the code sketch after this list)
📌 Thanks to the focal loss, RetinaNet was the first one-stage detector to beat two-stage detectors
📌 Focal loss can also be used in classification-centered tasks (not only in object detection)
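To make the down-weighting concrete, here is a minimal PyTorch sketch of the binary (sigmoid) focal loss following the RetinaNet formulation; the defaults alpha=0.25 and gamma=2 are the paper's values, and the function name and toy inputs are just for illustration.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # logits:  raw classification scores, shape (N,)
    # targets: binary labels as floats in {0.0, 1.0}, shape (N,)
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t = the model's confidence in the true class
    p_t = p * targets + (1 - p) * (1 - targets)
    # (1 - p_t)^gamma down-weights easy, well-classified examples:
    # at p_t = 0.9 and gamma = 2 the factor is 0.01, i.e. 100x smaller than CE
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# quick check: two already well-classified examples
logits = torch.tensor([2.197, -2.197])   # sigmoid -> ~0.9 and ~0.1
targets = torch.tensor([1.0, 0.0])
print(focal_loss(logits, targets))       # far smaller than the plain CE loss
```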
🤔 Are there other things you would add to this list?
Thanks for passing by!
🟧 Def follow @ai_fast_track for more stuff on Object Detection
🟦 and if you could give the thread a quick retweet, it would help other people catch this content in their feed 🙏
CornerNet Follow-up Paper - CenterNet: Keypoint Triplets for Object Detection
🧵
- CornerNet's focus on detecting object edges leads it to generate incorrect boxes that share similar edges👇
- 🔥CenterNet Solution🔥: It adds a Center Pooling module
- An object's central parts have richer feature maps than its corner regions (which rely on Corner Pooling to compensate for the lack of features). CenterNet correctly detects objects by checking their central parts in addition to their corners (see the minimal sketch below).
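For reference, a simplified PyTorch sketch of the Center Pooling idea (the official CenterNet repo builds it from cascaded corner-pooling ops, so this is only an illustrative approximation with names of my own): each location adds the maximum response along its row to the maximum response along its column, so an object's center can pick up evidence from its full horizontal and vertical extent.

```python
import torch

def center_pooling(fmap: torch.Tensor) -> torch.Tensor:
    # fmap: (B, C, H, W) feature map from the backbone.
    # For each location, take the max along its row (horizontal direction)
    # and the max along its column (vertical direction), then sum them.
    row_max = fmap.max(dim=3, keepdim=True).values  # (B, C, H, 1)
    col_max = fmap.max(dim=2, keepdim=True).values  # (B, C, 1, W)
    return row_max + col_max                        # broadcasts to (B, C, H, W)
```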
Performance Comparison: CenterNet ranks among the top SOTA detectors, on par with the best two-stage detectors.