Note
You are reading the documentation for MMClassification 0.x, which will be switched to a maintenance branch at the end of 2022. We recommend upgrading to MMClassification 1.0 for new features and functionality. See the MMClassification 1.0 installation tutorial, migration tutorial and changelog.
mmcls.models.LabelSmoothLoss
- class mmcls.models.LabelSmoothLoss(label_smooth_val, num_classes=None, mode='original', reduction='mean', loss_weight=1.0)[source]
Initializer for the label smoothed cross entropy loss.
Refers to Rethinking the Inception Architecture for Computer Vision.
Label smoothing decreases the gap between output scores and encourages generalization. Labels provided to forward can be one-hot-like vectors (N x C) or class indices (N x 1). Linear combinations of one-hot-like labels, as produced by mixup or cutmix, are also accepted, except in the multi-label case.
- Parameters
label_smooth_val (float) – The degree of label smoothing.
num_classes (int, optional) – Number of classes. Defaults to None.
mode (str) – See the notes below. Options are ‘original’, ‘classy_vision’ and ‘multi_label’. Defaults to ‘original’.
reduction (str) – The method used to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to ‘mean’.
loss_weight (float) – Weight of the loss. Defaults to 1.0.
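In an MMClassification config file, this loss is usually plugged into a classification head as a dict. The sketch below is illustrative only: the head type, channel count and hyperparameter values are assumptions, not taken from this page.

```python
# Hypothetical config excerpt: attach LabelSmoothLoss to a classification
# head. The values (0.1, 1000, 2048) are illustrative, not prescribed.
loss_cfg = dict(
    type='LabelSmoothLoss',
    label_smooth_val=0.1,   # epsilon: degree of smoothing
    num_classes=1000,       # K: number of classes
    mode='original',        # smoothing rule from the Inception paper
    reduction='mean',
    loss_weight=1.0,
)

head_cfg = dict(
    type='LinearClsHead',   # assumed head type for illustration
    num_classes=1000,
    in_channels=2048,
    loss=loss_cfg,
)
```

The loss is then built by the mmcls registry when the model is constructed from the config.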
Hint
If the mode is “original”, this uses the same label smoothing method as in the original paper:

\[(1-\epsilon)\delta_{k, y} + \frac{\epsilon}{K}\]

where \(\epsilon\) is the label_smooth_val, \(K\) is the num_classes and \(\delta_{k, y}\) is the Dirac delta, which equals 1 for \(k = y\) and 0 otherwise.

If the mode is “classy_vision”, this uses the same label smoothing method as the facebookresearch/ClassyVision repo:

\[\frac{\delta_{k, y} + \epsilon/K}{1+\epsilon}\]

If the mode is “multi_label”, this accepts labels from a multi-label task and smooths them as:

\[(1-2\epsilon)\delta_{k, y} + \epsilon\]
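The three smoothing rules above can be sketched in plain Python. This is a standalone illustration of the formulas, not the mmcls implementation; here `y` is the ground-truth class index.

```python
def smooth_label(y, num_classes, eps, mode='original'):
    """Return the smoothed target distribution for class index ``y``.

    Implements the three rules from the notes:
    - 'original':      (1 - eps) * delta(k, y) + eps / K
    - 'classy_vision': (delta(k, y) + eps / K) / (1 + eps)
    - 'multi_label':   (1 - 2 * eps) * delta(k, y) + eps
    """
    K = num_classes
    out = []
    for k in range(K):
        delta = 1.0 if k == y else 0.0
        if mode == 'original':
            out.append((1 - eps) * delta + eps / K)
        elif mode == 'classy_vision':
            out.append((delta + eps / K) / (1 + eps))
        elif mode == 'multi_label':
            out.append((1 - 2 * eps) * delta + eps)
        else:
            raise ValueError(f'unknown mode: {mode}')
    return out


# With mode='original' the smoothed target stays a probability distribution:
# e.g. y=1, K=4, eps=0.1 gives 0.925 for the true class and 0.025 elsewhere.
target = smooth_label(y=1, num_classes=4, eps=0.1, mode='original')
```

Note that the “original” rule keeps the target summing to 1, while the “multi_label” rule smooths each entry independently, matching its use with per-class binary targets.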