# mmcls.models.LabelSmoothLoss

class mmcls.models.LabelSmoothLoss(label_smooth_val, num_classes=None, mode='original', reduction='mean', loss_weight=1.0)[source]

Initializer for the label-smoothed cross-entropy loss.

Label smoothing decreases the gap between output scores and encourages generalization. Labels provided to forward can be one-hot-like vectors (NxC) or class indices (Nx1). Linear combinations of one-hot-like labels, e.g. from mixup or cutmix, are also accepted, except in the multi-label task.

• label_smooth_val (float) – The degree of label smoothing.

• num_classes (int, optional) – Number of classes. Defaults to None.

• mode (str) – See the notes below. Options are ‘original’, ‘classy_vision’ and ‘multi_label’. Defaults to ‘original’.

• reduction (str) – The method used to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to ‘mean’.

• loss_weight (float) – Weight of the loss. Defaults to 1.0.

If the mode is “original”, the labels are smoothed with the same method as the original paper:

$(1-\epsilon)\delta_{k, y} + \frac{\epsilon}{K}$

where $\epsilon$ is the label_smooth_val, $K$ is the num_classes and $\delta_{k, y}$ is the Dirac delta function, which equals 1 for $k = y$ and 0 otherwise.
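As a minimal sketch (not the mmcls implementation), the “original” smoothing of a one-hot label for class index `y` can be written directly from the formula above:

```python
def smooth_original(y, num_classes, eps):
    """Return (1 - eps) * one_hot(y) + eps / K for class index y."""
    return [(1 - eps) * (1.0 if k == y else 0.0) + eps / num_classes
            for k in range(num_classes)]

# e.g. y=1, K=4, eps=0.1 gives [0.025, 0.925, 0.025, 0.025]
```

Note the smoothed vector still sums to 1, since $(1-\epsilon) + K \cdot \epsilon/K = 1$.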

If the mode is “classy_vision”, the labels are smoothed with the same method as the facebookresearch/ClassyVision repository:

$\frac{\delta_{k, y} + \epsilon/K}{1+\epsilon}$
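A corresponding sketch of the “classy_vision” variant (again illustrative, not the mmcls code) adds $\epsilon/K$ before renormalizing by $1+\epsilon$:

```python
def smooth_classy_vision(y, num_classes, eps):
    """Return (one_hot(y) + eps / K) / (1 + eps) for class index y."""
    return [((1.0 if k == y else 0.0) + eps / num_classes) / (1 + eps)
            for k in range(num_classes)]
```

The division by $1+\epsilon$ keeps the vector a valid distribution: the entries sum to $(1 + K \cdot \epsilon/K)/(1+\epsilon) = 1$.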

If the mode is “multi_label”, labels from a multi-label task are accepted and smoothed as:

$(1-2\epsilon)\delta_{k, y} + \epsilon$
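The “multi_label” formula applies elementwise to a (possibly multi-hot) label vector, mapping each 1 to $1-\epsilon$ and each 0 to $\epsilon$. A hedged sketch under that reading:

```python
def smooth_multi_label(labels, eps):
    """Apply (1 - 2*eps) * label_k + eps to each entry of a multi-hot vector."""
    return [(1 - 2 * eps) * l for l in labels] and \
           [(1 - 2 * eps) * l + eps for l in labels]

# e.g. labels=[1, 0, 1, 0], eps=0.1 gives [0.9, 0.1, 0.9, 0.1]
```

Unlike the single-label modes, the result is not renormalized, since multi-label targets need not sum to 1.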