ClsHead
- class mmpretrain.models.heads.ClsHead(loss={'loss_weight': 1.0, 'type': 'CrossEntropyLoss'}, topk=(1,), cal_acc=False, init_cfg=None)[source]
Classification head.
- Parameters:
loss (dict) – Config of classification loss. Defaults to dict(type='CrossEntropyLoss', loss_weight=1.0).
topk (int | Tuple[int]) – Top-k accuracy. Defaults to (1, ).
cal_acc (bool) – Whether to calculate accuracy during training. If you use batch augmentations like Mixup and CutMix during training, calculating accuracy is pointless. Defaults to False.
init_cfg (dict, optional) – The config to control the initialization. Defaults to None.
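In mmpretrain, heads are usually declared as config dicts inside a model config rather than instantiated directly. A minimal sketch of such a fragment, using the documented defaults plus an illustrative topk of (1, 5) (the surrounding model keys are assumptions, not from this page):

```python
# Head fragment of a model config. type='ClsHead' and the loss/topk/
# cal_acc keys mirror the constructor arguments documented above.
head = dict(
    type='ClsHead',
    loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
    topk=(1, 5),       # report both top-1 and top-5 accuracy
    cal_acc=False,     # skip train-time accuracy (e.g. when using Mixup)
)
```

The builder resolves type='ClsHead' to this class and passes the remaining keys as keyword arguments.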
- loss(feats, data_samples, **kwargs)[source]
Calculate losses from the classification score.
- Parameters:
feats (tuple[Tensor]) – The features extracted from the backbone. Multiple stage inputs are acceptable, but only the last stage will be used to classify. The shape of every item should be (num_samples, num_classes).
data_samples (List[DataSample]) – The annotation data of every sample.
**kwargs – Other keyword arguments to forward the loss module.
- Returns:
A dictionary of loss components.
- Return type:
dict
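The head delegates the actual computation to its configured loss module (CrossEntropyLoss by default) and packs the result into a dict. A pure-Python sketch of that default computation, greatly simplified (no torch tensors, no accuracy entries):

```python
import math

def cross_entropy_loss(scores, labels, loss_weight=1.0):
    """Simplified stand-in for what ClsHead's default CrossEntropyLoss
    module computes: mean softmax cross-entropy over the batch,
    scaled by loss_weight, returned as a dict of loss components."""
    total = 0.0
    for row, label in zip(scores, labels):
        m = max(row)  # subtract the max for numerical stability
        log_sum_exp = m + math.log(sum(math.exp(s - m) for s in row))
        total += log_sum_exp - row[label]
    return {'loss': loss_weight * total / len(scores)}
```

The real method additionally adds accuracy entries (e.g. accuracy_top-1) to this dict when cal_acc=True.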
- pre_logits(feats)[source]
The process before the final classification head.
The input feats is a tuple of tensors, and each tensor is the feature of a backbone stage. In ClsHead, we just obtain the feature of the last stage.
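Stripped of tensor machinery, the selection logic amounts to taking the last element of the tuple. A one-line sketch (not the actual implementation, which operates on torch tensors):

```python
def pre_logits(feats):
    """ClsHead keeps only the last backbone stage's feature."""
    return feats[-1]
```

Subclasses with extra layers (e.g. a neck or projection before the classifier) override this hook.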
- predict(feats, data_samples=None)[source]
Inference without augmentation.
- Parameters:
feats (tuple[Tensor]) – The features extracted from the backbone. Multiple stage inputs are acceptable, but only the last stage will be used to classify. The shape of every item should be (num_samples, num_classes).
data_samples (List[DataSample | None], optional) – The annotation data of every sample. If not None, set pred_label of the input data samples. Defaults to None.
- Returns:
A list of data samples which contains the predicted results.
- Return type:
List[DataSample]
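Conceptually, predict turns the last-stage scores into per-sample softmax probabilities and an argmax label. A pure-Python sketch of that step, with plain dicts standing in for DataSample objects (an assumption for illustration; the real method mutates or creates DataSample instances and works on torch tensors):

```python
import math

def predict_labels(scores):
    """Simplified sketch of ClsHead.predict's post-processing:
    softmax over each row of classification scores, then the
    argmax as the predicted label."""
    results = []
    for row in scores:
        m = max(row)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        results.append({
            'pred_score': [e / total for e in exps],
            'pred_label': max(range(len(row)), key=row.__getitem__),
        })
    return results
```

Because the scores only pass through a softmax and an argmax, no test-time augmentation is involved, matching the "inference without augmentation" description above.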