
BaseClassifier

class mmcls.models.BaseClassifier(init_cfg=None)[source]

Base class for classifiers.

forward(img, return_loss=True, **kwargs)[source]

Calls either forward_train() or forward_test(), depending on whether return_loss is True.

Note that this setting changes the expected inputs. When return_loss=True, img and img_meta are single-nested (i.e. Tensor and List[dict]); when return_loss=False, img and img_meta should be double-nested (i.e. List[Tensor] and List[List[dict]]), with the outer list indicating test-time augmentations.
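The two nesting conventions can be illustrated with dummy inputs (a sketch only; the shapes and meta dicts below are placeholders, not real pipeline output):

```python
import torch

# Training path (return_loss=True): single-nested inputs.
img = torch.randn(4, 3, 224, 224)            # Tensor of shape NxCxHxW
img_metas = [{} for _ in range(4)]           # List[dict], one dict per image

# Testing path (return_loss=False): double-nested inputs,
# with one outer entry per test-time augmentation.
imgs = [torch.randn(4, 3, 224, 224) for _ in range(3)]    # List[Tensor]
test_metas = [[{} for _ in range(4)] for _ in range(3)]   # List[List[dict]]
```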

forward_test(imgs, **kwargs)[source]
Parameters

imgs (List[Tensor]) – The outer list indicates test-time augmentations, and each inner Tensor should have shape NxCxHxW, containing all images in the batch.

abstract forward_train(imgs, **kwargs)[source]
Parameters
  • imgs (list[Tensor]) – List of tensors of shape (1, C, H, W). Typically these should be mean centered and std scaled.

  • kwargs (keyword arguments) – Specific to concrete implementation.
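A minimal sketch of a concrete implementation. To keep the snippet self-contained it subclasses torch.nn.Module rather than mmcls.models.BaseClassifier, and the TinyClassifier model, its layers, and the gt_label keyword are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Hypothetical concrete classifier; a real one would subclass
    mmcls.models.BaseClassifier and register itself with the framework."""

    def __init__(self, num_classes=10):
        super().__init__()
        # Tiny backbone: one conv, global pooling, then a linear head.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8, num_classes)

    def forward_train(self, imgs, gt_label, **kwargs):
        # imgs: Tensor of shape (N, C, H, W), mean centered and std scaled.
        logits = self.head(self.backbone(imgs))
        # Concrete implementations return a dict of named losses.
        return {'loss': F.cross_entropy(logits, gt_label)}

clf = TinyClassifier()
losses = clf.forward_train(torch.randn(4, 3, 32, 32),
                           torch.randint(0, 10, (4,)))
```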

show_result(img, result, text_color='white', font_scale=0.5, row_width=20, show=False, fig_size=(15, 10), win_name='', wait_time=0, out_file=None)[source]

Draw result over img.

Parameters
  • img (str or ndarray) – The image to be displayed.

  • result (dict) – The classification results to draw over img.

  • text_color (str or tuple or Color) – Color of texts.

  • font_scale (float) – Font scales of texts.

  • row_width (int) – Width between each row of results on the image.

  • show (bool) – Whether to show the image. Default: False.

  • fig_size (tuple) – Image show figure size. Defaults to (15, 10).

  • win_name (str) – The window name.

  • wait_time (int) – How many seconds to display the image. Defaults to 0.

  • out_file (str or None) – The filename to write the image. Default: None.

Returns

Image with overlaid results.

Return type

img (ndarray)
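A usage sketch for show_result. The result-dict keys below ('pred_label', 'pred_score', 'pred_class') are assumptions about what mmcls inference utilities produce, and the call itself is shown commented out since it requires an instantiated model:

```python
import numpy as np

# Placeholder image and a hypothetical classification result dict.
img = np.zeros((224, 224, 3), dtype=np.uint8)
result = {'pred_label': 3, 'pred_score': 0.87, 'pred_class': 'cat'}

# With a built model, the overlay could be saved without opening a window:
# vis = model.show_result(img, result, show=False, out_file='vis.jpg')
```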

train_step(data, optimizer=None, **kwargs)[source]

The iteration step during training.

This method defines an iteration step during training, except for back propagation and the optimizer update, which are done in an optimizer hook. Note that in some complicated cases or models, the whole process, including back propagation and the optimizer update, is also defined in this method, e.g. for GANs.

Parameters
  • data (dict) – The output of dataloader.

  • optimizer (torch.optim.Optimizer | dict, optional) – The optimizer of the runner, passed to train_step(). This argument is unused and reserved.

Returns

Dict of outputs containing the following fields.
  • loss (torch.Tensor): A tensor for back propagation, which can be a weighted sum of multiple losses.

  • log_vars (dict): A dict containing all the variables to be sent to the logger.

  • num_samples (int): Indicates the batch size (when the model is DDP, it means the batch size on each GPU), which is used for averaging the logs.

Return type

dict
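The num_samples field exists so that logged values can be averaged correctly across workers. The sketch below shows how a runner might aggregate log_vars from two DDP workers, weighting by num_samples; the output dicts are fabricated examples, not actual mmcls runner code:

```python
# Hypothetical train_step() outputs from two DDP workers.
outputs = [
    {'loss': 0.50, 'log_vars': {'loss': 0.50, 'top-1': 0.75}, 'num_samples': 4},
    {'loss': 0.30, 'log_vars': {'loss': 0.30, 'top-1': 0.85}, 'num_samples': 4},
]

# Weighted average of every logged variable by per-worker batch size.
total = sum(o['num_samples'] for o in outputs)
avg = {k: sum(o['log_vars'][k] * o['num_samples'] for o in outputs) / total
       for k in outputs[0]['log_vars']}
# avg == {'loss': 0.4, 'top-1': 0.8}
```

Weighting by num_samples rather than taking a plain mean keeps the averages correct even when workers receive batches of different sizes.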

val_step(data, optimizer=None, **kwargs)[source]

The iteration step during validation.

This method shares the same signature as train_step(), but is used during validation epochs. Note that the evaluation after training epochs is not implemented with this method, but with an evaluation hook.

Parameters
  • data (dict) – The output of dataloader.

  • optimizer (torch.optim.Optimizer | dict, optional) – The optimizer of the runner, passed to val_step(). This argument is unused and reserved.

Returns

Dict of outputs containing the following fields.
  • loss (torch.Tensor): A tensor for back propagation, which can be a weighted sum of multiple losses.

  • log_vars (dict): A dict containing all the variables to be sent to the logger.

  • num_samples (int): Indicates the batch size (when the model is DDP, it means the batch size on each GPU), which is used for averaging the logs.

Return type

dict
