You are reading the documentation for MMClassification 0.x, which will be deprecated at the end of 2022. We recommend you upgrade to MMClassification 1.0 to enjoy the fruitful new features and better performance brought by OpenMMLab 2.0. Check the installation tutorial, migration tutorial and changelog for more details.


class mmcls.models.CSPNet(arch, stem_fn, in_channels=3, out_indices=- 1, frozen_stages=- 1, drop_path_rate=0.0, conv_cfg=None, norm_cfg={'eps': 1e-05, 'type': 'BN'}, act_cfg={'inplace': True, 'type': 'LeakyReLU'}, norm_eval=False, init_cfg={'layer': 'Conv2d', 'type': 'Kaiming'})[source]

The abstract CSP Network class.

A PyTorch implementation of CSPNet: A New Backbone that can Enhance Learning Capability of CNN

This class is an abstract class because the Cross Stage Partial Network (CSPNet) is a kind of universal network structure rather than a concrete model. By supplying your own stem function and network block, you can implement networks like CSPResNet, CSPResNeXt and CSPDarkNet.

Parameters

  • arch (dict) –

    The architecture of the CSPNet. It should have the following keys:

    • block_fn (Callable): A function or class to return a block module, and it should accept at least in_channels, out_channels, expansion, drop_path_rate, norm_cfg and act_cfg.

    • in_channels (Tuple[int]): The number of input channels of each stage.

    • out_channels (Tuple[int]): The number of output channels of each stage.

    • num_blocks (Tuple[int]): The number of blocks in each stage.

    • expansion_ratio (float | Tuple[float]): The expansion ratio in the expand convolution of each stage. Defaults to 0.5.

    • bottle_ratio (float | Tuple[float]): The expansion ratio of blocks in each stage. Defaults to 2.

    • has_downsampler (bool | Tuple[bool]): Whether to add a downsample convolution in each stage. Defaults to True.

    • down_growth (bool | Tuple[bool]): Whether to expand the channels in the downsampler layer of each stage. Defaults to False.

    • block_args (dict | Tuple[dict], optional): The extra arguments to the blocks in each stage. Defaults to None.
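
To make the per-stage keys above concrete, here is a hedged sketch of a fuller arch dict. The channel and block counts are made up for illustration and do not correspond to any released CSPNet config, and make_block is a placeholder standing in for a real block class such as mmcls.models.backbones.resnet.Bottleneck. Scalar values (like expansion_ratio here) are shared across stages, while tuples must have one entry per stage:

```python
def check_arch(arch, num_stages):
    """Check that every per-stage tuple in ``arch`` covers all stages."""
    per_stage_keys = ['in_channels', 'out_channels', 'num_blocks',
                      'expansion_ratio', 'bottle_ratio', 'has_downsampler',
                      'down_growth']
    for key in per_stage_keys:
        value = arch.get(key)
        # Scalars are broadcast to every stage; only sequences need checking.
        if isinstance(value, (list, tuple)):
            assert len(value) == num_stages, (
                f'{key} has {len(value)} entries, expected {num_stages}')
    return True


def make_block(**kwargs):
    # Placeholder for a real block class (e.g. Bottleneck), which should
    # accept at least in_channels, out_channels, expansion, drop_path_rate,
    # norm_cfg and act_cfg.
    return kwargs


# Illustrative three-stage architecture (values invented for this example).
arch = dict(
    block_fn=make_block,
    in_channels=(32, 64, 128),       # one entry per stage
    out_channels=(64, 128, 256),
    num_blocks=(1, 2, 8),
    expansion_ratio=0.5,             # scalar: shared by all stages
    bottle_ratio=(2, 2, 2),          # tuple: one value per stage
    has_downsampler=(False, True, True),
    down_growth=False,
)

check_arch(arch, num_stages=3)
```

A helper like check_arch is not part of the mmcls API; it only makes the "one entry per stage" convention explicit.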

  • stem_fn (Callable) – A function or class to return a stem module. And it should accept in_channels.

  • in_channels (int) – Number of input image channels. Defaults to 3.

  • out_indices (int | Sequence[int]) – Output from which stages. Defaults to -1, which means the last stage.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Defaults to -1.

  • drop_path_rate (float) – Stochastic depth rate. Defaults to 0.

  • conv_cfg (dict, optional) – The config dict for conv layers in blocks. Defaults to None, which means use Conv2d.

  • norm_cfg (dict) – The config dict for norm layers. Defaults to dict(type='BN', eps=1e-5).

  • act_cfg (dict) – The config dict for activation functions. Defaults to dict(type='LeakyReLU', inplace=True).

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Defaults to False.

  • init_cfg (dict, optional) – The initialization settings. Defaults to dict(type='Kaiming', layer='Conv2d').


>>> from functools import partial
>>> import torch
>>> import torch.nn as nn
>>> from mmcls.models import CSPNet
>>> from mmcls.models.backbones.resnet import Bottleneck
>>> # A simple example to build CSPNet.
>>> arch = dict(
...     block_fn=Bottleneck,
...     in_channels=[32, 64],
...     out_channels=[64, 128],
...     num_blocks=[3, 4]
... )
>>> stem_fn = partial(nn.Conv2d, out_channels=32, kernel_size=3)
>>> model = CSPNet(arch=arch, stem_fn=stem_fn, out_indices=(0, 1))
>>> inputs = torch.rand(1, 3, 224, 224)
>>> outs = model(inputs)
>>> for out in outs:
...     print(out.shape)
(1, 64, 111, 111)
(1, 128, 56, 56)

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.


Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
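
The note above is the standard torch.nn.Module contract. As a minimal pure-Python sketch (not PyTorch's actual implementation) of why model(x) differs from model.forward(x):

```python
# Illustration of the "__call__ runs hooks, forward() skips them" contract.
class MiniModule:
    def __init__(self):
        self._forward_hooks = []

    def register_forward_hook(self, hook):
        self._forward_hooks.append(hook)

    def forward(self, x):
        return x * 2

    def __call__(self, x):
        out = self.forward(x)
        for hook in self._forward_hooks:
            hook(self, x, out)  # hooks see the module, its input and output
        return out


calls = []
m = MiniModule()
m.register_forward_hook(lambda mod, inp, out: calls.append(out))

m(3)           # calling the instance runs the hook
m.forward(3)   # calling forward() directly silently skips it

assert calls == [6]  # only the __call__ invocation triggered the hook
```

This is why examples such as the one above write outs = model(inputs) rather than model.forward(inputs).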


train(mode=True)[source]

Sets the module in training mode.

This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.


Parameters

  • mode (bool) – Whether to set training mode (True) or evaluation mode (False). Defaults to True.



Return type

Module
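As a rough sketch of the semantics described above (simplified for illustration, not the actual mmcls or PyTorch code): train(mode) sets the training flag on the module and all of its children, and a backbone with norm_eval=True additionally puts its norm layers back into eval mode so their running stats stay frozen during training. The Mini* classes below are hypothetical stand-ins:

```python
# Simplified sketch of train()/norm_eval behavior; not the real implementation.
class MiniModule:
    def __init__(self):
        self.training = True
        self._children = []

    def add_child(self, child):
        self._children.append(child)
        return child

    def train(self, mode=True):
        # Propagate the training flag to this module and all children.
        self.training = mode
        for child in self._children:
            child.train(mode)
        return self

    def eval(self):
        return self.train(False)


class MiniBatchNorm(MiniModule):
    pass


class MiniBackbone(MiniModule):
    def __init__(self, norm_eval=False):
        super().__init__()
        self.norm_eval = norm_eval
        self.bn = self.add_child(MiniBatchNorm())

    def train(self, mode=True):
        super().train(mode)
        if mode and self.norm_eval:
            # Keep norm layers in eval mode (frozen running stats)
            # even while the rest of the model trains.
            self.bn.eval()
        return self


net = MiniBackbone(norm_eval=True)
net.train()
assert net.training is True
assert net.bn.training is False  # norm layer stays in eval mode
```

With norm_eval=False the norm layer would follow the parent's mode as usual.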