

ConvNeXt

class mmcls.models.ConvNeXt(arch='tiny', in_channels=3, stem_patch_size=4, norm_cfg={'eps': 1e-06, 'type': 'LN2d'}, act_cfg={'type': 'GELU'}, linear_pw_conv=True, drop_path_rate=0.0, layer_scale_init_value=1e-06, out_indices=-1, frozen_stages=0, gap_before_final_norm=True, with_cp=False, init_cfg=None)[source]

ConvNeXt.

A PyTorch implementation of A ConvNet for the 2020s.

Modified from the official repo and timm.

Parameters
  • arch (str | dict) –

    The model’s architecture. If a string, it should be one of the architectures defined in ConvNeXt.arch_settings. If a dict, it should include the following two keys:

    • depths (list[int]): Number of blocks at each stage.

    • channels (list[int]): The number of channels at each stage.

    Defaults to ‘tiny’. See the construction example after this parameter list.

  • in_channels (int) – Number of input image channels. Defaults to 3.

  • stem_patch_size (int) – The size of one patch in the stem layer. Defaults to 4.

  • norm_cfg (dict) – The config dict for norm layers. Defaults to dict(type='LN2d', eps=1e-6).

  • act_cfg (dict) – The config dict for the activation between pointwise convolutions. Defaults to dict(type='GELU').

  • linear_pw_conv (bool) – Whether to use a linear layer to perform the pointwise convolution. Defaults to True.

  • drop_path_rate (float) – Stochastic depth rate. Defaults to 0.

  • layer_scale_init_value (float) – Init value for Layer Scale. Defaults to 1e-6.

  • out_indices (Sequence | int) – Output from which stages. Defaults to -1, which means the last stage.

  • frozen_stages (int) – Stages to be frozen (all parameters fixed). Defaults to 0, which means no parameters are frozen.

  • gap_before_final_norm (bool) – Whether to apply global average pooling to the feature map before the final norm layer. In the official repo, this is only used for the classification task. Defaults to True.

  • with_cp (bool) – Whether to use gradient checkpointing. Checkpointing saves some memory at the cost of slower training. Defaults to False.

  • init_cfg (dict, optional) – Initialization config dict. Defaults to None.
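
Example (a minimal construction sketch, assuming mmcls 0.x is installed; the dict form of arch below mirrors the built-in ‘tiny’ preset, and the out_indices and gap_before_final_norm values are chosen only for illustration):

    from mmcls.models import ConvNeXt

    # Build ConvNeXt-Tiny from its preset name.
    backbone = ConvNeXt(arch='tiny')

    # Equivalent dict form of the same architecture, requesting feature maps
    # from all four stages instead of only the last one.
    backbone = ConvNeXt(
        arch=dict(depths=[3, 3, 9, 3], channels=[96, 192, 384, 768]),
        out_indices=(0, 1, 2, 3),
        gap_before_final_norm=False,  # keep spatial maps for every output
    )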

forward(x)[source]

Forward computation.

Parameters

x (torch.Tensor | tuple[torch.Tensor]) – The input data for forward computation, either a single tensor or a tuple of tensors.
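
Example (a forward-pass sketch under the same assumptions as above; the shapes in the final comment are what a 224×224 input would produce with these illustrative settings):

    import torch
    from mmcls.models import ConvNeXt

    backbone = ConvNeXt(arch='tiny', out_indices=(0, 1, 2, 3),
                        gap_before_final_norm=False)
    backbone.eval()

    inputs = torch.rand(1, 3, 224, 224)   # dummy image batch
    with torch.no_grad():
        outs = backbone(inputs)           # tuple with one tensor per requested stage

    for feat in outs:
        print(feat.shape)
    # Roughly: (1, 96, 56, 56), (1, 192, 28, 28), (1, 384, 14, 14), (1, 768, 7, 7)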

train(mode=True)[source]

Set module status before forward computation.

Parameters

mode (bool) – Whether to set training mode (True) or evaluation mode (False). Defaults to True.
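
Example (a small usage sketch; the frozen_stages value is arbitrary and only illustrates the train/eval toggle):

    from mmcls.models import ConvNeXt

    backbone = ConvNeXt(arch='tiny', frozen_stages=1)

    backbone.train()   # training mode; frozen stages are expected to stay fixed
    backbone.eval()    # equivalent to backbone.train(mode=False)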
