MlpMixer
- class mmcls.models.MlpMixer(arch='base', img_size=224, patch_size=16, out_indices=-1, drop_rate=0.0, drop_path_rate=0.0, norm_cfg={'type': 'LN'}, act_cfg={'type': 'GELU'}, patch_cfg={}, layer_cfgs={}, init_cfg=None)
Mlp-Mixer backbone.
PyTorch implementation of MLP-Mixer: An all-MLP Architecture for Vision
- Parameters
arch (str | dict) – MLP Mixer architecture. If a string, choose from ‘small’, ‘base’ and ‘large’. If a dict, it should have the following keys:
embed_dims (int): The dimensions of embedding.
num_layers (int): The number of MLP blocks.
tokens_mlp_dims (int): The hidden dimensions for tokens FFNs.
channels_mlp_dims (int): The hidden dimensions for channels FFNs.
Defaults to ‘base’.
img_size (int | tuple) – The input image shape. Defaults to 224.
patch_size (int | tuple) – The patch size in patch embedding. Defaults to 16.
out_indices (Sequence | int) – Output from which layer. Defaults to -1, meaning the last layer.
drop_rate (float) – Probability of an element being zeroed. Defaults to 0.
drop_path_rate (float) – Stochastic depth rate. Defaults to 0.
norm_cfg (dict) – Config dict for the normalization layer. Defaults to dict(type='LN').
act_cfg (dict) – The activation config for FFNs. Defaults to dict(type='GELU').
patch_cfg (dict) – Configs of patch embedding. Defaults to an empty dict.
layer_cfgs (Sequence | dict) – Configs of each mixer block layer. Defaults to an empty dict.
init_cfg (dict, optional) – Initialization config dict. Defaults to None.
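Below is a minimal usage sketch, assuming MMClassification 0.x and PyTorch are installed. It builds the backbone with a predefined architecture string and, alternatively, with a custom architecture dict using the keys listed above; the custom dimension values are illustrative, not an official configuration.

```python
import torch
from mmcls.models import MlpMixer

# Build the backbone with one of the predefined architectures ('small', 'base', 'large').
model = MlpMixer(arch='base', img_size=224, patch_size=16)
model.eval()

# Forward a dummy image batch; the backbone returns a tuple of feature tensors,
# one per entry in out_indices (here only the last layer).
inputs = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    outputs = model(inputs)
print(len(outputs), outputs[-1].shape)

# Alternatively, pass a custom architecture dict with the keys documented above.
# These particular dimensions are made-up example values.
custom_arch = dict(
    embed_dims=256,
    num_layers=8,
    tokens_mlp_dims=128,
    channels_mlp_dims=1024,
)
custom_model = MlpMixer(arch=custom_arch, img_size=224, patch_size=16)
```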