Note
You are reading the documentation for MMClassification 0.x, which will be deprecated at the end of 2022. We recommend you upgrade to MMClassification 1.0 to enjoy the fruitful new features and better performance brought by OpenMMLab 2.0. Check the installation tutorial, migration tutorial and changelog for more details.
MMClsWandbHook
- class mmcls.core.MMClsWandbHook(init_kwargs=None, interval=10, log_checkpoint=False, log_checkpoint_metadata=False, num_eval_images=100, **kwargs)
Enhanced Wandb logger hook for classification.
Compared with :class:`mmcv.runner.WandbLoggerHook`, this hook not only automatically logs all the information in log_buffer but also logs the following extra information:

- Checkpoints: If log_checkpoint is True, the checkpoint saved at every checkpoint interval will be saved as W&B Artifacts. This depends on the :class:`mmcv.runner.CheckpointHook`, whose priority is higher than this hook's. Please refer to https://docs.wandb.ai/guides/artifacts/model-versioning to learn more about model versioning with W&B Artifacts.
- Checkpoint Metadata: If log_checkpoint_metadata is True, every checkpoint artifact will have metadata associated with it. The metadata contains the evaluation metrics computed on the validation data with that checkpoint, along with the current epoch/iter. This depends on the EvalHook, whose priority is higher than this hook's.
- Evaluation: At every evaluation interval, this hook logs the model predictions as interactive W&B Tables. The number of samples logged is given by num_eval_images. Currently, this hook logs the predicted labels along with the ground truth at every evaluation interval. This depends on the EvalHook, whose priority is higher than this hook's. Also note that the data is logged only once, and subsequent evaluation tables use references to the logged data to save memory. Please refer to https://docs.wandb.ai/guides/data-vis to learn more about W&B Tables.
Here is a config example:
checkpoint_config = dict(interval=10)

# To log checkpoint metadata, the interval of checkpoint saving should
# be divisible by the interval of evaluation.
evaluation = dict(interval=5)

log_config = dict(
    ...,
    hooks=[
        ...,
        dict(type='MMClsWandbHook',
             init_kwargs={
                 'entity': "YOUR_ENTITY",
                 'project': "YOUR_PROJECT_NAME"
             },
             log_checkpoint=True,
             log_checkpoint_metadata=True,
             num_eval_images=100)
    ])
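The comment in the example encodes a constraint that is easy to verify before launching a run: checkpoint metadata can only be attached if an evaluation has actually been computed at the checkpointing step. A minimal sketch of that check (the variable names here are illustrative, not part of the hook's API):

```python
# Intervals taken from the example config above.
ckpt_interval = 10  # checkpoint_config interval
eval_interval = 5   # evaluation interval

# Checkpoint metadata is the evaluation result at that step, so the
# checkpoint interval must be divisible by the evaluation interval.
assert ckpt_interval % eval_interval == 0, (
    'To log checkpoint metadata, the checkpoint interval must be '
    'divisible by the evaluation interval.')
```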
- Parameters
init_kwargs (dict) – A dict passed to wandb.init to initialize a W&B run. Please refer to https://docs.wandb.ai/ref/python/init for possible key-value pairs.
interval (int) – Logging interval (every k iterations). Defaults to 10.
log_checkpoint (bool) – Save the checkpoint at every checkpoint interval as W&B Artifacts. Use this for model versioning where each version is a checkpoint. Defaults to False.
log_checkpoint_metadata (bool) – Log the evaluation metrics computed on the validation data, along with the current epoch, as metadata for that checkpoint. Defaults to False.
num_eval_images (int) – The number of validation images to be logged. If zero, the evaluation won’t be logged. Defaults to 100.
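These parameters can be combined independently. For instance, a hypothetical variant of the example config above (same structure, illustrative values) versions checkpoints as W&B Artifacts while skipping both checkpoint metadata and the evaluation tables:

```python
# Hypothetical config variant: log checkpoints as W&B Artifacts only.
log_config = dict(
    interval=10,
    hooks=[
        dict(type='MMClsWandbHook',
             init_kwargs={'project': "YOUR_PROJECT_NAME"},
             log_checkpoint=True,
             log_checkpoint_metadata=False,
             num_eval_images=0)  # zero disables evaluation-table logging
    ])
```

This is useful when validation images are large or sensitive and only model versioning is wanted.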