mmpretrain.evaluation.RetrievalAveragePrecision

class mmpretrain.evaluation.RetrievalAveragePrecision(topk=None, mode='IR', collect_device='cpu', prefix=None)[source]

Calculate the average precision for image retrieval.

Parameters:
  • topk (int, optional) – Predictions with the top-k highest scores are considered positive. Defaults to None.

  • mode (str, optional) – The mode used to calculate AP; choose from ‘IR’ (information retrieval) and ‘integrate’. Defaults to ‘IR’.

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

Note

If the mode is set to ‘IR’, the standard AP calculation from information retrieval is used, as described on the Wikipedia page [1]; if set to ‘integrate’, the method integrates over the precision-recall curve by averaging each pair of adjacent precision points and multiplying by the recall step, like mAP in detection tasks. The latter is the convention for the Revisited Oxford/Paris datasets [2].
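The difference between the two modes can be sketched with plain NumPy on a single binary relevance list. This is an illustrative, hypothetical helper (`average_precision` is not part of the library API), assuming the ‘integrate’ convention averages the precision just before and just after each relevant rank, weighted by the recall step:

```python
import numpy as np

def average_precision(rel, mode="IR"):
    """Sketch of AP on a ranked binary relevance list, e.g. [1, 0, 1]."""
    rel = np.asarray(rel, dtype=float)
    ranks = np.arange(1, len(rel) + 1)
    cum_hits = np.cumsum(rel)
    precision = cum_hits / ranks          # precision@k at every rank k
    n_rel = rel.sum()
    if mode == "IR":
        # Standard IR AP: mean of precision@k over the relevant ranks.
        return float((precision * rel).sum() / n_rel)
    # 'integrate': trapezoidal rule over the precision-recall curve,
    # averaging adjacent precision points and weighting by the recall step
    # (Revisited Oxford/Paris convention; precision before rank 1 is 1.0).
    recall = cum_hits / n_rel
    prev_p = np.concatenate(([1.0], precision[:-1]))
    prev_r = np.concatenate(([0.0], recall[:-1]))
    return float((((precision + prev_p) / 2) * (recall - prev_r)).sum())

print(average_precision([1, 0, 1], mode="IR"))         # (1 + 2/3) / 2 ≈ 0.8333
print(average_precision([1, 0, 1], mode="integrate"))  # ≈ 0.7917
```

With the list [1, 0, 1], ‘IR’ averages precision at the two relevant ranks, while ‘integrate’ yields a slightly lower value because it interpolates precision across each recall step.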

References

[1] Wikipedia entry for the Average precision

[2] The Oxford Buildings Dataset

Examples

Use in code:

>>> import torch
>>> import numpy as np
>>> from mmpretrain.evaluation import RetrievalAveragePrecision
>>> # using index format inputs
>>> pred = [ torch.Tensor([idx for idx in range(100)]) ] * 3
>>> target = [[0, 3, 6, 8, 35], [1, 2, 54, 105], [2, 42, 205]]
>>> RetrievalAveragePrecision.calculate(pred, target, 10, True, True)
29.246031746031747
>>> # using tensor format inputs
>>> pred = np.array([np.linspace(0.95, 0.05, 10)] * 2)
>>> target = torch.Tensor([[1, 0, 1, 0, 0, 1, 0, 0, 1, 1]] * 2)
>>> RetrievalAveragePrecision.calculate(pred, target, 10)
62.222222222222214

Use in OpenMMLab config files:

val_evaluator = dict(type='RetrievalAveragePrecision', topk=100)
test_evaluator = val_evaluator