
mmeval's Introduction

 

English | 简体中文

Introduction

MMEval is a machine learning evaluation library that supports efficient and accurate distributed evaluation on a variety of machine learning frameworks.

Major features:

  • Comprehensive metrics for various computer vision tasks (NLP will be covered soon!)
  • Efficient and accurate distributed evaluation, backed by multiple distributed communication backends
  • Support for multiple machine learning frameworks via a dynamic input dispatching mechanism (see the sketch below)
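
A minimal sketch of that dispatching, assuming both numpy and torch are installed: the same metric accepts inputs from either framework, and each call is dispatched to the matching implementation.

import numpy as np
import torch
from mmeval import Accuracy

labels = [0, 1, 2, 3]
preds = [0, 2, 1, 3]
# numpy inputs and torch inputs are each dispatched to the right backend
print(Accuracy()(np.asarray(preds), np.asarray(labels)))      # {'top1': 0.5}
print(Accuracy()(torch.tensor(preds), torch.tensor(labels)))  # {'top1': 0.5}
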
Supported distributed communication backends

Communication library    MMEval dist backend
MPI4Py                   MPI4PyDist
torch.distributed        TorchCPUDist, TorchCUDADist
Horovod                  TFHorovodDist
paddle.distributed       PaddleDist
oneflow.comm             OneFlowDist
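
A hedged sketch of selecting one of these backends when constructing a metric; the dist_backend argument and the 'torch_cuda' identifier are assumptions based on the table above, so check the MMEval docs for the exact names.

from mmeval import Accuracy

# collect per-process results via torch.distributed using CUDA tensors
accuracy = Accuracy(dist_backend='torch_cuda')
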
Supported metrics and ML frameworks

NOTE: MMEval is tested with PyTorch 1.6+, TensorFlow 2.4+, Paddle 2.2+ and OneFlow 0.8+.

Each metric below is implemented for one or more of the following input types: numpy.ndarray, torch.Tensor, tensorflow.Tensor, paddle.Tensor and oneflow.Tensor (the per-framework support matrix is in the documentation).

  • Accuracy
  • SingleLabelMetric
  • MultiLabelMetric
  • AveragePrecision
  • MeanIoU
  • VOCMeanAP
  • OIDMeanAP
  • COCODetection
  • ProposalRecall
  • F1Score
  • HmeanIoU
  • PCKAccuracy
  • MpiiPCKAccuracy
  • JhmdbPCKAccuracy
  • EndPointError
  • AVAMeanAP
  • StructuralSimilarity
  • SignalNoiseRatio
  • PeakSignalNoiseRatio
  • MeanAbsoluteError
  • MeanSquaredError
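
A quick, hedged illustration of one of the listed metrics with numpy inputs (the exact result keys of MeanIoU may differ):

import numpy as np
from mmeval import MeanIoU

miou = MeanIoU(num_classes=4)
labels = np.random.randint(0, 4, size=(2, 10, 10))
predicts = np.random.randint(0, 4, size=(2, 10, 10))
print(miou(predicts, labels))
# e.g. {'aAcc': ..., 'mIoU': ..., ...}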

Installation

MMEval requires Python 3.6+ and can be installed via pip.

pip install mmeval

To install the dependencies required by all the metrics provided in MMEval, run:

pip install 'mmeval[all]'

Get Started

There are two ways to use MMEval's metrics, using Accuracy as an example:

from mmeval import Accuracy
import numpy as np

accuracy = Accuracy()

The first way is to directly call the instantiated Accuracy object to calculate the metric.

labels = np.asarray([0, 1, 2, 3])
preds = np.asarray([0, 2, 1, 3])
accuracy(preds, labels)
# {'top1': 0.5}
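
Accuracy can also report top-k results when given prediction scores; a hedged example assuming the topk constructor argument:

labels = np.asarray([0, 1, 2, 3])
scores = np.asarray([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.3, 0.4, 0.2],
    [0.3, 0.4, 0.2, 0.1],
    [0.0, 0.0, 0.1, 0.9]])
accuracy_topk = Accuracy(topk=(1, 3))
accuracy_topk(scores, labels)
# {'top1': 0.5, 'top3': 1.0}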

The second way is to calculate the metric after accumulating data from multiple batches.

for i in range(10):
    labels = np.random.randint(0, 4, size=(100, ))
    predicts = np.random.randint(0, 4, size=(100, ))
    accuracy.add(predicts, labels)

accuracy.compute()
# {'top1': ...}
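
When reusing the same metric object for another evaluation round, clear the accumulated state first; a one-line sketch assuming the reset() method of MMEval's BaseMetric:

accuracy.reset()  # discard the results accumulated above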

Learn More

Tutorials
Examples
Design

In the works

  • Continue to add more metrics and cover more tasks (e.g. NLP, audio).
  • Support more ML frameworks and explore multiple ML framework support paradigms.

Contributing

We appreciate all contributions to improve MMEval. Please refer to CONTRIBUTING.md for the contributing guideline.

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab

  • MMEngine: OpenMMLab foundational library for training deep learning models.
  • MIM: MIM installs OpenMMLab packages.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
  • MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
  • MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
  • MMPose: OpenMMLab pose estimation toolbox and benchmark.
  • MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
  • MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
  • MMRazor: OpenMMLab model compression toolbox and benchmark.
  • MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
  • MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • MMFlow: OpenMMLab optical flow toolbox and benchmark.
  • MMEditing: OpenMMLab image and video editing toolbox.
  • MMGeneration: OpenMMLab image and video generative models toolbox.
  • MMDeploy: OpenMMLab model deployment framework.

mmeval's People

Contributors

bigwangyudong, c1rn09, dai-wenxun, fengsxy, gaotongxiao, go-with-me000, harold-lkk, ice-tong, lareinam, leoxing1996, liqikai9, ofhwei, vansin, xuan07472, yanxingliu, yingfhu, ytzhao, z-fran, zachary-66, zhouzaida


mmeval's Issues

Add faster validation to improve training speed

Describe the feature

Faster implementation of the COCOeval function written in C++

Motivation

I often work with the mmdet project and use datasets in COCO format. My datasets contain a large number of objects (more than 3000 per image). I would like to run validation after each epoch without delaying training. The standard COCOeval algorithm is slow with that many objects, but there is a faster implementation, which I stripped of its dependencies (torch / detectron2) and use in my work.
I am ready to open a PR and contribute this work to the project.

Related resources

The original implementation of the library is in detectron2.
At some point, christofferedlund started working on removing the facebook dependencies from the library, but abandoned the project without publishing the source code on GitHub.
I found the source code on the internet and continued his work: faster_coco_eval

Additional context

I benchmarked the validation on the original coco val dataset and presented the results in the project repository.

A visualization of the testing comparison (comparison.ipynb) is available in comparison.
Tested with a YOLOv3 model (bbox eval) and a YOLACT model (segm eval).

Type   COCOeval      COCOeval_faster   Speedup
bbox   22.854 sec.   8.714 sec.        more than 2x
segm   35.356 sec.   18.403 sec.       2x
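
For reference, a hedged sketch of how such timings can be measured with the standard pycocotools COCOeval (the annotation and detection file paths are placeholders):

import time
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('instances_val2017.json')      # placeholder ground-truth file
coco_dt = coco_gt.loadRes('detections.json')  # placeholder detection results

start = time.perf_counter()
coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
print(f'COCOeval (bbox) took {time.perf_counter() - start:.3f} sec.')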

[Attention] OpenMMLab Codecamp (超级视客营) MMEval 🥇🥈🥉

About the event

Hi everyone, the first OpenMMLab Codecamp (超级视客营) has begun! The camp offers more than a hundred tasks of varying difficulty across seventeen directions. Whether you are a newcomer to AI or a seasoned practitioner, there is a task that suits you. The camp helps you get started with the OpenMMLab open-source toolboxes and participate in their development. This edition is run jointly with the Beijing Super Cloud Computing Center, which provides computing power to support your development.
How to participate: pick a task you are interested in and submit the registration form on the OpenMMLab website. Once matched, you can work out a task plan with your mentor and start development. Submit your code to the address specified by each task; after the task owner's preliminary review passes, you can claim the next task or simply wait for your reward. For details, see the activity page on the OpenMMLab website.

Task list

Task | Description | Skills | Difficulty | Credits
Accuracy supports more deep learning frameworks (JAX) | Support computation with jax.numpy.ndarray for Accuracy and add passing unit tests: https://github.com/open-mmlab/mmeval/blob/main/mmeval/metrics/accuracy.py Multiple dispatch: https://mmeval.readthedocs.io/en/latest/design/multiple_dispatch.html | Python, JAX | Beginner | 10
MeanIoU supports more deep learning frameworks (JAX) | Support computation with jax.numpy.ndarray for MeanIoU and add passing unit tests: https://github.com/open-mmlab/mmeval/blob/main/mmeval/metrics/mean_iou.py#L21 Multiple dispatch: https://mmeval.readthedocs.io/en/latest/design/multiple_dispatch.html | Python, JAX | Beginner | 10
Add a new Metric to MMEval (10 task slots) | Select a Metric from a relevant algorithm library, add it to MMEval and adapt the algorithm library to use it; see the detailed task list in #50 | Python | Intermediate | 30

Registration: application form
Completing tasks earns credits according to difficulty, which can be exchanged for prizes. You can also earn extra credits by posting your learning notes in the knowledge community after finishing a task (remember to ask the event assistant to grant them).
Discussion group: group QR code

If you have any questions, feel free to join the group chat or discuss under this issue. Come take up the challenge and join the OpenMMLab contributor team!

Some question about PSNR metric

Code:

def add(self, predictions: Sequence[np.ndarray], groundtruths: Sequence[np.ndarray], channel_order: Optional[str] = None) -> None: # type: ignore # yapf: disable # noqa: E501
    """Add PSNR score of batch to ``self._results``

    Args:
        predictions (Sequence[np.ndarray]): Predictions of the model.
        groundtruths (Sequence[np.ndarray]): The ground truth images.
        channel_order (Optional[str]): The channel order of the input
            samples. If not passed, will set as :attr:`self.channel_order`.
            Defaults to None.
    """
    if channel_order is None:
        channel_order = self.channel_order
    for prediction, groundtruth in zip(predictions, groundtruths):
        assert groundtruth.shape == prediction.shape, (
            f'Image shapes are different: \
            {groundtruth.shape}, {prediction.shape}.')
        groundtruth = reorder_and_crop(
            groundtruth,
            crop_border=self.crop_border,
            input_order=self.input_order,
            convert_to=self.convert_to,
            channel_order=self.channel_order)
        prediction = reorder_and_crop(
            prediction,
            crop_border=self.crop_border,
            input_order=self.input_order,
            convert_to=self.convert_to,
            channel_order=self.channel_order)

It seems the channel_order argument is never used when it is not None: after the if block, both reorder_and_crop calls still pass self.channel_order.

Way to fix:

Maybe changing self.channel_order -> channel_order in both calls is OK?
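
A minimal sketch of that fix, following the suggestion above, applied to the groundtruth call (and likewise to the prediction call):

groundtruth = reorder_and_crop(
    groundtruth,
    crop_border=self.crop_border,
    input_order=self.input_order,
    convert_to=self.convert_to,
    channel_order=channel_order)  # was: self.channel_order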

Calling for volunteers for adding new metrics!

Dear community,

We are excited to introduce our new evaluation library: MMEval, a unified evaluation library for multiple machine learning libraries.

With the release of MMEval, we have some metrics from the OpenMMLab algorithm library that have not yet been added. We list the metrics below and call for community help.

If you are interested, you can claim a metric by replying to this issue in the following format:

Metric No. : <The Metric No. you claim>
Status :  Apply | Submit
Links : The repo links you fork (Apply status) / The PR links you open (Submit status)
No. Metric Algorithm library Level of Difficulty Assigned to Status / PR
1 CityScapesMetric mmdet / mmseg ⭐️⭐️⭐️ @Muyun99 Apply
2 CocoPanopticMetric mmdet ⭐️⭐️⭐️ @Even-ok Apply
3 LVISMetric mmdet ⭐️⭐️ @dmucby Apply
4 CrowdHumanMetric mmdet ⭐️⭐️ @tianleiSHI Apply
5 DOTAMetric mmrotate ⭐️⭐️ @YanxingLiu #65
6 RotatedCocoMetric mmrotate ⭐️⭐️ @YanxingLiu
7 IndoorMetric mmdet3d ⭐️⭐️⭐️
8 InstanceSegMetric mmdet3d ⭐️⭐️⭐️ @Pzzzzz5142 Apply
9 KittiMetric mmdet3d ⭐️⭐️⭐️
10 LyftMetric mmdet3d ⭐️⭐️⭐️
11 NuScenesMetric mmdet3d ⭐️⭐️⭐️
12 WaymoMetric mmdet3d ⭐️⭐️⭐️
13 OneMinusNEDMetric mmocr ⭐️⭐️
14 WordMetric mmocr ⭐️⭐️
15 CharMetric mmocr ⭐️⭐️
16 CocoWholeBodyMetric mmpose ⭐️⭐️
17 NME mmpose ⭐️⭐️
18 AUC mmpose ⭐️⭐️
19 EPE mmpose ⭐️⭐️
20 PoseTrack18Metric mmpose ⭐️⭐️
21 ANetMetric mmaction2 ⭐️⭐️
22 AccMetric mmaction2 ⭐️⭐️
23 FlowOutliers mmflow ⭐️⭐️ @fengsxy Apply
24 MOTChallengeMetrics mmtracking ⭐️⭐️
25 ReIDMetrics mmtracking ⭐️⭐️
26 SOTMetric mmtracking ⭐️⭐️
27 TAOMetric mmtracking ⭐️⭐️
28 YouTubeVISMetric mmtracking ⭐️⭐️ @qianlian-mozi Apply
29 MultiScaleStructureSimilarity mmediting ⭐️⭐️
30 FrechetInceptionDistance & TransFID mmediting ⭐️⭐️
31 InceptionScore & TransIS mmediting ⭐️⭐️
32 SAD mmediting ⭐️⭐️ @xuan07472 #76
33 MattingMSE mmediting ⭐️⭐️ @xuan07472 #71
34 ConnectivityError mmediting ⭐️⭐️ @xuan07472 #79
35 GradientError mmediting ⭐️⭐️ @xuan07472 #78
36 PerceptualPathLength mmediting ⭐️⭐️
37 PrecisionAndRecall mmediting ⭐️⭐️
38 SlicedWassersteinDistance mmediting ⭐️⭐️
39 NIQE mmediting ⭐️⭐️
40 Equivariance mmediting ⭐️⭐️

NOTES:

  1. Documents of MMEval can be found at: https://mmeval.readthedocs.io/en/latest/
  2. The contributing guides can be found at: https://github.com/open-mmlab/mmeval/blob/main/CONTRIBUTING.md

Examples:

We provide some examples showing how to add a Metric from the OpenMMLab algorithm libraries to MMEval:

  1. mmeval.Accuracy from mmcls
  2. mmeval.MeanIoU from mmseg
  3. mmeval.VOCMeanAP from mmdet
  4. mmeval.COCODetectionMetric from mmdet

Submitted Content:

  1. The PR in MMEval that adds the new metric (with complete documentation and testing)
  2. The PR in original algorithm library that uses the new metric (with evaluation comparison between using MMEval and before)

No1. CityScapesMetric

  • Add the CityScapesMetric from mmdet / mmseg to MMEval, and use the MMEval's CityScapesMetric in mmdet / mmseg.
  • Technical Tags: Python; Object detection; Semantic segmentation;
  • The original CityScapesMetric in mmdet
  • The original CityScapesMetric in mmseg

No2. CocoPanopticMetric

  • Add the CocoPanopticMetric from mmdet to MMEval, and use the MMEval's CocoPanopticMetric in mmdet.
  • Technical Tags: Python; Object detection; Panoptic segmentation;
  • The original CocoPanopticMetric in mmdet

No3. LVISMetric

  • Add the LVISMetric from mmdet to MMEval, and use the MMEval's LVISMetric in mmdet.
  • Technical Tags: Python; Object detection;
  • The original LVISMetric in mmdet

No4. CrowdHumanMetric

  • Add the CrowdHumanMetric from mmdet to MMEval, and use the MMEval's CrowdHumanMetric in mmdet.
  • Technical Tags: Python; Object detection;
  • The original CrowdHumanMetric in mmdet

No5. DOTAMetric

  • Add the DOTAMetric from mmrotate to MMEval, and use the MMEval's DOTAMetric in mmrotate.
  • Technical Tags: Python; Rotated object detection;
  • The original DOTAMetric in mmrotate

No6. RotatedCocoMetric

  • Add the RotatedCocoMetric from mmrotate to MMEval, and use the MMEval's RotatedCocoMetric in mmrotate.
  • Technical Tags: Python; Rotated object detection;
  • The original RotatedCocoMetric in mmrotate

No7. IndoorMetric

  • Add the IndoorMetric from mmdet3d to MMEval, and use the MMEval's IndoorMetric in mmdet3d.
  • Technical Tags: Python; 3D object detection;
  • The original IndoorMetric in mmdet3d

No8. InstanceSegMetric

  • Add the InstanceSegMetric from mmdet3d to MMEval, and use the MMEval's InstanceSegMetric in mmdet3d.
  • Technical Tags: Python; 3D instance segmentation;
  • The original InstanceSegMetric in mmdet3d

No9. KittiMetric

  • Add the KittiMetric from mmdet3d to MMEval, and use the MMEval's KittiMetric in mmdet3d.
  • Technical Tags: Python; 3D object detection;
  • The original KittiMetric in mmdet3d

No10. LyftMetric

  • Add the LyftMetric from mmdet3d to MMEval, and use the MMEval's LyftMetric in mmdet3d.
  • Technical Tags: Python; 3D object detection;
  • The original LyftMetric in mmdet3d

No11. NuScenesMetric

  • Add the NuScenesMetric from mmdet3d to MMEval, and use the MMEval's NuScenesMetric in mmdet3d.
  • Technical Tags: Python; 3D object detection;
  • The original NuScenesMetric in mmdet3d

No12. WaymoMetric

  • Add the WaymoMetric from mmdet3d to MMEval, and use the MMEval's WaymoMetric in mmdet3d.
  • Technical Tags: Python; 3D object detection;
  • The original WaymoMetric in mmdet3d

No13. OneMinusNEDMetric

  • Add the OneMinusNEDMetric from mmocr to MMEval, and use the MMEval's OneMinusNEDMetric in mmocr.
  • Technical Tags: Python; OCR;
  • The original OneMinusNEDMetric in mmocr

No14. WordMetric

  • Add the WordMetric from mmocr to MMEval, and use the MMEval's WordMetric in mmocr.
  • Technical Tags: Python; OCR;
  • The original WordMetric in mmocr

No15. CharMetric

  • Add the CharMetric from mmocr to MMEval, and use the MMEval's CharMetric in mmocr.
  • Technical Tags: Python; OCR;
  • The original CharMetric in mmocr

No16. CocoWholeBodyMetric

  • Add the CocoWholeBodyMetric from mmpose to MMEval, and use the MMEval's CocoWholeBodyMetric in mmpose.
  • Technical Tags: Python; Pose estimation;
  • The original CocoWholeBodyMetric in mmpose

No17. NME

  • Add the NME from mmpose to MMEval, and use the MMEval's NME in mmpose.
  • Technical Tags: Python; Pose estimation;
  • The original NME in mmpose

No18. AUC

  • Add the AUC from mmpose to MMEval, and use the MMEval's AUC in mmpose.
  • Technical Tags: Python; Pose estimation;
  • The original AUC in mmpose

No19. EPE

  • Add the EPE from mmpose to MMEval, and use the MMEval's EPE in mmpose.
  • Technical Tags: Python; Pose estimation;
  • The original EPE in mmpose

No20. PoseTrack18Metric

  • Add the PoseTrack18Metric from mmpose to MMEval, and use the MMEval's PoseTrack18Metric in mmpose.
  • Technical Tags: Python; Pose estimation;
  • The original PoseTrack18Metric in mmpose

No21. ANetMetric

  • Add the ANetMetric from mmaction2 to MMEval, and use the MMEval's ANetMetric in mmaction2.
  • Technical Tags: Python; Video understanding;
  • The original ANetMetric in mmaction2

No22. AccMetric

  • Add the AccMetric from mmaction2 to MMEval, and use the MMEval's AccMetric in mmaction2.
  • Technical Tags: Python; Video understanding;
  • The original AccMetric in mmaction2

No23. FlowOutliers

  • Add the FlowOutliers from mmflow to MMEval, and use the MMEval's FlowOutliers in mmflow.
  • Technical Tags: Python; Optical flow;
  • The original FlowOutliers in mmflow

No24. MOTChallengeMetrics

  • Add the MOTChallengeMetrics from mmtracking to MMEval, and use the MMEval's MOTChallengeMetrics in mmtracking.
  • Technical Tags: Python; Multiple object tracking;
  • The original MOTChallengeMetrics in mmtracking

No25. ReIDMetrics

  • Add the ReIDMetrics from mmtracking to MMEval, and use the MMEval's ReIDMetrics in mmtracking.
  • Technical Tags: Python; Re-identification;
  • The original ReIDMetrics in mmtracking

No26. SOTMetric

  • Add the SOTMetric from mmtracking to MMEval, and use the MMEval's SOTMetric in mmtracking.
  • Technical Tags: Python; Single object tracking;
  • The original SOTMetric in mmtracking

No27. TAOMetric

  • Add the TAOMetric from mmtracking to MMEval, and use the MMEval's TAOMetric in mmtracking.
  • Technical Tags: Python; Object tracking;
  • The original TAOMetric in mmtracking

No28. YouTubeVISMetric

  • Add the YouTubeVISMetric from mmtracking to MMEval, and use the MMEval's YouTubeVISMetric in mmtracking.
  • Technical Tags: Python; Video instance segmentation;
  • The original YouTubeVISMetric in mmtracking

No29. MultiScaleStructureSimilarity

  • Add the MultiScaleStructureSimilarity from mmediting to MMEval, and use the MMEval's MultiScaleStructureSimilarity in mmediting.
  • Technical Tags: Python; GAN;
  • The original MultiScaleStructureSimilarity in mmediting

No30. FrechetInceptionDistance & TransFID

  • Add the FrechetInceptionDistance and TransFID from mmediting to MMEval, and use the MMEval's FrechetInceptionDistance and TransFID in mmediting.
  • Technical Tags: Python; GAN;
  • The original FrechetInceptionDistance in mmediting
  • The original TransFID in mmediting

No31. InceptionScore & TransIS

  • Add the InceptionScore and TransIS from mmediting to MMEval, and use the MMEval's InceptionScore and TransIS in mmediting.
  • Technical Tags: Python; GAN;
  • The original InceptionScore in mmediting
  • The original TransIS in mmediting

No32. SAD

  • Add the SAD from mmediting to MMEval, and use the MMEval's SAD in mmediting.
  • Technical Tags: Python; GAN;
  • The original SAD in mmediting

No33. MattingMSE

  • Add the MattingMSE from mmediting to MMEval, and use the MMEval's MattingMSE in mmediting.
  • Technical Tags: Python; Matting;
  • The original MattingMSE in mmediting

No34. ConnectivityError

  • Add the ConnectivityError from mmediting to MMEval, and use the MMEval's ConnectivityError in mmediting.
  • Technical Tags: Python; Matting;
  • The original ConnectivityError in mmediting

No35. GradientError

  • Add the GradientError from mmediting to MMEval, and use the MMEval's GradientError in mmediting.
  • Technical Tags: Python; Matting;
  • The original GradientError in mmediting

No36. PerceptualPathLength

  • Add the PerceptualPathLength from mmediting to MMEval, and use the MMEval's PerceptualPathLength in mmediting.
  • Technical Tags: Python; GAN;
  • The original PerceptualPathLength in mmediting

No37. PrecisionAndRecall

  • Add the PrecisionAndRecall from mmediting to MMEval, and use the MMEval's PrecisionAndRecall in mmediting.
  • Technical Tags: Python; GAN;
  • The original PrecisionAndRecall in mmediting

No38. SlicedWassersteinDistance

  • Add the SlicedWassersteinDistance from mmediting to MMEval, and use the MMEval's SlicedWassersteinDistance in mmediting.
  • Technical Tags: Python; GAN;
  • The original SlicedWassersteinDistance in mmediting

No39. NIQE

  • Add the NIQE from mmediting to MMEval, and use the MMEval's NIQE in mmediting.
  • Technical Tags: Python; GAN;
  • The original NIQE in mmediting

No40. Equivariance

  • Add the Equivariance from mmediting to MMEval, and use the MMEval's Equivariance in mmediting.
  • Technical Tags: Python; GAN;
  • The original Equivariance in mmediting

error

Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "D:\soft_files\anconda3\envs\mmseg\lib\site-packages\mmeval\__init__.py", line 7, in <module>
    from .metrics import *
  File "D:\soft_files\anconda3\envs\mmseg\lib\site-packages\mmeval\metrics\__init__.py", line 12, in <module>
    from .multi_label import AveragePrecision, MultiLabelMetric
  File "D:\soft_files\anconda3\envs\mmseg\lib\site-packages\mmeval\metrics\multi_label.py", line 171, in <module>
    class MultiLabelMetric(MultiLabelMixin, BaseMetric):
  File "D:\soft_files\anconda3\envs\mmseg\lib\site-packages\mmeval\metrics\multi_label.py", line 380, in MultiLabelMetric
    labels: Sequence['torch.Tensor']) -> List:
  File "D:\soft_files\anconda3\envs\mmseg\lib\site-packages\mmeval\core\dispatcher.py", line 236, in __call__
    signature._return_annotation)  # type: ignore
  File "D:\soft_files\anconda3\envs\mmseg\lib\site-packages\mmeval\core\dispatcher.py", line 218, in _traverse_type_hints
    for tp_arg in annotation.__args__:
TypeError: 'NoneType' object is not iterable

Add args for `pip install mmeval` to install specific metric

Describe the feature
Add args for pip install mmeval to install specific metric

Motivation
If we add metrics to MMEval like KittiMetric and WaymoMetric, which rely on official SDKs and packages to evaluate, it will increase the installation burden of MMEval. I suggest MMEval support an install like pip install mmeval-waymo to install a version that includes WaymoMetric.
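
For comparison, a common packaging pattern for this is pip extras, mirroring the existing mmeval[all]; the waymo extra name below is hypothetical:

pip install 'mmeval[waymo]'  # hypothetical extra pulling in the Waymo SDK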

Related resources

Additional context

MeanIoU (when num_classes is 1 the function can't work)

Describe the bug
MeanIoU raises an error when num_classes is 1.

Reproduction

import torch
from mmeval import MeanIoU

miou = MeanIoU(num_classes=1)
for i in range(10):
    labels = torch.randint(0, 2, size=(100, 10, 10))
    predicts = torch.randint(0, 2, size=(100, 10, 10))
    miou.add(predicts, labels)
miou.compute()

Error traceback

/usr/local/lib/python3.7/dist-packages/mmeval/metrics/mean_iou.py in compute_confusion_matrix(self, prediction, label, num_classes)
    204             num_classes * label + prediction, minlength=num_classes**2)
    205         confusion_matrix = confusion_matrix_1d.reshape(num_classes,
--> 206                                                        num_classes)
    207         return confusion_matrix.cpu().numpy()
    208

RuntimeError: shape '[1, 1]' is invalid for input of size 3
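
The reshape fails because with num_classes=1 the flattened index num_classes * label + prediction takes values in {0, 1, 2}, so the bincount result has length 3 and cannot be reshaped to (1, 1). A hedged workaround until this is fixed: labels drawn from {0, 1} describe two classes, so constructing the metric with num_classes=2 keeps the confusion matrix well-formed.

miou = MeanIoU(num_classes=2)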

[Attention] OpenMMLab Codecamp 🥇🥈🥉


Introduction

Interested in deeply participating in OpenMMLab projects? Want to learn more about OpenMMLab's tools without spending lots of time reading docs? The first OpenMMLab Codecamp has begun! We provide more than a hundred tasks across seventeen research directions for you to pick from. Whether you are a novice in AI or a senior developer, there are suitable tasks for you to participate in. We will provide quick responses and full guidance to help you complete the tasks smoothly and grow into a core contributor of OpenMMLab. We have partnered with the Beijing Super Cloud Computing Center to provide computing power.
How to participate?
Select the task you are interested in and submit your registration here. We will inform you within three days whether you are enrolled, and then you can formulate the task plan with your mentor and start development! Once your PR has passed preliminary review, you can apply for the next task or just wait for the award!
More details: OpenMMLab Activity page

Task | Description | Related skills | Difficulty | Credits
Accuracy supports more ML frameworks (JAX) | Support computation with jax.numpy.ndarray for Accuracy and add unit tests: https://github.com/open-mmlab/mmeval/blob/main/mmeval/metrics/accuracy.py Multiple dispatch: https://mmeval.readthedocs.io/en/latest/design/multiple_dispatch.html | Python, JAX | Easy | 10
MeanIoU supports more ML frameworks (JAX) | Support computation with jax.numpy.ndarray for MeanIoU and add unit tests: https://github.com/open-mmlab/mmeval/blob/main/mmeval/metrics/mean_iou.py#L21 Multiple dispatch: https://mmeval.readthedocs.io/en/latest/design/multiple_dispatch.html | Python, JAX | Easy | 10
Add a new Metric to MMEval (10 task slots) | Select a Metric from the list, add it to MMEval and adapt the algorithm library to use it; see the detailed task list in #50 | Python | Medium | 30

Sign up here: application form
By the way, we strongly encourage you to publish your experience on social media like Medium or Twitter with the tag "OpenMMLab Codecamp" to share it with more developers!
Discussion group: https://discord.gg/KuWMWVbCcD
Welcome to join the discussion below or on Discord. Come take the challenge and become a contributor to OpenMMLab!

Very Basic 'How to Use' Accuracy example is not runnable if I only install PyTorch; error may relate to multi-dispatch behaviour

Describe the bug
I tried to run the example in the 'How to use' doc; I only have numpy and torch installed in my environment.

from mmeval import Accuracy
import numpy as np

accuracy = Accuracy()
labels = np.asarray([0, 1, 2, 3])
preds = np.asarray([0, 2, 1, 3])
accuracy(preds, labels)
# {'top1': 0.5}

And I got this error message:

Patch `plum.type.TypeMeta` with singleton failed, raise error: module 'plum.type' has no attribute 'TypeMeta'. The multiple dispatch speed may be slow.
Patch plum Type with hash value cache failed, raise error: cannot import name 'Dict' from 'plum.parametric' (/anaconda/envs/onemodel/lib/python3.8/site-packages/plum/parametric.py). The multiple dispatch speed may be slow.
Traceback (most recent call last):
  File "test_mmeval.py", line 8, in <module>
    accuracy(preds, labels)
  File "/anaconda/envs/onemodel/lib/python3.8/site-packages/mmeval/core/base_metric.py", line 105, in __call__
    self.add(*args, **kwargs)
  File "/anaconda/envs/onemodel/lib/python3.8/site-packages/mmeval/metrics/accuracy.py", line 191, in add
    corrects = self._compute_corrects(predictions, labels)
  File "/anaconda/envs/onemodel/lib/python3.8/site-packages/plum/function.py", line 419, in __call__
    return self._f(self._instance, *args, **kw_args)
  File "/anaconda/envs/onemodel/lib/python3.8/site-packages/plum/function.py", line 342, in __call__
    self._resolve_pending_registrations()
  File "/anaconda/envs/onemodel/lib/python3.8/site-packages/plum/function.py", line 220, in _resolve_pending_registrations
    signature = extract_signature(f, precedence=precedence)
  File "/anaconda/envs/onemodel/lib/python3.8/site-packages/plum/signature.py", line 187, in extract_signature
    for k, v in typing.get_type_hints(f).items():
  File "/anaconda/envs/onemodel/lib/python3.8/typing.py", line 1264, in get_type_hints
    value = _eval_type(value, globalns, localns)
  File "/anaconda/envs/onemodel/lib/python3.8/typing.py", line 270, in _eval_type
    return t._evaluate(globalns, localns)
  File "/anaconda/envs/onemodel/lib/python3.8/typing.py", line 518, in _evaluate
    eval(self.__forward_code__, globalns, localns),
  File "<string>", line 1, in <module>
NameError: name 'oneflow' is not defined

This indicates oneflow is not defined.
I used my debugger to investigate and found the problem is in accuracy.py L248:

@overload  # type: ignore
@dispatch
def _compute_corrects(  # type: ignore
    self, predictions: Union['oneflow.Tensor', Sequence['oneflow.Tensor']],
    labels: Union['oneflow.Tensor',
                  Sequence['oneflow.Tensor']]) -> 'oneflow.Tensor':

when the dispatcher tries to decorate the _compute_corrects overload related to oneflow.
Going deeper into the dispatcher, I found that in dispatcher.py L176,

try:
    module = importlib.import_module(module_name)
    resolved_type = getattr(module, module_attr_basename)
except Exception as e: #L176
    if importable_name not in self._unimportable_types:
        logger.debug(
            f"Unimportable: '{importable_name}', raise error: {e}.")
        resolved_type = type(importable_name, (), {})
        self._unimportable_types[importable_name] = resolved_type
    else:
        resolved_type = self._unimportable_types[importable_name]
return resolved_type

this except should have caught the ModuleNotFoundError and handled it, but the error is not caught and I have no idea why.

Reproduction

  1. What command or script did you run?
from mmeval import Accuracy
import numpy as np

accuracy = Accuracy()
labels = np.asarray([0, 1, 2, 3])
preds = np.asarray([0, 2, 1, 3])
accuracy(preds, labels)
# {'top1': 0.5}
