
mlmark's People

Contributors

panzhenman, petertorelli


mlmark's Issues

ArmNN Target does not have SSDMobileNet support yet

ArmNN only supports SSDMobileNet via the TFLite parser, not the TFParser. However, the SSDMobileNet mAP accuracy score is much lower on ArmNN than when running the same model in native Python TensorFlow Lite, which scores the expected value. There are two open issues on Arm's GitHub repo for ArmNN tracking this:

MLMark is awaiting resolution on these issues to update the target in a V1.0.x release.

TensorRT ResNet50 Segfaults with Tesla T4

A user reports that MLMark abruptly segfaults when running the TensorRT target on an x86 system with a Tesla T4, with no other warning messages given. See below.

-INFO- --------------------------------------------------------------------------------
-INFO- Welcome to the EEMBC MLMark(tm) Benchmark!
-INFO- --------------------------------------------------------------------------------
-INFO- MLMark Version       : 1.0.0
-INFO- Python Version       : 3.7
-INFO- CPU Name             : GenuineIntel Intel(R) Xeon(R) Platinum 8176 CPU @ 2.10GHz
-INFO- Total Memory (MiB)   : 127571
-INFO- # of Logical CPUs    : 112
-INFO- Instruction Set      : x86_64
-INFO- OS Platform          : Linux-4.4.0-131-generic-x86_64-with-debian-stretch-sid
-INFO- --------------------------------------------------------------------------------
-INFO- Models in this release:
-INFO-     resnet50       : ResNet-50 v1.0 [ILSVRC2012]
-INFO-     mobilenet      : MobileNet v1.0 [ILSVRC2012]
-INFO-     ssdmobilenet   : SSD-MobileNet v1.0 [COCO2017]
-INFO- --------------------------------------------------------------------------------
-INFO- Parsing config file config/trt-gpu-resnet50-fp32-throughput.json
-INFO- Task: Target 'tensorrt', Workload 'resnet50'
-INFO-     batch                : 1
-INFO-     concurrency          : 1
-INFO-     hardware             : gpu
-INFO-     iterations           : 1024
-INFO-     mode                 : throughput
-INFO-     precision            : fp32
failed to parse uff model
Entered in engine building part
Segmentation fault (core dumped)
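
Judging from the parsed fields in the log, the throughput config is a small JSON file along these lines (a sketch inferred from the log output only; the actual key names and nesting in MLMark's config schema may differ):

```json
{
  "target": "tensorrt",
  "workload": "resnet50",
  "params": {
    "batch": 1,
    "concurrency": 1,
    "hardware": "gpu",
    "iterations": 1024,
    "mode": "throughput",
    "precision": "fp32"
  }
}
```

Note that the log shows "failed to parse uff model" followed by "Entered in engine building part", i.e. the run proceeds to engine building even after the parse fails, which is the most likely trigger for the segfault.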

Running the mobilenet model on Myriad

I have followed the steps in the README and tried to run the MobileNet 1.0 model on Myriad, and I get this error:
"FileNotFoundError: [Errno 2] No such file or directory: /mlmark/targets/openvino_ubuntu/workloads/mobilenet/result_Wed172220.txt"

The same model runs successfully on the CPU. I'm attaching a screenshot of the error and would appreciate some help with it.

[Screenshot from 2019-09-04 17-27-44]
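
The error suggests the harness reads the per-run result file unconditionally, so a target plugin that fails before writing its output surfaces as a bare FileNotFoundError. A hypothetical guard (not MLMark code; `read_result` is an illustrative name) would make the real failure clearer:

```python
from pathlib import Path

def read_result(path):
    # Hypothetical helper, not part of the MLMark harness: check that the
    # target actually produced its result file before trying to parse it,
    # so a silent device-side failure gives an actionable message.
    p = Path(path)
    if not p.exists():
        raise RuntimeError(
            f"no result file at {p}; the target likely failed before "
            "writing its output -- check that the MYRIAD device is visible"
        )
    return p.read_text()
```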

Any reason ResNet50 is different from MLPerf?

The MobileNet and SSD-MobileNet TF models are exactly the same as those used in MLPerf, which is great for vendors who might wish to submit to both MLPerf and MLMark. But ResNet50 is different: I think MLMark uses ResNet50 v1.0, while MLPerf uses ResNet50 v1.5. Of course, MLMark is free to choose its own tasks, models, and reference implementations, but wouldn't this lead to problems for vendors who wish to support both initiatives?
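
For reference, the usual statement of that difference: v1 applies the stride-2 downsampling in the first 1x1 convolution of a bottleneck block, while v1.5 moves it to the 3x3 convolution, which slightly changes both accuracy and throughput. A toy sketch (not MLMark or MLPerf code, just the layer layout):

```python
def bottleneck_v1(stride):
    # ResNet50 v1 block: the downsampling stride sits on the first 1x1 conv.
    return [("conv1x1", stride), ("conv3x3", 1), ("conv1x1", 1)]

def bottleneck_v1_5(stride):
    # ResNet50 v1.5 block: the same stride moves to the 3x3 conv.
    return [("conv1x1", 1), ("conv3x3", stride), ("conv1x1", 1)]
```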

Unexplained performance gap between tflite int8 and fp32 models

Dear all,
I am testing the performance/throughput of fp32 and quantized models on my platform. My configuration is as follows:

tflite-runtime==2.5.0.post1
tensorflow==1.14.0

FP32 on CPU

-INFO- Running prediction...
-INFO- Acquired 1 file(s) for model 'MobileNet v1.0'
-INFO- Task runtime: 0:00:28.796083
-INFO- Throughput: 35.8 fps
-INFO- Latency: 29.5 ms
-INFO- Target          Workload        H/W   Prec  Batch Conc. Metric       Score    Units
-INFO- -----------------------------------------------------------------------------------
-INFO- tensorflow_lite mobilenet       cpu   fp32      1     1 throughput    35.8      fps
-INFO- tensorflow_lite mobilenet       cpu   fp32      1     1 latency       29.5       ms
-INFO- Total runtime: 0:00:28.830364
-INFO- Done

INT8 on CPU

google@localhost:~/mlmark$ harness/mlmark.py -c config/tflite-cpu-mobilenet-int8-throughput.json 
-INFO- Running prediction...
-INFO- Acquired 1 file(s) for model 'MobileNet v1.0'
-INFO- Task runtime: 0:01:00.933346
-INFO- Throughput: 16.9 fps
-INFO- Latency: 65.0 ms
-INFO- Target          Workload        H/W   Prec  Batch Conc. Metric       Score    Units
-INFO- -----------------------------------------------------------------------------------
-INFO- tensorflow_lite mobilenet       cpu   int8      1     1 throughput    16.9      fps
-INFO- tensorflow_lite mobilenet       cpu   int8      1     1 latency       65.0       ms
-INFO- Total runtime: 0:01:00.960828
-INFO- Done
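
For single-stream runs (batch 1, concurrency 1), throughput and mean latency both fall out of the frame count and task runtime, which gives a quick sanity check on the scores above. A minimal sketch, assuming roughly 1024 inferences per run (the iteration count shown in the TensorRT log in the issue above; the tflite configs may use a different count):

```python
def throughput_fps(frames, runtime_s):
    # Frames per second over the whole run.
    return frames / runtime_s

def mean_latency_ms(frames, runtime_s):
    # Mean wall-clock time per inference, in milliseconds.
    return runtime_s / frames * 1000.0

# Assumed frame count of 1024; both results land close to the harness scores.
fp32_fps = throughput_fps(1024, 28.796083)  # ~35.6 fps (harness reports 35.8)
int8_fps = throughput_fps(1024, 60.933346)  # ~16.8 fps (harness reports 16.9)
```

Whatever the exact iteration count, the ratio of the two runtimes confirms the int8 run really is about half the speed of fp32 here, so the anomaly is in the execution, not the reporting.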

Observations: The throughput of the FP32 model is almost double that of the INT8 model on CPU, but Google's TensorFlow Lite benchmarks report the opposite:
https://www.tensorflow.org/lite/guide/hosted_models#quantized_models

I also tried replacing the models with the ones from the hosted-models page above, but the harness gives similar results.

Could you let me know where it's going wrong?

Thanks
Kind Regards
Arun

Running models on the Myriad target

Hey, I have been trying to run the models on different targets, and the CPU targets run successfully. But when I try to run on Myriad using the OpenVINO target, I get an error. The throughput runs complete successfully, but the accuracy run stops on this specific target with the following error:
[Screenshot from 2019-09-11 15-32-49]
