openvinotoolkit / openvino_contrib

Repository for OpenVINO's extra modules

License: Apache License 2.0

Topics: openvino, pytorch, arm, java, inference-engine, nvidia-gpu


openvino_contrib's Issues

error: unknown target CPU 'armv8.2-a+fp16'

Hello, when compiling the project on an M1 machine I got the errors below.
I pulled the master branches of openvino and openvino_contrib on 2022/04/12.

  • Mac mini (M1, 2020)
  • Big Sur (11.5.1)
FAILED: build-modules/arm_plugin/thirdparty/libarm_compute-static.a build-modules/arm_plugin/thirdparty/libarm_compute_core-static.a 
cd /Users/ws/Desktop/third_lib_compile/openvino_arm/openvino_contrib/modules/arm_plugin/thirdparty/ComputeLibrary && /Applications/CMake.app/Contents/bin/cmake -E env /opt/homebrew/bin/scons neon=1 opencl=0 cppthreads=1 embed_kernels=0 examples=0 internal_only=0 Werror=0 data_layout_support=nchw build_dir=/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty arch=armv8.6-a -j4 os=macos build=native extra_cxx_flags=-fPIC\ \ -fsigned-char\ -ffunction-sections\ -fdata-sections\ -fdiagnostics-show-option\ -Wundef\ -Wreturn-type\ -Wunused-variable\ -Wswitch\ -Wno-error=deprecated-declarations\ -Wno-undef\ -Wno-error=return-stack-address
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
scons: `arm_compute' is up to date.
scons: `support' is up to date.
scons: `utils' is up to date.
scons: `include/half' is up to date.
scons: `include/stb' is up to date.
scons: `include/libnpy' is up to date.
scons: `include/CL' is up to date.
create_version_file(["/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/arm_compute_version.embed"], [])
clang++ -o /Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/cpuinfo/CpuInfo.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=c++14 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wno-vla-extension -march=armv8.2-a+fp16 -DENABLE_NEON -DARM_COMPUTE_ENABLE_NEON -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -DENABLE_INTEGER_KERNELS -DENABLE_NCHW_KERNELS -O3 -fPIC -fsigned-char -ffunction-sections -fdata-sections -fdiagnostics-show-option -Wundef -Wreturn-type -Wunused-variable -Wswitch -Wno-error=deprecated-declarations -Wno-undef -Wno-error=return-stack-address -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_ENABLE_I8MM -DARM_COMPUTE_ENABLE_BF16 -DARM_COMPUTE_ENABLE_SVEF32MM -DARM_COMPUTE_ENABLE_FP16 -DARM_COMPUTE_GRAPH_ENABLED -DARM_COMPUTE_CPU_ENABLED -DARM_COMPUTE_VERSION_MAJOR=25 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. 
-I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core -Isrc/core -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/cpu/kernels/assembly -Isrc/cpu/kernels/assembly src/common/cpuinfo/CpuInfo.cpp
clang++ -o /Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/cpuinfo/CpuModel.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=c++14 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wno-vla-extension -march=armv8.2-a+fp16 -DENABLE_NEON -DARM_COMPUTE_ENABLE_NEON -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -DENABLE_INTEGER_KERNELS -DENABLE_NCHW_KERNELS -O3 -fPIC -fsigned-char -ffunction-sections -fdata-sections -fdiagnostics-show-option -Wundef -Wreturn-type -Wunused-variable -Wswitch -Wno-error=deprecated-declarations -Wno-undef -Wno-error=return-stack-address -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_ENABLE_I8MM -DARM_COMPUTE_ENABLE_BF16 -DARM_COMPUTE_ENABLE_SVEF32MM -DARM_COMPUTE_ENABLE_FP16 -DARM_COMPUTE_GRAPH_ENABLED -DARM_COMPUTE_CPU_ENABLED -DARM_COMPUTE_VERSION_MAJOR=25 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. 
-I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core -Isrc/core -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/cpu/kernels/assembly -Isrc/cpu/kernels/assembly src/common/cpuinfo/CpuModel.cpp
clang++ -o /Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/cpuinfo/CpuIsaInfo.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=c++14 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wno-vla-extension -march=armv8.2-a+fp16 -DENABLE_NEON -DARM_COMPUTE_ENABLE_NEON -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -DENABLE_INTEGER_KERNELS -DENABLE_NCHW_KERNELS -O3 -fPIC -fsigned-char -ffunction-sections -fdata-sections -fdiagnostics-show-option -Wundef -Wreturn-type -Wunused-variable -Wswitch -Wno-error=deprecated-declarations -Wno-undef -Wno-error=return-stack-address -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_ENABLE_I8MM -DARM_COMPUTE_ENABLE_BF16 -DARM_COMPUTE_ENABLE_SVEF32MM -DARM_COMPUTE_ENABLE_FP16 -DARM_COMPUTE_GRAPH_ENABLED -DARM_COMPUTE_CPU_ENABLED -DARM_COMPUTE_VERSION_MAJOR=25 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. 
-I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core -Isrc/core -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/cpu/kernels/assembly -Isrc/cpu/kernels/assembly src/common/cpuinfo/CpuIsaInfo.cpp
clang++ -o /Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/utils/LegacySupport.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=c++14 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wno-vla-extension -march=armv8.2-a+fp16 -DENABLE_NEON -DARM_COMPUTE_ENABLE_NEON -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -DENABLE_INTEGER_KERNELS -DENABLE_NCHW_KERNELS -O3 -fPIC -fsigned-char -ffunction-sections -fdata-sections -fdiagnostics-show-option -Wundef -Wreturn-type -Wunused-variable -Wswitch -Wno-error=deprecated-declarations -Wno-undef -Wno-error=return-stack-address -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_ENABLE_I8MM -DARM_COMPUTE_ENABLE_BF16 -DARM_COMPUTE_ENABLE_SVEF32MM -DARM_COMPUTE_ENABLE_FP16 -DARM_COMPUTE_GRAPH_ENABLED -DARM_COMPUTE_CPU_ENABLED -DARM_COMPUTE_VERSION_MAJOR=25 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. 
-I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core -Isrc/core -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly -I/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/cpu/kernels/assembly -Isrc/cpu/kernels/assembly src/common/utils/LegacySupport.cpp
error: unknown target CPU 'armv8.2-a+fp16'
note: valid target CPU values are: nocona, core2, penryn, bonnell, atom, silvermont, slm, goldmont, goldmont-plus, tremont, nehalem, corei7, westmere, sandybridge, corei7-avx, ivybridge, core-avx-i, haswell, core-avx2, broadwell, skylake, skylake-avx512, skx, cascadelake, cooperlake, cannonlake, icelake-client, icelake-server, tigerlake, knl, knm, k8, athlon64, athlon-fx, opteron, k8-sse3, athlon64-sse3, opteron-sse3, amdfam10, barcelona, btver1, btver2, bdver1, bdver2, bdver3, bdver4, znver1, znver2, x86-64
error: unknown target CPU 'armv8.2-a+fp16'
note: valid target CPU values are: nocona, core2, penryn, bonnell, atom, silvermont, slm, goldmont, goldmont-plus, tremont, nehalem, corei7, westmere, sandybridge, corei7-avx, ivybridge, core-avx-i, haswell, core-avx2, broadwell, skylake, skylake-avx512, skx, cascadelake, cooperlake, cannonlake, icelake-client, icelake-server, tigerlake, knl, knm, k8, athlon64, athlon-fx, opteron, k8-sse3, athlon64-sse3, opteron-sse3, amdfam10, barcelona, btver1, btver2, bdver1, bdver2, bdver3, bdver4, znver1, znver2, x86-64
scons: *** [/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/cpuinfo/CpuModel.o] Error 1
scons: *** [/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/cpuinfo/CpuInfo.o] Error 1
error: unknown target CPU 'armv8.2-a+fp16'
note: valid target CPU values are: nocona, core2, penryn, bonnell, atom, silvermont, slm, goldmont, goldmont-plus, tremont, nehalem, corei7, westmere, sandybridge, corei7-avx, ivybridge, core-avx-i, haswell, core-avx2, broadwell, skylake, skylake-avx512, skx, cascadelake, cooperlake, cannonlake, icelake-client, icelake-server, tigerlake, knl, knm, k8, athlon64, athlon-fx, opteron, k8-sse3, athlon64-sse3, opteron-sse3, amdfam10, barcelona, btver1, btver2, bdver1, bdver2, bdver3, bdver4, znver1, znver2, x86-64
scons: *** [/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/cpuinfo/CpuIsaInfo.o] Error 1
error: unknown target CPU 'armv8.2-a+fp16'
note: valid target CPU values are: nocona, core2, penryn, bonnell, atom, silvermont, slm, goldmont, goldmont-plus, tremont, nehalem, corei7, westmere, sandybridge, corei7-avx, ivybridge, core-avx-i, haswell, core-avx2, broadwell, skylake, skylake-avx512, skx, cascadelake, cooperlake, cannonlake, icelake-client, icelake-server, tigerlake, knl, knm, k8, athlon64, athlon-fx, opteron, k8-sse3, athlon64-sse3, opteron-sse3, amdfam10, barcelona, btver1, btver2, bdver1, bdver2, bdver3, bdver4, znver1, znver2, x86-64
scons: *** [/Users/ws/Desktop/third_lib_compile/openvino_arm/ie_build/build-modules/arm_plugin/thirdparty/src/common/utils/LegacySupport.o] Error 1
scons: building terminated because of errors.
[2646/2777] Building CXX object docs/template_plugin/backend/CMakeFiles/interpreter_backend.dir/evaluates_map.cpp.o
ninja: build stopped: subcommand failed.
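One diagnosis sketch (an assumption on my part, not something stated in the issue): the "valid target CPU values" list in the error is an x86 list, which suggests the compiler that scons invokes is targeting x86_64, for example a Homebrew toolchain installed under Rosetta 2. Checking the default target triple of the toolchain would confirm this; the `CXX` fallback below is illustrative:

```shell
# Print the default target triple of the compiler the build will use.
# On a native arm64 toolchain it should start with "arm64" or "aarch64";
# an x86_64 triple would explain why ARM -march values are rejected.
cc=${CXX:-clang++}
triple=$("$cc" -dumpmachine 2>/dev/null || echo unknown)
case "$triple" in
  arm64-*|aarch64-*)
    echo "native ARM toolchain: $triple" ;;
  *)
    echo "WARNING: $cc targets '$triple'; ARM -march values will be rejected" ;;
esac
```

If the triple comes back x86_64, reinstalling Homebrew and scons natively for arm64 (or running the build from a native arm64 shell) would be the first thing to try.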

[ARM Plugin] Native Compile - Apple M1 Compile Error...

I believe we cannot use the GCC-style flag '-march=armv7-a' with Apple's ARM compiler? I assume Apple's gcc (which is actually Clang on arm64) does not support it?

[ 59%] Built target armPlugin_cpplint
Scanning dependencies of target cpplint_all
[ 59%] Built target cpplint_all
Scanning dependencies of target arm_compute_static_libs
[ 60%] Build Arm Compute Library
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
scons: `arm_compute' is up to date.
scons: `support' is up to date.
scons: building associated VariantDir targets: /Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/utils
scons: `utils' is up to date.
scons: `include/half' is up to date.
scons: `include/stb' is up to date.
scons: `include/libnpy' is up to date.
scons: `include/linux' is up to date.
scons: include/CL' is up to date. create_version_file(["/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/arm_compute_version.embed"], []) g++ -o /Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/AccessWindowAutoPadding.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=gnu++11 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wlogical-op -Wnoexcept -Wstrict-null-sentinel -march=armv7-a -mthumb -mfpu=neon -mfloat-abi=hard -Wno-ignored-attributes -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -Werror -Wno-psabi -O3 -fPIC -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_VERSION_MAJOR=21 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core -Isrc/core -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly src/core/AccessWindowAutoPadding.cpp g++ -o 
/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/AccessWindowStatic.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=gnu++11 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wlogical-op -Wnoexcept -Wstrict-null-sentinel -march=armv7-a -mthumb -mfpu=neon -mfloat-abi=hard -Wno-ignored-attributes -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -Werror -Wno-psabi -O3 -fPIC -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_VERSION_MAJOR=21 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core -Isrc/core -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly src/core/AccessWindowStatic.cpp clang: error: the clang compiler does not support '-march=armv7-a' g++ -o /Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/AccessWindowTranspose.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 
-Winit-self -Wstrict-overflow=2 -Wswitch-default -std=gnu++11 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wlogical-op -Wnoexcept -Wstrict-null-sentinel -march=armv7-a -mthumb -mfpu=neon -mfloat-abi=hard -Wno-ignored-attributes -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -Werror -Wno-psabi -O3 -fPIC -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_VERSION_MAJOR=21 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core -Isrc/core -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly src/core/AccessWindowTranspose.cpp g++ -o /Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/Error.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=gnu++11 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wlogical-op -Wnoexcept -Wstrict-null-sentinel -march=armv7-a -mthumb -mfpu=neon -mfloat-abi=hard 
-Wno-ignored-attributes -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -Werror -Wno-psabi -O3 -fPIC -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_VERSION_MAJOR=21 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core -Isrc/core -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly src/core/Error.cpp g++ -o /Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/GPUTarget.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=gnu++11 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wlogical-op -Wnoexcept -Wstrict-null-sentinel -march=armv7-a -mthumb -mfpu=neon -mfloat-abi=hard -Wno-ignored-attributes -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -Werror -Wno-psabi -O3 -fPIC -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_VERSION_MAJOR=21 
-DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core -Isrc/core -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly src/core/GPUTarget.cpp clang: error: the clang compiler does not support '-march=armv7-a' clang: error: the clang compiler does not support '-march=armv7-a' g++ -o /Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/HOGInfo.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=gnu++11 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wlogical-op -Wnoexcept -Wstrict-null-sentinel -march=armv7-a -mthumb -mfpu=neon -mfloat-abi=hard -Wno-ignored-attributes -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -Werror -Wno-psabi -O3 -fPIC -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_VERSION_MAJOR=21 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. 
-I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core -Isrc/core -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly src/core/HOGInfo.cpp clang: error: the clang compiler does not support '-march=armv7-a' g++ -o /Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/Helpers.o -c -Wall -DARCH_ARM -Wextra -pedantic -Wdisabled-optimization -Wformat=2 -Winit-self -Wstrict-overflow=2 -Wswitch-default -std=gnu++11 -Woverloaded-virtual -Wformat-security -Wctor-dtor-privacy -Wsign-promo -Weffc++ -Wno-overlength-strings -Wlogical-op -Wnoexcept -Wstrict-null-sentinel -march=armv7-a -mthumb -mfpu=neon -mfloat-abi=hard -Wno-ignored-attributes -DENABLE_FP16_KERNELS -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -Werror -Wno-psabi -O3 -fPIC -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_CPP_SCHEDULER=1 -DARM_COMPUTE_VERSION_MAJOR=21 -DARM_COMPUTE_VERSION_MINOR=0 -DARM_COMPUTE_VERSION_PATCH=0 -Iinclude -I. 
-I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core -Isrc/core -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/common -Isrc/core/NEON/kernels/convolution/common -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/winograd -Isrc/core/NEON/kernels/convolution/winograd -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/convolution/depthwise -Isrc/core/NEON/kernels/convolution/depthwise -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/NEON/kernels/assembly -Isrc/core/NEON/kernels/assembly -I/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/arm_compute/core/NEON/kernels/assembly -Iarm_compute/core/NEON/kernels/assembly src/core/Helpers.cpp
scons: *** [/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/AccessWindowAutoPadding.o] Error 1
scons: *** [/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/AccessWindowTranspose.o] Error 1
scons: *** [/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/AccessWindowStatic.o] Error 1
scons: *** [/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/Error.o] Error 1
clang: error: the clang compiler does not support '-march=armv7-a'
scons: *** [/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/GPUTarget.o] Error 1
clang: error: the clang compiler does not support '-march=armv7-a'
scons: *** [/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/HOGInfo.o] Error 1
clang: error: the clang compiler does not support '-march=armv7-a'
scons: *** [/Users/raymondlo/Documents/openvino_contrib/modules/arm_plugin/build/thirdparty/src/core/Helpers.o] Error 1
scons: building terminated because of errors.
make[2]: *** [thirdparty/libarm_compute-static.a] Error 2
make[1]: *** [thirdparty/CMakeFiles/arm_compute_static_libs.dir/all] Error 2
make: *** [all] Error 2

Mask-RCNN outputs

Hey,

I've managed to convert the Mask-RCNN, but I am having some issues understanding the outputs as they differ from the torchvision model.

Current outputs are: ['DetectionOutput_647', 'Sigmoid_733', 'TopK_.0']. With box_detections_per_img=25 and rpn_post_nms_top_n_test=100, their shapes are (1, 1, 25, 7), (25, 91, 28, 28), and (100,), respectively.

As far as I could understand, the DetectionOutput_647 contains box_detections_per_img boxes, where the first element of each row is either 0 or -1 (-1 indicating the end of the detected boxes, which can come sooner depending on the NMS thresholds, I assume). So, one row is:

[_, class_number, probability, x1, y1, x2, y2]

The Sigmoid_733 then contains a 28x28 mask for the ROI of each of the 25 boxes? And the 91 dimension corresponds to the class label?

And what does the TopK_.0 tell us?

Thanks!
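If the layout described above holds, the DetectionOutput blob can be decoded along these lines. This is a sketch under the stated assumptions; `decode_detections`, the toy blob, and the mask-selection step are illustrative, not taken from the model:

```python
import numpy as np

def decode_detections(det, score_threshold=0.5):
    """det: DetectionOutput blob of shape (1, 1, N, 7),
    rows laid out as [image_id, class_id, score, x1, y1, x2, y2]."""
    results = []
    for image_id, class_id, score, x1, y1, x2, y2 in det.reshape(-1, 7):
        if image_id == -1:  # assumed end-of-detections marker
            break
        if score >= score_threshold:
            results.append((int(class_id), float(score), (x1, y1, x2, y2)))
    return results

# Toy blob: two candidate rows, then the -1 end marker.
blob = np.zeros((1, 1, 3, 7), dtype=np.float32)
blob[0, 0, 0] = [0, 1, 0.9, 0.1, 0.1, 0.5, 0.5]
blob[0, 0, 1] = [0, 7, 0.3, 0.2, 0.2, 0.6, 0.6]
blob[0, 0, 2, 0] = -1
detections = decode_detections(blob)
print(len(detections))  # 1: only the class-1 box passes the threshold

# If Sigmoid_733 holds one 28x28 mask per (box, class) pair, the mask for
# detection i would be selected by its class id:
masks = np.zeros((3, 91, 28, 28), dtype=np.float32)
mask_for_first = masks[0, detections[0][0]]
print(mask_for_first.shape)  # (28, 28)
```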

Converting Mask-RCNN model trained on custom dataset to FP16 format gives error during inference

Hey there,

I tried to convert the model mentioned in #271 to FP16 format and run inference with it. Converting the model to FP16 seems to work, but unfortunately the Inference Engine returns an error when it tries to load the model:

RuntimeError: Check 'element::Type::merge(element_type, element_type, node->get_input_element_type(i))' failed at core/src/op/util/elementwise_args.cpp:22:
While validating node 'v1::Multiply Multiply_1288 (StridedSlice_562[0]:f16{1000,4}, Mul_565/mul[0]:f32{1,4}) -> ()' with friendly_name 'Multiply_1288':
Argument element types are inconsistent.
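For context on why this only surfaces at load time: NumPy-style frameworks silently promote mixed precisions, whereas the Inference Engine's elementwise validator requires both Multiply inputs to already share one element type. A minimal illustration, with NumPy standing in for the graph (this is not OpenVINO code):

```python
import numpy as np

# Shapes mirror the error message: an f16 {1000, 4} tensor and an f32 {1, 4} constant.
a = np.ones((1000, 4), dtype=np.float16)
b = np.ones((1, 4), dtype=np.float32)

# NumPy silently promotes f16 * f32 to f32, so the mismatch goes unnoticed here.
print((a * b).dtype)  # float32

# OpenVINO instead rejects mismatched inputs; in IR terms the fix is an explicit
# Convert on one input, i.e. a cast before the multiply:
print((a.astype(np.float32) * b).dtype)  # float32
```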

There are numerous warnings when exporting the model to FP16 format, especially one related to the operation Mul_565, which is also mentioned in the Inference Engine error message:

[ WARNING ]  Changing Const node 'Mul_565/mul' data type from int64 to <class 'numpy.float32'> for Elementwise operation

I have applied all workarounds and fixes mentioned in #271.
For reference, I attach the full logs of the command line output from exporting the model and from running inference:

Export log: log_export.txt
Inference log: log_inference.txt

Thanks and best regards,

Jens

'OpenVINOTensor' object has no attribute 'is_floating_point'

Hey,
I am getting this error:

Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'mo_extensions.load.pytorch.loader.PyTorchLoader'>)": 'OpenVINOTensor' object has no attribute 'is_floating_point'

when trying to run the following code:

import torchvision.models as models
import torch

# Create model
model = models.detection.mask_rcnn.maskrcnn_resnet50_fpn(pretrained=True,box_detections_per_img=25, rpn_post_nms_top_n_test=200)
model.eval()
print(model(torch.rand([1, 3, 800, 800])))

# Convert to OpenVINO IR
import mo_pytorch
mo_pytorch.convert(model, input_shape=[1, 3, 800, 800], model_name='maskrcnn_resnet50_fpn')

I am using the latest 2021.4.2 LTS version of OpenVINO and I've added the mo_pytorch module to the PYTHONPATH.

model conversion failed in mo_pytorch.py for fasterrcnn_resnet50_fpn

I've trained a model based on Mask-RCNN with a ResNet50 backbone but on a custom dataset, similar to #271. I've attempted to follow the example in https://github.com/openvinotoolkit/openvino_contrib/modules to convert this directly into an OpenVINO model. My code looks like this:

import torch

# Create model
model = torch.jit.load('model.pt')
model.eval()

# Convert to OpenVINO IR
import mo_pytorch
mo_pytorch.convert(model, input_shape=[1, 3, 800, 800], model_name='Mask_RCNN',
                   scale=255,
                   reverse_input_channels=True)

Originally I tried this with the latest versions, PyTorch 1.11.0+cu102 and TorchVision 0.12.0+cu102, but once I encountered this error I switched to PyTorch 1.9.0+cu102 and TorchVision 0.10.0+cu102, since others implied this was the "tested" combination. With either version, when I run this code I get the model conversion failed error with the following debug information:

        - Path for generated IR:        /home/cameron/convert_to_vino/.
        - IR output name:       Mask_RCNN
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1, 3, 800, 800]
        - Source layout:        Not specified
        - Target layout:        Not specified
        - Layout:       Not specified
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         255.0
        - Precision of IR:      FP32
        - Enable fusing:        True
        - User transformations:         Not specified
        - Reverse input channels:       True
        - Enable IR generation for fixed input shape:   False
        - Use the transformations config file:  None
Advanced parameters:
        - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR:   False
        - Force the usage of new Frontend of Model Optimizer for model conversion into IR:      False
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID (<class 'mo_extensions.load.pytorch.loader.PyTorchLoader'>)": no implementation found for '.forward' on types that implement __torch_function__: [OpenVINOTensor]
[ ERROR ]  Traceback (most recent call last):
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 278, in apply_transform
    for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/middle/pattern_match.py", line 46, in for_graph_and_each_sub_graph_recursively
    func(graph)
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/load/loader.py", line 14, in find_and_replace_pattern
    self.load(graph)
  File "/home/cameron/openvino_contrib/modules/mo_pytorch/mo_extensions/load/pytorch/loader.py", line 96, in load
    model(inputs['input'])
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: no implementation found for '.forward' on types that implement __torch_function__: [OpenVINOTensor]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/main.py", line 533, in main
    ret_code = driver(argv)
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/main.py", line 489, in driver
    graph, ngraph_function = prepare_ir(argv)
  File "/home/cameron/openvino_contrib/modules/mo_pytorch/mo_pytorch.py", line 99, in _prepare_ir
    graph = unified_pipeline(argv)
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/pipeline/unified.py", line 13, in unified_pipeline
    class_registration.apply_replacements(graph, [
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 328, in apply_replacements
    apply_replacements_list(graph, replacers_order)
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 314, in apply_replacements_list
    apply_transform(
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/utils/logger.py", line 112, in wrapper
    function(*args, **kwargs)
  File "/home/cameron/convert_to_vino/openvino_env/lib/python3.8/site-packages/openvino/tools/mo/utils/class_registration.py", line 302, in apply_transform
    raise Exception('Exception occurred during running replacer "{} ({})": {}'.format(
Exception: Exception occurred during running replacer "REPLACEMENT_ID (<class 'mo_extensions.load.pytorch.loader.PyTorchLoader'>)": no implementation found for '.forward' on types that implement __torch_function__: [OpenVINOTensor]

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------

I've tried loading the model and running a prediction, and it behaves as expected, so I do not believe anything is wrong with the model itself. Since there is no example or details beyond the couple of lines of code in the readme, I'm not certain which torchvision version I should be targeting or which OpenVINO version to choose. The OpenVINO version I'm targeting is 2022.1.0.643.

arm_plugin compile error

/usr/bin/ld: ../thirdparty/libarm_compute-static.a(CPPScheduler.o): in function arm_compute::(anonymous namespace)::Thread::Thread(int)': CPPScheduler.cpp:(.text._ZN11arm_compute12_GLOBAL__N_16ThreadC2Ei+0x78): undefined reference to pthread_create'
/usr/bin/ld: CPPScheduler.cpp:(.text._ZN11arm_compute12_GLOBAL__N_16ThreadC2Ei+0x88): undefined reference to `pthread_create'
collect2: error: ld returned 1 exit status

Can anyone help? Thanks in advance.

softmax with opset8 is not supported

Hi, I'm trying to convert a Keras model which has a Conv2D + Softmax layer at the output.
It works fine on an Intel CPU, but I get an error on an arm64 platform:

[02:42:00.6018]I[plugin.cpp:346] [AUTOPLUGIN]:device:CPU, priority:0
[02:42:00.6021]I[executable_network.cpp:163] [AUTOPLUGIN]ExecutableNetwork start
[02:42:00.6026]I[executable_network.cpp:183] [AUTOPLUGIN]:select device:CPU
Arm Plugin: Nodes from model_1 are not supported by plugin:
	conv2d_25/truediv:0 (Softmax.0)

Version:

  • releases/2022/1
  • Dockerfile.RPi64_focal

Model:

Unable to cross-compile ARM plugin for RPI Buster - `fail 12 OpenVINO build failed`

Hi 👋 I'm trying to cross-compile the ARM CPU plugin for Raspberry Pi Buster 32-bit using this tutorial: https://github.com/openvinotoolkit/openvino_contrib/wiki/How-to-build-ARM-CPU-plugin#approach-1-build-opencv-openvino-and-the-plugin-using-pre-configured-dockerfile-cross-compiling-the-preferred-way.

The compilation got through the OpenCV successfully but stopped at OpenVino with the error:
make: *** [Makefile:163: all] Error 2

Last lines of the output:

cc1plus: warning: unrecognized command line option '-Wno-pessimizing-move'
cc1plus: warning: unrecognized command line option '-Wno-redundant-move'
arm-linux-gnueabihf-g++ -o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/libarm_compute_graph.so -Wl,--no-undefined -shared /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/DataLayerVisitor.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/Graph.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/GraphBuilder.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/GraphContext.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/GraphManager.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/INode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/INodeVisitor.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/PassManager.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/Tensor.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/TypeLoader.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/Utils.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/Workload.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/algorithms/TopologicalSort.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/backends/BackendRegistry.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/detail/CrossLayerMemoryManagerHelpers.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/detail/ExecutionHelpers.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/frontend/Stream.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/frontend/SubStream.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/DepthConcatSubTensorMutator.os 
/arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/GroupedConvolutionMutator.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/InPlaceOperationMutator.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/MutatorUtils.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/NodeExecutionMethodMutator.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/NodeFusionMutator.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/SplitLayerSubTensorMutator.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/mutators/SyntheticDataTypeMutator.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ActivationLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ArgMinMaxLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/BatchNormalizationLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/BoundingBoxTransformLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ChannelShuffleLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ConcatenateLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ConstNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ConvolutionLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/DeconvolutionLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/DepthToSpaceLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/DepthwiseConvolutionLayerNode.os 
/arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/DequantizationLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/DetectionOutputLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/DetectionPostProcessLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/DummyNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/EltwiseLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/FlattenLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/FullyConnectedLayer.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/FusedConvolutionBatchNormalizationNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/FusedConvolutionBatchNormalizationWithPostOpsNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/FusedConvolutionWithPostOpNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/FusedDepthwiseConvolutionBatchNormalizationNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/GenerateProposalsLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/InputNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/L2NormalizeLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/NormalizationLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/NormalizePlanarYUVLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/OutputNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/PReluLayerNode.os 
/arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/PadLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/PermuteLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/PoolingLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/PrintLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/PriorBoxLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/QuantizationLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ROIAlignLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ReductionLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ReorgLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ReshapeLayer.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/ResizeLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/SliceLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/SoftmaxLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/SplitLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/StackLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/nodes/StridedSliceLayerNode.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/printers/DotGraphPrinter.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/backends/NEON/NEDeviceBackend.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/backends/NEON/NEFunctionFactory.os 
/arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/backends/NEON/NENodeValidator.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/backends/NEON/NESubTensorHandle.os /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/src/graph/backends/NEON/NETensorHandle.os -L/arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty -L. -lpthread -ldl -larm_compute
arm-linux-gnueabihf-ar rc /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/libarm_compute_test_framework.a /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/DatasetModes.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/Exceptions.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/Framework.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/ParametersLibrary.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/Profiler.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/TestFilter.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/Utils.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/command_line/CommonOptions.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/printers/JSONPrinter.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/printers/PrettyPrinter.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/printers/Printer.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/printers/Printers.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/instruments/Instruments.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/instruments/InstrumentsStats.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/instruments/SchedulerTimer.o /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/instruments/WallClockTimer.o
arm-linux-gnueabihf-ranlib /arm_cpu_plugin/openvino/build/build-modules/arm_plugin/thirdparty/tests/framework/libarm_compute_test_framework.a
scons: done building targets.
[ 36%] Built target arm_compute_static_libs
make: *** [Makefile:163: all] Error 2
+ fail 12 OpenVINO build failed. Stopping
+ [ 2 -lt 2 ]
+ retval=12
+ shift
+ echo OpenVINO build failed. Stopping
OpenVINO build failed. Stopping
+ exit 12

openvino_contrib branch: master
openvino_contrib commit: 663dce6


Thanks in advance!

Avoid original PyTorch calls when converting the model

At this moment, model conversion registers functional callbacks to create a model:

@implements(torch.addmm)
def function_hook(bias, mat1, mat2):

    class ADDMM(nn.Module):
        def __init__(self, weight, bias):
            super().__init__()
            self.register_buffer('weight', weight)
            self.register_buffer('bias', bias)

    output = torch.addmm(bias, mat1.tensor(), mat2)  # Here we call an original PyTorch method
    return forward_hook(ADDMM(mat2, bias), (mat1,), output)

However, this requires extra time and memory, especially with large input resolutions.

The task is to remove the original PyTorch method calls. It will probably require more graceful management of Reshape/Resize and other shape-based methods.
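As an illustrative sketch of the direction (the helper name is mine, not part of the existing mo_pytorch code): instead of executing torch.addmm to obtain a real output tensor, the callback could derive only the output shape from the input shapes:

```python
# Hypothetical helper: infer the output shape of addmm without running it.
# addmm computes bias + mat1 @ mat2; the bias broadcasts against the
# (n, p) matmul result, so only mat1/mat2 determine the output shape.

def addmm_output_shape(mat1_shape, mat2_shape):
    n, k = mat1_shape
    k2, p = mat2_shape
    if k != k2:
        raise ValueError(f"inner dimensions differ: {k} vs {k2}")
    return (n, p)

print(addmm_output_shape((8, 16), (16, 32)))  # (8, 32)
```

With shape inference like this, the conversion graph can carry symbolic tensors (shape and dtype only) instead of materialized data.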

[MO] Investigate failed tests

RuntimeError: Check 'scales_et == element::f32 || scales_et == element::f16 || scales_et == element::bf16' failed at core/src/op/interpolate.cpp:245:
While validating node 'v4::Interpolate Interpolate_4172 (Conv2d_496[0]:f32{1,256,10,10}, 6167[0]:i64{2}, Upsample_786/scales6164_const[0]:i64{2}, Upsample_786/axis6162_const[0]:i64{2}) -> ()' with friendly_name 'Interpolate_4172':
Scales element type must be f32, f16 or bf16

[Java] Fix failed test

InferRequestTests > testGetPerformanceCounts FAILED
    java.lang.AssertionError: Map size expected:<10> but was:<22>
        at org.junit.Assert.fail(Assert.java:89)
        at org.junit.Assert.failNotEquals(Assert.java:835)
        at org.junit.Assert.assertEquals(Assert.java:647)
        at InferRequestTests.testGetPerformanceCounts(InferRequestTests.java:57)

source: https://dev.azure.com/openvinoci/dldt/_build/results?buildId=87114&view=logs&j=0609cd82-f9f7-5e70-371c-89fca78031ab&t=0971f6e8-539b-5170-a800-5a6fa6f3c74b

Use static library with onnxruntime build

I have built static library of OpenVino using the following command:

cmake -DENABLE_IR_V7_READER=OFF -DENABLE_ONEDNN_FOR_GPU=OFF -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=../cmake/toolchains/mt.runtime.win32.toolchain.cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_BUILD_TYPE=Release ..
and

cmake --build . --target openvino --config Release -j12

I have the output libs in <openvino_src_dir>\bin\intel64\Release

Now I need to build onnxruntime with this output, but I am unable to find any steps. It looks like I need to run setupvars.bat, but the output folder apparently needs to follow some structure, and I am not sure which files need to be in it.

The steps for building onnxruntime with OpenVINO are described here:
https://onnxruntime.ai/docs/execution-providers/OpenVINO-ExecutionProvider.html
but they don't cover a custom OpenVINO build.

Can I get some help here?

Demo for Java API v2.0?

Hi, thanks for the great work on the Java API. I am able to follow the examples in compatibility mode, but since it is deprecated, could you also provide some examples using API 2.0? Right now I only see some simple usages under the unit test folder.

[arm plugin] "make install" does not copy libarmPlugin.so from build directory bin/<arch>/Release/lib/ to install target directory $CMAKE_INSTALL_PREFIX/deployment_tools/inference_engine/lib/<arch>/

After building with "cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=somewhere <> -D IE_EXTRA_MODULES=../../openvino_contrib/modules .." and "make", "make install" does not copy libarmPlugin.so from the build directory bin/armv7l/Release/lib/ to the install target directory somewhere/deployment_tools/inference_engine/lib/armv7l/.

So https://storage.openvinotoolkit.org/repositories/openvino/packages/2021.4/l_openvino_toolkit_runtime_raspbian_p_2021.4.582.tgz does not include libarmPlugin.so, in spite of the "plugin name="CPU" location="libarmPlugin.so"" definition in plugin.xml.

Regards,

arm_plugin OpenCL support?

I have an ARM target where OpenCL exposes a GPU, and I wonder what the shortest way of using it is, so that I can run my demos faster even without NCS2 (MYRIAD) devices.
Currently arm_plugin (CPU) is the only option, but it is very slow compared to NCS2.

arm_plugin uses the ARM Compute Library, and ACL can be configured to use OpenCL (via the opencl=1 flag to its scons build command), but that does not mean arm_plugin takes advantage of OpenCL.
As far as I can see, using OpenCL would require modifying the arm_plugin code?

What puzzles me now is whether it would take a lot of effort to make it use OpenCL, and whether it is worth it.
(If it can be done, how?)

Alternatively, I can see the intel_gpu plugin uses OpenCL (with Intel platform-specific extensions). If I port it to ARM, is my goal (using OpenCL for inference on ARM) achieved?
(If it ever can be, how, and with how much effort?)

If anyone has suggestions or comments, I would really appreciate them.

Also, if there is any alternative to these options (fixing arm_plugin or porting intel_gpu), I would like to hear what it is.

Try dynamism for Mask R-CNN

The efficiency of the entire model depends on the number of proposals and detections selected at the RPN. Currently, we work with a fixed number of boxes (1000 proposals, 100 detections by default). Check whether the dynamic output from DetectionOutput, which is supported since openvinotoolkit/openvino#8576, improves efficiency.

CMake Error: The source directory "/arm_cpu_plugin/opencv" does not exist.

Hi!
I was trying to run inference on an ARM CPU, following the steps of Building ARM CPU plugin with LINUX.

I successfully built the Docker image, but when I try to build the plugin in the Docker container with docker container run --rm -ti -v $PWD/build:/arm_cpu_plugin arm-plugin

I faced this error message:

-- Installing: /arm_cpu_plugin/oneTBB/build/install/lib/libtbbmalloc_proxy.so.2
-- Set runtime path of "/arm_cpu_plugin/oneTBB/build/install/lib/libtbbmalloc_proxy.so.2.8" to ""
-- Installing: /arm_cpu_plugin/oneTBB/build/install/lib/libtbbmalloc_proxy.so
+ cd /arm_cpu_plugin
+ export TBB_DIR=/arm_cpu_plugin/oneTBB/build/install/lib/cmake/TBB/
+ [ ON = ON ]
+ cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_LIST=imgcodecs,videoio,highgui,gapi,python3 -DCMAKE_INSTALL_PREFIX=/arm_cpu_plugin/armcpu_package/extras/opencv -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=ON -DOPENCV_SKIP_PYTHON_LOADER=OFF -DPYTHON3_LIMITED_API=ON -DPYTHON3_EXECUTABLE=/opt/cross_venv/cross/bin/python3 -DPYTHON3_INCLUDE_PATH=/opt/python3.8_arm/include/python3.8 -DPYTHON3_LIBRARIES=/opt/python3.8_arm/lib/libpython3.8.so -DPYTHON3_NUMPY_INCLUDE_DIRS=/opt/cross_venv/cross/lib/python3.8/site-packages/numpy/core/include -D CMAKE_USE_RELATIVE_PATHS=ON -D CMAKE_SKIP_INSTALL_RPATH=ON -D OPENCV_SKIP_PKGCONFIG_GENERATION=ON -D OPENCV_BIN_INSTALL_PATH=bin -D OPENCV_PYTHON3_INSTALL_PATH=python -D OPENCV_INCLUDE_INSTALL_PATH=include -D OPENCV_LIB_INSTALL_PATH=lib -D OPENCV_CONFIG_INSTALL_PATH=cmake -D OPENCV_3P_LIB_INSTALL_PATH=3rdparty -D OPENCV_SAMPLES_SRC_INSTALL_PATH=samples -D OPENCV_DOC_INSTALL_PATH=doc -D OPENCV_OTHER_INSTALL_PATH=etc -D OPENCV_LICENSES_INSTALL_PATH=etc/licenses -DWITH_GTK_2_X=OFF -DOPENCV_ENABLE_PKG_CONFIG=ON -S /arm_cpu_plugin/opencv -B /arm_cpu_plugin/opencv/build
CMake Error: The source directory "/arm_cpu_plugin/opencv" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
+ fail 11 OpenCV build failed. Stopping
+ [ 2 -lt 2 ]
+ retval=11
+ shift
+ echo OpenCV build failed. Stopping
OpenCV build failed. Stopping
+ exit 11

Does anyone know how to fix this error? It looks like the path "/arm_cpu_plugin/opencv" in the Docker container doesn't exist, but I'm not sure how to fix this.

Also, there are some packages stored in the mount directory, but I'm not sure whether I can use this OpenVINO build with Python.

Please let me know if you have a solution.

Thank you so much.

ImportError: libpython3.7m.so.1.0: cannot open shared object file

Hi!
I was trying to run inference on an ARM CPU, following the steps of Building ARM CPU plugin with LINUX.

However, when I try to build a Docker image with docker image build -t arm-plugin -f dockerfiles/Dockerfile.RPi64_focal, I faced the following error:

Error

        esac; \
         _PYTHON_PROJECT_BASE=/Python-3.7.9 _PYTHON_HOST_PLATFORM=linux-aarch64 PYTHONPATH=/Python-3.7.9/build/lib.linux-aarch64-3.7:./Lib _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_m_linux_aarch64-linux-gnu python3.7 -m ensurepip \
                $ensurepip --root=/ ; \
fi
Traceback (most recent call last):
  File "/Python-3.7.9/Lib/runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/Python-3.7.9/Lib/runpy.py", line 142, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "/Python-3.7.9/Lib/runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "/Python-3.7.9/Lib/ensurepip/__init__.py", line 6, in <module>
    import tempfile
  File "/Python-3.7.9/Lib/tempfile.py", line 45, in <module>
    from random import Random as _Random
  File "/Python-3.7.9/Lib/random.py", line 42, in <module>
    from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil
ImportError: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory
make: *** [Makefile:1151: altinstall] Error 1

My environment is

Raspberry PI 4G
Ubuntu20.04 (64bit)
Docker version 20.10.18, build b40c2f6

Cause

It seems that Python-3.7.9.tar.xz doesn't include libpython3.7m.so.1.0 in the first place.
(https://www.python.org/ftp/python/3.7.9/Python-3.7.9.tar.xz)
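As a side note (my own debugging aid, not part of the Dockerfile): CPython only produces libpython as a shared object when it is configured with --enable-shared, which may explain why the library is missing. You can check whether a given interpreter was built that way:

```python
# Check whether this CPython build produced a shared libpython.
# Py_ENABLE_SHARED is 1 when Python was configured with --enable-shared, else 0.
import sysconfig

shared = sysconfig.get_config_var("Py_ENABLE_SHARED")
# LDLIBRARY names the produced library, e.g. libpython3.7m.so.1.0 or libpython3.7m.a
ldlib = sysconfig.get_config_var("LDLIBRARY")
print(shared, ldlib)
```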

Questions

How can I install libpython3.7m.so.1.0 manually?

Please help me if you know any solutions.

Thanks.

Converted Mask-RCNN model trained on custom dataset does not return any meaningful detections

Hey there,

based on the Mask-RCNN example provided by @dkurt, I tried to convert my own model, which is based on Mask-RCNN with a ResNet50 backbone but trained on a custom dataset. Converting the model to IR works, but when running inference the model doesn't return any meaningful detections.

I'm working with OpenVINO 2021.4 on Ubuntu 20.04 (x64), torch 1.10.0, torchvision 0.11.1, mo_pytorch commit 4de1104 (as suggested by @dkurt because of OpenVINO 2021.4).

I modified the Mask-RCNN example in the following way:

inst_classes = [
    "__background__",
    "aisle post",
    "palette"
] # The model is trained on only 2 classes + background

[...]

# Load origin model
model = models.detection.maskrcnn_resnet50_fpn(pretrained=False, num_classes=len(inst_classes))
model.load_state_dict(torch.load("aisle_post_detector_maskrcnn_resnet50.pth"))
model.eval()

When I remove the probability threshold filter in the example code and print all detections, the model returns the following:

[0.         2.         0.09952605 0.         0.12424597 0.47801715 1.        ]
[-1.  0.  0.  0.  0.  0.  0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[...]

Converting the same model to ONNX and running inference with the ONNX Runtime on the same input image returns three detections: one for class "aisle post" and two for class "palette", each with probability > 0.9.

For reference here are some additional files:

Mask-RCNN example command line output, default model: out_default_model.txt
Mask-RCNN example command line output, custom trained model: out_own_model.txt
mo_pytorch conversion output (XML), default model: Mask_RCNN_default_model.xml.txt
mo_pytorch conversion output (XML), custom trained model: Mask_RCNN_own_model.xml.txt

When comparing these files, lots of changes show up, especially in the XML files, but I don't understand all the effects these changes might have.

Any help is very appreciated! Let me know if you need any additional information...

Best regards,

Jens

[Java API 2.0]GetCore: Failed to find plugins.xml file in arm Android application.

Hello, when I use the Java API 2.0 new Core() in an Android application, the application crashes and reports that it can't find the plugins.xml file.

Does that mean I need to place the plugins.xml file at a specific path?

E/AndroidRuntime: FATAL EXCEPTION: main
    Process: org.intel.openvino, PID: 3283
    java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:558)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1003)
     Caused by: java.lang.reflect.InvocationTargetException
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:548)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1003)
     Caused by: java.lang.Exception: 
    InferenceEngineException: 
    	GetCore: Failed to find plugins.xml file
        at org.intel.openvino.Core.GetCore(Native Method)
        at org.intel.openvino.Core.<init>(Core.java:20)
        at org.intel.openvinodemo.MainActivity.processNetwork(MainActivity.java:107)
        at org.intel.openvinodemo.MainActivity.onRequestPermissionsResult(MainActivity.java:196)
        at android.app.Activity.requestPermissions(Activity.java:5297)
        at org.opencv.android.CameraActivity.onStart(CameraActivity.java:42)
        at android.app.Instrumentation.callActivityOnStart(Instrumentation.java:1467)
        at android.app.Activity.performStart(Activity.java:8079)
        at android.app.ActivityThread.handleStartActivity(ActivityThread.java:3666)
        at android.app.servertransaction.TransactionExecutor.performLifecycleSequence(TransactionExecutor.java:221)
        at android.app.servertransaction.TransactionExecutor.cycleToPath(TransactionExecutor.java:201)
        at android.app.servertransaction.TransactionExecutor.executeLifecycleState(TransactionExecutor.java:173)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:97)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2212)
        at android.os.Handler.dispatchMessage(Handler.java:106)
        at android.os.Looper.loopOnce(Looper.java:201)
        at android.os.Looper.loop(Looper.java:288)
        at android.app.ActivityThread.main(ActivityThread.java:7798)

build arm_plugin error on arm64 platform

I need to run OpenVINO on an arm64 platform, so I need to build OpenVINO and the arm_plugin module of openvino_contrib, but I ran into an issue while building it. Can anyone help me?

make[2]: *** No rule to make target '/home/duke/openvino_contrib-master/modules/arm_plugin/thirdparty/ComputeLibrary/SConstruct', needed by 'inference-engine/build-modules/arm_plugin/thirdparty/libarm_compute-static.a'. Stop.
CMakeFiles/Makefile2:5951: recipe for target 'inference-engine/build-modules/arm_plugin/thirdparty/CMakeFiles/arm_compute_static_libs.dir/all' failed
make[1]: *** [inference-engine/build-modules/arm_plugin/thirdparty/CMakeFiles/arm_compute_static_libs.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....


I configured with the command:
sudo cmake -DIE_EXTRA_MODULES=/home/duke/openvino_contrib-master/modules -DBUILD_java_api=OFF -DCMAKE_INSTALL_PREFIX=/opt/intel ..

Use arm with python

Hi! I want to build and use OpenVINO on macOS with an M1 CPU, so I followed these instructions: https://github.com/openvinotoolkit/openvino_contrib/wiki/How-to-build-ARM-CPU-plugin. After building, I run setupvars.sh in the install_ie folder, but it cannot find Python:

[setupvars.sh] WARNING: Can not find OpenVINO Python binaries by path /Users/rizhiy/Downloads/openvino_builds/install_ie/python
[setupvars.sh] WARNING: OpenVINO Python environment does not set properly
[setupvars.sh] OpenVINO environment initialized

Can anyone help with this?

[ARM plugin] Unsupported Slice configuration

Wav2Vec is an audio recognition model. The IR was generated from the PyTorch model at https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting

xml: https://drive.google.com/file/d/1398EKT21lldoYABaPzo9P6Okj7NaY8Ov/view?usp=sharing
bin: https://drive.google.com/file/d/1hshvJGYmn1q714T7JSpVUHwdaXPqDPiX/view?usp=sharing

CreateInferRequest: Arm Plugin: Nodes from torch-jit-export are not supported by plugin:
    	315 (Slice.0)

It seems to me that the problem is with a Slice layer.

<layer id="152" name="315" type="Slice" version="opset8">
	<input>
		<port id="0" precision="FP32">
			<dim>1</dim>
			<dim>768</dim>
			<dim>50</dim>
		</port>
		<port id="1" precision="I64">
			<dim>1</dim>
		</port>
		<port id="2" precision="I64">
			<dim>1</dim>
		</port>
		<port id="3" precision="I64">
			<dim>1</dim>
		</port>
		<port id="4" precision="I64">
			<dim>1</dim>
		</port>
	</input>
	<output>
		<port id="5" precision="FP32" names="315">
			<dim>1</dim>
			<dim>768</dim>
			<dim>49</dim>
		</port>
	</output>
</layer>
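For reference, this opset8 Slice takes a data tensor plus start/stop/step/axes inputs and trims the last dimension from 50 to 49. Assuming the common PyTorch export pattern x[:, :, :-1] (i.e. start=0, stop=49, step=1 on axis 2 — an assumption, since the constant inputs are not shown), its effect is equivalent to this plain-Python sketch:

```python
def slice_last_axis(x, start, stop, step=1):
    """Apply a 1-D slice along the last axis of a 3-D nested list."""
    return [[row[start:stop:step] for row in channel] for channel in x]

# Shape (1, 768, 50) scaled down to (1, 4, 5) for readability.
x = [[list(range(5)) for _ in range(4)]]
y = slice_last_axis(x, 0, 4)  # drops the last element of the innermost axis
print(len(y), len(y[0]), len(y[0][0]))  # -> 1 4 4
```

The computation itself is trivial; the plugin error is about the Slice configuration (which start/stop/step/axes combinations the ARM plugin can lower), not about the semantics.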

Fix failed Java tests

https://dev.azure.com/openvinoci/dldt/_build/results?buildId=307827&view=logs&j=0609cd82-f9f7-5e70-371c-89fca78031ab&t=73ab56eb-fe05-59b6-aeb0-ce6e8e3090a3

tests.CoreTests > testReadModelIncorrectBinPath STANDARD_OUT
    testReadModelIncorrectBinPath(tests.CoreTests)- [CPU] - FAILED

tests.CoreTests > testReadModelIncorrectBinPath FAILED
    java.lang.AssertionError
        at org.junit.Assert.fail(Assert.java:87)
        at org.junit.Assert.assertTrue(Assert.java:42)
        at org.junit.Assert.assertTrue(Assert.java:53)
        at tests.CoreTests.testReadModelIncorrectBinPath(CoreTests.java:34)
tests.compatibility.IECoreTests > testReadNetworkIncorrectBinPath STANDARD_OUT
    testReadNetworkIncorrectBinPath(tests.compatibility.IECoreTests)- [CPU] - FAILED

tests.compatibility.IECoreTests > testReadNetworkIncorrectBinPath FAILED
    java.lang.AssertionError
        at org.junit.Assert.fail(Assert.java:87)
        at org.junit.Assert.assertTrue(Assert.java:42)
        at org.junit.Assert.assertTrue(Assert.java:53)
        at tests.compatibility.IECoreTests.testReadNetworkIncorrectBinPath(IECoreTests.java:45)

openvino build fails for python onnx

I followed the recommended (cross-compiling) build guide, and the Docker image is created successfully on a Raspberry Pi 4 (Raspbian Buster OS). However, when I run

mkdir build
docker container run --rm -ti -v $PWD/build:/arm_cpu_plugin arm-plugin

I get the following error while building OpenVINO:

ERROR: Command errored out with exit status 1:
 command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ia0575zc/onnx/setup.py'"'"'; __file__='"'"'/tmp/pip-install-ia0575zc/onnx/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-i5x424zi/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7m/onnx
     cwd: /tmp/pip-install-ia0575zc/onnx/
Complete output (132 lines):
fatal: not a git repository (or any of the parent directories): .git
running install
running build
running build_py
running create_version
running cmake_build
Using cmake args: ['/usr/local/bin/cmake', '-DPYTHON_INCLUDE_DIR=/usr/local/include/python3.7m', '-DPYTHON_EXECUTABLE=/usr/bin/python3', '-DBUILD_ONNX_PYTHON=ON', '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON', '-DONNX_NAMESPACE=onnx', '-DPY_EXT_SUFFIX=.cpython-37m-arm-linux-gnueabihf.so', '-DCMAKE_BUILD_TYPE=Release', '-DONNX_ML=1', '/tmp/pip-install-ia0575zc/onnx']
Generated: /tmp/pip-install-ia0575zc/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto
Generated: /tmp/pip-install-ia0575zc/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Generated: /tmp/pip-install-ia0575zc/onnx/.setuptools-cmake-build/onnx/onnx-data.proto
CMake Warning at CMakeLists.txt:451 (find_package):
  By not providing "Findpybind11.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "pybind11",
  but CMake did not find one.

  Could not find a package configuration file provided by "pybind11"
  (requested version 2.2) with any of the following names:

    pybind11Config.cmake
    pybind11-config.cmake

  Add the installation prefix of "pybind11" to CMAKE_PREFIX_PATH or set
  "pybind11_DIR" to a directory containing one of the above files.  If
  "pybind11" provides a separate development package or SDK, be sure it has
  been installed.


--
-- ******** Summary ********
--   CMake version             : 3.13.4
--   CMake command             : /usr/local/bin/cmake
--   System                    : Linux
--   C++ compiler              : /usr/bin/c++
--   C++ compiler version      : 8.3.0
--   CXX flags                 :  -Wnon-virtual-dtor
--   Build type                : Release
--   Compile definitions       : __STDC_FORMAT_MACROS
--   CMAKE_PREFIX_PATH         :
--   CMAKE_INSTALL_PREFIX      : /usr/local
--   CMAKE_MODULE_PATH         :
--
--   ONNX version              : 1.11.0
--   ONNX NAMESPACE            : onnx
--   ONNX_USE_LITE_PROTO       : OFF
--   USE_PROTOBUF_SHARED_LIBS  : OFF
--   Protobuf_USE_STATIC_LIBS  : ON
--   ONNX_DISABLE_EXCEPTIONS   : OFF
--   ONNX_WERROR               : OFF
--   ONNX_BUILD_TESTS          : OFF
--   ONNX_BUILD_BENCHMARKS     : OFF
--   ONNXIFI_DUMMY_BACKEND     : OFF
--   ONNXIFI_ENABLE_EXT        : OFF
--
--   Protobuf compiler         : /usr/bin/protoc
--   Protobuf includes         : /usr/include
--   Protobuf libraries        : /usr/lib/arm-linux-gnueabihf/libprotobuf.a;-lpthread
--   BUILD_ONNX_PYTHON         : ON
--     Python version        :
--     Python executable     : /usr/bin/python3
--     Python includes       : /usr/local/include/python3.7m
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/pip-install-ia0575zc/onnx/.setuptools-cmake-build
make[3]: warning: -j4 forced in submake: resetting jobserver mode.
[  2%] Built target onnxifi_dummy
[  5%] Built target onnxifi_loader
[  8%] Built target gen_onnx_proto
[ 11%] Built target onnxifi_wrapper
[ 14%] Built target gen_onnx_data_proto
[ 16%] Built target gen_onnx_operators_proto
[ 30%] Built target onnx_proto
[ 97%] Built target onnx
[ 98%] Linking CXX shared module onnx_cpp2py_export.cpython-37m-arm-linux-gnueabihf.so
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o)(.text+0x6bc): R_ARM_TLS_LE32 relocation not permitted in shared object
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o): in function `google::protobuf::internal::ArenaImpl::Init()':
(.text+0x6bc): dangerous relocation: unsupported relocation
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o)(.text+0x758): R_ARM_TLS_LE32 relocation not permitted in shared object
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o): in function `google::protobuf::internal::ArenaImpl::GetSerialArenaFallback(void*)':
(.text+0x758): dangerous relocation: unsupported relocation
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o)(.text+0x79c): R_ARM_TLS_LE32 relocation not permitted in shared object
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o): in function `google::protobuf::internal::ArenaImpl::GetSerialArena()':
(.text+0x79c): dangerous relocation: unsupported relocation
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o)(.text+0x96c): R_ARM_TLS_LE32 relocation not permitted in shared object
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o): in function `google::protobuf::internal::ArenaImpl::AllocateAligned(unsigned int)':
(.text+0x96c): dangerous relocation: unsupported relocation
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o)(.text+0xba8): R_ARM_TLS_LE32 relocation not permitted in shared object
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o): in function `google::protobuf::internal::ArenaImpl::AllocateAlignedAndAddCleanup(unsigned int, void (*)(void*))':
(.text+0xba8): dangerous relocation: unsupported relocation
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o)(.text+0xc40): R_ARM_TLS_LE32 relocation not permitted in shared object
/usr/bin/ld: /usr/lib/arm-linux-gnueabihf/libprotobuf.a(arena.o): in function `google::protobuf::internal::ArenaImpl::AddCleanup(void*, void (*)(void*))':
(.text+0xc40): dangerous relocation: unsupported relocation
collect2: error: ld returned 1 exit status
make[5]: *** [CMakeFiles/onnx_cpp2py_export.dir/build.make:88: onnx_cpp2py_export.cpython-37m-arm-linux-gnueabihf.so] Error 1
make[4]: *** [CMakeFiles/Makefile2:357: CMakeFiles/onnx_cpp2py_export.dir/all] Error 2
make[3]: *** [Makefile:130: all] Error 2
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-install-ia0575zc/onnx/setup.py", line 358, in <module>
    'backend-test-tools = onnx.backend.test.cmd_tools:main',
  File "/usr/local/lib/python3.7/site-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/usr/local/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/local/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.7/site-packages/setuptools/command/install.py", line 61, in run
    return orig.install.run(self)
  File "/usr/local/lib/python3.7/distutils/command/install.py", line 545, in run
    self.run_command('build')
  File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/local/lib/python3.7/distutils/command/build.py", line 135, in run
    self.run_command(cmd_name)
  File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/tmp/pip-install-ia0575zc/onnx/setup.py", line 232, in run
    self.run_command('cmake_build')
  File "/usr/local/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/local/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/tmp/pip-install-ia0575zc/onnx/setup.py", line 226, in run
    subprocess.check_call(build_args)
  File "/usr/local/lib/python3.7/subprocess.py", line 363, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '.', '--', '-j', '4']' returned non-zero exit status 2.
----------------------------------------

[ 16%] [cpplint] /arm_cpu_plugin/openvino/src/tests/ie_test_utils/common_test_utils/ngraph_test_utils.hpp
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ia0575zc/onnx/setup.py'"'"'; file='"'"'/tmp/pip-install-ia0575zc/onnx/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-i5x424zi/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7m/onnx Check the logs for full command output.

How can I solve this?

FPS on the Raspberry Pi 4

I finished the build process on Ubuntu 18.04 and ran object detection inference. The average inference latency is 328 ms, but on an Intel 2.6 GHz CPU it is around 20 ms. Is this normal? What can I do to improve it?
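When comparing latencies across machines it helps to measure both the same way, excluding warm-up iterations (the first inferences often include one-time setup costs). A minimal stdlib-only timing helper, with names of my own choosing, could look like:

```python
import time

def average_latency_ms(infer_fn, runs=100, warmup=10):
    """Average wall-clock latency of infer_fn in milliseconds, skipping warm-up."""
    for _ in range(warmup):        # warm-up iterations are excluded from timing
        infer_fn()
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn()
    return (time.perf_counter() - start) * 1000.0 / runs

# Dummy workload standing in for a real inference call:
latency = average_latency_ms(lambda: sum(range(10000)), runs=20)
print(f"{latency:.3f} ms")
```

A large remaining gap after fair measurement is expected to some degree: the Pi's Cortex-A72 cores are far slower than a desktop CPU, so quantization, smaller input resolution, or more inference streams are the usual levers.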

[ ERROR ] std::bad_cast in benchmark_app sample with arm plugin in 2021.4

It seems "std::bad_cast" occurs when calling ie.GetConfig(ds.first, key).as<std::string>() for the key value "CPU_THROUGHPUT_STREAMS".

platform: Raspberry Pi 4, Ubuntu 18.04 64-bit
version: openvino 2021.4

P.S.
ie.GetConfig(device, configKey) in the hello_query_device sample works without error in 2021.4.
It aborts in 2021.3.

<< 2021.4 >>
Available devices:
CPU
SUPPORTED_METRICS:
AVAILABLE_DEVICES : [ NEON ]
FULL_DEVICE_NAME : arm_compute::NEON
OPTIMIZATION_CAPABILITIES : [ FP32 FP16 ]
SUPPORTED_CONFIG_KEYS (default values):
PERF_COUNT : YES
CPU_THROUGHPUT_STREAMS : 1
CPU_BIND_THREAD : NO
CPU_THREADS_NUM : 0
CPU_THREADS_PER_STREAM : 4

<< 2021.3 >>
Available devices:
Device: ARM
Metrics:
SUPPORTED_METRICS : [ AVAILABLE_DEVICES SUPPORTED_METRICS SUPPORTED_CONFIG_KEYS FULL_DEVICE_NAME OPTIMIZATION_CAPABILITIES ]
SUPPORTED_CONFIG_KEYS : [ PERF_COUNT CPU_THROUGHPUT_STREAMS CPU_BIND_THREAD CPU_THREADS_NUM CPU_THREADS_PER_STREAM ]
FULL_DEVICE_NAME : arm_compute::NEON
OPTIMIZATION_CAPABILITIES : [ FP32 FP16 ]
Default values for device configuration keys:
PERF_COUNT : YES
CPU_THROUGHPUT_STREAMS : [NOT_FOUND] : CPU_THROUGHPUT_STREAMS
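Until the plugin reports this key consistently, a defensive query loop avoids aborting on a single bad key. This is a sketch of the pattern with the getter injected, so it works regardless of API version; the stub below only mimics the 2021.3 behaviour seen above:

```python
def safe_get_configs(get_config, device, keys):
    """Query each config key, collecting values and swallowing per-key errors."""
    values = {}
    for key in keys:
        try:
            values[key] = get_config(device, key)
        except Exception as err:  # e.g. NOT_FOUND or bad_cast surfaced as RuntimeError
            values[key] = f"<unavailable: {err}>"
    return values

# Stub getter mimicking 2021.3, where CPU_THROUGHPUT_STREAMS raises NOT_FOUND:
def stub(device, key):
    if key == "CPU_THROUGHPUT_STREAMS":
        raise RuntimeError("NOT_FOUND")
    return "YES"

print(safe_get_configs(stub, "CPU", ["PERF_COUNT", "CPU_THROUGHPUT_STREAMS"]))
```

With a real IECore, the getter would be the core object's config-query method; the try/except is what keeps the device listing from terminating.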

Best Regards,

[NVIDIA] Failed inference on OMZ models

Hi! I did some analysis of OMZ models and found the following issues using the NVIDIA plugin. Hope it is helpful.

Model name(s) / Error:

asl-recognition-0004
common-sign-language-0002
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = Sqrt_4372, description = Sqrt; Is not found in OperationRegistry

bert-large-uncased-whole-word-masking-squad-0001
bert-large-uncased-whole-word-masking-squad-emb-0001
bert-small-uncased-whole-word-masking-squad-0001
bert-small-uncased-whole-word-masking-squad-0002
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/cuda_plugin.cpp:74(LoadExeNetworkImpl): Input format 72 is not supported yet. Supported formats are: FP32, FP16, I32, I16, I8, U8 and BOOL.

face-detection-0205
face-detection-0206
horizontal-text-detection-0001
instance-segmentation-person-0007
instance-segmentation-security-0002
instance-segmentation-security-0091
instance-segmentation-security-0228
instance-segmentation-security-1039
instance-segmentation-security-1040
machine-translation-nar-de-en-0002
machine-translation-nar-en-de-0002
machine-translation-nar-en-ru-0002
machine-translation-nar-ru-en-0002
person-detection-0106
person-detection-0203
person-detection-0301
person-detection-0302
person-detection-0303
person-vehicle-bike-detection-2003
person-vehicle-bike-detection-2004
semantic-segmentation-adas-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/cuda_plugin.cpp:60(LoadExeNetworkImpl): Output format 72 is not supported yet. Supported formats are: FP32, FP16, I32, I16, I8, U8 and BOOL.

facial-landmarks-98-detection-0001
human-pose-estimation-0005
human-pose-estimation-0006
human-pose-estimation-0007
icnet-camvid-ava-0001
icnet-camvid-ava-sparse-30-0001
icnet-camvid-ava-sparse-60-0001
road-segmentation-adas-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/interpolate.cpp:62(interpolateFactory): Interpolate node is not supported: not implemented.

faster-rcnn-resnet101-coco-sparse-60-0001
person-detection-retail-0002
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = Proposal_4852, description = Proposal; Is not found in OperationRegistry

human-pose-estimation-0001
person-vehicle-bike-detection-crossroad-0078
time-series-forecasting-electricity-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = Elu_2781, description = Elu; Is not found in OperationRegistry

license-plate-recognition-barrier-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = Tile_2031, description = Tile; Is not found in OperationRegistry

person-detection-action-recognition-0005
person-detection-action-recognition-teacher-0002
person-detection-raisinghand-recognition-0001
person-detection-action-recognition-0006
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = NormalizeL2_9411, description = NormalizeL2; Is not found in OperationRegistry

person-detection-asl-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/cuda_executable_network.cpp:59(ExecutableNetwork): Standard exception from compilation library: get_shape was called on a descriptor::Tensor with dynamic shape

person-vehicle-bike-detection-crossroad-yolov3-1020
yolo-v2-tiny-ava-0001
yolo-v2-tiny-ava-sparse-30-0001
yolo-v2-tiny-ava-sparse-60-0001
yolo-v2-tiny-vehicle-detection-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = RegionYolo_4660, description = RegionYolo; Is not found in OperationRegistry

smartlab-sequence-modelling-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = HardSigmoid_2960, description = HardSigmoid; Is not found in OperationRegistry

unet-camvid-onnx-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = Exp_2112, description = Exp; Is not found in OperationRegistry

yolo-v2-ava-0001
yolo-v2-ava-sparse-35-0001
yolo-v2-ava-sparse-70-0001
[ ERROR ] /content/openvino_contrib/modules/nvidia_plugin/src/ops/subgraph.cpp:59(initExecuteSequence): Node: name = ExtractImagePatches_2200, description = ExtractImagePatches; Is not found in OperationRegistry

Failed Windows build

Failed "Install dependencies" step. Example: https://dev.azure.com/openvinoci/dldt/_build/results?buildId=234614&view=logs&j=4c86bc1b-1091-5192-4404-c74dfaad23e7&t=d216f1e9-c2b5-54b5-eceb-b0e7ea6b3dcc

Access is denied.
Expand-Archive : The path 'ninja-win.zip' either does not exist or is not a valid file system path.
At line:1 char:1
+ Expand-Archive -Force ninja-win.zip
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (ninja-win.zip:String) [Expand-Archive], InvalidOperationException
    + FullyQualifiedErrorId : ArchiveCmdletPathNotFound,Expand-Archive

/cc @azhogov

arm_plugin RuntimeError: [ NOT_FOUND ] : PERFORMANCE_HINT

I am trying the arm_plugin (branch releases/2022/2) on a Jetson Nano and seeing this error.
The plugin is seen as a "CPU" device, so I specified "-d CPU" on the benchmark_app command line, but it terminates with this error.

I am new to OpenVINO and its arm_plugin, so I am not sure whether this error is expected or something is wrong in my build.

Though it is not part of the core OpenVINO distribution, I have seen people using this on RPi, so I expected I could use it on a Jetson without hassle.

Any advice or hint is highly appreciated!

$ benchmark_app -d CPU -m intel/yolo-v2-tiny-ava-0001/FP16/yolo-v2-tiny-ava-0001.xml
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ INFO ] Input command: /usr/local/bin/benchmark_app -d CPU -m intel/yolo-v2-tiny-ava-0001/FP16/yolo-v2-tiny-ava-0001.xml
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2022.3.0-8655-5ac06838cd8
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2022.3.0-8655-5ac06838cd8
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Device CPU does not support performance hint property(-hint).
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[ ERROR ] [ NOT_FOUND ] : PERFORMANCE_HINT
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/openvino/tools/benchmark/main.py", line 250, in main
benchmark.set_config(config)
File "/usr/local/lib/python3.6/dist-packages/openvino/tools/benchmark/benchmark.py", line 57, in set_config
self.core.set_property(device, config[device])
RuntimeError: [ NOT_FOUND ] : PERFORMANCE_HINT
$
$
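As a workaround, the config can be filtered against what the device actually reports before calling set_property, so unsupported keys like PERFORMANCE_HINT are dropped instead of raising NOT_FOUND. A sketch of the filtering logic (the key list is hard-coded from the log above; in real code it would come from querying the device):

```python
def filter_supported(config, supported_keys):
    """Split a config dict into (accepted, rejected) by the device's supported keys."""
    supported = set(supported_keys)
    accepted = {k: v for k, v in config.items() if k in supported}
    rejected = {k: v for k, v in config.items() if k not in supported}
    return accepted, rejected

# Keys the ARM plugin reports in this build; note there is no PERFORMANCE_HINT.
device_keys = ["PERF_COUNT", "CPU_THROUGHPUT_STREAMS", "CPU_BIND_THREAD",
               "CPU_THREADS_NUM", "CPU_THREADS_PER_STREAM"]
wanted = {"PERFORMANCE_HINT": "THROUGHPUT", "PERF_COUNT": "NO"}

accepted, rejected = filter_supported(wanted, device_keys)
print(accepted)  # -> {'PERF_COUNT': 'NO'}
print(rejected)  # -> {'PERFORMANCE_HINT': 'THROUGHPUT'}
```

Only the accepted dict would then be passed to the device; benchmark_app itself does not do this filtering, which is why it terminates.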

Failure to build arm_plugin with existing ComputeLibrary libs

I have built and installed the ComputeLibrary libs, then passed ARM_COMPUTE_INCLUDE_DIR and ARM_COMPUTE_LIB_DIR to cmake; it finds the arm_compute libs but fails to find the targets "arm_compute::xxx".

The build environment is Yocto and the target is Raspberry Pi aarch64. Here are snippets of the log:

| -- Inference Engine enabled features:
| --
| --     CI_BUILD_NUMBER: custom_releases/2021/4_c2bfbf29fbc44f9a3c8403d77da5be7e45cbbb4f
| --     ARM_COMPUTE_INCLUDE_DIR = /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/recipe-sysroot/usr/include
| --     ARM_COMPUTE_LIB_DIR = /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/recipe-sysroot/usr/lib
| --     ARM_COMPUTE_TOOLCHAIN_PREFIX =
| --     ARM_COMPUTE_TARGET_ARCH = arm64-v8a
| --     ARM_COMPUTE_SCONS_JOBS =
| --
| -- Using /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/recipe-sysroot/usr/include to include arm compute library headers
| -- Found arm_compute-static: /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/recipe-sysroot/usr/lib/libarm_compute-static.a
| -- Found arm_compute_core-static: /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/recipe-sysroot/usr/lib/libarm_compute_core-static.a
| -- Register template_plugin to be built in build-modules/template_plugin
| -- Found PythonInterp: /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/recipe-sysroot-native/usr/bin/python3-native/python3 (found suitable version "3.8.13", minimum required is "3")
| CMake Warning at cmake/developer_package/shellcheck/shellcheck.cmake:11 (message):
|   shellcheck tool is not found
| Call Stack (most recent call first):
|   CMakeLists.txt:163 (ie_shellcheck_process)
| 
| 
| -- Configuring done
| CMake Error at /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/git/cmake/developer_package/plugins/plugins.cmake:58 (add_library):
|   Target "armPlugin" links to target "arm_compute::arm_compute" but the
|   target was not found.  Perhaps a find_package() call is missing for an
|   IMPORTED target, or an ALIAS target is missing?
| Call Stack (most recent call first):
|   /home/xlla/develop/git/out/linux64/build/tmp/work/aarch64-poky-linux/openvino-inference-engine/2021.4.1-r0/contrib/modules/arm_plugin/src/CMakeLists.txt:18 (ie_add_plugin)


[ARM plugin] Docker build may fail if RAM allocation is not increased accordingly

One issue I kept hitting is that the OpenVINO compilation fails if we do not provide enough memory for the Docker environment. The issue is that we run "make -j" according to the number of available CPUs, which is dangerous because it can easily exhaust RAM.


Approach #1: build OpenCV, OpenVINO and the plugin using pre-configured Dockerfile (cross-compiling, the preferred way)


Suggestions: hard-limit parallelism by setting -j to 1 so the build cannot fail this way. Alternatively, document the minimal RAM required per CPU so the Docker environment can be set up to match (and this is critical). Otherwise we will hit random compile errors when RAM runs out.
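The "RAM per CPU" idea can be made explicit by deriving the job count from both limits. A rough sketch; the 2 GB-per-job budget is an assumption for illustration, not a measured requirement of the OpenVINO build:

```python
def safe_make_jobs(total_ram_bytes, cpu_count, bytes_per_job=2 * 1024**3):
    """Cap make -j at what RAM allows, assuming each compile job needs ~2 GB."""
    ram_limited = max(1, total_ram_bytes // bytes_per_job)
    return min(cpu_count, ram_limited)

# A 4-CPU container with only 4 GB of RAM should build with -j2, not -j4:
print(safe_make_jobs(4 * 1024**3, 4))  # -> 2
```

The build script could compute this value inside the container and pass it to make instead of the raw CPU count.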

Migrate to ARM Compute 2022.11 version

v22.11 Public major release

New features:
Add new experimental dynamic fusion API.
Add CPU batch matrix multiplication with adj_x = false and adj_y = false for FP32.
Add CPU MeanStdDevNorm for QASYMM8.
Add CPU and GPU GELU activation function for FP32 and FP16.
Add CPU swish activation function for FP32 and FP16.
Performance optimizations:
Optimize CPU bilinear scale for FP32, FP16, QASYMM8, QASYMM8_SIGNED, U8 and S8.
Optimize CPU activation functions using LUT-based implementation:
Sigmoid function for QASYMM8 and QASYMM8_SIGNED.
Hard swish function for QASYMM8_SIGNED.
Optimize CPU addition for QASYMM8 and QASYMM8_SIGNED using fixed-point arithmetic.
Optimize CPU multiplication, subtraction and activation layers by considering tensors as 1D.
Optimize GPU depthwise convolution kernel and heuristic.
Optimize GPU Conv2d heuristic.
Optimize CPU MeanStdDevNorm for FP16.
Optimize CPU tanh activation function for FP16 using rational approximation.
Improve GPU GeMMLowp start-up time.
Various optimizations and bug fixes.

So we can integrate the Swish and GELU activations.

Link https://github.com/ARM-software/ComputeLibrary/releases/tag/v22.11

Strange Mac build

https://dev.azure.com/openvinoci/dldt/_build/results?buildId=62073&view=logs&j=a82d9e72-1de4-5bd6-c188-90296135479e&t=a830fa9d-d9f0-58fd-7bcf-dc112343c892

Although it is a Mac build, the logs show that some libraries have a .so extension instead of .dylib:

/Users/runner/work/1/openvino_contrib/../openvino/bin//intel64/Release/lib:
total 520320
drwxr-xr-x  43 runner  staff       1376 Mar 24 17:40 .
drwxr-xr-x  19 runner  staff        608 Mar 24 17:39 ..
-rw-r--r--   1 runner  staff      12568 Mar 24 17:39 benchmark_app.jar
-rw-r--r--   1 runner  staff      17842 Mar 24 17:39 inference_engine_java_api.jar
-rwxr-xr-x   1 runner  staff     708592 Mar 24 17:38 libHeteroPlugin.so
-rwxr-xr-x   1 runner  staff   45396704 Mar 24 17:38 libMKLDNNPlugin.so
-rwxr-xr-x   1 runner  staff     560000 Mar 24 17:38 libMultiDevicePlugin.so
-rw-r--r--   1 runner  staff     141592 Mar 24 17:15 libXLink.a
-rw-r--r--   1 runner  staff  138651640 Mar 24 17:10 libdnnl.a
-rw-r--r--   1 runner  staff    4943424 Mar 24 16:50 libfluid.a
-rwxr-xr-x   1 runner  staff      88040 Mar 24 17:38 libformat_reader.dylib
-rw-r--r--   1 runner  staff     152032 Mar 24 17:38 libgflags_nothreads.a
-rwxr-xr-x   1 runner  staff     178656 Mar 24 17:21 libie_backend.dylib
-rw-r--r--   1 runner  staff    1350280 Mar 24 17:40 libie_docs_snippets.a
-rwxr-xr-x   1 runner  staff    1591512 Mar 24 17:21 libinference_engine.dylib
-rwxr-xr-x   1 runner  staff     313440 Mar 24 17:39 libinference_engine_c_api.dylib
-rwxr-xr-x   1 runner  staff     343896 Mar 24 17:21 libinference_engine_ir_reader.so
-rwxr-xr-x   1 runner  staff    1247824 Mar 24 17:33 libinference_engine_ir_v7_reader.so
-rwxr-xr-x   1 runner  staff     302136 Mar 24 17:39 libinference_engine_java_api.dylib
-rwxr-xr-x   1 runner  staff    3653312 Mar 24 17:33 libinference_engine_legacy.dylib
-rwxr-xr-x   1 runner  staff    2091768 Mar 24 17:33 libinference_engine_lp_transformations.dylib
-rwxr-xr-x   1 runner  staff    1725320 Mar 24 17:21 libinference_engine_preproc.so
-rwxr-xr-x   1 runner  staff     509000 Mar 24 17:33 libinference_engine_snippets.dylib
-rwxr-xr-x   1 runner  staff    4076416 Mar 24 17:20 libinference_engine_transformations.dylib
-rwxr-xr-x   1 runner  staff    6703736 Mar 24 16:58 libinterpreter_backend.dylib
-rw-r--r--   1 runner  staff       2176 Mar 24 16:40 libitt.a
-rw-r--r--   1 runner  staff     126880 Mar 24 17:15 libmvnc.a
-rwxr-xr-x   1 runner  staff    6690232 Mar 24 17:38 libmyriadPlugin.so
-rwxr-xr-x   1 runner  staff    8213480 Mar 24 16:47 libngraph.dylib
-rwxr-xr-x   1 runner  staff     836136 Mar 24 16:47 libngraph_backend.dylib
-rw-r--r--   1 runner  staff     696200 Mar 24 16:46 libngraph_builders.a
-rw-r--r--   1 runner  staff     554584 Mar 24 16:47 libngraph_reference.a
-rw-r--r--   1 runner  staff     707712 Mar 24 16:48 libngraph_test_util.a
-rw-r--r--   1 runner  staff     495080 Mar 24 17:33 liboffline_transformations.a
-rwxr-xr-x   1 runner  staff      52216 Mar 24 17:39 libopencv_c_wraper.dylib
-rw-r--r--   1 runner  staff     293880 Mar 24 16:48 libpugixml.a
-rwxr-xr-x   1 runner  staff     564264 Mar 24 17:39 libtemplatePlugin.so
-rwxr-xr-x   1 runner  staff     187544 Mar 24 17:40 libtemplate_extension.so
-rw-r--r--   1 runner  staff    4184944 Mar 24 17:33 libvpu_common_lib.a
-rw-r--r--   1 runner  staff   22243040 Mar 24 17:38 libvpu_graph_transformer.a
-rw-r--r--   1 runner  staff    1990312 Mar 24 16:48 pcie-ma2x8x.mvcmd
-rw-r--r--   1 runner  staff        356 Mar 24 17:21 plugins.xml
-rw-r--r--   1 runner  staff    2253152 Mar 24 16:48 usb-ma2x8x.mvcmd

Any chance to support GatherNd and GatherNd_1 in the future?

Raspberry pi 4
Ubuntu 20.04 arm64
Openvino 2022.3

Hi, I was able to compile the ARM CPU plugin successfully, but when I run my inference code it shows: RuntimeError: Arm Plugin: Nodes from snapshot are not supported by the plugin: GatherNd (GatherND.opset8) GatherNd_1 (GatherND.opset8)

I read https://github.com/openvinotoolkit/openvino_contrib/wiki/ARM-plugin-operation-set-specification and am just wondering whether there is any chance you will support these two nodes in the future.

Thank you.
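Before compiling a model, unsupported nodes can be detected up front: in the OpenVINO Python API, `ov.Core().query_model(model, "CPU")` returns a mapping from each operation to the device that can execute it, and ops missing from that mapping are the ones the plugin will reject. The following is a minimal stdlib-only sketch of that check (not the plugin's actual code); the node list and supported-op set are hypothetical, chosen to mirror the error in this issue.

```python
# Illustrative sketch: given the op types a device supports, report which
# model nodes would be rejected. In real OpenVINO code the same information
# comes from ov.Core().query_model(model, "CPU").

def find_unsupported(model_nodes, supported_op_types):
    """model_nodes: list of (friendly_name, op_type) pairs."""
    return [(name, op) for name, op in model_nodes
            if op not in supported_op_types]

# Hypothetical nodes mimicking this issue's error message.
nodes = [
    ("Gather_12", "Gather"),
    ("GatherNd", "GatherND"),
    ("GatherNd_1", "GatherND"),
]
# GatherND deliberately absent, as on the ARM plugin at the time of the issue.
supported = {"Gather", "Concat", "Convolution"}

print(find_unsupported(nodes, supported))
# [('GatherNd', 'GatherND'), ('GatherNd_1', 'GatherND')]
```

Running such a check before `compile_model` lets you decide early whether to fall back to another device or re-export the model without the offending ops.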

Why does arm_plugin not install itself after being built?

I built openvino with openvino_contrib successfully, but after installing the Inference Engine on my device it complains that libarmPlugin.so cannot be found.

rpi4:~$ python3 test/test-opencv-dnn-cpu.py 
[INFO] loading model...
use dnn target cpu...
Traceback (most recent call last):
  File "test/test-opencv-dnn-cpu.py", line 25, in 
    preds = net.forward()
cv2.error: OpenCV(4.5.3-openvino) /usr/src/debug/opencv/4.5.2-r0/git/modules/dnn/src/ie_ngraph.cpp:738: error: (-2:Unspecified error) in function 'initPlugin'
> Failed to initialize Inference Engine backend (device = CPU): Failed to create plugin libarmPlugin.so for device CPU
> Please, check your environment
> Cannot load library 'libarmPlugin.so': libarmPlugin.so: cannot open shared object file: No such file or directory
> 

I then investigated the whole build and deploy process and found that in my branch the install task inside arm_plugin's CMakeLists.txt was commented out, and in the master branch those component install tasks were removed.
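Since the install step was dropped, a common workaround is to copy libarmPlugin.so next to the other plugin libraries and make sure plugins.xml maps the "CPU" device name to it (the `location` attribute is the library the Core loads for that device). The exact file paths on your device are an assumption; below is a stdlib sketch that parses a minimal plugins.xml in the format OpenVINO ships and checks that mapping.

```python
# Sketch: verify that the CPU device entry in plugins.xml points at the
# ARM plugin library. The sample XML follows the plugins.xml layout that
# ships with OpenVINO; the on-disk location of the file is device-specific.
import xml.etree.ElementTree as ET

sample = """<ie>
  <plugins>
    <plugin name="CPU" location="libarmPlugin.so"/>
  </plugins>
</ie>"""

root = ET.fromstring(sample)
cpu = next(p for p in root.iter("plugin") if p.get("name") == "CPU")
print(cpu.get("location"))  # libarmPlugin.so
```

If the `location` is a bare filename, the library must be discoverable next to the runtime libraries or via the loader path (e.g. LD_LIBRARY_PATH), which matches the "cannot open shared object file" error above.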

RuntimeError: Arm Plugin: Nodes from torch_jit are not supported

Hi!
I'm trying to run the OpenVINO object_detection Python sample with the nanodet-plus model on a Raspberry Pi 4 using the ARM CPU plugin, and I get the following error:

[ INFO ] OpenVINO Runtime
[ INFO ]        build: 2022.3.0-000-c8e62fddb2b
[ INFO ] Reading model /home/toutaudio/openvino_models/nanodet-plus-m-1.5x-416/FP32/nanodet-plus-m-1.5x-416.xml
[ WARNING ] The parameter "input_size" not found in NanoDet-Plus wrapper, will be omitted
[ INFO ]        Input layer: data, shape: [1, 3, 416, 416], precision: f32, layout: NCHW
[ INFO ]        Output layer: output, shape: [1, 3598, 112], precision: f32, layout: 
Traceback (most recent call last):
  File "object_detection_demo.py", line 304, in <module>
    sys.exit(main() or 0)
  File "object_detection_demo.py", line 189, in main
    detector_pipeline = AsyncPipeline(model)
  File "/opt/intel/openvino_cc/extras/open_model_zoo/demos/common/python/openvino/model_zoo/model_api/pipelines/async_pipeline.py", line 86, in __init__
    self.model.load()
  File "/opt/intel/openvino_cc/extras/open_model_zoo/demos/common/python/openvino/model_zoo/model_api/models/model.py", line 263, in load
    self.model_adapter.load_model()
  File "/opt/intel/openvino_cc/extras/open_model_zoo/demos/common/python/openvino/model_zoo/model_api/adapters/openvino_adapter.py", line 66, in load_model
    self.async_queue = AsyncInferQueue(self.compiled_model, self.max_num_requests)
RuntimeError: Arm Plugin: Nodes from torch_jit are not supported:
        Resize_378 (Interpolate.4)- node input index is out of range;
        Resize_392 (Interpolate.4)- node input index is out of range;

I was able to run the model after converting it with the `omz_converter --add_mo_arg=--use_legacy_frontend` command, but the inference speed is very low, varying between 7000 and 10000 ms. I tried nanodet, nanodet-plus, FP16 and FP32, with the same problem.
The detection accuracy is good, though.

Is this the expected speed for the NanoDet model on an ARM CPU? Is there a way to optimize it to run at <100 ms?

Thanks in advance!
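When comparing conversion options or plugin builds, it helps to measure per-run latency consistently (OpenVINO also ships a `benchmark_app` tool for this). Below is a minimal, self-contained timing sketch; `run_inference` is a dummy stand-in, and with OpenVINO you would time a call on the compiled model instead.

```python
# Sketch of a minimal latency measurement: wrap any inference callable and
# report best and average wall time per run in milliseconds.
import time

def measure_latency_ms(fn, runs=10):
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1000.0)
    return min(times), sum(times) / len(times)

def run_inference():
    # Dummy workload standing in for a real model invocation.
    sum(i * i for i in range(10000))

best, avg = measure_latency_ms(run_inference)
print(f"best: {best:.2f} ms, avg: {avg:.2f} ms")
```

Reporting the best of several runs filters out warm-up and scheduling noise, which matters on a Raspberry Pi where the first inference is often much slower than steady state.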


System info:
OS: Raspbian Buster armv7l
Device: Raspberry Pi4
Openvino: 2022.3.0-000-c8e62fddb2b (cross-compiled via docker)
Openvino dev tools (omz_converter, omz_downloader): 2022.2.0.dev20220829
