deftruth / lite.ai.toolkit

🛠 A lite C++ toolkit of awesome AI models supporting ONNXRuntime and MNN. Contains YOLOv5, YOLOv6, YOLOX, YOLOR, FaceDet, HeadSeg, HeadPose, Matting, etc.

Home Page: https://github.com/DefTruth/lite.ai.toolkit

License: GNU General Public License v3.0

CMake 0.47% C++ 99.45% C 0.08% Shell 0.01%
yolox retinaface onnxruntime segmentation yolor yolop nanodet robustvideomatting mnn ncnn

lite.ai.toolkit's Introduction


🛠Lite.Ai.ToolKit: A lite C++ toolkit of awesome AI models, such as Object Detection, Face Detection, Face Recognition, Segmentation, Matting, etc. See Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub.

Features 👏👋

  • Simple and user-friendly. Simple and consistent syntax like lite::cv::Type::Class; see examples.
  • Minimal dependencies. Only OpenCV and ONNXRuntime are required by default; see build.
  • Many Models Supported. 300+ C++ implementations and 500+ weights 👉 Supported-Matrix.

Other Repos 🔥🔥

🛠lite.ai.toolkit 💎torchlm 📒statistic-learning-R-note 🎉cuda-learn-note 📖Awesome-LLM-Inference

Build 👇👇

Download the prebuilt lite.ai.toolkit library from tag/v0.2.0, or just build it from source:

git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest
cd lite.ai.toolkit && sh ./build.sh # >= 0.2.0, support Linux only, tested on Ubuntu 20.04.6 LTS
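If you also want the MNN backend at build time, the engine switches described in the "Mixed with MNN or ONNXRuntime" section below are plain CMake options, so a manual configure is possible (a sketch; sh ./build.sh is the tested path, and the flags here are taken from that section and the build log further down this page):

git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git
cd lite.ai.toolkit && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=MinSizeRel -DENABLE_ONNXRUNTIME=ON -DENABLE_MNN=ON ..  # ENABLE_MNN is OFF by default
make -j4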

Quick Start 🌟🌟

Example0: Object Detection using YOLOv5. Download the model from the Model Zoo.

#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  // 1. init the detector from the ONNX model (ONNXRuntime backend by default).
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  // 2. run detection on a BGR image.
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  // 3. draw the detected boxes and save the result.
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete yolov5;
  return 0;
}

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

export LITE_AI_TAG_URL=https://github.com/DefTruth/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
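Then unpack the prebuilt package (the same extraction step as in the detailed Quick Start below):

tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz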

Quick Setup 👀

To quickly set up lite.ai.toolkit, you can follow the CMakeLists.txt listed below. 👇👀

set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})

Mixed with MNN or ONNXRuntime 👇👇

The goal of lite.ai.toolkit is not to abstract on top of MNN and ONNXRuntime. So, you can use lite.ai.toolkit mixed with MNN (-DENABLE_MNN=ON, default OFF) or ONNXRuntime (-DENABLE_ONNXRUNTIME=ON, default ON). The lite.ai.toolkit installation package contains complete MNN and ONNXRuntime distributions. The workflow might look like:

#include "lite/lite.h"
// 0. use yolov5 from lite.ai.toolkit to detect objects.
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
// 1. use ONNXRuntime or MNN to implement your own classifier.
interpreter = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(mnn_path));
// or: session = new Ort::Session(ort_env, onnx_path, session_options);
classifier = interpreter->createSession(schedule_config);
// 2. then, classify the detected objects using your own classifier ...

The included headers of MNN and ONNXRuntime can be found at mnn_config.h and ort_config.h.
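For a fuller picture, here is a hedged sketch of that mixed workflow with ONNXRuntime as the second engine. The classifier model file and the crop-then-classify pipeline are illustrative assumptions (the x1/y1/x2/y2 fields are assumed from the lite::types::Boxf box type); your own pre/post-processing and session.Run call are omitted:

#include "lite/lite.h"
#include "lite/ort/core/ort_config.h"  // bundled ONNXRuntime headers

int main()
{
  // 0. detect objects with lite.ai.toolkit (same as the Quick Start).
  auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");
  std::vector<lite::types::Boxf> boxes;
  cv::Mat img_bgr = cv::imread("test_yolov5.jpg");
  yolov5->detect(img_bgr, boxes);

  // 1. create your own ONNXRuntime classifier session (hypothetical model file).
  Ort::Env ort_env(ORT_LOGGING_LEVEL_WARNING, "my-classifier");
  Ort::SessionOptions session_options;
  Ort::Session session(ort_env, "your_classifier.onnx", session_options);

  // 2. classify each detected object: crop the box, then run your own
  //    preprocessing + session.Run(...) on the crop (omitted here).
  for (const auto &box : boxes)
  {
    cv::Rect roi(cv::Point(static_cast<int>(box.x1), static_cast<int>(box.y1)),
                 cv::Point(static_cast<int>(box.x2), static_cast<int>(box.y2)));
    cv::Mat crop = img_bgr(roi & cv::Rect(0, 0, img_bgr.cols, img_bgr.rows));
    // ... build the input tensor from `crop` and call session.Run(...) ...
  }
  delete yolov5;
  return 0;
}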

🔑️ Check the detailed Quick Start below!

Download resources

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

export LITE_AI_TAG_URL=https://github.com/DefTruth/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz

Write test code

Write the YOLOv5 example code and name it test_lite_yolov5.cpp:

#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  delete yolov5;
  return 0;
}

Setup CMakeLists.txt

cmake_minimum_required(VERSION 3.10)
project(lite_yolov5)
set(CMAKE_CXX_STANDARD 17)

set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
if (lite.ai.toolkit_FOUND)
    message(STATUS "lite.ai.toolkit_INCLUDE_DIRS: ${lite.ai.toolkit_INCLUDE_DIRS}")
    message(STATUS "        lite.ai.toolkit_LIBS: ${lite.ai.toolkit_LIBS}")
    message(STATUS "   lite.ai.toolkit_LIBS_DIRS: ${lite.ai.toolkit_LIBS_DIRS}")
endif()
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})

Build example

mkdir build && cd build && cmake .. && make -j1

Then, export the lib paths listed in lite.ai.toolkit_LIBS_DIRS to LD_LIBRARY_PATH.

export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/opencv/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/onnxruntime/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/MNN/lib:$LD_LIBRARY_PATH # if -DENABLE_MNN=ON

Run binary:

cp ../yolov5s.onnx ../test_yolov5.jpg .
./lite_yolov5

The output logs:

LITEORT_DEBUG LogId: ../examples/hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
Name: images
Dims: 1
Dims: 3
Dims: 640
Dims: 640
=============== Output-Dims ==============
Output: 0 Name: pred Dim: 0 :1
Output: 0 Name: pred Dim: 1 :25200
Output: 0 Name: pred Dim: 2 :85
Output: 1 Name: output2 Dim: 0 :1
......
Output: 3 Name: output4 Dim: 1 :3
Output: 3 Name: output4 Dim: 2 :20
Output: 3 Name: output4 Dim: 3 :20
Output: 3 Name: output4 Dim: 4 :85
========================================
detected num_anchors: 25200
generate_bboxes num: 48
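(Sanity check on these numbers: YOLOv5 at a 640×640 input predicts on stride-8/16/32 grids with 3 anchors per cell, so num_anchors = 3 × (80² + 40² + 20²) = 3 × 8400 = 25200; generate_bboxes is the count of candidates that pass the score threshold before NMS.)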

Supported Models Matrix

  • / = not supported yet.
  • ✅ = known to work and officially supported.
  • ✔️ = known to work, but unofficially supported.
  • ❔ = planned, but not coming soon; maybe a few months out.
Class Size Type Demo ONNXRuntime MNN NCNN TNN Linux MacOS Windows Android
YoloV5 28M detection demo ✔️ ✔️
YoloV3 236M detection demo / / / ✔️ ✔️ /
TinyYoloV3 33M detection demo / / / ✔️ ✔️ /
YoloV4 176M detection demo / / / ✔️ ✔️ /
SSD 76M detection demo / / / ✔️ ✔️ /
SSDMobileNetV1 27M detection demo / / / ✔️ ✔️ /
YoloX 3.5M detection demo ✔️ ✔️
TinyYoloV4VOC 22M detection demo / / / ✔️ ✔️ /
TinyYoloV4COCO 22M detection demo / / / ✔️ ✔️ /
YoloR 39M detection demo ✔️ ✔️
ScaledYoloV4 270M detection demo / / / ✔️ ✔️ /
EfficientDet 15M detection demo / / / ✔️ ✔️ /
EfficientDetD7 220M detection demo / / / ✔️ ✔️ /
EfficientDetD8 322M detection demo / / / ✔️ ✔️ /
YOLOP 30M detection demo ✔️ ✔️
NanoDet 1.1M detection demo ✔️ ✔️
NanoDetPlus 4.5M detection demo ✔️ ✔️
NanoDetEffi... 12M detection demo ✔️ ✔️
YoloX_V_0_1_1 3.5M detection demo ✔️ ✔️
YoloV5_V_6_0 7.5M detection demo ✔️ ✔️
GlintArcFace 92M faceid demo ✔️ ✔️
GlintCosFace 92M faceid demo ✔️ ✔️ /
GlintPartialFC 170M faceid demo ✔️ ✔️ /
FaceNet 89M faceid demo ✔️ ✔️ /
FocalArcFace 166M faceid demo ✔️ ✔️ /
FocalAsiaArcFace 166M faceid demo ✔️ ✔️ /
TencentCurricularFace 249M faceid demo ✔️ ✔️ /
TencentCifpFace 130M faceid demo ✔️ ✔️ /
CenterLossFace 280M faceid demo ✔️ ✔️ /
SphereFace 80M faceid demo ✔️ ✔️ /
PoseRobustFace 92M faceid demo / / / ✔️ ✔️ /
NaivePoseRobustFace 43M faceid demo / / / ✔️ ✔️ /
MobileFaceNet 3.8M faceid demo ✔️ ✔️
CavaGhostArcFace 15M faceid demo ✔️ ✔️
CavaCombinedFace 250M faceid demo ✔️ ✔️ /
MobileSEFocalFace 4.5M faceid demo ✔️ ✔️
RobustVideoMatting 14M matting demo / ✔️ ✔️
MGMatting 113M matting demo / ✔️ ✔️ /
MODNet 24M matting demo ✔️ ✔️ /
MODNetDyn 24M matting demo / / / ✔️ ✔️ /
BackgroundMattingV2 20M matting demo / ✔️ ✔️ /
BackgroundMattingV2Dyn 20M matting demo / / / ✔️ ✔️ /
UltraFace 1.1M face::detect demo ✔️ ✔️
RetinaFace 1.6M face::detect demo ✔️ ✔️
FaceBoxes 3.8M face::detect demo ✔️ ✔️
FaceBoxesV2 3.8M face::detect demo ✔️ ✔️
SCRFD 2.5M face::detect demo ✔️ ✔️
YOLO5Face 4.8M face::detect demo ✔️ ✔️
PFLD 1.0M face::align demo ✔️ ✔️
PFLD98 4.8M face::align demo ✔️ ✔️
MobileNetV268 9.4M face::align demo ✔️ ✔️
MobileNetV2SE68 11M face::align demo ✔️ ✔️
PFLD68 2.8M face::align demo ✔️ ✔️
FaceLandmark1000 2.0M face::align demo ✔️ ✔️
PIPNet98 44.0M face::align demo ✔️ ✔️
PIPNet68 44.0M face::align demo ✔️ ✔️
PIPNet29 44.0M face::align demo ✔️ ✔️
PIPNet19 44.0M face::align demo ✔️ ✔️
FSANet 1.2M face::pose demo / ✔️ ✔️
AgeGoogleNet 23M face::attr demo ✔️ ✔️
GenderGoogleNet 23M face::attr demo ✔️ ✔️
EmotionFerPlus 33M face::attr demo ✔️ ✔️
VGG16Age 514M face::attr demo ✔️ ✔️ /
VGG16Gender 512M face::attr demo ✔️ ✔️ /
SSRNet 190K face::attr demo / ✔️ ✔️
EfficientEmotion7 15M face::attr demo ✔️ ✔️
EfficientEmotion8 15M face::attr demo ✔️ ✔️
MobileEmotion7 13M face::attr demo ✔️ ✔️
ReXNetEmotion7 30M face::attr demo / ✔️ ✔️ /
EfficientNetLite4 49M classification demo / ✔️ ✔️ /
ShuffleNetV2 8.7M classification demo ✔️ ✔️
DenseNet121 30.7M classification demo ✔️ ✔️ /
GhostNet 20M classification demo ✔️ ✔️
HdrDNet 13M classification demo ✔️ ✔️
IBNNet 97M classification demo ✔️ ✔️ /
MobileNetV2 13M classification demo ✔️ ✔️
ResNet 44M classification demo ✔️ ✔️ /
ResNeXt 95M classification demo ✔️ ✔️ /
DeepLabV3ResNet101 232M segmentation demo ✔️ ✔️ /
FCNResNet101 207M segmentation demo ✔️ ✔️ /
FastStyleTransfer 6.4M style demo ✔️ ✔️
Colorizer 123M colorization demo / ✔️ ✔️ /
SubPixelCNN 234K resolution demo / ✔️ ✔️
InsectDet 27M detection demo / ✔️ ✔️
InsectID 22M classification demo ✔️ ✔️ ✔️
PlantID 30M classification demo ✔️ ✔️ ✔️
YOLOv5BlazeFace 3.4M face::detect demo / / ✔️ ✔️
YoloV5_V_6_1 7.5M detection demo / / ✔️ ✔️
HeadSeg 31M segmentation demo / ✔️ ✔️
FemalePhoto2Cartoon 15M style demo / ✔️ ✔️
FastPortraitSeg 400k segmentation demo / / ✔️ ✔️
PortraitSegSINet 380k segmentation demo / / ✔️ ✔️
PortraitSegExtremeC3Net 180k segmentation demo / / ✔️ ✔️
FaceHairSeg 18M segmentation demo / / ✔️ ✔️
HairSeg 18M segmentation demo / / ✔️ ✔️
MobileHumanMatting 3M matting demo / / ✔️ ✔️
MobileHairSeg 14M segmentation demo / / ✔️ ✔️
YOLOv6 17M detection demo ✔️ ✔️
FaceParsingBiSeNet 50M segmentation demo ✔️ ✔️
FaceParsingBiSeNetDyn 50M segmentation demo / / / / ✔️ ✔️
🔑️ Model Zoo

Lite.Ai.ToolKit now contains 100+ AI models with 500+ frozen pretrained files. Most of the files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found at Examples for Lite.Ai.ToolKit. Note: for Google Drive, I cannot upload all the *.onnx files because of the storage limitation (15G).

File | Baidu Drive | Google Drive | Docker Hub | Hub (Docs)
ONNX | Baidu Drive code: 8gin | Google Drive | ONNX Docker v0.1.22.01.08 (28G), v0.1.22.02.02 (400M) | ONNX Hub
MNN | Baidu Drive code: 9v63 | / | MNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (213M) | MNN Hub
NCNN | Baidu Drive code: sc7f | / | NCNN Docker v0.1.22.01.08 (9G), v0.1.22.02.02 (197M) | NCNN Hub
TNN | Baidu Drive code: 6o6k | / | TNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (217M) | TNN Hub
  docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08  # (28G)
  docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08   # (11G)
  docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08  # (9G)
  docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08   # (11G)
  docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.02.02  # (400M) + YOLO5Face
  docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.02.02   # (213M) + YOLO5Face
  docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.02.02  # (197M) + YOLO5Face
  docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02   # (217M) + YOLO5Face

🔑️ How to download Model Zoo from Docker Hub?

  • Firstly, pull the image from docker hub.
    docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08 # (11G)
    docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08 # (9G)
    docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08 # (11G)
    docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08 # (28G)
  • Secondly, run the container with your local share dir mounted, using docker run -idt xxx. A minimal example follows.
    • make a share dir on your local device.
    mkdir share # any name is ok.
    • write run_mnn_docker_hub.sh script like:
    #!/bin/bash  
    PORT1=6072
    PORT2=6084
    SERVICE_DIR=/Users/xxx/Desktop/your-path-to/share
    CONTAINER_DIR=/home/hub/share
    CONTAINER_NAME=mnn_docker_hub_d
    
    docker run -idt -p ${PORT2}:${PORT1} -v ${SERVICE_DIR}:${CONTAINER_DIR} --shm-size=16gb --name ${CONTAINER_NAME} qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08
    
  • Finally, copy the model weights from /home/hub/mnn/cv to your local share dir.
    # activate mnn docker.
    sh ./run_mnn_docker_hub.sh
    docker exec -it mnn_docker_hub_d /bin/bash
    # copy the models to the share dir.
    cd /home/hub 
    cp -rf mnn/cv share/
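Alternatively, if you only need the files, docker create / docker cp can copy them out of the image without keeping a container shell open (a sketch, reusing the /home/hub path and image tag from the steps above; mnn_hub_tmp is just a scratch container name):

docker create --name mnn_hub_tmp qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08
docker cp mnn_hub_tmp:/home/hub/mnn/cv ./share/
docker rm mnn_hub_tmp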

Model Hubs

The pretrained and converted ONNX files provided by lite.ai.toolkit are listed as follows. Also, see Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub for more details.

🔑️ More Examples

More examples can be found at examples.

Example0: Object Detection using YOLOv5. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolov5;
}

The output is:

Or you can use the newer 🔥🔥 YOLO-series detectors YOLOX or YoloR; they produce similar results.

More classes for general object detection (80 classes, COCO).

auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5(onnx_path); 
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path); 
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDet(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path); 
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetPlus(onnx_path); // Super fast and tiny! 2021/12/25
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::YoloV5_V_6_0(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5_V_6_1(onnx_path); 
auto *detector = new lite::cv::detection::YoloX_V_0_1_1(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YOLOv6(onnx_path);  // Newest 2022 YOLO detector !!!
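All of the detectors above expose the same detect(img, boxes) call used in Example0, so a tiny template lets you swap or benchmark them without repeating boilerplate (a sketch; it assumes the shared signature shown in the examples rather than a common base class, and the model file name in the usage line is hypothetical):

template <typename Detector>
std::vector<lite::types::Boxf> run_detector(const std::string &onnx_path, cv::Mat &img_bgr)
{
  Detector detector(onnx_path);  // stack-constructed; released when it goes out of scope
  std::vector<lite::types::Boxf> boxes;
  detector.detect(img_bgr, boxes);
  return boxes;
}
// e.g. auto boxes = run_detector<lite::cv::detection::YoloX>("yolox_s.onnx", img_bgr);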

Example1: Video Matting using RobustVideoMatting (2021) 🔥🔥🔥. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../examples/logs/test_lite_rvm_0.mp4";
  std::string background_path = "../../../examples/lite/resources/test_lite_matting_bgr.jpg";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  cv::Mat background = cv::imread(background_path);
  rvm->detect_video(video_path, output_path, contents, false, 0.4f,
                    20, true, true, background);
  
  delete rvm;
}

The output is:


More classes for matting (image matting, video matting, trimap/mask-free, trimap/mask-based)

auto *matting = new lite::cv::matting::RobustVideoMatting(onnx_path);  // WACV 2022.
auto *matting = new lite::cv::matting::MGMatting(onnx_path); // CVPR 2021
auto *matting = new lite::cv::matting::MODNet(onnx_path); // AAAI 2022
auto *matting = new lite::cv::matting::MODNetDyn(onnx_path); // AAAI 2022 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::BackgroundMattingV2(onnx_path); // CVPR 2020 
auto *matting = new lite::cv::matting::BackgroundMattingV2Dyn(onnx_path); // CVPR 2020 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::MobileHumanMatting(onnx_path); // 3Mb only !!!

Example2: 1000 Facial Landmarks Detection using FaceLandmark1000. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../examples/logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

More classes for face alignment (68 points, 98 points, 106 points, 1000 points)

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path);  // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path);  // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path);  // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path);  // 19 landmarks, CVPR2021!

Example3: Colorization using Colorizer. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:


More classes for colorization (gray to rgb)

auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);

Example4: Face Recognition using ArcFace. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim  << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267
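For reference, the similarity scores here are plain cosine similarity between the two face embeddings:

$$\mathrm{sim}(a, b) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert} \in [-1, 1]$$

so Sim01 ≈ 0.72 indicates the same identity (high similarity), while Sim02 ≈ -0.06 indicates a different identity (near-orthogonal embeddings).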

More classes for face recognition (face id vector extract)

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !
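Since every faceid class above fills the same embedding and flag fields used in Example4, a small gallery-matching helper follows naturally (a sketch; the 0.50 accept threshold is an illustrative assumption, not a value from this README):

#include "lite/lite.h"

// returns the index of the best-matching gallery face, or -1 if none passes the threshold.
int best_match(lite::cv::faceid::GlintArcFace *model,
               cv::Mat &probe_bgr, std::vector<cv::Mat> &gallery_bgr)
{
  lite::types::FaceContent probe;
  model->detect(probe_bgr, probe);
  if (!probe.flag) return -1;

  int best_idx = -1;
  float best_sim = 0.50f;  // accept threshold (assumed)
  for (int i = 0; i < (int) gallery_bgr.size(); ++i)
  {
    lite::types::FaceContent candidate;
    model->detect(gallery_bgr[i], candidate);
    if (!candidate.flag) continue;
    float sim = lite::utils::math::cosine_similarity<float>(
        probe.embedding, candidate.embedding);
    if (sim > best_sim) { best_sim = sim; best_idx = i; }
  }
  return best_idx;
}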

Example5: Face Detection using SCRFD (2021). Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_detector.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_scrfd.jpg";
  
  auto *scrfd = new lite::cv::face::detect::SCRFD(onnx_path);
  
  std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  scrfd->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  
  delete scrfd;
}

The output is:

More classes for face detection (super fast face detection)

auto *detector = new lite::cv::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only ! 
auto *detector = new lite::cv::face::detect::FaceBoxesV2(onnx_path);  // 4.0Mb only ! 
auto *detector = new lite::cv::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
auto *detector = new lite::cv::face::detect::SCRFD(onnx_path);  // 2.5Mb only ! CVPR2021, Super fast and accurate!!
auto *detector = new lite::cv::face::detect::YOLO5Face(onnx_path);  // 2021, Super fast and accurate!!
auto *detector = new lite::cv::face::detect::YOLOv5BlazeFace(onnx_path);  // 2021, Super fast and accurate!!

Example6: Object Segmentation using DeepLabV3ResNet101. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../examples/logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}

The output is:

More classes for object segmentation (general objects segmentation)

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
auto *segment = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path);

Example7: Age Estimation using SSRNet. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_ssrnet.jpg";

  auto *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);

  delete ssrnet;
}

The output is:

More classes for face attributes analysis (age, gender, emotion)

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions, 13Mb only!
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::SSRNet(onnx_path); // age estimation, 190kb only!!!

Example8: 1000-Class Classification using DenseNet. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}

The output is:

More classes for image classification (1000 classes)

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); // 8.7Mb only!
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); // 13Mb only!
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);

Example9: Head Pose Estimation using FSANet. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " row:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:

More classes for head pose estimation (euler angle, yaw, pitch, roll)

auto *pose = new lite::cv::face::pose::FSANet(onnx_path); // 1.2Mb only!

Example10: Style Transfer using FastStyleTransfer. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:


More classes for style transfer (neural style transfer, others)

auto *transfer = new lite::cv::style::FastStyleTransfer(onnx_path); // 6.4Mb only

Example11: Human Head Segmentation using HeadSeg. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/minivision_head_seg.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_head_seg.png";
  std::string save_img_path = "../../../examples/logs/test_lite_head_seg.jpg";

  auto *head_seg = new lite::cv::segmentation::HeadSeg(onnx_path, 4); // 4 threads

  lite::types::HeadSegContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  head_seg->detect(img_bgr, content);
  if (content.flag) cv::imwrite(save_img_path, content.mask * 255.f);

  delete head_seg;
}

The output is:

More classes for human segmentation (head, portrait, hair, others)

auto *segment = new lite::cv::segmentation::HeadSeg(onnx_path); // 31Mb
auto *segment = new lite::cv::segmentation::FastPortraitSeg(onnx_path); // <= 400Kb !!! 
auto *segment = new lite::cv::segmentation::PortraitSegSINet(onnx_path); // <= 380Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegExtremeC3Net(onnx_path); // <= 180Kb !!! Extreme Tiny !!!
auto *segment = new lite::cv::segmentation::FaceHairSeg(onnx_path); // 18M
auto *segment = new lite::cv::segmentation::HairSeg(onnx_path); // 18M
auto *segment = new lite::cv::segmentation::MobileHairSeg(onnx_path); // 14M

Example12: Photo-to-Cartoon Transfer using Photo2Cartoon. Download the models from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string head_seg_onnx_path = "../../../examples/hub/onnx/cv/minivision_head_seg.onnx";
  std::string cartoon_onnx_path = "../../../examples/hub/onnx/cv/minivision_female_photo2cartoon.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_female_photo2cartoon.jpg";
  std::string save_mask_path = "../../../examples/logs/test_lite_female_photo2cartoon_seg.jpg";
  std::string save_cartoon_path = "../../../examples/logs/test_lite_female_photo2cartoon_cartoon.jpg";

  auto *head_seg = new lite::cv::segmentation::HeadSeg(head_seg_onnx_path, 4); // 4 threads
  auto *female_photo2cartoon = new lite::cv::style::FemalePhoto2Cartoon(cartoon_onnx_path, 4); // 4 threads

  lite::types::HeadSegContent head_seg_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  head_seg->detect(img_bgr, head_seg_content);

  if (head_seg_content.flag && !head_seg_content.mask.empty())
  {
    cv::imwrite(save_mask_path, head_seg_content.mask * 255.f);
    // Female Photo2Cartoon Style Transfer
    lite::types::FemalePhoto2CartoonContent female_cartoon_content;
    female_photo2cartoon->detect(img_bgr, head_seg_content.mask, female_cartoon_content);
    
    if (female_cartoon_content.flag && !female_cartoon_content.cartoon.empty())
      cv::imwrite(save_cartoon_path, female_cartoon_content.cartoon);
  }

  delete head_seg;
  delete female_photo2cartoon;
}

The output is:

More classes for photo style transfer.

auto *transfer = new lite::cv::style::FemalePhoto2Cartoon(onnx_path);

Example13: Face Parsing using FaceParsingBiSeNet. Download the model from the Model Zoo.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../examples/hub/onnx/cv/face_parsing_512x512.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_parsing.png";
  std::string save_img_path = "../../../examples/logs/test_lite_face_parsing_bisenet.jpg";

  auto *face_parsing_bisenet = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path, 8); // 8 threads

  lite::types::FaceParsingContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_parsing_bisenet->detect(img_bgr, content);

  if (content.flag && !content.merge.empty())
    cv::imwrite(save_img_path, content.merge);
  
  delete face_parsing_bisenet;
}

The output is:

More classes for face parsing (hair, eyes, nose, mouth, others)

auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path); // 50Mb
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path); // Dynamic Shape Inference.

Citations 🎉🎉

@misc{lite.ai.toolkit@2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={Yanjun Qiu},
  year={2021}
}

lite.ai.toolkit's People

Contributors

avensun, deftruth, lee1221ee, ysc3839


lite.ai.toolkit's Issues

Question about the YOLOv5 code

YOLOv5 is an anchor-based algorithm, so the forward pass should involve anchor computation, but I don't see any anchor-related information in the code. How is this handled?

How were the models converted?

I have a custom-trained ResNet model saved as a PyTorch .pt file. How do I convert it to an ONNX model? I couldn't find conversion instructions in the vision repo you linked.

Detectron2

Hi,
Thanks for sharing it. Could you please add support for Detectron2?

Thank you in advance

Cannot use GPU on Windows

Hi again! I am running this on Windows, and I see the following message:

2021-10-11 19:50:28.8453389 [W:onnxruntime:, fallback_cpu_capability.cc:135 onnxruntime::GetCpuPreferredNodes] Force fallback to CPU execution for node: Slice_339

Why is this happening? Should I be able to run this on the GPU?

Thanks!

RVM ONNX models on Google Drive?

Hi! Thanks for making this code available. Is it possible to upload the RVM models to Google Drive? I am unable to access Baidu where I am. Thank you!

YOLOX-Nano speed issue

I benchmarked the inference speed of the YOLOX series with your framework. Every model runs at normal speed except yolox_nano, whose inference is even slower than yolox_s. All the ONNX files were converted from pth files trained on the official COCO dataset.
I noticed that YOLOX has some extra code when defining the nano model (in ./exps/default/nano.py), as shown in the screenshot below.
(screenshot)
Could this have an effect?

Does it support ResNet50?

I converted a ResNet50 model, but loading it throws an error:
Exception thrown at 0x00007FF977504ED9 (in xx.exe): Microsoft C++ exception: std::length_error at memory location 0x0000006D50B7EC30.
Unhandled exception at 0x00007FF977504ED9 (in xx.exe): Microsoft C++ exception: std::length_error at memory location 0x0000006D50B7EC30.

(screenshot)

How to use your toolkit with onnxruntime-gpu with Linux Ubuntu

Hello @DefTruth
Thanks for your work. I tested some face detectors in your toolkit and they work well on the CPU in Ubuntu 16.04.
I would like to use the GPU, so I downloaded onnxruntime-linux-x64-gpu-1.7.0.tgz.
I followed your suggestion:
cp you-path-to-downloaded-or-built-onnxruntime/lib/onnxruntime lite.ai.toolkit/lib
and used the headers offered by this repo; I left these directories unchanged and only copied the lib.
But when I check the results, they did not run on the GPU.
Can you give me some suggestions?
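As a general ONNXRuntime note (this does not cover how lite.ai.toolkit creates its sessions internally): even with the GPU package, CUDA is only used when the CUDA execution provider is appended to the session options before the session is created. A minimal sketch with the ONNXRuntime C++ API:

#include <onnxruntime_cxx_api.h>

void make_gpu_session()
{
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "gpu-check");
  Ort::SessionOptions session_options;
  OrtCUDAProviderOptions cuda_options{};  // defaults to device_id 0
  session_options.AppendExecutionProvider_CUDA(cuda_options);
  Ort::Session session(env, "model.onnx", session_options);  // hypothetical model path
}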

contribute-lite.ai-cv-detection-template

  • model information: The information for the model is listed below.
Project Address Author Model File Inference
yolov5 (🔥🔥💥↑) ultralytics yolov5-model-pytorch-hub detect.py

Note: this is a template issue showing how to contribute your models. Just replace "template" with your model or project name, e.g. contribute-lite.ai-cv-detection-YoloV5.

How to build for Windows 10? Many errors.

Hi, I get a lot of errors when trying to build on Windows 10 with Qt (cmake and mingw64-make).
(screenshot)

A precompiled DLL for Windows 10 64/32-bit would be perfect :)

Windows VS2019 build error:

core\ort_types.h(272,1): error C2440: "initializing": cannot convert from "ortcv::types::BoundingBoxType<int,double>" to "ortcv::types::BoundingBoxType<int,float>"

Build errors on Ubuntu, how can I fix them?

ubuntu@ubuntu-M12SWA-TF:~/lite.ai.toolkit$ sh ./build.sh
build directory exist! clearing ...
clear built files done ! & rebuilding ...
-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
########## Checking Platform for: /home/ubuntu/lite.ai.toolkit ###########
==================================== Lite.AI.ToolKit 0.1.0 =============================
Project: lite.ai.toolkit
Version: 0.1.0
SO Version: 0.1.0
Build Type: MinSizeRel
Platform Name: linux
Root Path: /home/ubuntu/lite.ai.toolkit

################################### Engines Enable Details ... #######################################
-- INCLUDE_OPENCV: ON
-- ENABLE_ONNXRUNTIME: ON
-- ENABLE_MNN: OFF
-- ENABLE_NCNN: OFF
-- ENABLE_TNN: OFF
######################################################################################################
########## Setting up OpenCV libs for: /home/ubuntu/lite.ai.toolkit ###########
###########################################################################################
Installing Lite.AI.ToolKit Headers for ONNXRuntime Backend ...
-- Installing: /home/ubuntu/lite.ai.toolkit/build/lite.ai.toolkit/include/lite/ort/core/ort_config.h
··················
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/lite.ai.toolkit/build
[ 0%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/utils.cpp.o
[ 0%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_ghost_arcface.cpp.o
[ 1%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/center_loss_face.cpp.o
[ 2%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/deeplabv3_resnet101.cpp.o
[ 2%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/colorizer.cpp.o
[ 3%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/age_googlenet.cpp.o
[ 3%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_combined_face.cpp.o
[ 3%] Building CXX object CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/densenet.cpp.o
In file included from /home/ubuntu/lite.ai.toolkit/lite/utils.cpp:5:0:
/home/ubuntu/lite.ai.toolkit/lite/utils.h: In function ‘std::vector<_Tp> lite::utils::math::softmax(const T*, unsigned int, unsigned int&)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: error: ‘expf’ is not a member of ‘std’
softmax_probs[i] = std::expf(logits[i]);
/home/ubuntu/lite.ai.toolkit/lite/utils.h:94:29: note: suggested alternative: ‘exp’
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp: In function ‘void lite::utils::draw_axis_inplace(cv::Mat&, const EulerAngles&, float, int)’:
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:48:47: error: ‘cosf’ is not a member of ‘std’
const int x1 = static_cast<int>(size * std::cosf(yaw) * std::cosf(roll)) + tdx;
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:48:47: note: suggested alternative: ‘cosh’
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:50:54: error: ‘sinf’ is not a member of ‘std’
size * (std::cosf(pitch) * std::sinf(roll)
/home/ubuntu/lite.ai.toolkit/lite/utils.cpp:50:54: note: suggested alternative: ‘sinh’
[... the same ‘expf’/‘cosf’/‘sinf’ is not a member of ‘std’ errors repeat for utils.h:119, utils.cpp:48-99 (draw_axis_inplace / draw_axis) and utils.cpp:327/332 (blending_nms), and again for every translation unit that includes lite/utils.h (age_googlenet.cpp, cava_combined_face.cpp, densenet.cpp, ...) ...]
In file included from /home/ubuntu/lite.ai.toolkit/lite/ort/cv/deeplabv3_resnet101.cpp:6:0:
/home/ubuntu/lite.ai.toolkit/lite/ort/core/ort_utils.h:33:80: warning: dynamic exception specifications are deprecated in C++11 [-Wdeprecated]
unsigned int data_format = CHW) throw(std::runtime_error);
[... the same -Wdeprecated warning repeats for each lite/ort/cv/*.cpp file ...]
CMakeFiles/lite.ai.toolkit.dir/build.make:75: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/utils.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/utils.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/lite.ai.toolkit.dir/build.make:173: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/densenet.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/densenet.cpp.o] Error 1
CMakeFiles/lite.ai.toolkit.dir/build.make:103: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_combined_face.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/cava_combined_face.cpp.o] Error 1
CMakeFiles/lite.ai.toolkit.dir/build.make:89: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/age_googlenet.cpp.o' failed
make[2]: *** [CMakeFiles/lite.ai.toolkit.dir/lite/ort/cv/age_googlenet.cpp.o] Error 1
CMakeFiles/Makefile2:237: recipe for target 'CMakeFiles/lite.ai.toolkit.dir/all' failed
make[1]: *** [CMakeFiles/lite.ai.toolkit.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error 2
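
For the record, these errors occur because the libstdc++ shipped with these GCC versions does not declare the C99 float variants (expf/cosf/sinf) in namespace std. A portable fix, sketched below as a suggestion rather than a quote from the toolkit (softmax_term and head_pose_y3 are illustrative names), is to use the float overloads of std::exp/std::cos/std::sin, which the standard guarantees:

#include <cmath>

// std::expf/std::cosf/std::sinf are missing from older libstdc++; the float
// overloads of std::exp/std::cos/std::sin below compile cleanly instead.
inline float softmax_term(float logit) {
  return std::exp(logit);  // overload resolution picks exp(float)
}

inline int head_pose_y3(float size, float yaw, float pitch, int tdy) {
  return static_cast<int>(-size * std::cos(yaw) * std::sin(pitch)) + tdy;
}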

Running models in half-precision FP16

I am trying to run the FP16 version of the model "rvm_mobilenetv3_fp16.onnx"

I am trying to write an FP16 version of the helper function
Ort::Value ortcv::utils::transform::create_tensor()

I understand I have to use the function:
inline Value Value::CreateTensor(const OrtMemoryInfo* info, void* p_data, size_t p_data_byte_count, const int64_t* shape, size_t shape_len, ONNXTensorElementDataType type)

With ONNXTensorElementDataType = ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16 // IEEE 754 half-precision format

But I am stuck working out how to handle half-precision buffers in C++.

It is probably necessary to use uint16 storage behind the pointers and convert to half floats at some point, but I am lost as to how to handle this.
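
A minimal sketch of what this could look like, assuming the data really is FP16 (not bfloat16); create_fp16_tensor and its parameters are illustrative names, and the float-to-half conversion truncates instead of rounding to nearest even:

#include <cstdint>
#include <cstring>
#include <vector>
#include <onnxruntime_cxx_api.h>

// Simplified float32 -> float16 bit conversion (no round-to-nearest-even,
// denormals flushed to zero, NaN collapsed to infinity).
static uint16_t float_to_half(float f) {
  uint32_t x; std::memcpy(&x, &f, sizeof(x));
  const uint32_t sign = (x >> 16) & 0x8000u;
  const int32_t  exp  = (int32_t)((x >> 23) & 0xFFu) - 127 + 15;
  const uint32_t mant = (x >> 13) & 0x3FFu;
  if (exp <= 0)  return (uint16_t)sign;              // underflow -> signed zero
  if (exp >= 31) return (uint16_t)(sign | 0x7C00u);  // overflow  -> infinity
  return (uint16_t)(sign | ((uint32_t)exp << 10) | mant);
}

// Build an FP16 tensor from FP32 data. half_storage must outlive the returned
// Ort::Value, because CreateTensor does not copy the buffer.
static Ort::Value create_fp16_tensor(const Ort::MemoryInfo &memory_info,
                                     const std::vector<float> &float_values,
                                     const std::vector<int64_t> &dims,
                                     std::vector<uint16_t> &half_storage) {
  half_storage.resize(float_values.size());
  for (size_t i = 0; i < float_values.size(); ++i)
    half_storage[i] = float_to_half(float_values[i]);
  return Ort::Value::CreateTensor(
      memory_info, half_storage.data(), half_storage.size() * sizeof(uint16_t),
      dims.data(), dims.size(), ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16);
}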

Big bug! I found a serious bug

For example, in YOLOv5 the preprocessing at inference time simply resizes the image to the input size, e.g. 640x640. This distorts many images and makes the detections inaccurate:
Ort::Value YoloV5::transform(const cv::Mat &mat)
{
  cv::Mat canva = mat.clone();
  cv::cvtColor(canva, canva, cv::COLOR_BGR2RGB);
  cv::resize(canva, canva, cv::Size(input_node_dims.at(3),
                                    input_node_dims.at(2)));
  // (1,3,640,640) 1xCXHXW

  ortcv::utils::transform::normalize_inplace(canva, mean_val, scale_val); // float32
  return ortcv::utils::transform::create_tensor(
      canva, input_node_dims, memory_info_handler,
      input_values_handler, ortcv::utils::transform::CHW);
}

The Python code, by contrast, computes the minimum scaling ratio, resizes the original image by that ratio, and then pads the borders up to 640. Nothing in the image is scaled non-uniformly, so it is not distorted; when restoring the detection boxes, the inverse operation is applied (a C++ equivalent is sketched after the Python code below):

def letterbox(img, new_shape=(416, 416), color=(114, 114, 114), auto=False, scaleFill=False, scaleup=True):
    shape = img.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:
        r = min(r, 1.0)

    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (new_shape[1], new_shape[0])
        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2
    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return img, ratio, (dw, dh)

def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
    # Rescale coords (xyxy) from img1_shape to img0_shape
    if ratio_pad is None:  # calculate from img0_shape
        gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain = old / new
        pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
    else:
        gain = ratio_pad[0][0]
        pad = ratio_pad[1]

    coords[:, [0, 2]] -= pad[0]  # x padding
    coords[:, [1, 3]] -= pad[1]  # y padding
    coords[:, :4] /= gain
    clip_coords(coords, img0_shape)
    return coords
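
For reference, a minimal C++/OpenCV letterbox mirroring the Python above could look like the sketch below; it is an illustration, not the toolkit's actual implementation:

#include <algorithm>
#include <cmath>
#include <opencv2/opencv.hpp>

// Resize with the aspect ratio preserved, then pad to new_w x new_h.
// The scale r and per-side paddings (dw, dh) are returned so detections can
// be mapped back: x = (x_pred - dw) / r, y = (y_pred - dh) / r.
static cv::Mat letterbox(const cv::Mat &img, int new_w, int new_h,
                         float &r, float &dw, float &dh) {
  r = std::min((float)new_w / (float)img.cols, (float)new_h / (float)img.rows);
  const int unpad_w = (int)std::round(img.cols * r);
  const int unpad_h = (int)std::round(img.rows * r);
  dw = (new_w - unpad_w) / 2.0f;  // split padding across both sides
  dh = (new_h - unpad_h) / 2.0f;

  cv::Mat out;
  cv::resize(img, out, cv::Size(unpad_w, unpad_h), 0, 0, cv::INTER_LINEAR);
  const int top = (int)std::round(dh - 0.1f), bottom = (int)std::round(dh + 0.1f);
  const int left = (int)std::round(dw - 0.1f), right = (int)std::round(dw + 0.1f);
  cv::copyMakeBorder(out, out, top, bottom, left, right,
                     cv::BORDER_CONSTANT, cv::Scalar(114, 114, 114));
  return out;
}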

VS2015 build error

It builds fine with VS2019, but the project requires VS2015; after switching to VS2015 the build fails, apparently because of weaker support for templates and similar features.

Not working under Windows 10

Hello,

I am trying to test it under Windows 10 with Visual Studio 2019.
It compiles fine, but it always gives me the same error:
"the application was unable to start correctly (0xc000007b)".
I have already tried Release, Debug, MinSizeRel and RelWithDebInfo.
Could you please help me ?

Thank you.

Cannot run the test yolox example

It fails with an error:

Error LNK2001: unresolved external symbol "public: void __cdecl ortcv::YoloX::detect(class cv::Mat const &,class std::vector<struct ortcv::types::BoundingBoxType<float,float>,class std::allocator<struct ortcv::types::BoundingBoxType<float,float> > > &,float,float,unsigned int,unsigned int)" (?detect@YoloX@ortcv@@QEAAXAEBVMat@cv@@AEAV?$vector@U?$BoundingBoxType@MM@types@ortcv@@V?$allocator@U?$BoundingBoxType@MM@types@ortcv@@@std@@@std@@MMII@Z) gyy_ort_test D:\Download\lite.ai-main\gyy_test\gyy_ort_test\gyy_ort_test\source.obj 1

How should I go about fixing this?

A question about onnxruntime

Ort::Env m_env;
Ort::Session m_session;
What is the relationship between these two? The onnxruntime docs describe Ort::Env as globally unique. To build a producer-consumer inference module that increases inference concurrency, should all threads share one Ort::Env while each consumer thread creates its own Ort::Session? Any advice would be appreciated.
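
For what it's worth, a common pattern is exactly that: one process-wide Ort::Env, with each worker creating its own Ort::Session from it. A sketch under that assumption (model.onnx and the thread count are placeholders):

#include <thread>
#include <vector>
#include <onnxruntime_cxx_api.h>

int main() {
  // One Env for the whole process; it owns logging and global state.
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "shared-env");
  Ort::SessionOptions opts;

  auto worker = [&env, &opts](int /*id*/) {
    // Each consumer thread builds its own Session from the shared Env.
    Ort::Session session(env, "model.onnx", opts);
    // ... pop requests from a queue and call session.Run(...) here ...
  };

  std::vector<std::thread> pool;
  for (int i = 0; i < 4; ++i) pool.emplace_back(worker, i);
  for (auto &t : pool) t.join();
  return 0;
}

Note that Session::Run() is itself safe to call concurrently, so sharing a single session across consumers is also viable and saves memory; per-thread sessions mainly buy isolation.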

Using onnxruntime iobinding to eliminate useless transfers

It would be great to be able to use onnxruntime iobinding to eliminate useless transfers and improve performance.

For the RobustVideoMatting network, the author explains how to use iobinding in Python to keep the recurrent-state tensors on the GPU (avoiding copying them to the CPU and back to the GPU on the next frame):
https://github.com/PeterL1n/RobustVideoMatting/blob/master/documentation/inference.md

I've been looking for information to implement this iobinding in c++ but I couldn't find any reference.

I've figured out how to get the iobinding object from a pointer to the session:
ort_iobinding = new Ort::IoBinding(*ort_session);

To bind the outputs there are two methods:
void BindOutput(const char* name, const Value&);
void BindOutput(const char* name, const MemoryInfo&);

I think the correct way would be to create a MemoryInfo for the CUDA device, but I am not sure whether the following is correct:
Ort::MemoryInfo info_cuda("Cuda", OrtAllocatorType::OrtArenaAllocator, 0, OrtMemTypeDefault);
for (int i = 0; i < num_outputs; i++) ort_iobinding->BindOutput(output_node_names[i], info_cuda);

To bind the inputs there is only one method:
void BindInput(const char* name, const Value&);

To create the Ort::Value for the "src" input and bind it, I think we can do:
Ort::Value srcTensor = Ort::Value::CreateTensor(memory_info_handler, src_values.data(), src_size, src_dims.data(), src_dims.size());
ort_iobinding->BindInput(input_node_names[0], srcTensor);

But I haven't been able to figure out how to create the tensors for the recurrent states as CUDA data.

I've tried the following code putting everything in the CPU just to check it works:

  • Once at the beginning:
ort_iobinding = new Ort::IoBinding(*ort_session);
for (int i = 0; i < num_outputs; i++) ort_iobinding->BindOutput(output_node_names[i], memory_info_handler);

  • Every frame:
    Ort::Value srcTensor = Ort::Value::CreateTensor(memory_info_handler, src_values.data(), src_size, src_dims.data(), src_dims.size());
    Ort::Value r1iTensor = Ort::Value::CreateTensor(memory_info_handler, r1i_values.data(), r1i_size, r1i_dims.data(), r1i_dims.size());
    Ort::Value r2iTensor = Ort::Value::CreateTensor(memory_info_handler, r2i_values.data(), r2i_size, r2i_dims.data(), r2i_dims.size());
    Ort::Value r3iTensor = Ort::Value::CreateTensor(memory_info_handler, r3i_values.data(), r3i_size, r3i_dims.data(), r3i_dims.size());
    Ort::Value r4iTensor = Ort::Value::CreateTensor(memory_info_handler, r4i_values.data(), r4i_size, r4i_dims.data(), r4i_dims.size());
    Ort::Value dsrTensor = Ort::Value::CreateTensor(memory_info_handler, dsr_values.data(), dsr_size, dsr_dims.data(), dsr_dims.size());
    ort_iobinding->BindInput(input_node_names[0], srcTensor);
    ort_iobinding->BindInput(input_node_names[1], r1iTensor);
    ort_iobinding->BindInput(input_node_names[2], r2iTensor);
    ort_iobinding->BindInput(input_node_names[3], r3iTensor);
    ort_iobinding->BindInput(input_node_names[4], r4iTensor);
    ort_iobinding->BindInput(input_node_names[5], dsrTensor);
    ort_session->Run(Ort::RunOptions{ nullptr }, *ort_iobinding);
    auto output_tensors = ort_iobinding->GetOutputValues();

And it works correctly, but obviously with the same performance. The point is to figure out how to keep the recurrent-state tensors in GPU memory at all times.
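
One way this might work, reusing the names from the snippets above (a sketch assuming a CUDA build of onnxruntime; the output ordering [fgr, pha, r1o..r4o] is an assumption about the RVM model): bind the outputs to GPU memory once, then re-bind the previous frame's output values as the next frame's recurrent inputs, so the states never leave the device:

// Once at the beginning: let onnxruntime allocate the outputs on the GPU.
Ort::MemoryInfo info_cuda("Cuda", OrtAllocatorType::OrtArenaAllocator, 0, OrtMemTypeDefault);
Ort::IoBinding binding(*ort_session);
for (int i = 0; i < num_outputs; ++i)
  binding.BindOutput(output_node_names[i], info_cuda);

std::vector<Ort::Value> prev;  // GPU-resident outputs from the previous frame
// Every frame:
binding.BindInput(input_node_names[0], srcTensor);  // only src crosses CPU->GPU
if (!prev.empty()) {
  // Feed r1o..r4o back as r1i..r4i without copying them off the device
  // (indices assume outputs are [fgr, pha, r1o, r2o, r3o, r4o]).
  for (int i = 0; i < 4; ++i)
    binding.BindInput(input_node_names[1 + i], prev[2 + i]);
}  // on the first frame, bind zero-initialized r1i..r4i tensors as you do now
// ... bind the downsample_ratio input as in your current code ...
ort_session->Run(Ort::RunOptions{nullptr}, binding);
prev = binding.GetOutputValues();  // the returned values stay on the GPU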

YOLOX update issue

Since YOLOX was updated, the code no longer seems to work with the new models. What needs to be adjusted?

ort_types.h build error on Ubuntu with GCC 5.4

ort_types.h:279:64: error: conversion from ‘ortcv::types::BoundingBoxType<int, double>’ to non-scalar type ‘ortcv::types::BoundingBoxType<int, float>’ requested
  BoundingBoxType<int> boxi = this->template convert_type<int>();

What might be the cause?

Header include issue

When another project of mine uses the lite.ai.toolkit library, I still have to include the onnxruntime, MNN and NCNN headers, even though I only need the public interfaces. Could the onnxruntime/MNN/NCNN header include paths be kept internal to lite.ai.toolkit, so that programs calling lite.ai.toolkit do not need those headers? (A possible approach is sketched below.)
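
One way to get there (an illustrative sketch of the usual opaque-pointer pattern, not the toolkit's current layout; Detector is a hypothetical class) is to expose only standard/OpenCV types in the installed headers, so the engine headers are needed only inside the .cpp files:

// public header: no onnxruntime/MNN/NCNN includes required by consumers
#include <memory>
#include <string>

namespace lite {
class Detector {
public:
  explicit Detector(const std::string &model_path);
  ~Detector();  // defined in the .cpp, where Impl is a complete type
private:
  struct Impl;                  // holds Ort::Env/Ort::Session etc. in the .cpp
  std::unique_ptr<Impl> impl_;  // opaque pointer keeps engine types private
};
} // namespace lite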

A question about the YOLOv5 code

YOLOv5 is an anchor-based algorithm, so the forward pass should involve anchor computations, yet I cannot find any anchor-related code here. How is this handled?

Access to models

Hi,

Thanks for your work. Could you please share the models on Google Drive? I can't access Baidu.

Thanks

ONNXRuntime "Version Detected Sim" is always the same even when the person changes

Hi,

I successfully compiled it on macOS.

While trying the face recognition algorithms, I noticed that the ONNXRuntime "Version Detected Sim" value is always the same even when the person changes.

i.e.: lite_glint_arcface.cpp
model : std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";

person a - person b:

/var/folders/h6/7d637725049b0nf7_xqjkf640000gn/T/tmpl3pFGJ ; exit;
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
[ WARN:0] global /Users/yanjunqiu/Desktop/third_party/library/opencv/modules/core/src/matrix_expressions.cpp (1334) assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: opencv/opencv#16739
Default Version Detected Sim: 0.415043
Default Version Detected Dist: 1.08163
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
ONNXRuntime Version Detected Sim: 0.0349244

person x - person c:

/var/folders/h6/7d637725049b0nf7_xqjkf640000gn/T/tmpzFKmvz ; exit;
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
[ WARN:0] global /Users/yanjunqiu/Desktop/third_party/library/opencv/modules/core/src/matrix_expressions.cpp (1334) assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: opencv/opencv#16739
Default Version Detected Sim: 0.0609607
Default Version Detected Dist: 1.37043
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx
=============== Input-Dims ==============
input_node_dims: 1
input_node_dims: 3
input_node_dims: 112
input_node_dims: 112
=============== Output-Dims ==============
Output: 0 Name: embedding Dim: 0 :1
Output: 0 Name: embedding Dim: 1 :512
ONNXRuntime Version Detected Sim: 0.0349244

Ubuntu 16.04 build problem

OpenCV and onnxruntime have been built and configured following the instructions, but running sh ./build.sh fails with errors like these:
(two error screenshots omitted)
Could you take a look when you get a chance, or put together a detailed tutorial? Haha.

Any plans to add multiple object tracking?

As the title says: are there plans to add deep-learning-based trackers such as FairMOT and JDE from the past couple of years? Having searched all the open-source C++ projects, lite.ai.toolkit is the highest-quality one and the best suited to cross-platform development and deployment.
