
πŸ…πŸ…Lite.AI.ToolKit: A lite C++ toolkit of awesome AI models.


English | δΈ­ζ–‡ζ–‡ζ‘£ | MacOS | Linux | Windows


πŸ…πŸ…Lite.AI.ToolKit: A lite C++ toolkit of awesome AI models which contains 70+ models now. It's a collection of personal interests. Such as RVM, YOLOX, YOLOP, YOLOR, YoloV5, DeepLabV3, ArcFace, etc. emmm😞 ... it's not perfect yet. For now, let's regard it as a large collection of application cases for inference engines. Lite.AI.ToolKit based on ONNXRuntime C++ by default. I do have plans to reimplement it with NCNN, MNN and TNN, some models are already supported.

Core FeaturesπŸ‘πŸ‘‹

❀️ Star πŸŒŸπŸ‘†πŸ» this repo if it does any helps to you, many thanks ~

Supported Models Matrix

  • / = not supported now.
  • βœ… = known to work and officially supported now.
  • βœ”οΈ = known to work, but unofficially supported now.
  • ❔ = in my plan, but not coming soon; maybe a few months later.
| Class | Size | Type | Demo | ONNXRuntime | MNN | NCNN | TNN | MacOS | Linux | Windows | Android |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YoloV5 | 28M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV3 | 236M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| TinyYoloV3 | 33M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloV4 | 176M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSD | 76M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSDMobileNetV1 | 27M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloX | 3.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| TinyYoloV4VOC | 22M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| TinyYoloV4COCO | 22M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloR | 39M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ScaledYoloV4 | 270M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDet | 15M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDetD7 | 220M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDetD8 | 322M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YOLOP | 30M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDet | 1.1M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDetEffi... | 12M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloX_V_0_1_1 | 3.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV5_V_6_0 | 7.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GlintArcFace | 92M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GlintCosFace | 92M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| GlintPartialFC | 170M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FaceNet | 89M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FocalArcFace | 166M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FocalAsiaArcFace | 166M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| TencentCurricularFace | 249M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| TencentCifpFace | 130M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| CenterLossFace | 280M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SphereFace | 80M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| PoseRobustFace | 92M | faceid | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| NaivePoseRobustFace | 43M | faceid | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileFaceNet | 3.8M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| CavaGhostArcFace | 15M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| CavaCombinedFace | 250M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileSEFocalFace | 4.5M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| RobustVideoMatting | 14M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MGMatting | 113M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| UltraFace | 1.1M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| RetinaFace | 1.6M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceBoxes | 3.8M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD | 1.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD98 | 4.8M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileNetV268 | 9.4M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileNetV2SE68 | 11M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD68 | 2.8M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceLandmark1000 | 2.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FSANet | 1.2M | face::pose | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| AgeGoogleNet | 23M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GenderGoogleNet | 23M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EmotionFerPlus | 33M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| VGG16Age | 514M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| VGG16Gender | 512M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSRNet | 190K | face::attr | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EfficientEmotion7 | 15M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EfficientEmotion8 | 15M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileEmotion7 | 13M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ReXNetEmotion7 | 30M | face::attr | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientNetLite4 | 49M | classification | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| ShuffleNetV2 | 8.7M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| DenseNet121 | 30.7M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| GhostNet | 20M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| HdrDNet | 13M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| IBNNet | 97M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileNetV2 | 13M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ResNet | 44M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| ResNeXt | 95M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| DeepLabV3ResNet101 | 232M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FCNResNet101 | 207M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FastStyleTransfer | 6.4M | style | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| Colorizer | 123M | colorization | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SubPixelCNN | 234K | resolution | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |

Updates!!

*【2021/12/08】Added MGMatting for human matting (CVPR 2021). See [c++ demo][arXiv 2021][code].
*【2021/11/11】Added YoloV5_V_6_0 for object detection. See [c++ demo][doi][code].
*【2021/10/26】Added YoloX_V_0_1_1 for object detection. See [c++ demo][arXiv 2021][code].
*【2021/10/02】Added NanoDet ⚑ super fast and only 1.1Mb! See [c++ demo][blog][code].
*【2021/09/20】Added RobustVideoMatting for image and video matting! See [c++ demo][arXiv 2021][code].
*【2021/09/02】Added YOLOP for panoptic driving perception πŸš—! See [c++ demo][arXiv 2021][code].


Contents.

1. Build.

  • MacOS: Build the shared lib of Lite.AI.ToolKit for MacOS from source. Note that Lite.AI.ToolKit uses ONNXRuntime as its default backend, because ONNXRuntime supports most ONNX operators.
    git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest
    cd lite.ai.toolkit && sh ./build.sh  # On MacOS, you can use the built OpenCV, ONNXRuntime, MNN, NCNN and TNN libs in this repo.

Linux and Windows.

⚠️ Lite.AI.ToolKit does not directly support Linux and Windows now. For Linux and Windows, you need to build or download (if official builds exist) the shared libs of OpenCV, ONNXRuntime and any other engines (like MNN, NCNN, TNN) first, then put the headers into the specific directories, or just leave these directories unchanged (use the headers offered by this repo; the header files of this project's dependencies are copied directly from the corresponding official libraries). However, the dynamic libraries for different operating systems need to be recompiled or downloaded. MacOS users can directly use the dynamic libraries of each dependency provided by this project:

  • lite.ai.toolkit/opencv2
      cp -r your-path-to-downloaded-or-built-opencv/include/opencv4/opencv2 lite.ai.toolkit/opencv2
  • lite.ai.toolkit/onnxruntime
      cp -r your-path-to-downloaded-or-built-onnxruntime/include/onnxruntime lite.ai.toolkit/onnxruntime
  • lite.ai.toolkit/MNN
      cp -r your-path-to-downloaded-or-built-MNN/include/MNN lite.ai.toolkit/MNN
  • lite.ai.toolkit/ncnn
      cp -r your-path-to-downloaded-or-built-ncnn/include/ncnn lite.ai.toolkit/ncnn
  • lite.ai.toolkit/tnn
      cp -r your-path-to-downloaded-or-built-TNN/include/tnn lite.ai.toolkit/tnn

and put the libs into the lite.ai.toolkit/lib directory. Please refer to build-docs1 for third_party.

  • lite.ai.toolkit/lib

      cp your-path-to-downloaded-or-built-opencv/lib/*opencv* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-onnxruntime/lib/*onnxruntime* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-MNN/lib/*MNN* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-ncnn/lib/*ncnn* lite.ai.toolkit/lib
      cp your-path-to-downloaded-or-built-TNN/lib/*TNN* lite.ai.toolkit/lib
  • Windows: You can refer to issue#6.

  • Linux: The Docs and Docker image for Linux will be coming soon ~ issue#2

  • Happy News !!! : πŸš€ You can download the latest official ONNXRuntime builds for Windows, Linux, MacOS and ARM!!! Both CPU and GPU versions are available. No more attention needs to be paid to building it from source. Download the official builds from v1.8.1. I currently use version 1.7.0 for Lite.AI.ToolKit; you can download it from v1.7.0, but version 1.8.1 should also work, I guess ~ πŸ™ƒπŸ€ͺπŸ€. For OpenCV, try to build from source (Linux) or download the official build (Windows) from OpenCV 4.5.3. Then put the includes and libs into the specific directories of Lite.AI.ToolKit.

  • GPU Compatibility for Windows: See issue#10.

  • GPU Compatibility for Linux: See issue#97.

πŸ”‘οΈ How to link Lite.AI.ToolKit? * To link Lite.AI.ToolKit, you can follow the CMakeLists.txt listed belows.
cmake_minimum_required(VERSION 3.17)
project(lite.ai.toolkit.demo)

set(CMAKE_CXX_STANDARD 11)

# setting up lite.ai.toolkit
set(LITE_AI_DIR ${CMAKE_SOURCE_DIR}/lite.ai.toolkit)
set(LITE_AI_INCLUDE_DIR ${LITE_AI_DIR}/include)
set(LITE_AI_LIBRARY_DIR ${LITE_AI_DIR}/lib)
include_directories(${LITE_AI_INCLUDE_DIR})
link_directories(${LITE_AI_LIBRARY_DIR})

set(OpenCV_LIBS
        opencv_highgui
        opencv_core
        opencv_imgcodecs
        opencv_imgproc
        opencv_video
        opencv_videoio
        )
# add your executable
set(EXECUTABLE_OUTPUT_PATH ${CMAKE_SOURCE_DIR}/examples/build)

add_executable(lite_rvm examples/test_lite_rvm.cpp)
target_link_libraries(lite_rvm
        lite.ai.toolkit
        onnxruntime
        MNN  # needed if lite.ai.toolkit was built with ENABLE_MNN=ON,  default OFF
        ncnn # needed if lite.ai.toolkit was built with ENABLE_NCNN=ON, default OFF
        TNN  # needed if lite.ai.toolkit was built with ENABLE_TNN=ON,  default OFF
        ${OpenCV_LIBS})  # link lite.ai.toolkit & other libs.
cd ./build/lite.ai.toolkit/lib && otool -L liblite.ai.toolkit.0.0.1.dylib 
liblite.ai.toolkit.0.0.1.dylib:
        @rpath/liblite.ai.toolkit.0.0.1.dylib (compatibility version 0.0.1, current version 0.0.1)
        @rpath/libopencv_highgui.4.5.dylib (compatibility version 4.5.0, current version 4.5.2)
        @rpath/libonnxruntime.1.7.0.dylib (compatibility version 0.0.0, current version 1.7.0)
        ...
cd ../ && tree .
β”œβ”€β”€ bin
β”œβ”€β”€ include
β”‚   β”œβ”€β”€ lite
β”‚   β”‚   β”œβ”€β”€ backend.h
β”‚   β”‚   β”œβ”€β”€ config.h
β”‚   β”‚   └── lite.h
β”‚   └── ort
└── lib
    └── liblite.ai.toolkit.0.0.1.dylib
  • Run the built examples:
cd ./build/lite.ai.toolkit/bin && ls -lh | grep lite
-rwxr-xr-x  1 root  staff   301K Jun 26 23:10 liblite.ai.toolkit.0.0.1.dylib
...
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov4
-rwxr-xr-x  1 root  staff   196K Jun 26 23:10 lite_yolov5
...
./lite_yolov5
LITEORT_DEBUG LogId: ../../../hub/onnx/cv/yolov5s.onnx
=============== Input-Dims ==============
...
detected num_anchors: 25200
generate_bboxes num: 66
Default Version Detected Boxes Num: 5
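The detected num_anchors: 25200 line in the log matches YOLOv5's grid layout: three output scales with strides 8, 16 and 32 and 3 anchor boxes per grid cell at a 640x640 input. A standalone sketch of that arithmetic (the strides and anchors-per-cell are standard YOLOv5 settings, not values read from this toolkit's code):

```cpp
#include <initializer_list>

// Total YOLOv5 predictions for a square input: 3 anchors per cell
// summed over three feature maps with strides 8, 16 and 32.
int yolov5_num_anchors(int input_size) {
  int total = 0;
  for (int stride : {8, 16, 32}) {
    int cells = input_size / stride;  // grid side length at this scale
    total += 3 * cells * cells;       // 3 anchor boxes per grid cell
  }
  return total;  // yolov5_num_anchors(640) == 25200
}
```

At 640x640 this gives 3 * (80*80 + 40*40 + 20*20) = 25200, which is exactly the count the demo prints before NMS trims it down to a handful of boxes.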

To link the lite.ai.toolkit shared lib, you need to make sure that OpenCV and ONNXRuntime are linked correctly. A minimal example showing how to link the shared lib of Lite.AI.ToolKit for your own project can be found at CMakeLists.txt.

2. Model Zoo.

Lite.AI.ToolKit contains 70+ AI models with 500+ frozen pretrained files now. Most of the files were converted by myself. You can use them through the lite::cv::Type::Class syntax, such as lite::cv::detection::YoloV5. More details can be found in Examples for Lite.AI.ToolKit. Note that, for Google Drive, I can not upload all the *.onnx files because of the storage limitation (15G).

| File | Baidu Drive | Google Drive | Hub |
|---|---|---|---|
| ONNX | Baidu Drive (code: 8gin) | Google Drive | ONNX Hub |
| MNN | Baidu Drive (code: 9v63) | ❔ | MNN Hub |
| NCNN | Baidu Drive (code: sc7f) | ❔ | NCNN Hub |
| TNN | Baidu Drive (code: 6o6k) | ❔ | TNN Hub |

Namespace and Lite.AI.ToolKit modules.

| Namespace | Details |
|---|---|
| lite::cv::detection | Object Detection. One-stage and anchor-free detectors, YoloV5, YoloV4, SSD, etc. βœ… |
| lite::cv::classification | Image Classification. DenseNet, ShuffleNet, ResNet, IBNNet, GhostNet, etc. βœ… |
| lite::cv::faceid | Face Recognition. ArcFace, CosFace, CurricularFace, etc. ❇️ |
| lite::cv::face | Face Analysis. detect, align, pose, attr, etc. ❇️ |
| lite::cv::face::detect | Face Detection. UltraFace, RetinaFace, FaceBoxes, PyramidBox, etc. ❇️ |
| lite::cv::face::align | Face Alignment. PFLD(106), FaceLandmark1000(1000 landmarks), PRNet, etc. ❇️ |
| lite::cv::face::pose | Head Pose Estimation. FSANet, etc. ❇️ |
| lite::cv::face::attr | Face Attributes. Emotion, Age, Gender. EmotionFerPlus, VGG16Age, etc. ❇️ |
| lite::cv::segmentation | Object Segmentation. Such as FCN, DeepLabV3, etc. ⚠️ |
| lite::cv::style | Style Transfer. Contains neural style transfer now, such as FastStyleTransfer. ⚠️ |
| lite::cv::matting | Image Matting. Object and human matting. ⚠️ |
| lite::cv::colorization | Colorization. Make a gray image become RGB. ⚠️ |
| lite::cv::resolution | Super Resolution. ⚠️ |

Lite.AI.ToolKit's Classes and Pretrained Files.

Correspondence between the classes in Lite.AI.ToolKit and pretrained model files can be found at lite.ai.toolkit.hub.onnx.md. For example, the pretrained model files for lite::cv::detection::YoloV5 and lite::cv::detection::YoloX are listed as follows.

| Class | Pretrained ONNX File | Rename or Converted From (Repo) | Size |
|---|---|---|---|
| lite::cv::detection::YoloV5 | yolov5l.onnx | yolov5 (πŸ”₯πŸ”₯πŸ’₯↑) | 188Mb |
| lite::cv::detection::YoloV5 | yolov5m.onnx | yolov5 (πŸ”₯πŸ”₯πŸ’₯↑) | 85Mb |
| lite::cv::detection::YoloV5 | yolov5s.onnx | yolov5 (πŸ”₯πŸ”₯πŸ’₯↑) | 29Mb |
| lite::cv::detection::YoloV5 | yolov5x.onnx | yolov5 (πŸ”₯πŸ”₯πŸ’₯↑) | 351Mb |
| lite::cv::detection::YoloX | yolox_x.onnx | YOLOX (πŸ”₯πŸ”₯!!↑) | 378Mb |
| lite::cv::detection::YoloX | yolox_l.onnx | YOLOX (πŸ”₯πŸ”₯!!↑) | 207Mb |
| lite::cv::detection::YoloX | yolox_m.onnx | YOLOX (πŸ”₯πŸ”₯!!↑) | 97Mb |
| lite::cv::detection::YoloX | yolox_s.onnx | YOLOX (πŸ”₯πŸ”₯!!↑) | 34Mb |
| lite::cv::detection::YoloX | yolox_tiny.onnx | YOLOX (πŸ”₯πŸ”₯!!↑) | 19Mb |
| lite::cv::detection::YoloX | yolox_nano.onnx | YOLOX (πŸ”₯πŸ”₯!!↑) | 3.5Mb |

This means that you can load any one of the yolov5*.onnx and yolox_*.onnx files, according to your application, through the same Lite.AI.ToolKit classes, such as YoloV5, YoloX, etc.

auto *yolov5 = new lite::cv::detection::YoloV5("yolov5x.onnx");  // for server
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5l.onnx"); 
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5m.onnx");  
auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");  // for mobile device 
auto *yolox = new lite::cv::detection::YoloX("yolox_x.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_l.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_m.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_s.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_tiny.onnx");  
auto *yolox = new lite::cv::detection::YoloX("yolox_nano.onnx");  // 3.5Mb only !

3. Examples.

More examples can be found at examples.

Example0: Object Detection using YoloV5. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolov5;
}

The output is:

Or you can use the newest πŸ”₯πŸ”₯ YOLO-series detectors YOLOX or YoloR. They get similar results.


Example1: Video Matting using RobustVideoMatting2021πŸ”₯πŸ”₯πŸ”₯. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents, false, 0.4f);
  
  delete rvm;
}

The output is:



Example2: 1000 Facial Landmarks Detection using FaceLandmark1000. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:


Example3: Colorization using Colorizer. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:



Example4: Face Recognition using ArcFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim01 << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267
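Cosine similarity between two embeddings is the dot product divided by the product of their norms, giving a score in [-1, 1]; a minimal standalone sketch of what lite::utils::math::cosine_similarity presumably computes (the toolkit's actual implementation may differ in details such as the epsilon guard):

```cpp
#include <cmath>
#include <vector>

// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1].
// Assumes a and b have the same length.
float cosine_similarity(const std::vector<float> &a,
                        const std::vector<float> &b) {
  float dot = 0.f, na = 0.f, nb = 0.f;
  for (std::size_t i = 0; i < a.size(); ++i) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  // Small epsilon guards against division by zero for all-zero vectors.
  return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12f);
}
```

Scores close to 1 (like Sim01 above) mean the two faces are likely the same identity; scores near 0 or negative (like Sim02) mean different identities.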


Example5: Face Detection using UltraFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:


3.1 Object Detection using YoloV5. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolov5_1.jpg";
  
  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  
  delete yolov5;
}

The output is:

Or you can use the newest πŸ”₯πŸ”₯ YOLO-series detector YOLOX. It gets similar results.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/yolox_s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolox_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_yolox_1.jpg";

  auto *yolox = new lite::cv::detection::YoloX(onnx_path); 
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolox->detect(img_bgr, detected_boxes);
  
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);  
  
  delete yolox;
}

The output is:

More classes for general object detection.

auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path); 
auto *detector = new lite::cv::detection::YoloV3(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path); 
auto *detector = new lite::cv::detection::SSD(onnx_path); 
auto *detector = new lite::cv::detection::YoloV5(onnx_path); 
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path); 
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path); 
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDet(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path); 
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path); 
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path); // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path); // Super fast and tiny!

3.2 Face Recognition using ArcFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag)
  {
    float sim01 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity<float>(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim01 << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}

The output is:

Detected Sim01: 0.721159 Sim02: -0.0626267

More classes for face recognition.

auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path); // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path); // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path); // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path); // 4.5Mb only !

3.3 Segmentation using DeepLabV3ResNet101. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16); // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag)
  {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty())
    {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it)
      {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}
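The cv::addWeighted call above overlays the segmentation color map on the input with weights 0.2 and 0.8. Per channel, that is the saturated linear blend out = alpha*src1 + beta*src2 + gamma; a minimal standalone sketch of the same arithmetic for a single 8-bit value (an illustration of the formula, not OpenCV's actual implementation):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Weighted blend of two 8-bit samples, clamped (saturated) to [0, 255]:
// the per-channel formula cv::addWeighted applies to every pixel.
std::uint8_t blend_pixel(std::uint8_t p1, std::uint8_t p2,
                         float alpha, float beta, float gamma = 0.f) {
  float v = alpha * p1 + beta * p2 + gamma;
  v = std::min(255.f, std::max(0.f, v));  // saturate like OpenCV does
  return static_cast<std::uint8_t>(std::lround(v));
}
```

With alpha = 0.2 and beta = 0.8 as in the example, the color mask dominates while the original image stays faintly visible underneath.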

The output is:

More classes for segmentation.

auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);

3.4 Age Estimation using SSRNet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../logs/test_lite_ssrnet.jpg";

  lite::cv::face::attr::SSRNet *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);
  std::cout << "Default Version Done! Detected SSRNet Age: " << age.age << std::endl;

  delete ssrnet;
}

The output is:

More classes for face attributes analysis.

auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);  
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path); 
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path); // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path); // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path); // 7 emotions
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path); // 7 emotions

3.5 1000-Class Classification using DenseNet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag)
  {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0)
    {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}
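The loop above walks content.scores in rank order; picking those top-k entries from the raw 1000-class scores is typically done with a partial sort over indices. A self-contained sketch of that selection (one common way to do it, not necessarily how the toolkit ranks its outputs internally):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Indices of the k largest scores, best first -- the usual way a
// classifier's top-k labels are picked from its class scores.
std::vector<int> top_k(const std::vector<float> &scores, int k) {
  std::vector<int> idx(scores.size());
  std::iota(idx.begin(), idx.end(), 0);  // 0, 1, 2, ...
  k = std::min<int>(k, static_cast<int>(idx.size()));
  // Only the first k positions need to be fully ordered.
  std::partial_sort(idx.begin(), idx.begin() + k, idx.end(),
                    [&](int a, int b) { return scores[a] > scores[b]; });
  idx.resize(k);
  return idx;
}
```

The returned indices can then be mapped through a label table, much like content.labels and content.texts in the example.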

The output is:

More classes for image classification.

auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);  
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path); 
auto *classifier = new lite::cv::classification::ResNet(onnx_path); 
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);

3.6 Face Detection using UltraFace. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/ultraface-rfb-640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ultraface.jpg";
  std::string save_img_path = "../../../logs/test_lite_ultraface.jpg";

  auto *ultraface = new lite::cv::face::detect::UltraFace(onnx_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ultraface->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete ultraface;
}

The output is:

More classes for face detection.

auto *detector = new lite::cv::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::cv::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only ! 
auto *detector = new lite::cv::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020

3.7 Colorization using Colorizer. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../logs/test_lite_eccv16_colorizer_1.jpg";
  
  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
  
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);
  
  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}

The output is:



3.8 Head Pose Estimation using FSANet. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);
  
  if (euler_angles.flag)
  {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw << " pitch:" << euler_angles.pitch << " roll:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}

The output is:


3.9 1000 Facial Landmarks Detection using FaceLandmark1000. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../logs/test_lite_face_landmarks_1000.jpg";
    
  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);
  
  delete face_landmarks_1000;
}

The output is:

More classes for face alignment.

auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks !

3.10 Style Transfer using FastStyleTransfer. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../logs/test_lite_fast_style_transfer_candy.jpg";
  
  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);
 
  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}

The output is:



3.11 Video Matting using RobustVideoMatting. Download model from Model-Zoo2.

#include "lite/lite.h"

static void test_default()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../logs/test_lite_rvm_0.mp4";
  
  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads
  std::vector<lite::types::MattingContent> contents;
  
  // 1. video matting.
  rvm->detect_video(video_path, output_path, contents);
  
  delete rvm;
}

The output is:
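Besides whole-video processing, RobustVideoMatting can also be applied to a single image. The sketch below assumes a single-image `detect` overload and `MattingContent` fields (`flag`, `merge_mat`) by analogy with the other result types here; the image paths are illustrative:

```cpp
#include "lite/lite.h"

static void test_image()
{
  std::string onnx_path = "../../../hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  // NOTE: paths are illustrative; use your own test image.
  std::string test_img_path = "../../../examples/lite/resources/test_lite_rvm.jpg";
  std::string save_img_path = "../../../logs/test_lite_rvm.jpg";

  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16); // 16 threads

  lite::types::MattingContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  // 2. single-image matting (overload assumed).
  rvm->detect(img_bgr, content);

  // Save the foreground merged over the predicted alpha (field names assumed).
  if (content.flag && !content.merge_mat.empty())
    cv::imwrite(save_img_path, content.merge_mat);

  delete rvm;
}
```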


4. License.

The code of Lite.AI.ToolKit is released under the GPL-3.0 License.

5. References.

Many thanks to the following projects. All of Lite.AI.ToolKit's models are sourced from these repos.


6. Compilation Options.

In addition, MNN, NCNN and TNN support for more models will be added in the future. However, due to operator compatibility and other issues, there is no guarantee that every model supported by the ONNXRuntime C++ interface can also run through MNN, NCNN or TNN. So, if you want to use all the models supported by this repo and don't care about a performance gap of 1~2ms, just keep ONNXRuntime as the default inference engine. Otherwise, follow the steps below to build with MNN, NCNN or TNN support.

  • change build.sh to pass -DENABLE_MNN=ON, -DENABLE_NCNN=ON or -DENABLE_TNN=ON, such as
cd build && cmake \
  -DCMAKE_BUILD_TYPE=MinSizeRel \
  -DINCLUDE_OPENCV=ON \   # Whether to package OpenCV into lite.ai.toolkit, default ON; otherwise, you need to setup OpenCV yourself.
  -DENABLE_MNN=ON \       # Whether to build with MNN,  default OFF, only some models are supported now.
  -DENABLE_NCNN=OFF \     # Whether to build with NCNN, default OFF, only some models are supported now.
  -DENABLE_TNN=OFF \      # Whether to build with TNN,  default OFF, only some models are supported now.
  .. && make -j8
  • use the MNN, NCNN or TNN version of the interface (see the demos), such as
auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);
auto *nanodet = new lite::tnn::cv::detection::NanoDet(proto_path, model_path);
auto *nanodet = new lite::ncnn::cv::detection::NanoDet(param_path, bin_path);
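The engine-specific classes keep the same call pattern as the default ONNXRuntime interface, so only the constructor and model file change. A minimal sketch for the MNN build of NanoDet (the `.mnn` model path and image paths are illustrative):

```cpp
#include "lite/lite.h"

static void test_mnn()
{
  // NOTE: paths are illustrative; convert or download the MNN model yourself.
  std::string mnn_path = "../../../hub/mnn/cv/nanodet_m.mnn";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_nanodet.jpg";
  std::string save_img_path = "../../../logs/test_lite_nanodet_mnn.jpg";

  auto *nanodet = new lite::mnn::cv::detection::NanoDet(mnn_path);

  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  nanodet->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete nanodet;
}
```

The TNN and NCNN variants differ only in their constructors, which take the proto/model and param/bin file pairs shown above.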

7. Citations.

Cite it as follows if you use Lite.AI.ToolKit.

@misc{lite.ai.toolkit2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={Yan Jun},
  year={2021}
}

