Accel-Video Pipe (AV-Pipe or AVP) is an integrated C++ library for AI video inference tasks on customers' devices, aiming to provide an easy-to-use, high-performance experience.
Note: as the main work of my independent undergrad thesis, this project is still under development. Feel free to play with AV-Pipe and post any questions or suggestions 👏.
AVP Framework:

- Models the AI video inference task as a continuously running DAG; each DAG node can be treated as a modularized component with a degree of generalizability.
- Rich support for neural-network inference, including:
  - LibTorch (Caffe2)
  - OpenVINO (Intel® CPU, GPU, VPU)
  - ONNXRuntime
  - TensorRT (Nvidia® GPU)
  - TVM (not well supported yet)
  - ncnn (for mobile devices, still testing...)

  Note: AVP mainly focuses on the ONNX network format. For TensorFlow models, please check MediaPipe.
- Multi-platform (iOS/Android may be considered in the future):
  - Linux
  - MacOS
  - Windows (see the windows branch)
- Simple coding style: you can build a video inference pipeline in fewer than 50 lines of C++.
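As a rough illustration of that style (a self-contained sketch with invented names, not the actual AVP API), a pipeline boils down to stages wired together in a graph so that each packet flows through them in order:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical illustration only: a pipeline as a chain of stages,
// each transforming a "packet" and passing it downstream.
using Packet = int;                         // stand-in for a video frame
using Stage  = std::function<Packet(Packet)>;

struct MiniPipeline {
    std::vector<Stage> stages;
    void add(Stage s) { stages.push_back(std::move(s)); }
    Packet run(Packet p) {
        for (auto& s : stages) p = s(p);    // DAG degenerated to a chain
        return p;
    }
};

// Toy "processors": one stage doubles the value, the next adds one.
inline Packet demoPipeline(Packet frame) {
    MiniPipeline pipe;
    pipe.add([](Packet p) { return p * 2; });   // e.g. pre-processing
    pipe.add([](Packet p) { return p + 1; });   // e.g. DNN inference
    return pipe.run(frame);
}
```

In real AVP code the stages are `PipeProcessor` objects connected through `Stream`s; this sketch only shows the shape of the programming model.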
AVP Automation:

- Automatic code generation from YAML pipeline configurations. Users only need to write YAML files that configure the pipe components (`PipeProcessor`s in AVP) and connect the different `PipeProcessor`s; the `avp_automation.py` class parses the YAML files, automatically generates the target C++ code, and runs CMake on it.

  Note: a nice front-end UI will bring an even better user experience.
- Automatic pipeline optimization. `avp_automation.py` also provides a `profile` method and a `multi-threading` method to first estimate the timing info of each `PipeProcessor` and then perform automatic thread allocation and scheduling.
- Visualization of the AVP pipeline. `avp_automation` provides a `visualize` method to show the pipeline's DAG, optionally with timing info and thread info.

  E.g., the AVP pipeline of pose_estimation; different colors represent different threads.
Namespace: `avp`

Base Classes (in `avpipe/base.hpp`):

- `StreamPacket`: stores the temporal data to be processed.
  - `mat`: a `cv::Mat`, used for OpenCV-related operations/computations.
  - `tensor`: an `aT::Tensor` (maybe the most friendly C++ tensor type), used for DNN-related operations.
- `Stream`: a queue of `StreamPacket`s with thread-safe, synchronized blocking mechanisms.
- `PipeProcessor`: the actual computing module.
  - `init`: for initialization.
  - `process`: a universal procedure for each computing module: take `StreamPacket`s from `inStreams` and prepare the `StreamPacket`s for `outStreams`.
  - `run`: a virtual function that must be implemented by each module.
  - `bindStream`: used to bind a `Stream` to the `PipeProcessor`.
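The `Stream`/`PipeProcessor` behavior described above can be sketched with the standard library alone. This is a simplified stand-in, not AVP's actual implementation: a mutex/condition-variable queue plays the role of the blocking `Stream`, and an abstract base class's `process` takes a packet from its input stream, calls the virtual `run`, and pushes the result to its output stream.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>

// Simplified stand-ins for avp::Stream / avp::PipeProcessor (illustrative only).
template <typename T>
class MiniStream {                       // thread-safe blocking queue
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T pop() {                            // blocks until a packet arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};

class MiniProcessor {                    // cf. avp::PipeProcessor
public:
    std::vector<MiniStream<int>*> inStreams, outStreams;
    void bindInStream(MiniStream<int>* s)  { inStreams.push_back(s); }
    void bindOutStream(MiniStream<int>* s) { outStreams.push_back(s); }
    void process() {                     // take input, run, emit output
        int pkt = inStreams[0]->pop();
        outStreams[0]->push(run(pkt));
    }
    virtual int run(int pkt) = 0;        // implemented by each concrete module
    virtual ~MiniProcessor() = default;
};

struct DoubleProcessor : MiniProcessor { // toy "module": doubles the packet
    int run(int pkt) override { return pkt * 2; }
};
```

The blocking `pop` is what lets a multi-threaded pipeline synchronize: a downstream processor simply sleeps until its upstream neighbor pushes a packet.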
You need to install the following C++ libraries to use AV-Pipe.
Optional (depending on your use case):
- OpenVINO
- ONNXRuntime
- TensorRT
- glog: for AVP logging.
- graphviz: for visualization.
Run the Python script:

```shell
# run "python avp_automation/run.py -h" to see how to use it
python avp_automation/run.py -f avp_example/pose_estimation.yaml -l POVGBR --loop_len 50
```

Explanation: in the `-l` option, `POVGBR` defines a sequence of actions: [Profile, Optimize, Visualize, Gen-code, Build, Run].
Please see the Dev Roadmap.