
api's Introduction

NNStreamer


Neural Network Support as GStreamer Plugins.

NNStreamer is a set of GStreamer plugins that allow GStreamer developers to adopt neural network models easily and efficiently, and neural network developers to manage neural network pipelines and their filters easily and efficiently.

Architectural Description (WIP)

Toward Among-Device AI from On-Device AI with Stream Pipelines, IEEE/ACM ICSE 2022 SEIP
NNStreamer: Efficient and Agile Development of On-Device AI Systems, IEEE/ACM ICSE 2021 SEIP [media]
NNStreamer: Stream Processing Paradigm for Neural Networks ... [pdf/tech report]
GStreamer Conference 2018, NNStreamer [media] [pdf/slides]
Naver Tech Talk (Korean), 2018 [media] [pdf/slides]
Samsung Developer Conference 2019, NNStreamer (media)
ResearchGate Page of NNStreamer

Official Releases

|         | Tizen | Ubuntu | Android | Yocto | macOS |
| ------- | ----- | ------ | ------- | ----- | ----- |
|         | 5.5M2 and later | 16.04/18.04/20.04/22.04 | 9/P | Kirkstone | |
| arm     | armv7l badge | Available | Available | Ready | N/A |
| arm64   | aarch64 badge | Available | android badge | yocto badge | N/A |
| x64     | x64 badge | ubuntu badge | Ready | Ready | Available |
| x86     | x86 badge | N/A | N/A | Ready | N/A |
| Publish | Tizen Repo | PPA | Daily build | Layer | Brew Tap |
| API     | C/C# (Official) | C | Java | C | C |
  • Ready: CI system ensures build-ability and unit-testing. Users may easily build and execute. However, we do not have an automated release & deployment system for this instance.
  • Available: binary packages are released and deployed automatically and periodically along with CI tests.
  • Daily Release
  • SDK Support: Tizen Studio (5.5 M2+) / Android Studio (JCenter, "nnstreamer")
  • Enabled features of official releases

Objectives

  • Provide neural network framework connectivity (e.g., TensorFlow, Caffe) for GStreamer streams.

    • Efficient Streaming for AI Projects: Apply efficient and flexible stream pipeline to neural networks.
    • Intelligent Media Filters!: Use a neural network model as a media filter / converter.
    • Composite Models!: Multiple neural network models in a single stream pipeline instance.
    • Multi Modal Intelligence!: Multiple sources and stream paths for neural network models.
  • Provide easy methods to construct media streams with neural network models using the de-facto-standard media stream framework, GStreamer.

    • GStreamer users: use neural network models as if they were just another media filter (see the pipeline sketch after this list).
    • Neural network developers: manage media streams easily and efficiently.
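
For example, a pipeline description along the following lines (a minimal sketch; the camera source, model file, and 224x224 RGB input are hypothetical) uses an image-classification model as just another filter element in a GStreamer stream:

/* Hypothetical pipeline sketch: the model file and input dimensions are assumptions. */
const gchar desc[] =
    "v4l2src ! videoconvert ! videoscale ! video/x-raw,width=224,height=224,format=RGB ! "
    "tensor_converter ! "                                   /* video frames -> other/tensor */
    "tensor_filter framework=tensorflow-lite model=mobilenet_v1.tflite ! "  /* run the model */
    "tensor_sink";                                          /* hand the results to the app */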

Maintainers

Committers

Components

Note that this project has just started and many of the components are still in the design phase. In the Component Description page, we describe NNStreamer components of the following three categories: data type definitions, GStreamer elements (plugins), and other miscellaneous components.

Getting Started

For more details, please refer to the following manuals.

  • For Linux-like systems such as Tizen, Debian, and Ubuntu, press here.
  • For macOS systems, press here.
  • To build an API library for Android, press here.

Applications

CI Server

AI Acceleration Hardware Support

Although a framework may accelerate transparently, as TensorFlow-GPU does, nnstreamer provides various hardware acceleration subplugins (see the property sketch after the list below).

  • Movidius-X via ncsdk2 subplugin: Released
  • Movidius-X via openVINO subplugin: Released
  • Edge-TPU via edgetpu subplugin: Released
  • ONE runtime via nnfw (an old name of ONE) subplugin: Released
  • ARMNN via armnn subplugin: Released
  • Verisilicon-Vivante via vivante subplugin: Released
  • Qualcomm SNPE via snpe subplugin: Released
  • NVidia via TensorRT subplugin: Released
  • TRI-x NPUs: Released
  • NXP i.MX series: via the vendor
  • Others: TVM, TensorFlow, TensorFlow-lite, PyTorch, Caffe2, SNAP, ...
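
As a rough sketch of how a subplugin and accelerator are typically selected (the framework name, model path, and accelerator tokens below are assumptions and depend on the subplugin in use), the choice is expressed through tensor_filter properties:

/* Hypothetical sketch: prefer the NPU via the nnfw (ONE) subplugin, falling back to CPU. */
const gchar filter_desc[] =
    "tensor_filter framework=nnfw model=/path/to/model.tflite accelerator=true:npu,cpu";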

Contributing

Contributions are welcome! Please see our Contributing Guide for more details.

api's People

Contributors

again4you, anyj0527, chunseoklee, gichan-jang, harshj20, jaeyun-jung, kparichay, makesource, marekpikula, myungjoo, niley7464, songgot, wooksong, yeonykim2, zhoonit


api's Issues

Test failure of ML-service-agent

ML-service-agent query test sometimes fails.

[  167s] [ RUN      ] MLServiceAgentTest.query_client
[  167s] dbus-daemon[1231059]: [session uid=33 pid=1231059] Activating service name='org.tizen.machinelearning.service' requested by ':1.0' (uid=33 pid=1228407 comm="<not-read>" label="unconfined")
[  168s] dbus-daemon[1231059]: [session uid=33 pid=1231059] Successfully activated service 'org.tizen.machinelearning.service'
[  168s] ../tests/capi/unittest_capi_service_agent_client.cc:654: Failure
[  168s] Expected equality of these values:
[  168s]   ML_ERROR_NONE
[  168s]     Which is: 0
[  168s]   status
[  168s]     Which is: -22
[  168s] ../tests/capi/unittest_capi_service_agent_client.cc:661: Failure
[  168s] Expected equality of these values:
[  168s]   ML_ERROR_NONE
[  168s]     Which is: 0
[  168s]   status
[  168s]     Which is: -22
[  168s] ../tests/capi/unittest_capi_service_agent_client.cc:662: Failure
[  168s] Expected equality of these values:
[  168s]   input_data_size
[  168s]     Which is: 48
[  168s]   output_data_size
[  168s]     Which is: 16

[ml-service] Remote launch/request

Currently, registering models and pipelines on remote devices via ml-service-remote has been implemented.
Next, let's implement launching the pipeline from the sender device (RTOS) or requesting inference using a registered model on remote devices.

  • Pipeline launch: launch and set/get state of the pipeline
  • Inference request
    • Singleshot (send input data with model config file (in/out dim, type, etc.) if necessary)
    • Pipeline (using appsrc, get caps or model config file)
  • Permission: Who will get permission and how do we manage it?
  • Scheduling: How to handle many requests? Thread pool, load balancer, etc.
  • Pull mode: Get a model or pipeline from connected devices.

Backporting ml-service / mlops to Tizen 7.0 and 8.0

Tizen 8.0 Backporting

Tizen 7.0 Backporting

DBus Interface


Pipeline Interface

<?xml version="1.0" encoding="UTF-8" ?>
<node name="/Org/Tizen/MachineLearning/Service">
  <interface name="org.tizen.machinelearning.service.pipeline">
    <!-- Register the pipeline with given description. Return the call result and its id. -->
    <method name="register_pipeline">
      <arg type="s" name="pipeline" direction="in" />
      <arg type="i" name="result" direction="out" />
      <arg type="x" name="id" direction="out" />
    </method>
    <!-- Start the pipeline with given id. -->
    <method name="start_pipeline">
      <arg type="x" name="id" direction="in" />
      <arg type="i" name="result" direction="out" />
    </method>
    <!-- Stop the pipeline with given id -->
    <method name="stop_pipeline">
      <arg type="x" name="id" direction="in" />
      <arg type="i" name="result" direction="out" />
    </method>
    <!-- Destroy the pipeline with given id -->
    <method name="destroy_pipeline">
      <arg type="x" name="id" direction="in" />
      <arg type="i" name="result" direction="out" />
    </method>
    <!-- Get the state of pipeline with given id. -->
    <method name="get_state">
      <arg type="x" name="id" direction="in" />
      <arg type="i" name="result" direction="out" />
      <arg type="i" name="state" direction="out" />
    </method>
    <!-- Get the description of pipeline with given id. -->
    <method name="get_description">
      <arg type="x" name="id" direction="in" />
      <arg type="i" name="result" direction="out" />
      <arg type="s" name="description" direction="out" />
    </method>

    <!-- Sets the pipeline description with a given name. -->
    <method name="Set">
      <arg type="s" name="name" direction="in" />
      <arg type="s" name="description" direction="in" />
      <arg type="i" name="result" direction="out" />
    </method>
    <!-- Gets the pipeline description with a given name. -->
    <method name="Get">
      <arg type="s" name="name" direction="in" />
      <arg type="s" name="description" direction="out" />
      <arg type="i" name="result" direction="out" />
    </method>
    <!-- Deletes the pipeline description with a given name. -->
    <method name="Delete">
      <arg type="s" name="name" direction="in" />
      <arg type="i" name="result" direction="out" />
    </method>
  </interface>
</node>
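
For illustration, a client could invoke register_pipeline over GDBus roughly as follows. This is a minimal sketch: the pipeline description is hypothetical, and the bus name org.tizen.machinelearning.service is taken from the activation log in the test-failure report above.

#include <gio/gio.h>

/* Register a (hypothetical) pipeline description and return its id, or -1 on failure. */
static gint64
register_my_pipeline (void)
{
  GError *err = NULL;
  GDBusConnection *conn = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, &err);
  GVariant *ret;
  gint result = -1;
  gint64 id = -1;

  if (!conn) {
    g_clear_error (&err);
    return -1;
  }

  /* org.tizen.machinelearning.service.pipeline.register_pipeline: "s" in, "(ix)" out. */
  ret = g_dbus_connection_call_sync (conn,
      "org.tizen.machinelearning.service",          /* bus name (assumed, as in the log above) */
      "/Org/Tizen/MachineLearning/Service",         /* object path from the XML */
      "org.tizen.machinelearning.service.pipeline", /* interface */
      "register_pipeline",
      g_variant_new ("(s)", "videotestsrc ! tensor_converter ! tensor_sink"),
      G_VARIANT_TYPE ("(ix)"), G_DBUS_CALL_FLAGS_NONE, -1, NULL, &err);

  if (ret) {
    g_variant_get (ret, "(ix)", &result, &id);
    g_variant_unref (ret);
  }
  g_clear_error (&err);
  g_object_unref (conn);
  return (result == 0) ? id : -1;
}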

Model Interface

<?xml version="1.0" encoding="UTF-8" ?>
<node name="/Org/Tizen/MachineLearning/Service">
  <interface name="org.tizen.machinelearning.service.model">
    <!-- Set the file path of the designated neural network model -->
    <method name="SetPath">
      <arg type="s" name="name" direction="in" />
      <arg type="s" name="path" direction="in" />
      <arg type="i" name="result" direction="out" />
    </method>
    <!-- Get the file path of the designated neural network model -->
    <method name="GetPath">
      <arg type="s" name="name" direction="in" />
      <arg type="s" name="path" direction="out" />
      <arg type="i" name="result" direction="out" />
    </method>
    <!-- Delete the file path of the designated neural network model -->
    <method name="Delete">
      <arg type="s" name="name" direction="in" />
      <arg type="i" name="result" direction="out" />
    </method>
  </interface>
</node>

Service API

Server Side

/* M1 Release */
int ml_service_set_pipeline (const char *name, const char *pipeline_desc);
int ml_service_get_pipeline (const char *name, char **pipeline_desc);
int ml_service_delete_pipeline (const char *name);

/* WIP */
int ml_service_pipeline_construct (const char *name, ml_pipeline_state_cb cb, void *user_data, ml_pipeline_h *pipe);
int ml_service_model_add (const char *name, const ml_service_model_description * desc);

int ml_service_server_getstate (ml_service_server_h h, ml_pipeline_state_e *state);
int ml_service_server_getdesc (ml_service_server_h h, char ** desc);
int ml_service_server_start (ml_service_server_h h);
int ml_service_server_stop (ml_service_server_h h);
int ml_service_server_close (ml_service_server_h h);

/**
 * @brief TBU / Query Server AI Service
 * @detail
 *   Rule 1. The pipeline should not have appsink, tensor_sink, appsrc or any other app-thread dependencies.
 *   Rule 2. Add "#INPUT#" and "#OUTPUT#" elements where input/output streams exist.
 *     E.g., " #INPUT# ! ... ! tensor-filter ... ! ... ! #OUTPUT# ".
 *   Rule 3. There should be exactly one pair of #INPUT# and #OUTPUT#.
 *   Rule 4. Supply input/output metadata with input_info & output_info.
 *   This is the simplest method, but it is restricted to static tensor streams.
 */
int ml_service_server_open_queryserver_static_tensors (ml_service_server_h *h, const char *topic_name, const char * desc, const ml_tensors_info_h input_info, const ml_tensors_info_h output_info);
/**
 * @brief TBU / Query Server AI Service
 * @detail
 *   Rule 1. The pipeline should not have appsink, tensor_sink, appsrc or any other app-thread dependencies.
 *   Rule 2. You may add "#INPUT#" and "#OUTPUT#" elements if you do not know how to use tensor-query-server.
 *     E.g., " #INPUT# ! tensor-filter ... ! ... ! #OUTPUT# ".
 *   Rule 3. There should be exactly one pair of #INPUT# and #OUTPUT#.
 *   Rule 4. Supply input/output metadata with gstcap_in and gstcap_out.
 *   This supports general GStreamer streams and general Tensor streams.
 */
int ml_service_server_open_queryserver_gstcaps (ml_service_server_h *h, const char *topic_name, const char * desc, const char *gstcap_in, const char *gstcap_out);
/**
 * @brief TBU / Query Server AI Service
 * @detail
 *   Rule 1. The pipeline should have a single pair of tensor-query-server-{sink / src}.
 *   Rule 2. The pipeline should not have appsink, tensor_sink, appsrc or any other app-thread dependencies.
 *   Rule 3. There should be exactly one pair of #INPUT# and #OUTPUT# if you use them.
 *   Rule 4. Add capsfilter or capssetter after src and before sink.
 *   This is for seasoned GStreamer/NNStreamer users who have some experience in pipeline writing.
 */
int ml_service_server_open_queryserver_fulldesc (ml_service_server_h *h, const char *topic_name, const char * desc);


/**
 * @brief TBU / PUB/SUB AI Service
 * @detail
 * Use "#OUTPUT#" unless you use fulldesc.
 * Do not rely on app threads (no appsink, appsrc, tensor_sink, and so on).
 */
int ml_service_server_open_publisher_static_tensors (ml_service_server_h *h, const char *topic_name, const char * desc, const ml_tensors_data_h out);
int ml_service_server_open_publisher_gstcaps (ml_service_server_h *h, const char *topic_name, const char * desc, const char *gstcap_out);
int ml_service_server_open_publisher_fulldesc (ml_service_server_h *h, const char *topic_name, const char * desc);


/**
 * @brief TBU / Client-side helpers
 * @detail
 *    Please use a pipeline for more efficient usage. This API is for testing or for apps that can afford high latency.
 * @param [out] in Input tensors info. Set null if you don't need this info.
 * @param [out] out Output tensors info. Set null if you don't need this info.
 *    Note that we do not know if in/out is possible for remote clients, yet.
 */
int ml_service_client_open_query (ml_service_client_h *h, const char *topic_name, ml_tensors_info_h *in, ml_tensors_info_h *out);
int ml_service_client_open_subscriber (ml_service_client_h *h, const char *topic_name, ml_pipeline_sink_cb func, void *user_data);
int ml_service_client_query (ml_service_client_h h, const ml_tensors_data_h in, ml_tensors_data_h out);
int ml_service_client_close (ml_service_client_h h);

Use case #1

const gchar my_pipeline[] = "videotestsrc is-live=true ! videoconvert ! tensor_converter ! tensor_sink async=false";
gchar *pipeline;
int status;
ml_pipeline_h handle;

/* Register the description under the name "my_pipeline". */
status = ml_service_set_pipeline ("my_pipeline", my_pipeline);
/* Retrieve it by name (possibly from another process or at a later time). */
status = ml_service_get_pipeline ("my_pipeline", &pipeline);
/* Construct a runnable pipeline handle from the retrieved description. */
status = ml_pipeline_construct (pipeline, NULL, NULL, &handle);
...
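
As a companion sketch for the WIP client-side helpers above (the topic name is hypothetical, and whether the output buffer must be caller-allocated is an assumption):

ml_service_client_h client;
ml_tensors_info_h in_info, out_info;
ml_tensors_data_h in_data, out_data;
int status;

/* Discover the query service by topic and obtain its tensor metadata. */
status = ml_service_client_open_query (&client, "objectdetector", &in_info, &out_info);
/* Allocate input/output buffers from the reported metadata (assumed caller-allocated). */
status = ml_tensors_data_create (in_info, &in_data);
status = ml_tensors_data_create (out_info, &out_data);
/* ... fill in_data, send the query, then read the result from out_data ... */
status = ml_service_client_query (client, in_data, out_data);
status = ml_service_client_close (client);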

Feature check prohibits calling ml-api from the ml-train API side, which lives in another repo.

Problem

The ml_train API (nntrainer) unit tests fail after a recent SR.

Cause

  1. The first call to ml_tensors_info_create(info) fails at the line below (https://github.com/nnstreamer/nntrainer/blob/24a7738539cadd9d6ce5c0ee479eade373f8ea6f/api/capi/src/nntrainer.cpp#L1101-L1104)

  2. The feature check fails at
    https://github.com/nnstreamer/api/blob/main/c/src/ml-api-common.c#L29

I am not sure how earlier versions made the given function pass under GBS at this point.

Emergency measures

  • Call _ml_set_feature_check from the nntrainer side to turn the feature check off.

int _ml_tizen_set_feature_state (int state);
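
A minimal sketch of that measure (the state value 1, taken to mean "supported", is an assumption):

/* Hypothetical workaround on the nntrainer side: mark the ML feature as
 * supported so that the check in ml-api-common.c is skipped. */
extern int _ml_tizen_set_feature_state (int state);

static void
disable_ml_feature_check (void)
{
  _ml_tizen_set_feature_state (1); /* 1 = supported (assumed) */
}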

An emergency PR will be uploaded soon: nnstreamer/nntrainer#1715

Long term solutions (maybe more)

  1. A systematic approach to control features inside GBS (I heard from @jaeyun-jung that this is probably not viable though :p)
  2. Testing outside of GBS
  3. Migrate (at least the tests) to the ML API (maybe this should not be considered a long-term solution because it still poses a problem if someone uses ml-api inside GBS anyway)
  4. Having a package that controls a configuration with a feature switch (mimicking the system-level feature check)

Migrate ML inference API from nnstreamer

For the Ubuntu package (cc: @gichan-jang @zhoonit @jijoongmoon)

  1. set up the PPA (ml-api)
  2. update the dependency to build nntrainer

For Tizen
Now, please update and test with tizen.org and its infra.

  1. Register /platform/core/api/machine-learning into the Tizen:Unified build (CC: @again4you). We will need to set up Coverity/SVACE with it. You may need to start with a "dummy" spec file.
  2. Update tizen.org's nnstreamer.git from github.com
  3. Send SR of nnstreamer.git
  4. Update nnstreamer.git, remove CAPI packages.
  5. Send a group SR of nnstreamer.git + api.git (machine learning api in tizen.org)

Be careful with package backward compatibility issues, especially with Tizen Studio.
