:twisted_rightwards_arrows: Neural Network (NN) Streamer, Stream Processing Paradigm for Neural Network Apps/Devices.

Home Page: https://nnstreamer.ai

License: GNU Lesser General Public License v2.1

Shell 8.38% C 44.87% C++ 40.11% Python 2.34% Meson 2.72% Csound 0.19% Makefile 0.63% Yacc 0.54% Lex 0.08% Dockerfile 0.09% Lua 0.04% Roff 0.01%
ai gstreamer gstreamer-plugins tensorflow intelligence caffe2 neural-network tizen hacktoberfest android

nnstreamer's Introduction

NNStreamer

Neural Network Support as GStreamer Plugins.

NNStreamer is a set of GStreamer plugins that allows GStreamer developers to adopt neural network models easily and efficiently, and allows neural network developers to manage neural network pipelines and their filters easily and efficiently.

Architectural Description (WIP)

Toward Among-Device AI from On-Device AI with Stream Pipelines, IEEE/ACM ICSE 2022 SEIP
NNStreamer: Efficient and Agile Development of On-Device AI Systems, IEEE/ACM ICSE 2021 SEIP [media]
NNStreamer: Stream Processing Paradigm for Neural Networks ... [pdf/tech report]
GStreamer Conference 2018, NNStreamer [media] [pdf/slides]
Naver Tech Talk (Korean), 2018 [media] [pdf/slides]
Samsung Developer Conference 2019, NNStreamer (media)
ResearchGate Page of NNStreamer

Official Releases

          | Tizen            | Ubuntu                  | Android     | Yocto     | macOS
Version   | 5.5 M2 and later | 16.04/18.04/20.04/22.04 | 9/P         | Kirkstone |
arm       | Available        | Available               | Available   | Ready     | N/A
arm64     | Available        | Available               | Available   | Available | N/A
x64       | Available        | Available               | Ready       | Ready     | Available
x86       | Available        | N/A                     | N/A         | Ready     | N/A
Publish   | Tizen Repo       | PPA                     | Daily build | Layer     | Brew Tap
API       | C/C# (Official)  | C                       | Java        | C         | C
  • Ready: the CI system ensures buildability and unit testing. Users may easily build and execute it. However, we do not have an automated release & deployment system for this instance.
  • Available: binary packages are released and deployed automatically and periodically along with CI tests.
  • Daily Release
  • SDK Support: Tizen Studio (5.5 M2+) / Android Studio (JCenter, "nnstreamer")
  • Enabled features of official releases

Objectives

  • Provide neural network framework connectivity (e.g., TensorFlow, Caffe) for GStreamer streams.

    • Efficient Streaming for AI Projects: Apply efficient and flexible stream pipeline to neural networks.
    • Intelligent Media Filters!: Use a neural network model as a media filter / converter.
    • Composite Models!: Multiple neural network models in a single stream pipeline instance.
    • Multi Modal Intelligence!: Multiple sources and stream paths for neural network models.
  • Provide easy methods to construct media streams with neural network models using the de-facto-standard media stream framework, GStreamer.

    • GStreamer users: use neural network models as if they were just another media filter.
    • Neural network developers: manage media streams easily and efficiently.
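As a toy illustration of the objectives above (this is plain Python, not the NNStreamer API; all names here are hypothetical), a neural network model can be treated as just one more filter stage in a stream pipeline:

```python
# Toy sketch, NOT NNStreamer code: an NN model as one more pipeline stage.
# to_tensor / model / run_pipeline are made-up names for illustration.

def to_tensor(frame):
    """tensor_converter analogue: turn a media frame into a flat tensor."""
    return [float(px) for px in frame]

def model(tensor):
    """tensor_filter analogue: a stand-in 'model' that sums the tensor."""
    return [sum(tensor)]

def run_pipeline(frames, stages):
    """Push each frame through the stages in order, like linked elements."""
    results = []
    for frame in frames:
        data = frame
        for stage in stages:
            data = stage(data)
        results.append(data)
    return results

print(run_pipeline([[1, 2], [3, 4]], [to_tensor, model]))  # [[3.0], [7.0]]
```

The point of the sketch is only the shape: a model slots into the same position as any other filter in the chain.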

Maintainers

Committers

Components

Note that this project has just started and many of the components are still in the design phase. The Component Description page describes nnstreamer components in the following three categories: data type definitions, gstreamer elements (plugins), and other miscellaneous components.

Getting Started

For more details, please access the following manuals.

  • For Linux-like systems such as Tizen, Debian, and Ubuntu, press here.
  • For macOS systems, press here.
  • To build an API library for Android, press here.

Applications

CI Server

AI Acceleration Hardware Support

Although a framework may provide acceleration transparently, as TensorFlow-GPU does, nnstreamer provides various hardware acceleration subplugins.

  • Movidius-X via ncsdk2 subplugin: Released
  • Movidius-X via openVINO subplugin: Released
  • Edge-TPU via edgetpu subplugin: Released
  • ONE runtime via nnfw(an old name of ONE) subplugin: Released
  • ARMNN via armnn subplugin: Released
  • Verisilicon-Vivante via vivante subplugin: Released
  • Qualcomm SNPE via snpe subplugin: Released
  • NVidia via TensorRT subplugin: Released
  • TRI-x NPUs: Released
  • NXP i.MX series: via the vendor
  • Others: TVM, TensorFlow, TensorFlow-lite, PyTorch, Caffe2, SNAP, ...

Contributing

Contributions are welcome! Please see our Contributing Guide for more details.

nnstreamer's People

Contributors

abcinje, again4you, anyj0527, chosanglyul, gichan-jang, harshj20, helloahn, jaeyun-jung, jijoongmoon, jinhyuck-park, jvuillaumier, kbumsik, kimjh12, kparichay, leemgs, linuxias, makesource, minty99, myungjoo, niklasjang, niley7464, ohsewon, songgot, tdrozdovsky, tony-jinwoo-ahn, tschulz, wooksong, xroumegue, yeonykim2, zhoonit

nnstreamer's Issues

debian: Fail to build so file with tensorflow-lite

Since the tensorflow-lite-dev package for Debian is built without -fPIC, the error below occurs when making the Debian package.

  • Error Code
[ 21%] Linking CXX shared library libtensor_filter_tflitecore.so
cd /home/again4you/nnstreamer_work/jy_review/nnstreamer/build/gst/tensor_filter && /usr/bin/cmake -E cmake_link_script CMakeFiles/tensor_filter_tflitecore.dir/link.txt --verbose=1
/usr/bin/x86_64-linux-gnu-g++  -fPIC -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2   -Wall -Werror -fPIC -g -std=c++11  -Wl,-Bsymbolic-functions -Wl,-z,relro -shared -Wl,-soname,libtensor_filter_tflitecore.so -o libtensor_filter_tflitecore.so CMakeFiles/tensor_filter_tflitecore.dir/tensor_filter_tensorflow_lite_core.cc.o -lgstcontroller-1.0 -lgstvideo-1.0 -lgstaudio-1.0 -lgstbase-1.0 -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 ../../libcommon.a -ltensorflow-lite -lgstcontroller-1.0 -lgstvideo-1.0 -lgstaudio-1.0 -lgstbase-1.0 -lgstreamer-1.0 -lgobject-2.0 -lglib-2.0 
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/5/../../../../lib/libtensorflow-lite.a(error_reporter.o): relocation R_X86_64_32S against `_ZN6tflite14StderrReporter6ReportEPKcP13__va_list_tag' can not be used when making a shared object; recompile with -fPIC
/usr/lib/gcc/x86_64-linux-gnu/5/../../../../lib/libtensorflow-lite.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
gst/tensor_filter/CMakeFiles/tensor_filter_tflitecore.dir/build.make:98: recipe for target 'gst/tensor_filter/libtensor_filter_tflitecore.so' failed
make[3]: *** [gst/tensor_filter/libtensor_filter_tflitecore.so] Error 1

[WIP] [Load] Add common internal functions

Add common internal functions for tensor_load filter.
It appears that these can be shared with tensor_save later as well.

This prepares #322

Signed-off-by: MyungJoo Ham [email protected]

Self evaluation:

  1. Build test: [X]Passed [ ]Failed [*]Skipped
  2. Run test: [ ]Passed [ ]Failed [X]Skipped

[Tensorfilter] Need NN Model to test other/tensors

Issue Description

We may need a Fast RCNN model for object detection. The output of this RCNN is other/tensors.

  • Input : input image
  • Output : detection_boxes, detection_scores, detection_classes, num_detections

However, it is somewhat difficult to find a pre-translated tflite model. According to the tflite manual, we could use toco to translate the flatbuffer. It is worth trying. (It would be much better if we could find a proper tflite model on the internet.)

[Filter] Support Caffe2

Let's hope that caffe2 users are not going to be mad enough to modify the library itself for their own models.

[Example] Dynamic Pipeline Modification

We need an example application (which also serves as a test case) that modifies tensor_*-related pipelines at run-time.

We need to ensure that it's possible with our plugins.

Request to Update Wiki Pages: URLs to get files, Change the paths

  1. Please add URLs to get all the files required by the examples (i.e., *.tflite, labels.txt).
  • If possible, we may use github.com/nnsuite/testcase/... (as long as the files are not touched by us, we do not need to worry about administrative problems.)
  2. /dev as an example installation path is not good. Some people will really use it and possibly break the system. Let's use something harmless such as ~/lib/ or ~/test/.

The requirement is to allow a developer with a clean system (a newly installed Ubuntu) to build, install, and run the example.

Research Topic: queue overfill handling

We "know" that gstreamer supports different modes for synchronization and queue handling.

However, soon, we will need deep understandings in this area.

Some people (at least two! :) ) should become familiar enough with this topic so that they can let us know how to:

  1. construct stream pipeline with queues or inter-element connections dropping frames whenever "next" frame comes while the "previous" frame is still being processed.
  2. at tensor_mux or tensor_merge, use the most recent tensor frame for each sink-pad, dropping any "obsolete" frame and keeping "recent" frame EVEN if it is already processed and sent away to next element.
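Item 1 can be prototyped as a pure-Python simulation (not GStreamer code; in GStreamer this roughly corresponds to a one-buffer leaky queue): a single-slot queue drops the waiting frame whenever a newer one arrives while the filter is still busy:

```python
# Pure-Python simulation of item 1 above, NOT GStreamer code: a one-slot
# leaky queue that drops the queued frame whenever a newer one arrives
# while the downstream filter is still processing the previous frame.

def simulate(arrivals, busy_ticks):
    """arrivals: frame id per tick (or None); the filter needs busy_ticks per frame."""
    slot = None          # the single queue slot
    busy_until = -1      # tick at which the filter becomes free again
    processed, dropped = [], []
    for tick, frame in enumerate(arrivals):
        if frame is not None:
            if slot is not None:
                dropped.append(slot)   # the newer frame replaces the waiting one
            slot = frame
        if tick >= busy_until and slot is not None:
            processed.append(slot)     # the filter picks up the freshest frame
            busy_until = tick + busy_ticks
            slot = None
    return processed, dropped

# Frames 1..6 arrive every tick; the filter needs 3 ticks per frame.
print(simulate([1, 2, 3, 4, 5, 6], 3))  # ([1, 4], [2, 3, 5])
```

With a slow filter, stale frames 2, 3, and 5 are dropped and only the freshest frames are processed, which is exactly the behavior item 1 asks for.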

[Filter/TF-Lite] the latency of tensorflow lite model (Not Urgent)

We successfully extracted labels from the video stream data.
However, since processing takes almost 0.3 sec for each tensor, we need to decide on a policy for it.
I have attached part of the log data.

...

** Message: Invoke() is finished: 312.764000
** Message: Invoke() is finished: 311.526000
** Message: Invoke() is finished: 311.302000
** Message: Invoke() is finished: 311.696000
** Message: Invoke() is finished: 313.468000

...

You can check this log with #328
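For the policy decision, the average latency over the quoted log lines can be computed directly (plain Python over the log excerpt above):

```python
# Quick check of the numbers above: average Invoke() latency from the log.
log = """\
** Message: Invoke() is finished: 312.764000
** Message: Invoke() is finished: 311.526000
** Message: Invoke() is finished: 311.302000
** Message: Invoke() is finished: 311.696000
** Message: Invoke() is finished: 313.468000"""

# Take the number after the last ':' on each line.
latencies = [float(line.rsplit(":", 1)[1]) for line in log.splitlines()]
avg_ms = sum(latencies) / len(latencies)
print(f"average: {avg_ms:.1f} ms")  # ~312.2 ms, i.e. roughly 0.3 s per tensor
```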

[Filter] Support Caffe

Additional requirement: be ready for models with heavily modified caffe libraries. (How are we going to let models bring their own customized libraries?) This is really a hell-hole, but I'm sure we will be required to support such cases.

Research & Choose standard (for us) GUI stream design tool!

  1. Required: GUI-based stream design (e.g., pipeviz)
  2. Required: tensor_* must be supported (or could be tensor_*'s faults... though...)
  3. Required: save/load stream design.
  4. Highly recommended: code generation from the GUI-drawn pipelines
  5. Highly recommended: supports both Linux and Windows.

Migrate In-House CI System to Here.

We need to migrate our in-house CI system here so that we can actively work on github.com instead of the internal repo. Until then, we will use github.com as a mirror.

CC: @sewon-oh @leemgs


For DEB packaging with launchpad.net, we may use https://help.launchpad.net/API/Webhooks
For RPM packaging with build.opensuse.org, we may use https://openbuildservice.org/2013/11/22/source-update-via_token/

For Per-PR checks, we may migrate to AWS/Azure. During migration, we may use (slow but free) babot.net if really needed :)

[Decode] support audio

  1. added audio format
  2. code refactoring
  • add macro for debug message
  • use common functions to set tensor and media caps

Needs to check later : test with debug message (set silent=false)

Signed-off-by: Jaeyun Jung [email protected]

[Example] Multi-Cam with TF-Lite Filter.

This is going to be a showcase of "multi-modal" interaction.
Prerequisite: tensor_merge ( #351 )

Note that in addition to the following suggestions, we need additional elements for the visualization.

Suggestion 1.

cam1 -- preprocessors -- tensor_converter --+-- tensor_merge -- tensor_filter -- tensor_sink
cam2 -- preprocessors -- tensor_converter --+

Suggestion 2.

cam1 -- preprocessors -- tensor_converter -- tensor_filter_1 --+-- tensor_merge -- tensor_sink
cam2 -- preprocessors -- tensor_converter -- tensor_filter_2 --+

Suggestion 3.

cam1 -- preprocessors -- tensor_converter ----- tensor_filter_1 --+-- tensor_merge -- tensor_sink_1
cam2 -- preprocessors -- tensor_converter --+-- tensor_filter_2 --+
                                            +-- tensor_filter_3 -- tensor_sink_2

[Example] What about example NN models??

Find somewhere appropriate to store example NN models (e.g., ./tflite_model/mobilenet_v1_1.0_224_quant.tflite in our Wiki).

Later, we will need to let the CI system load such NN models to test the code (at least with a "smoke test").

For this tflite file, we may need to specify the URL in the wiki and/or in some script?

[Synchronization] Stream speed (framerate) throttling at src

Requirement:

source generation or in-the-middle filtering framerate throttling.

E.g.,

FILESRC (fast) --> QUEUE (leaky=2. # frame = 1) --> FILTER (slow) --> SINK

What if we want to neither lose frames nor overfill queues at the same time?

Then FILESRC needs to feed QUEUE only when it is empty.

How do we do this?
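One way to reason about the question is a pure-Python simulation (not GStreamer code; all names are made up): if the source only feeds a one-slot queue when the slot is empty, nothing is dropped and the queue never overfills, because the source is paced down to the filter's rate:

```python
# Simulation of the idea above, NOT GStreamer code: the source pushes a
# frame into the one-slot queue only when the slot is empty, so nothing
# is dropped and the queue never overfills; the (fast) source is simply
# throttled down to the (slow) filter's rate.

def throttled_run(n_frames, busy_ticks, max_ticks=1000):
    slot = None          # the single queue slot
    busy_until = -1      # tick at which the filter becomes free again
    next_frame = 1
    processed = []
    for tick in range(max_ticks):
        if slot is None and next_frame <= n_frames:
            slot = next_frame          # src feeds only an empty queue
            next_frame += 1
        if tick >= busy_until and slot is not None:
            processed.append(slot)
            busy_until = tick + busy_ticks
            slot = None
        if len(processed) == n_frames:
            break
    return processed

print(throttled_run(6, 3))  # all six frames survive: [1, 2, 3, 4, 5, 6]
```

Contrast this with the leaky-queue policy in the queue-overfill issue: here no frame is lost, but the effective framerate drops to the filter's rate.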

[other/tensors] What does "rank" mean in other/tensors?

What does "rank" mean in other/tensors?

Unlike other/tensor, we have multiple tensors in other/tensors.
How "rank" is defined in other/tensors?

If this means the RANK LIMIT, please document so in the code where other/tensors is defined (tensor_common.h)

ps. The range of num_tensors is [1, 16], not [1, 65535]. Right?
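For reference, a hypothetical sketch of the constraint the postscript assumes (the limit of 16 and the name NNS_TENSOR_SIZE_LIMIT are assumptions taken from the question itself, not verified against tensor_common.h):

```python
# Hypothetical sketch, NOT nnstreamer code: the postscript suggests that
# num_tensors in other/tensors is limited to [1, 16], not the full
# guint16 range [1, 65535]. The constant name below is an assumption.

NNS_TENSOR_SIZE_LIMIT = 16  # assumed per-stream tensor-count limit

def valid_num_tensors(n):
    """Check n against the assumed [1, 16] range for other/tensors."""
    return 1 <= n <= NNS_TENSOR_SIZE_LIMIT

print([valid_num_tensors(n) for n in (0, 1, 16, 17, 65535)])
# [False, True, True, False, False]
```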

Research Topic: queue overfill handling

We "know" that gstreamer supports different modes for synchronization and queue handling.

However, soon, we will need deep understandings in this area.

Some people (at least two! :) ) should be familiar enough with this topic so that he can let us know how to:

  1. construct stream pipeline with queues or inter-element connections dropping frames whenever "next" frame comes while the "previous" frame is still being processed.
  2. at tensor_mux or tensor_merge, use the most recent tensor frame for each sink-pad, dropping any "obsolete" frame and keeping "recent" frame EVEN if it is already processed and sent away to next element.

[Short Term Task] Performance Comparison of NNStreamer vs ROS

This is to demonstrate the potential performance advantages of NNStreamer.

CASE 1

The two implementations to be compared:

NNStreamer:

[USB-CAM 1920x1080 or higher] --> [Video Scaler (videoconverter) 640x480 or lower] --> [tf_lite model, "A"] --> [tensor_sink] --> [APP print out]

ROS:

[USB-CAM 1920x1080 or higher] --> [Video Scaler (not sure if this needs to be a node) 640x480 or lower] --> [tf_lite model, "A"] --> [APP print out]

[APP Print out] can be replaced by tee + tensor_dec (if tensor_dec is ready for bounding-boxes or text-overlay) + video_mixer + video_sink, which is much better!!!

It is highly recommended to use a useful but lightweight NN model (e.g., MTCNN or MobileNet).

We may need to find a way to emphasize the performance effect.

CASE 2

With this case, we have a "linear" NN model, Nx, whose input and output tensor sizes are huge (e.g., 1920x1080x3) while it has almost no real processing in it (an extremely light model; probably output == input). For proper comparison, we may need to add artificial load to it (to simulate an optimized lightweight model).

The proposed execution is:

[USB-CAM] --> [Nx 1] --> [Nx 2] --> [Nx 3] --> [tf_lite model, "A" (real useful)] --> [tensor_sink]

For demonstration purpose we may add video_mixer and text/box overlay.

Performance Measurements

Execution Scenario A:

  • Run as fast as possible in a system. Get the average FPS.
  • Note that this could be misleading if USB-CAM overshoots frames and the stream / node configuration is not well implemented.

Execution Scenario B: (This is required as well!!!)

  • Run at a slower fixed rate (e.g., 1 frame per 10 seconds). Count the number of memory transactions with the PMU.
  • For this we need someone who's familiar with PMU/Perf/... (At least one of @wook16-song @sangjung-woo @geunsik-lim is familiar with this.)
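For Scenario A, the average-FPS figure can be derived from per-frame completion timestamps; a minimal sketch with synthetic numbers (in a real run these would come from tensor_sink callbacks or log lines):

```python
# Sketch for Scenario A: average FPS from per-frame completion timestamps.
# The timestamps below are synthetic examples, not measured data.
timestamps = [0.00, 0.31, 0.62, 0.94, 1.25]  # seconds

elapsed = timestamps[-1] - timestamps[0]
avg_fps = (len(timestamps) - 1) / elapsed    # intervals / total time
print(f"{avg_fps:.2f} fps")
```

Note the Scenario A caveat above: if the camera overshoots frames into a badly configured pipeline, this average can be misleading, which is why Scenario B's fixed-rate PMU measurement is also required.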

Ease of App Implementation

Write concise test app code. Make it as short as possible but readable.
Write a single-line bash command as well. (as an extreme comparison case)

Other Things to Consider

Performance profiling (execution time, CPU time, memory transactions) of ROS nodes might be troublesome or tricky: you need to count ALL relevant processes. For NNStreamer/GStreamer, you don't need to worry about this, as it's a single-process program.

[decoder] More decoder functions

Support many popular NN models with decoder.

  • image recognition (labeling)
  • bounding boxes
  • bounding boxes + image recognition (labeling)
  • segmentation
  • ...

[Tensor Mux / Tensor Demux] Should support framerate

Issue Description

Tensor mux & demux should synchronize according to the frame rate.

                                ------------------> fixed frame rate
tensor 1 -->    +-----------------+      X     |    3   |    X    |
tensor 2 -->    |  tensor mux     |      1     |    X   |    5    |
tensor 3 -->    +-----------------+      2     |    4   |    6    |
                                         |          |         |
                                        \/         \/        \/
                                       none      tensors   tensors  
                                                 [ 3,1,4 ] [ 3,5,6 ]
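The diagram above can be replayed in plain Python (a behavioral sketch, not mux code): the mux emits other/tensors only once every sink pad has seen a frame, reusing the most recent frame on pads with no new arrival (X):

```python
# Pure-Python replay of the diagram above, NOT mux code: at each fixed-rate
# tick, emit other/tensors only once every sink pad has seen at least one
# frame, reusing the most recent frame on pads marked X (no new arrival).

X = None
arrivals = [          # one column of the diagram per tick
    [X, 1, 2],        # tick 1: pad 1 has no frame yet -> no output
    [3, X, 4],        # tick 2: pad 2 reuses frame 1   -> [3, 1, 4]
    [X, 5, 6],        # tick 3: pad 1 reuses frame 3   -> [3, 5, 6]
]

latest = [X, X, X]    # most recent frame seen per sink pad
outputs = []
for frames in arrivals:
    for pad, frame in enumerate(frames):
        if frame is not X:
            latest[pad] = frame
    outputs.append(list(latest) if X not in latest else None)

print(outputs)  # [None, [3, 1, 4], [3, 5, 6]]
```

The simulated outputs match the "none / [3,1,4] / [3,5,6]" row of the diagram.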

[Merge] Implement!

Implement tensor_merge.

Unlike tensor_mux, this creates a "larger" single tensor from multiple "smaller" tensors.

  • mux creates other/tensors from multiple instances of other/tensor
  • merge creates other/tensor (no s!!!) from multiple instances of other/tensor

mux is useful for creating a single stream line with different types of tensors (e.g., video and audio). merge is useful for creating a single tensor from tensors that may be placed in the same plane or have the same dimensions (e.g., expanding colorspaces by merging an RGB video with an IR video, adding more channels by merging images from stereo cameras, or stitching RGB videos from multiple cameras pointing at different angles).
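A toy illustration of the colorspace-expansion example (plain Python lists, not NNStreamer code): merging an RGB frame with a same-sized IR frame along the channel axis yields one 4-channel tensor:

```python
# Toy illustration of merge (vs. mux), NOT NNStreamer code: concatenate an
# RGB frame and a same-sized IR frame along the channel axis, producing a
# single "larger" RGBI tensor, as in the colorspace-expansion example.

rgb = [[[10, 20, 30], [40, 50, 60]]]   # 1 row x 2 pixels x 3 channels
ir  = [[[100],        [200]]]          # 1 row x 2 pixels x 1 channel

merged = [
    [rgb_px + ir_px for rgb_px, ir_px in zip(rgb_row, ir_row)]
    for rgb_row, ir_row in zip(rgb, ir)
]
print(merged)  # [[[10, 20, 30, 100], [40, 50, 60, 200]]]
```

A mux, by contrast, would keep the two frames as separate tensors inside one other/tensors buffer rather than building one bigger tensor.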

[GUI Tool] pipeviz is not compatible with tensor_* filters.

If we add tensor_converter (other tensor_* filters are not tested) into a pipeviz ( https://github.com/virinext/pipeviz ) gstreamer pipeline, we soon get:

mzx@kohaku:/source/AutoDrv/pipeviz$ ./pipeviz  --gst-plugin-path ../NNStreamer/build/gst/tensor_converter/:../NNStreamer/build/gst/tensor_filter/:../NNStreamer/build/gst/tensor_decoder/
failed to get the current screen resources
QXcbConnection: XCB error: 170 (Unknown), sequence: 163, resource id: 90, major code: 146 (Unknown), minor code: 20
Segmentation fault (core dumped)
mzx@kohaku:/source/AutoDrv/pipeviz$ 
