bitsy-ai / rpi-object-tracking

Object tracking tutorial using TensorFlow / TensorFlow Lite, Raspberry Pi, Pi Camera, and a Pimoroni Pan-Tilt Hat.

Home Page: https://medium.com/@grepLeigh/real-time-object-tracking-with-tensorflow-raspberry-pi-and-pan-tilt-hat-2aeaef47e134

License: MIT License

Makefile 3.52% Python 92.65% Dockerfile 0.46% Shell 3.36%

rpi-object-tracking's Introduction

Raspberry Pi Deep PanTilt


READ THIS FIRST!

A detailed walk-through is available in Real-time Object Tracking with TensorFlow, Raspberry Pi, and Pan-tilt HAT.

Build List

An example of deep object detection and tracking with a Raspberry Pi

Basic Setup

Before you get started, you should have an up-to-date installation of Raspbian 10 (Buster) running on your Raspberry Pi. You'll also need to configure SSH access into your Pi.

Installation

  1. Install system dependencies
$ sudo apt-get update && sudo apt-get install -y \
    cmake python3-dev libjpeg-dev libatlas-base-dev raspi-gpio libhdf5-dev python3-smbus python3-venv libopenjp2-7 libtiff5
  2. Create a new virtual environment
$ python3 -m venv .venv
  3. Activate the virtual environment
$ source .venv/bin/activate
  4. Upgrade pip and setuptools
$ python3 -m pip install --upgrade pip
$ pip install --upgrade setuptools
  5. Install TensorFlow 2.4 (community-built wheel)
$ pip install https://github.com/bitsy-ai/tensorflow-arm-bin/releases/download/v2.4.0/tensorflow-2.4.0-cp37-none-linux_armv7l.whl
  6. Install the rpi-deep-pantilt package.
$ pip install rpi-deep-pantilt
  7. Install Coral Edge TPU tflite_runtime (optional)

NOTE: This step is only required if you are using Coral's Edge TPU USB Accelerator. If you would like to run TFLite inferences using CPU only, skip this step.

$ pip install https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp37-cp37m-linux_armv7l.whl
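
To sanity-check the TensorFlow wheel from step 5, you can print the version from inside the virtual environment (an optional check, not part of the original steps):

$ python3 -c "import tensorflow as tf; print(tf.__version__)"
2.4.0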


Configuration

WARNING: Do not skip this section! You will not be able to use rpi-deep-pantilt without properly configuring your Pi.

Enable Pi Camera

  1. Run sudo raspi-config and select Interfacing Options from the Raspberry Pi Software Configuration Tool’s main menu. Press ENTER.

raspi-config main menu

  2. Select the Enable Camera menu option and press ENTER.

raspi-config interfacing options menu

  3. In the next menu, use the right arrow key to highlight ENABLE and press ENTER.

raspi-config enable camera yes/no menu

Enable SPI in Device Tree

  1. Run sudo raspi-config and select Interfacing Options from the Raspberry Pi Software Configuration Tool’s main menu. Press ENTER.

raspi-config main menu

  2. Select the SPI menu option and press ENTER.

  3. In the next menu, use the right arrow key to highlight Yes and press ENTER.

Enable i2c in Device Tree

A - Using raspi-config

  1. Run sudo raspi-config and select Interfacing Options from the Raspberry Pi Software Configuration Tool’s main menu. Press ENTER.

raspi-config main menu

  2. Select the I2C menu option and press ENTER.

  3. In the next menu, use the right arrow key to highlight Yes and press ENTER.

B - Editing configuration files

Alternatively, open /boot/config.txt and ensure that the following dtparam lines are uncommented:

dtparam=i2c1=on
dtparam=i2c_arm=on
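
After rebooting, you can confirm the bus is visible (an optional check; the i2cdetect utility comes from the i2c-tools package):

$ sudo apt-get install -y i2c-tools
$ i2cdetect -y 1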

Example Usage

Object Detection

The detect command will start a PiCamera preview and render detected objects as an overlay. Verify you're able to detect an object before trying to track it.

Supports Edge TPU acceleration by passing the --edge-tpu option.

rpi-deep-pantilt detect [OPTIONS] [LABELS]...

rpi-deep-pantilt detect --help
Usage: rpi-deep-pantilt detect [OPTIONS] [LABELS]...

  rpi-deep-pantilt detect [OPTIONS] [LABELS]

    LABELS (optional)     One or more labels to detect, for example:     
    $ rpi-deep-pantilt detect person book "wine glass"

    If no labels are specified, model will detect all labels in this list:
    $ rpi-deep-pantilt list-labels

    Detect command will automatically load the appropriate model

    For example, providing "face" as the only label will initialize the
    FaceSSD_MobileNet_V2 model: $ rpi-deep-pantilt detect face

    Other labels use SSDMobileNetV3 with COCO labels: $ rpi-deep-pantilt detect
    person "wine glass" orange

Options:
  --loglevel TEXT  Run object detection without pan-tilt controls. Pass
                   --loglevel=DEBUG to inspect FPS.
  --edge-tpu       Accelerate inferences using Coral USB Edge TPU
  --rotation INTEGER  PiCamera rotation. If you followed this guide, a
                      rotation value of 0 is correct.
                      https://medium.com/@grepLeigh/real-time-object-tracking-
                      with-tensorflow-raspberry-pi-and-pan-tilt-
                      hat-2aeaef47e134
  --help           Show this message and exit.

Object Tracking

The following will start a PiCamera preview, render detected objects as an overlay, and track an object's movement with the Pimoroni pan-tilt HAT.

By default, this will track any person in the frame. You can track other objects by passing a different label, for example rpi-deep-pantilt track orange. For a list of valid labels, run rpi-deep-pantilt list-labels.

rpi-deep-pantilt track

Supports Edge TPU acceleration by passing the --edge-tpu option.

Usage: rpi-deep-pantilt track [OPTIONS] [LABEL]

  rpi-deep-pantilt track [OPTIONS] [LABEL]

  LABEL (required, default: person) Exactly one label to detect, for example:     
  $ rpi-deep-pantilt track person

  Track command will automatically load the appropriate model

  For example, providing "face" will initialize the FaceSSD_MobileNet_V2 model
  $ rpi-deep-pantilt track face

  Other labels use the SSDMobileNetV3 model with COCO labels
  $ rpi-deep-pantilt track orange

Options:
  --loglevel TEXT  Pass --loglevel=DEBUG to inspect FPS and tracking centroid
                   X/Y coordinates
  --edge-tpu       Accelerate inferences using Coral USB Edge TPU
  --rotation INTEGER  PiCamera rotation. If you followed this guide, a
                      rotation value of 0 is correct.
                      https://medium.com/@grepLeigh/real-time-object-tracking-
                      with-tensorflow-raspberry-pi-and-pan-tilt-
                      hat-2aeaef47e134
  --help           Show this message and exit.

Valid labels for Object Detection/Tracking

rpi-deep-pantilt list-labels

The following labels are valid tracking targets.

['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']

Face Detection (NEW in v1.1.x)

The following command will detect human faces.

NOTE: Face detection uses a specialized model (FaceSSD_MobileNet_V2), while other labels are detected using SSDMobileNetV3_COCO. You cannot detect both face and COCO labels at this time.

Watch this repo for updates that allow you to re-train these models to support a custom mix of object labels!

rpi-deep-pantilt detect face
Usage: cli.py face-detect [OPTIONS]

Options:
  --loglevel TEXT  Run object detection without pan-tilt controls. Pass
                   --loglevel=DEBUG to inspect FPS.
  --edge-tpu       Accelerate inferences using Coral USB Edge TPU
  --help           Show this message and exit.

Face Tracking (NEW in v1.1.x)

The following command will track a human face.

rpi-deep-pantilt track face
Usage: cli.py face-detect [OPTIONS]

Options:
  --loglevel TEXT  Run object detection without pan-tilt controls. Pass
                   --loglevel=DEBUG to inspect FPS.
  --edge-tpu       Accelerate inferences using Coral USB Edge TPU
  --help           Show this message and exit.

Model Summary

The following section describes the models used in this project.

Object Detection & Tracking

FLOAT32 model (ssd_mobilenet_v3_small_coco_2019_08_14)

rpi-deep-pantilt detect and rpi-deep-pantilt track perform inferences using this model. Bounding box and class predictions render at roughly 6 FPS on a Raspberry Pi 4.

The model is derived from ssd_mobilenet_v3_small_coco_2019_08_14 in tensorflow/models. I extended the model with an NMS post-processing layer, then converted to a format compatible with TensorFlow 2.x (FlatBuffer).

I scripted the conversion steps in tools/tflite-postprocess-ops-float.sh.
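
For reference, a converted FlatBuffer can be exercised with the standard TensorFlow Lite Python API. This is a minimal sketch, not the package's internal code; the model filename is illustrative:

import numpy as np
import tensorflow as tf

# Load the converted .tflite FlatBuffer (filename illustrative)
interpreter = tf.lite.Interpreter(model_path="ssd_mobilenet_v3_small.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one frame shaped/typed to match the model's input tensor
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# The NMS post-processing layer exposes detections as output tensors
detections = [interpreter.get_tensor(d["index"]) for d in output_details]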

Quantized UINT8 model (ssdlite_mobilenet_edgetpu_coco_quant)

If you specify the --edge-tpu option, rpi-deep-pantilt detect and rpi-deep-pantilt track perform inferences using this model. Bounding box and class predictions render at roughly 24+ FPS (real-time) on a Raspberry Pi 4.

This model REQUIRES a Coral Edge TPU USB Accelerator to run.

This model is derived from ssdlite_mobilenet_edgetpu_coco_quant in tensorflow/models. I reversed the frozen .tflite model into a protobuf graph to add an NMS post-processing layer, quantized the model in a .tflite FlatBuffer format, then converted using Coral's edgetpu_compiler tool.

I scripted the conversion steps in tools/tflite-postprocess-ops-128-uint8-quant.sh and tools/tflite-edgetpu.sh.
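
For reference, attaching the Edge TPU delegate uses the load_delegate API (a minimal sketch; the model filename is illustrative, and libedgetpu.so.1 must be installed per Coral's instructions):

import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="ssdlite_mobilenet_edgetpu_coco_quant_edgetpu.tflite",  # illustrative
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()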

Face Detection & Tracking

I was able to use the same model architecture for FLOAT32 and UINT8 input, facessd_mobilenet_v2_quantized_320x320_open_image_v4_tflite2.

This model is derived from facessd_mobilenet_v2_quantized_320x320_open_image_v4 in tensorflow/models.

Common Issues

i2c is not enabled

If you run $ rpi-deep-pantilt test pantilt and see a similar error, check your Device Tree configuration.

File "/home/pi/projects/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/pantilthat/pantilt.py", line 72, in setup
self._i2c = SMBus(1)
FileNotFoundError: [Errno 2] No such file or directory

Open /boot/config.txt and ensure the following lines are uncommented:

dtparam=i2c1=on
dtparam=i2c_arm=on
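
After enabling i2c and rebooting, the device node that the pantilthat library opens (SMBus(1) corresponds to /dev/i2c-1) should exist:

$ ls /dev/i2c-1
/dev/i2c-1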

Credits

The MobileNetV3-SSD model in this package was derived from TensorFlow's model zoo, with post-processing ops added.

The PID control scheme in this package was inspired by Adrian Rosebrock's tutorial, Pan/tilt face tracking with a Raspberry Pi and OpenCV.

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

rpi-object-tracking's People

Contributors

circa10a · dependabot[bot] · hhdewarren · leigh-johnson


rpi-object-tracking's Issues

Custom model integration

  • rpi-deep-pantilt version: Latest version
  • Python version: 3.7.3
  • TensorFlow version: 2.2.0
  • Operating System: Raspbian

I am trying to use this to get my pan/tilt HAT to track an opposing Raspberry Pi camera. I have trained my custom model and converted it to TFLite; I just need to know how to integrate my model into this repository so it can track (see the sketch below). I trained my model on ssd_mobilenet_v3_small_coco_2019_08_14.tar.gz, which you use, so I figure there is some way to tweak the files to make this work.
Thanks!
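
The pattern usually suggested is to clone one of the model classes and point it at your own artifacts. A rough sketch only -- the attribute names below are hypothetical and must be matched against the real class in rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py:

from rpi_deep_pantilt.detect.ssd_mobilenet_v3_coco import SSDMobileNet_V3_Coco_EdgeTPU_Quant

class CustomModel(SSDMobileNet_V3_Coco_EdgeTPU_Quant):
    # Hypothetical attribute names: replace these with the constants the
    # real class actually defines (model path, label map path, labels).
    MODEL_PATH = "/home/pi/models/custom.tflite"
    LABEL_MAP_PATH = "/home/pi/models/custom_label_map.pbtxt"

You would then wire the new class into cli.py, where the detect/track commands choose which model to load.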

RuntimeError:

  • rpi-deep-pantilt version: 1.0.1
  • Python version: 3.7.3
  • TensorFlow version: 2.0.0
  • edgetpu version: 2.15.0
  • tflite-runtime version: 2.1.0.post1
  • Operating System: Raspbian 10 (buster)

Description

I was trying to run the detect application, but I get an error.

Note: I can run the detect and track commands with no error when I do not pass --edge-tpu.

What I Did

$ rpi-deep-pantilt detect --edge-tpu

Traceback (most recent call last):
  File "/home/pi/.virtualenvs/dl4rpi/bin/rpi-deep-pantilt", line 8, in <module>
    sys.exit(main())
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 107, in main
    cli()
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 52, in detect
    model = SSDMobileNet_V3_Coco_EdgeTPU_Quant()
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py", line 56, in __init__
    self.tflite_interpreter.allocate_tensors()
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/.virtualenvs/dl4rpi/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

I think the problem could originate from some incompatibility between the packages listed above.

Would you mind taking a look?

Thank you in advance!

Add SBUS support

Please add SBUS support, which is the standard for driving many industrial pan-tilt gimbals, drones' flight controllers, and also (newer) servos. SBUS is a serial digital protocol that can drive up to 16 servos over a single wire. One of the Raspberry Pi's pins (maybe the serial port) can be used as an SBUS output, so you can get rid of the Pimoroni servo HAT.
PWM support would also be nice, since I have seen some RPi projects that output a PWM servo signal directly from the GPIO pins, so this option too lets you get rid of the Pimoroni HAT. Either way, you would bring the project up to date, since I think outputting a servo signal over I2C is nonsense.

pipeline.config

Hello

I'm trying to re-train a model myself using facessd_mobilenet_v2_quantized_320x320_open_image_v4, but I'm having problems using the config file provided in the tensorflow/models repo; it throws an error about RandomCropToAspectRatio/stack failing to run: https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/facessd_mobilenet_v2_quantized_320x320_open_image_v4.config

Can you please share your pipeline.config so I can compare and look for differences?

Thank you!!

Export detections (bounding-box coordinates, classification, ...) for detection/tracking

  • rpi-deep-pantilt version: 1.2.1
  • Python version: 3.7.3
  • TensorFlow version: 2.4.0
  • Operating System: RaspiOS lite (latest version)

Description

I am searching for a way to export the bounding-box coordinates for detection/tracking. I want to evaluate where in the video objects are detected. Is there a way to export that information in order to process it somewhere else, e.g. by piping the output of rpi-deep-pantilt to another program/process? (See the sketch below for one possible approach.)
It would be awesome if this could work with and without the Edge TPU. (P.S.: I noticed that the console output with the Edge TPU is much less informative than without.)

Other than that, I'd like to trigger some action if a specific object is detected with a probability threshold >xx%, e.g. if a bird is detected, trigger a deterrent system.

Thank you very much for the great project!
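
One possible stopgap is to read the DEBUG log stream from a wrapper process (a sketch only; the exact log format isn't documented here, so the filter string is a placeholder to adapt):

import subprocess

proc = subprocess.Popen(
    ["rpi-deep-pantilt", "track", "person", "--loglevel=DEBUG"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True,
)
for line in proc.stdout:
    # "centroid" is a placeholder filter; match it to the real DEBUG output
    if "centroid" in line.lower():
        print(line.strip())  # or forward to another process / trigger an action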

rpi-deep-pantilt track face

  • rpi-deep-pantilt version: 1.21
  • Python version: 3.7.3
  • TensorFlow version: 2.4.0
  • Operating System: raspbian buster

Description

When I run rpi-deep-pantilt track face --edge-tpu or rpi-deep-pantilt track face I get the following error:

Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 26, in run_pantilt_detect
    model = model_cls()
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/facessd_mobilenet_v2.py", line 74, in __init__
    self.PATH_TO_LABELS, use_display_name=True)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/util/label.py", line 172, in create_category_index_from_labelmap
    label_map_path, use_display_name)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/util/label.py", line 132, in create_categories_from_labelmap
    label_map = load_labelmap(label_map_path)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/util/label.py", line 105, in load_labelmap
    label_map_string = fid.read()
  File "/home/pi/.venv/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 117, in read
    self._preread_check()
  File "/home/pi/.venv/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 80, in _preread_check
    compat.path_to_str(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: /home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/data/facessd_label_map.pbtxt; No such file or directory

Every other label works perfectly. Kindly advise.

Thank you.

rpi-deep-pantilt track face

  • rpi-deep-pantilt version: master version as GitHub default.
  • Python version: 3.7.3
  • TensorFlow version: 2.4
  • Operating System: Raspbian Buster

Description

As I was following the guide and setting up everything, I could get rpi-deep-pantilt track person (or another object) to work. BUT with rpi-deep-pantilt track face my pan-tilt totally went wild... looking all the way down at the ground. What have I done wrong?

What I Did

Following the guide, I set up everything, and all worked except track face... :(


rpi-deep-pantilt track face

# camera went nuts... 

Things aren't precise.

I can't find what is ultimately being run. I am working on a similar project and wanted to integrate TF's SSD model on my Raspberry Pi 4. Could you make the README.md section clearer?

Inverted pi camera image

  • rpi-deep-pantilt version: 1.1.0
  • Python version: 3.7
  • TensorFlow version: 2.0.2
  • Operating System: Raspbian Buster Desktop

Description

I found it preferable to mount the pi camera on the tilt gimbal with the CSI cable coming from the top of the gimbal as this keeps the cable from having to wrap around the pan gimbal during gimbal movement. Of course this resulted in an inverted image compared to the normal pi camera image during the pi camera test and is unacceptable when running the detect program.

What I Did

In the Python 3.7 site-packages there is a picamera library, whose camera.py module controls the functions of the camera. At line 495 of camera.py, change the "rotation" value from 0 to 180. This corrects the image during the pi camera test program, but does not correct the image during the detect program.
To correct the pi camera image used by the detect program, change the "rotation" value from 0 to 180 in detect/camera.py as well, and all will be well.
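
Note that both detect and track also accept a --rotation option (see the Options sections above), which avoids patching site-packages:

$ rpi-deep-pantilt detect --rotation 180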


Headless setup may not work with this version

  • Raspberry Pi Deep PanTilt version: Pi4, 4GB, 64SD
  • Python version: 3.7
  • Operating System: Debian

Description

Describe what you were trying to get done:
Trying to set it up headless on boot, so I can use it as a freestanding tracking application.

Tell us what happened, what went wrong, and what you expected to happen:
In a headless autostart setup (no monitor or keyboard attached), it ran for a few seconds; the servos moved randomly with jitter, nothing was detected, and it froze at the end. Even with a VNC viewer connected, there is no video feed at all -- no detection window can be seen in the VNC viewer, and the terminal shows only the runtime's startup text.

What I Did

1: Enabled & edited /etc/xdg/lxsession/LXDE-pi/autostart to run lxterminal -e "/home/pi/rpi-deep-pantilt/detection_start.sh", which includes a script that runs "rpi-deep-pantilt track".
2: Ran raspi-config & set the resolution to 1024x768 (note: other settings, including the default, will not even start a detect session).
3: Upon "rpi-deep-pantilt track", pan tracks OK but tilt stays still, does not move, and eventually stops.

This is terminal output from VNC viewer.
INFO: Initialized TensorFlow Lite runtime.
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/camera.py", line 1704, in capture_continuous
    'Timed out waiting for capture to end')
picamera.exc.PiCameraRuntimeError: Timed out waiting for capture to end

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 99, in flush
    for f in self.stream:
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/camera.py", line 1710, in capture_continuous
    encoder.close()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/encoders.py", line 431, in close
    self.stop()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/encoders.py", line 419, in stop
    self._close_output()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/encoders.py", line 349, in _close_output
    mo.close_stream(output, opened)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/mmalobj.py", line 371, in close_stream
    stream.flush()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/array.py", line 238, in flush
    self.array = bytes_to_rgb(self.getvalue(), self.size or self.camera.resolution)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/array.py", line 127, in bytes_to_rgb
    'Incorrect buffer length for resolution %dx%d' % (width, height))
picamera.exc.PiCameraValueError: Incorrect buffer length for resolution 320x320

Solved!
$ sudo raspi-config
Advanced Options -> Resolution -> set to DMT Mode 82 1920x1080 -> save -> restart
$ sudo nano /boot/config.txt
Set the framebuffer width to 1920 & the framebuffer height to 1024 -> save -> restart
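
For reference, the framebuffer values live in /boot/config.txt; using the numbers from the workaround above:

framebuffer_width=1920
framebuffer_height=1024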

This appears to be an issue related to the new Pi 4's hardware (HDMI driver).

TensorFlow Lite runtime does not exit cleanly with a ^C

  • rpi-deep-pantilt version: 1.1.0
  • Python version: 3.7
  • TensorFlow version: 2.0.2
  • Operating System: Raspbian Buster Desktop

Description

TensorFlow Lite runtime does not exit cleanly with a ^C

What I Did

Exit rpi-deep-pantilt detect with a ^C

INFO: Initialized TensorFlow Lite runtime.
^CException in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 81, in render_overlay
    self.overlay.update(self.overlay_buff)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/renderers.py", line 447, in update
    buf = self.renderer.inputs[0].get_buffer()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/picamera/mmalobj.py", line 1141, in get_buffer
    'cannot get buffer from disabled port %s' % self.name)
picamera.exc.PiCameraPortDisabled: cannot get buffer from disabled port vc.ril.video_render:in:0: Argument is invalid

Setuptools needed upgrade to complete TensorFlow 2.0 installation

  • rpi-deep-pantilt version: 1.1.0
  • Python version: 3.7.3
  • TensorFlow version: 2.0.2
  • Operating System: Raspbian Buster Desktop on Rpi 4B+ w/4GB RAM

Description

During the installation of TensorFlow 2.0 from a community-built wheel I received the following error at the end of the build:
"ERROR: tensorboard 2.0.2 has requirement setuptools >=41.0.0, but you'll have setuptools 40.8.0 which is incompatible."

What I Did

Upgraded setuptools:

"sudo pip3 install setuptools --upgrade"

Reinstalled TensorFlow 2.0 after the setuptools upgrade, and the build completed without error.

Regards,
TCIII

Investigate use of PCA9685 servo driver in place of Pimoroni custom servo driver

  • rpi-deep-pantilt version: 1.0.1
  • Python version: 3
  • TensorFlow version: 2.0
  • Operating System: Raspbian Buster Desktop

Description

I am trying to purchase the designated hardware; however, the Pimoroni Pan/Tilt HAT is out of stock with no ETA. Also, running the RPi 4B+ without thermal throttling takes a good-sized heat sink and fan, which the Pimoroni servo driver HAT will physically interfere with.

What I Did

Investigated alternate servo drivers. The Pimoroni Pan/Tilt HAT uses a custom, proprietary IC chip to generate the servo PWM signals unlike most other servo drivers modules/HATs that use the PCA9685 servo driver IC. Pimoroni originally used the Adafruit PCA9685 library for the pantilthat function, before switching to the proprietary IC chip servo driver, as can be found here: https://github.com/RogueM/PanTiltFacetracker

The PCA9685 will also allow the use of bigger servos, in place of the Pimoroni Servo Driver HAT/Pan-Tilt Gimbal, to drive bigger gimbals that may include other equipment than just the pi V2 camera!

Recommend that you create a branch that allows the use of the PCA9685 servo driver board and Adafruit PCA9685 library in place of the Pimoroni proprietary IC servo driver and library. I believe that this will require minor modifications to the rpi-deep-pantilt/control/manager.py and hardware_test.py code.

Regards,
Thomas Coyle System Engineer

Proposing to include face detection together

  • Raspberry Pi Deep PanTilt version: 1.0.1
  • Python version: 3.7
  • Operating System: debian
    Pi4, 4GB, 64GB SD, Clean rpi-deep-pantilt install only on SD with .VENV setup

Improvement suggestion

When a person is close to the PiCam, it tracks the body and misses the face (the face being out of the frame), unless it started with the face first and the face stays close to the PiCam without the whole body being detected.

It would be great to include a face detection model in this project, so it can not only track a person with their face within the frame, but also be used to identify a person with further implementation (e.g. identify the person with the person model first, identify the face with the face model second, then track the face or adjust the PiCam to fit the face into the center of the frame and keep it there).

Also note: the newly released (and only available) Coral USB runtime, version 2.1, does not run with this rpi-deep-pantilt version, and I couldn't find an older version of the runtime to reproduce running this version with the Coral USB.

Great project!

Fix Travis CI/CD

Travis doesn't support the ARMHF architecture, so we'll have to build and test via QEMU? I don't know if that's worth the trouble. Leaving this ticket open to acknowledge I should probably figure this out.

Custom object detection implementation

  • rpi-deep-pantilt version: 1.2.1
  • Python version: 3.7
  • TensorFlow version: 2.2.0
  • Operating System: Windows

Description

I am trying to implement a custom object detector trained for leopard and exported from Google Cloud AutoML (cloud.google.com/vision/automl/object-detection/docs/export-edge). I can't get this to work. I've followed through on the advice given in #40, but I'm not having any luck. Specifically:

  • I cloned and edited the SSDMobileNet_V3_Coco_EdgeTPU_Quant class in a new leopardtflite.py file, which also has the code for the imports needed and has the labels changed. In this file, I also point model.path to wherever the leopard tflite file is saved.

  • I created a new object detection pbtxt file

  • I edited cli.py to import the leopard tf lite models and run the models when the 'leopard' label is specified.

What I Did

Here's what happens when I run the code

(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track leopard  --rotation=180
Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 30, in run_pantilt_detect
    model = model_cls()
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/leopardtflite.py", line 51, in __init__
    self.model.path = '/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/leopard.tflite'
AttributeError: 'leopardtfliteModel' object has no attribute 'model'
^C[INFO] You pressed `ctrl + c`! Exiting...
[INFO] You pressed `ctrl + c`! Exiting...
[INFO] You pressed `ctrl + c`! Exiting...

Aborted!

Any guidance you can provide on custom model implementation would be super appreciated. 🙏 I've also attached the cli file and leopardtflite.py file for reference.

leopardDetection.zip

Pan & Tilt and Edge TPU error

  • rpi-deep-pantilt version: v1.21
  • Python version: I think 3.7 (Fresh RDP install over Buster)
  • TensorFlow version: (Fresh RDP install only)
  • Operating System: Buster 2020-08-20 release with new EEPROM (boot from SSD enabled in case the card fails)

Description

Pi4 4GB, Pimorini Pan-tilt Kit, Picam V2.1

I did a step-by-step install per GitHub instructions 1 to 7 (including Coral's Edge TPU), and it appeared everything installed without error. (The step-by-step instructions given at Towards Data Science are different from GitHub's and hang at step 12.)

I installed the Edge TPU runtime & it's working!

Using the new Pi 12 mp HQ Camera in place of the Pi V2.1 8 Mp Camera

  • rpi-deep-pantilt version: 1.2.0
  • Python version: 3.7.3
  • TensorFlow version: 2.2.0
  • Operating System: Raspbian Buster Desktop on Rpi 4B+ w/4GB RAM

Description

I am using the new Pi 12 Mp HQ Camera with the Pi 6 mm Lens, in place of the Pi V2.1 8 Mp Camera, on a custom servo gimbal that is functionally the same as the Pimoroni Pan/Tilt Gimbal.

What I Did

First of all, I used the rpi-deep-pantilt test camera program to verify that the RPi 4B was seeing the new HQ Camera, which it was, and the video looked nice and sharp.

Then I proceeded to use rpi-deep-pantilt test pantilt to verify the performance of the pan/tilt gimbal. I had to reverse the tilt value to get the tilt gimbal to move in the correct direction due to my custom gimbal setup.

My next action was to use rpi-deep-pantilt detect to verify that the program could detect and classify me, which it did. Since the 6 mm lens is wide angle, I could move over quite a distance horizontally before I exited the detection box and the FOV.

Finally, I ran rpi-deep-pantilt track to see if the program could successfully track me if I began to move out of the FOV. I found that I had to reverse both the pan and tilt directions to get the program to track me successfully. I am using the default PID values and will have to adjust them for my custom gimbal setup. The pan gimbal moved smoothly, but slowly. The tilt gimbal moved slowly and did hunt slightly after the detect box had captured me; I will probably have to adjust the I and D PID values.

Bottom line: the new Pi 12 MP HQ Camera works well with Leigh's programs. However, should the frame size be adjusted for the larger 12 MP chip and the wider FOV? Suggestions welcome!

Use a USB Webcam?

  • rpi-deep-pantilt version: 1.0.0 (2019-12-01)
  • Python version: 3.7
  • TensorFlow version: 2.2
  • Operating System: Raspberry OS

Description

Is it possible to use a regular USB Webcam? What changes do I have to make to make it run?

Additional question: When I run a coral model on object detection I get a max 8 FPS. Would that be faster using a Pi Cam?

Regards,
Armin

Coral USB is not working

  • Raspberry Pi Deep PanTilt version: 4 with 4GB & 64SD
  • Python version: 3.7
  • Operating System: Debian Buster

Description

Ran "rpi-deep-pantilt detect --edge-tpu --loglevel=INFO" after installing Coral USB per instruction & Googl's (Note it is now 2.1 version Runtime)"

Error: "RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare."

It works without "--edge-tpu"

What I Did: multiple re-installs & reboots, same result.

Paste the command(s) you ran and the output: "rpi-deep-pantilt detect --edge-tpu" and "rpi-deep-pantilt detect --edge-tpu --loglevel=INFO" generate the same error.

Whole error output from the terminal:
(.venv) pi@raspberrypi:~/rpi-deep-pantilt $ rpi-deep-pantilt detect --edge-tpu
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "/home/pi/rpi-deep-pantilt/.venv/bin/rpi-deep-pantilt", line 8, in <module>
    sys.exit(main())
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 107, in main
    cli()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 52, in detect
    model = SSDMobileNet_V3_Coco_EdgeTPU_Quant()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py", line 56, in __init__
    self.tflite_interpreter.allocate_tensors()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

If there was a crash, please include the traceback here.

Error encountered when upgrading rpi-deep-pantilt

  • rpi-deep-pantilt version: 1.1.0
  • Python version: 3.7.3
  • TensorFlow version: 2.0.0
  • Operating System: Raspbian Buster Desktop on Rpi 4B+ w/4GB RAM

Description

When I attempted to upgrade rpi-deep-pantilt to version 1.2.0, the upgrade failed.

What I Did

(.venv) pi@raspberrypi:~/rpi-deep-pantilt $ python3 -m pip install --upgrade rpi-deep-pantilt
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting rpi-deep-pantilt
Using cached rpi_deep_pantilt-1.2.0-py2.py3-none-any.whl (30 kB)
Requirement already satisfied, skipping upgrade: smbus; platform_machine == "armv7l" in ./.venv/lib/python3.7/site-packages (from rpi-deep-pantilt) (1.1.post2)
Requirement already satisfied, skipping upgrade: pillow in ./.venv/lib/python3.7/site-packages (from rpi-deep-pantilt) (7.0.0)
Requirement already satisfied, skipping upgrade: Click>=7.0 in ./.venv/lib/python3.7/site-packages (from rpi-deep-pantilt) (7.1.1)
Requirement already satisfied, skipping upgrade: h5py in ./.venv/lib/python3.7/site-packages (from rpi-deep-pantilt) (2.10.0)
ERROR: Could not find a version that satisfies the requirement tensorflow>=2.2.0 (from rpi-deep-pantilt) (from versions: 0.11.0, 1.12.0, 1.13.1, 1.14.0)
ERROR: No matching distribution found for tensorflow>=2.2.0 (from rpi-deep-pantilt)

I assume that TensorFlow needs to be updated to version 2.2.0 or greater since that is what the rpi-deep-pantilt upgrade is requesting?

Regards,
TCIII

Searching with "track" command

  • rpi-deep-pantilt version: 1.2.0
  • Python version: 3.7.3
  • TensorFlow version: 2.2.0
  • Operating System: Raspbian

I am trying to integrate my own custom model into this repository. I have successfully gotten my model to work with the "detect" and "track" commands. However, I am trying to create a script that lets the pan/tilt HAT pan from the -90 degree position to the +90 degree position, in increments of 60 degrees, while running the track command (tilt stays at the 25 degree position). Additionally, once the camera successfully detects an object and tracking begins, I want the panning to stop. Any suggestions? My custom model is set to detect a Raspberry Pi camera (it only detects well at about 1 ft away). What changes do I need to make to the PID scripts?
I was thinking of creating a new Python script to combine the panning commands with the tracking command (see the sketch below), but it won't work unless there is a way to differentiate between when the tracking command is issued and when tracking actually begins. I figure the best way to do it is to go into the repo files, but I don't really know where to start. I noticed that when the track command is issued it starts in a high tilted position and slowly moves down. Perhaps changing that script would work.

I am new to coding and software so please be in-depth.
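
A minimal sketch of the sweep half of this, assuming the Pimoroni pantilthat library (stopping on detection would need a flag shared with the tracking loop, which is the part that requires changes inside the package):

import time
import pantilthat

pantilthat.tilt(25)               # hold tilt at the 25-degree position
for angle in range(-90, 91, 60):  # -90, -30, 30, 90
    pantilthat.pan(angle)
    time.sleep(2)                 # give the servo time to settle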

Failed to write byte Error

I followed all the instructions but keep on getting this error. Can someone help me?


(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt test pantilt
INFO:root:Starting Pan-Tilt HAT test!
INFO:root:Pan-Tilt HAT should follow a smooth sine wave
Traceback (most recent call last):
  File "/home/pi/.venv/bin/rpi-deep-pantilt", line 8, in <module>
    sys.exit(main())
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 153, in main
    cli()
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 141, in pantilt
    return pantilt_test()
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/control/hardware_test.py", line 38,
    pantilthat.pan(a)
  File "/home/pi/.venv/lib/python3.7/site-packages/pantilthat/pantilt.py", line 466, in servo_one
    self.setup()
  File "/home/pi/.venv/lib/python3.7/site-packages/pantilthat/pantilt.py", line 80, in setup
    self._set_config()
  File "/home/pi/.venv/lib/python3.7/site-packages/pantilthat/pantilt.py", line 118, in _set_config
    self._i2c_write_byte(self.REG_CONFIG, config)
  File "/home/pi/.venv/lib/python3.7/site-packages/pantilthat/pantilt.py", line 209, in _i2c_write_byte
    raise IOError("Failed to write byte")
OSError: Failed to write byte

It says OSError: Failed to write byte. Does anyone know how to fix this?

Multiple Issues ...

EDIT: 12/30/2020
Created the entire SD from scratch.

Turns out the allocation error is only a warning, so ignore it.
The biggest issue is the missing TensorFlow wheel. I used this description to address the issue:

https://qengineering.eu/install-tensorflow-2.2.0-on-raspberry-pi-4.html

The following line only works without the sudo -H:
$ pip3 install tensorflow-2.2.0-cp37-cp37m-linux_armv7l.whl

Now most of the cases work, except the tracker always goes off into never-never land. Will keep you posted.


I was so excited when I found this demo, and still, you did a wonderful job! It's just that I struggle to get the parts to work.

It seems that since you started this project there have been many updates in many areas, and nothing really matches anymore, e.g. the description in the blog versus the various versions of code and frameworks. Sorry, just my best guess. I'm new to the RasPi and to ML on the RasPi with Coral -- the latter tech is cool, but it appears most demos don't work for a newbie.

Hours in, the only thing that works according to script is rpi-deep-pantilt test pantilt. The thing moves! Yes.

As for the next one, rpi-deep-pantilt test camera: it's not crashing, but I also can't see anything, despite the fact that I have turned on the RasPi cam and rebooted... I see the desktop on my TV.

rpi-deep-pantilt detect --edge-tpu seems to load the labels (it shows them, but...) and then presents the following. I guess some error:

EDIT: I investigated more, and the error below only happens once the memory split is set to at least 64 MB. Prior to this, things crash with different issues.

2020-11-29 16:36:18.764478: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 307200 exceeds 10% of free system memory.
2020-11-29 16:36:22.000636: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 307200 exceeds 10% of free system memory.
2020-11-29 16:36:22.023519: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 307200 exceeds 10% of free system memory.
2020-11-29 16:36:22.072725: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 307200 exceeds 10% of free system memory.
2020-11-29 16:36:22.093627: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 307200 exceeds 10% of free system memory.

Now, this is only after I installed the latest tflite framework from the Coral website. Prior to this there were other issues.

I understand you are doing this in your spare time and I truly appreciate you sharing this. Please don't take the text above as a rant. By no means is it meant that way.

I'm just wondering if there is some help you can offer on how to check if the correct libs are in place .... and how to make this work. I'm almost 3 days in with fiddling and now want to see it work.

Thanks in advance, Dirk

p.s. your blog still says:
pip install https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl\?raw\=true

which leads into this error:

Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting tensorflow==2.0.0
ERROR: HTTP error 404 while getting https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl?raw=true
ERROR: Could not install requirement tensorflow==2.0.0 from https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl?raw=true because of HTTP error 404 Client Error: Not Found for url: https://github.com/bitsy-ai/tensorflow-arm-bin/blob/main/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl for URL https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl?raw=true

PID controller improvements

Summary

Currently, the PID controller implements basics from Adrian Rosebrock's blog post Pan/tilt face tracking with a Raspberry Pi and OpenCV. The PID gains are hard-coded, roughly tuned to track a desk-height object no further than a few meters.

Work Required

  • Implement Config API + serializer
  • Replace hard-coded PID controller gains with values read from config (default: current gains; see the sketch after this list)
  • Add calibration routines for configurable variables (rotation, Px+Ix, Dx, Py+Iy, Dy)
  • Implement a ResetBehavior API, with a few common examples like reset x/y to origin, seek neighborhood, seek grid
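
A hedged sketch of what the config shape could look like; the field names are illustrative, not an existing rpi-deep-pantilt interface:

from dataclasses import dataclass
import json

@dataclass
class PIDGains:
    kp: float = 0.05  # proportional gain (placeholder default)
    ki: float = 0.10  # integral gain
    kd: float = 0.00  # derivative gain

def load_gains(path):
    # Expects e.g. {"pan": {"kp": 0.05}, "tilt": {"kd": 0.01}}
    with open(path) as f:
        data = json.load(f)
    return PIDGains(**data.get("pan", {})), PIDGains(**data.get("tilt", {}))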

Error encountered when trying to use Google Coral accelerator

  • rpi-deep-pantilt version: v1.2.0
  • Python version: 3.7.3
  • TensorFlow version: 2.2
  • Operating System: Raspbian 10 (buster), running on a 4 gig Raspberry Pi 4

Description

Object detection using Google Coral accelerator (following README.md instructions for rpi-deep-pantilt version 1.2) fails and does not display image. Please note standard object detection without using the accelerator works ("rpi-deep-pantilt detect --rotation 180"). I repeated the deep-pan-tilt build and object detection twice -- with the same result.

When I issue "rpi-deep-pantilt detect --edge-tpu --rotation 180", I get two errors with no image displayed on the RPi.

First error:

Traceback (most recent call last):
  File "/home/pi/.venv/bin/rpi-deep-pantilt", line 10, in <module>
    sys.exit(main())
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 172, in main
    cli()
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/pi/.venv/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 96, in detect
    run_stationary_detect(labels, model_cls, rotation)
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 80, in run_stationary_detect
    model = model_cls()
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py", line 65, in __init__
    tf.lite.experimental.load_delegate(self.EDGETPU_SHARED_LIB)
  File "/home/pi/.venv/lib/python3.7/site-packages/tensorflow/lite/python/interpreter.py", line 161, in load_delegate
    delegate = Delegate(library, options)
  File "/home/pi/.venv/lib/python3.7/site-packages/tensorflow/lite/python/interpreter.py", line 90, in __init__
    self._library = ctypes.pydll.LoadLibrary(library)
  File "/usr/lib/python3.7/ctypes/__init__.py", line 434, in LoadLibrary
    return self._dlltype(name)
  File "/usr/lib/python3.7/ctypes/__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libedgetpu.so.1: cannot open shared object file: No such file or directory
Exception ignored in: <function Delegate.__del__ at 0xa5f3bfa8>

Second error:

Traceback (most recent call last):
  File "/home/pi/.venv/lib/python3.7/site-packages/tensorflow/lite/python/interpreter.py", line 125, in __del__
    if self._library is not None:
AttributeError: 'Delegate' object has no attribute '_library'

What I Did

I freshly installed Raspbian 10 (buster) on a high speed 32 gig SD card. I then followed the README.md instructions. I only had one issue after issuing the "pip install https://github.com/leigh-johnson/Tensorflow-bin/releases/download/v2.2.0/tensorflow-2.2.0-cp37-cp37m-linux_armv7l.whl" command.

Here is the error:

Building wheels for collected packages: grpcio
Running setup.py bdist_wheel for grpcio ... error
Complete output from command /home/pi/.venv/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-tjyl9jcz/grpcio/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-okaw4p5s --python-tag cp37:
Found cython-generated files...
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help

error: invalid command 'bdist_wheel'


Failed building wheel for grpcio
Running setup.py clean for grpcio
Failed to build grpcio

Otherwise, I encountered no problems with the install. Please note, I was careful to make all recommended changes to config.txt and the Raspberry Pi interfaces.

Audio source triangulation

This is just a future suggestion for an already impressive work.

What if rpi-deep-pantilt could also triangulate an audio source and point in that direction?

Perhaps two microphones spaced far enough apart: measure the arrival-time difference between the two audio channels, triangulate, and feed the result into the servo angle processing to point in that direction? A rough sketch of the idea follows below.
(I know it'll perhaps require a 360-degree servo.)

https://www.tensorflow.org/api_docs/python/tf/audio/decode_wav
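
A rough sketch of the time-difference-of-arrival idea (assumes two synchronized channels, mic spacing d in meters, and sample rate fs; real audio would need windowing and noise handling):

import numpy as np

def bearing_degrees(ch_left, ch_right, fs=48000, d=0.2, c=343.0):
    # Cross-correlate the channels and find the lag of maximum overlap
    corr = np.correlate(ch_left, ch_right, mode="full")
    lag = np.argmax(corr) - (len(ch_right) - 1)
    tdoa = lag / fs  # seconds of delay between the two mics
    # Convert the path-length difference to an angle off the mic baseline
    return np.degrees(np.arcsin(np.clip(c * tdoa / d, -1.0, 1.0)))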

tracking problem

  • rpi-deep-pantilt version: master version as GitHub default.
  • Python version: 3.7
  • TensorFlow version: 2.0
  • Operating System: Raspbian

Description

Hi,
I am an electronics student and I am running the rpi-object-tracking project.
I did all the steps exactly with the help of your website.
After completing the installation process, when I execute the pantilt test command, the servomotors connected to the pan-tilt start to rotate 360 degrees.
The detect part of the project works properly. But when I run tracking, the servomotors start rotating 360 degrees for no reason and stop after a while; after that, when I move in front of the camera, detection happens but the servomotors do not move.
Please help me. Thank you!

pan tilt hat -> waveshare
raspberry pi 4->4G

What I Did

Where to change line_thickness. Seems to be defaulted to 4?

I'm puzzled. I got the code working. All good. However, for some reason I cannot change the line_thickness. None of my changes seem to make any difference. It appears to default to around 4.

What am I missing? Sorry, maybe a noob question, but we all start from a place of comfort.

Thx

Add --out/-o option for saving images

If --out/-o is specified, save PiCamera frame buffer to a dir of IMG files.

It's also possible to create a video using ffmpeg if avg FPS metadata is written out as well.
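
A minimal sketch of the frame-dump half, using picamera's capture_continuous counter substitution (directory and resolution illustrative):

from picamera import PiCamera

# The frames/ directory must already exist
camera = PiCamera(resolution=(320, 320))
for i, filename in enumerate(
        camera.capture_continuous("frames/img{counter:04d}.jpg")):
    if i >= 99:  # stop after 100 frames for this demo
        break

With an average-FPS figure written alongside, a video could then be assembled with something like ffmpeg -framerate 8 -i frames/img%04d.jpg out.mp4.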

rpi-deep-pantilt not working (instantly crashing)

  • rpi-deep-pantilt version: 1.2.1
  • Python version: 3.7.3
  • TensorFlow version: 2.4.0
  • Operating System: RaspiOS lite (latest version)

Description

I followed the install instructions exactly (no errors during installation), but when running any command (test pantilt, test camera, detect, ...), I get this error message:

(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt test pantilt
Traceback (most recent call last):
  File "/home/pi/.venv/bin/rpi-deep-pantilt", line 5, in <module>
    from rpi_deep_pantilt.cli import main
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 10, in <module>
    from rpi_deep_pantilt.detect.ssd_mobilenet_v3_coco import (
  File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py", line 9, in <module>
    from PIL import Image
  File "/home/pi/.venv/lib/python3.7/site-packages/PIL/Image.py", line 114, in <module>
    from . import _imaging as core
ImportError: libopenjp2.so.7: cannot open shared object file: No such file or directory
(.venv) pi@raspberrypi:~ $

What I Did

I tried to run rpi-deep-pantilt without any command line parameters or with --help but it always crashes with the same error message as above.

Your help is much appreciated!
Thanks in advance!

Best regards
careyer

Realtime tracking by using FOC driven BLDC motor

I'm using FOC-controlled BLDC motors for pan & tilt position control with rpi-deep-pantilt, and I'd like to share my setup with those who are looking for a heavier-payload setup, fluid motion, and absolutely silent movement.

Video link is here: https://youtu.be/lWcEgcILRG8

Facial recognition, tracking, and handling of occlusion seem extremely well handled by Leigh's rpi-deep-pantilt, as this video shows, considering it's driven only by a Pi 4 & Coral USB at a slow motor-speed setting in the motor controller. I'm really happy with this result so far.

However, as some of you are aware, issue #48 still remains: when the face goes out of the FOV, soon after, the pan & tilt tracking loop breaks and does not resume tracking when the face is back in the FOV. I could not figure out a solution yet, and sadly this appears beyond my capability for sure.

Additionally, the PWM output control signal is limited: its maximum defined control angle is 180 degrees of pan rotation. I'm hoping to see a continuous 360-degree pan rotation angle, perhaps using SPI communication between the Pi & a magnetic sensor on the SPI port. I understand the PID reset comment or suggestion during the cycle loop, but I'm thinking it's only going to add more random jitter trouble.

The Pi cam & similar cameras appear to be very sensitive to background lighting changes; this contributes heavily to occlusion & produces random jitter. I'm hoping USB webcam support becomes available for the sake of the auto-exposure that is readily available on most USB cams (Leigh covered the USB cam question regarding speed, but not the auto-exposure consideration, and I'm with her as to the numerous products out there).

Using both software & hardware PID control methods together seems to work best, as it neither oscillates nor turns jerky, which clearly shows in this video (I've tried with no PID control loop, software only, and hardware only with a Caffe model).

Hardware: Pi 4 8 GB with the 8-20-2020 OS release, clean install of rpi-deep-pantilt only over the OS, Coral USB 3, 22-pole BLDC gimbal motor GM5208-24, SimpleBGC 32 extended gimbal motor controller with 2.7 firmware, AS5048A sensor with PWM connection, Arducam 5 MP auto-focus camera in manual focus mode, DC 12 V 10 A power supply for the motor controller, DC 5.5 V 5 A step-down converter for the Pi 4 power supply, wireless Qi charging cradle for a cell phone, cooling fan for the Pi, 360-degree rotating slip ring for the pan axis.

Note: this motor & controller combination can handle a typical DSLR camera & most small to medium-sized tablet PCs on the market, and also provides up to 3-axis self-stabilization really well. Yet the IMU sensor's gyro drift issue still exists for static applications such as desktop use. This combination is more useful with drones, vehicles, or moving-boat applications, where an available GPS heading-correction signal compensates for gyro drift -- the main reason I'm moving over to an ODrive implementation in my next experiment & waiting for my order to arrive.

Add test command for camera orientation

If the PiCamera is not oriented correctly, the PID controller will produce incorrect tracking angles. Add a test command that takes a photo, so the user can verify the cam is oriented correctly.

Ref #10
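
A minimal sketch of such a test, using the standard picamera API (the output filename is illustrative):

from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.start_preview()
sleep(2)  # let the sensor's exposure settle
camera.capture("orientation_test.jpg")  # inspect this file for orientation
camera.stop_preview()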

When an object leaves the FOV & re-appears, RPi object tracking does not resume

  • rpi-deep-pantilt version: 1.2
  • Python version: 3.7.3
  • TensorFlow version: 2.2
  • Operating System: Buster 8-20-2020

Description

Describe what you were trying to get done:
I tried to make rpi-deep-pantilt track a face continuously with the Edge TPU on a Pi 4 8 GB (a clean install with only rpi-deep-pantilt), and I expected tracking to resume when the subject re-appears in the FOV, but it hangs.

Tell us what happened, what went wrong, and what you expected to happen:
When I run (.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu face:
as long as the subject is in the FOV, it works, but when the subject goes out of the FOV, within 10 seconds the camera points up toward the ceiling or down toward the floor and gets stuck there -- tracking does not resume.

When I run (.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu person:
as long as the subject is in the FOV, it works, but when the subject goes out of the FOV, the camera stays at the position of the last up-and-down jitter, and when the subject returns to the FOV, the camera remains stuck pointing at that last position -- tracking does not resume.

Paste the command(s) you ran and the output.
(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu face
(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu person

If there was a crash, please include the traceback here.

No crash, but here is the output after interrupting with Ctrl+C.
With the face label:
(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu face
^C[INFO] You pressed ctrl + c! Exiting...
[INFO] You pressed ctrl + c! Exiting...
Process Process-2:
[INFO] You pressed ctrl + c! Exiting...
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 38, in run_pantilt_detect
prediction = model.predict(frame)
File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/facessd_mobilenet_v2.py", line 139, in predict
input_tensor = input_tensor[tf.newaxis, ...]
File "/home/pi/.venv/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 944, in _slice_helper
new_axis_mask |= (1 << index)
KeyboardInterrupt

Aborted!
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt

With the person label:
(.venv) pi@raspberrypi:~ $ rpi-deep-pantilt track --edge-tpu person
^C[INFO] You pressed ctrl + c! Exiting...
[INFO] You pressed ctrl + c! Exiting...
[INFO] You pressed ctrl + c! Exiting...

Aborted!
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 38, in run_pantilt_detect
prediction = model.predict(frame)
File "/home/pi/.venv/lib/python3.7/site-packages/rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py", line 160, in predict
self.tflite_interpreter.invoke()
File "/home/pi/.venv/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 506, in invoke
self._interpreter.Invoke()
File "/home/pi/.venv/lib/python3.7/site-packages/tflite_runtime/interpreter_wrapper.py", line 118, in Invoke
return _interpreter_wrapper.InterpreterWrapper_Invoke(self)
KeyboardInterrupt
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python3.7/multiprocessing/popen_fork.py", line 28, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt

Am I missing a component, or am I using the wrong command?
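The stuck behavior described here matches issue #48. One possible mitigation, sketched only as an illustration (it is not part of rpi-deep-pantilt, and last_detection_ts is a hypothetical shared value that the detect process would have to refresh with time.time() on every detection): recenter the servos after a detection timeout so the camera is not left stuck at the last jitter position.

import time

def servo_watchdog(last_detection_ts, pan, tilt, timeout=3.0):
    # runs as its own Process alongside the PID and servo loops
    while True:
        if time.time() - last_detection_ts.value > timeout:
            # recenter; the PID processes would also need a reset,
            # otherwise accumulated error can yank the camera away again
            pan.value = 0
            tilt.value = 0
        time.sleep(0.5)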

Setting default servo positions in track mode from PID controller

I added a scanning procedure to the manager.py script, under the set_servos function, which breaks out upon detection and reverts to track mode. However, track mode has a default position where the object in question is usually left out of the FOV. Here's my script:

import logging
from multiprocessing import Value, Process, Manager, Queue

import pantilthat as pth
import signal
import sys
import time
import RPi.GPIO as GPIO

from rpi_deep_pantilt.detect.util.visualization import visualize_boxes_and_labels_on_image_array
from rpi_deep_pantilt.detect.camera import run_pantilt_detect
from rpi_deep_pantilt.control.pid import PIDController

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
GPIO.setup(8,GPIO.OUT)

logging.basicConfig()
LOGLEVEL = logging.getLogger().getEffectiveLevel()

RESOLUTION = (320, 320)

SERVO_MIN = -90
SERVO_MAX = 90

CENTER = (
    RESOLUTION[0] // 2,
    RESOLUTION[1] // 2
)


# function to handle keyboard interrupt
def signal_handler(sig, frame):
    # print a status message
    print("[INFO] You pressed `ctrl + c`! Exiting...")

    # disable the servos
    pth.servo_enable(1, False)
    pth.servo_enable(2, False)
    GPIO.output(8,GPIO.LOW)

    # exit
    sys.exit()

def in_range(val, start, end):
    # determine the input value is in the supplied range
    return (val >= start and val <= end)


def set_servos(pan, tilt, scan):
    # signal trap to handle keyboard interrupt
    signal.signal(signal.SIGINT, signal_handler)
    
    pn = 90
    tt = 25
    
    while scan.value == 't':
        print('Scanning')

        pth.pan(pn)
        pth.tilt(tt)

        pn = pn - 1

        if pn <= -90:
            pn = 90

        time.sleep(0.1)
        
    pan.value = -1*pn
    tilt.value = tt
            
    while True:     
        
        pan_angle = -1 * pan.value
        tilt_angle = tilt.value
        
        if LOGLEVEL == logging.DEBUG:  # print the angles while debugging
            print(pan_angle)
            print(tilt_angle)
        
        # if the pan angle is within the range, pan
        if in_range(pan_angle, SERVO_MIN, SERVO_MAX):
            pth.pan(pan_angle)
        else:
            logging.info(f'pan_angle not in range {pan_angle}')

        if in_range(tilt_angle, SERVO_MIN, SERVO_MAX):
            pth.tilt(tilt_angle)
        else:
            logging.info(f'tilt_angle not in range {tilt_angle}')
            
 
    
    
def pid_process(output, p, i, d, box_coord, origin_coord, action):
    # signal trap to handle keyboard interrupt
    signal.signal(signal.SIGINT, signal_handler)

    # create a PID controller and initialize it
    pid = PIDController(p.value, i.value, d.value)
    pid.reset()

    # loop indefinitely
    while True:
        error = origin_coord - box_coord.value
        output.value = pid.update(error)
        # logging.info(f'{action} error {error} angle: {output.value}')
    

def pantilt_process_manager(
    model_cls,
    labels=('Raspi',),
    rotation=0
):
    
    pth.servo_enable(1, True)
    pth.servo_enable(2, True)
    with Manager() as manager:

        
        scan = manager.Value('c', 't')
        
        # set initial bounding box (x, y)-coordinates to center of frame
        center_x = manager.Value('i', 0)
        center_y = manager.Value('i', 0)

        center_x.value = RESOLUTION[0] // 2
        center_y.value = RESOLUTION[1] // 2
        

        # pan and tilt angles updated by independent PID processes
        pan = manager.Value('i', 0)
        tilt = manager.Value('i', 0)

        # PID gains for panning
        pan_p = manager.Value('f', 0.05)
        # 0 time integral gain until inferencing is faster than ~50ms
        pan_i = manager.Value('f', 0.1)
        pan_d = manager.Value('f', 0)

        # PID gains for tilting
        tilt_p = manager.Value('f', 0.15)
        # 0 time integral gain until inferencing is faster than ~50ms
        tilt_i = manager.Value('f', 0.2)
        tilt_d = manager.Value('f', 0)

        detect_process = Process(target=run_pantilt_detect,
                                 args=(center_x, center_y, labels, model_cls, rotation, scan))

        pan_process = Process(target=pid_process,
                              args=(pan, pan_p, pan_i, pan_d, center_x, CENTER[0], 'pan'))

        tilt_process = Process(target=pid_process,
                               args=(tilt, tilt_p, tilt_i, tilt_d, center_y, CENTER[1], 'tilt'))

        servo_process = Process(target=set_servos, args=(pan, tilt, scan))
        
        
        detect_process.start()
        pan_process.start()
        tilt_process.start()
        servo_process.start()
        
        detect_process.join()
        pan_process.join()
        tilt_process.join()
        servo_process.join()
        
        
if __name__ == '__main__':
    pantilt_process_manager()

How do I set the initial servo position, once tracking begins, to the last position of the scan loop before breaking? (One possible approach is sketched below.)
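One way to do it, sketched from the script above (untested): gate the PID processes on the shared scan flag so they stay idle and do not overwrite pan/tilt while the scan runs, then reset the controller at the hand-over so tracking starts from the pose the scan left behind. The scan value would also need to be appended to both PID Process(...) argument tuples in pantilt_process_manager.

def pid_process(output, p, i, d, box_coord, origin_coord, action, scan):
    # signal trap to handle keyboard interrupt
    signal.signal(signal.SIGINT, signal_handler)

    pid = PIDController(p.value, i.value, d.value)
    pid.reset()

    # idle until set_servos finishes scanning; pan/tilt keep the values
    # set_servos seeded from the final scan position
    while scan.value == 't':
        time.sleep(0.1)

    pid.reset()  # start tracking from a clean controller state

    while True:
        error = origin_coord - box_coord.value
        output.value = pid.update(error)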

Error on RPi4 with Ubuntu

  • rpi-deep-pantilt version: master
  • Python version: 3.6.9
  • TensorFlow version: Coral TPU runtime
  • Operating System: Ubuntu 18.04

Description

Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.

What I Did

Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.

Hi, I'm trying to install your project on an RPi4 4 GB with Ubuntu, but it looks like it expects Raspbian:
 pip install https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl?raw=true
ERROR: tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl is not a supported wheel on this platform.


(.venv) ubuntu@ubuntu:~/rpi-deep-pantilt$ python3 -m pip install rpi-deep-pantilt
ERROR: Could not find a version that satisfies the requirement rpi-deep-pantilt (from versions: none)
ERROR: No matching distribution found for rpi-deep-pantilt
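A likely cause, inferred from the version info above rather than confirmed in the thread: the wheel's cp37 tag requires CPython 3.7, while Ubuntu 18.04 defaults to Python 3.6.9, so pip rejects it regardless of the OS; a 64-bit Ubuntu image would also fail the armv7l platform tag. A quick check:

import platform
import sys

print(sys.version_info)     # e.g. (3, 6, 9, ...) -> cp36, not cp37
print(platform.machine())   # 'armv7l' vs. 'aarch64'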

Cannot get any object detection

  • Raspberry Pi Deep PanTilt version:
  • Python version: 3.7
  • Operating System: Raspbian Buster

Description

Running rpi-deep-pantilt detect, I get the error ValueError: Failed to convert value into readable tensor.

What I Did

pi@raspberrypi:~ $ rpi-deep-pantilt detect
2019-12-27 21:07:25.972826: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
Traceback (most recent call last):
  File "/home/pi/.local/bin/rpi-deep-pantilt", line 10, in <module>
    sys.exit(main())
  File "/home/pi/.local/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 107, in main
    cli()
  File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1137, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/pi/.local/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 60, in detect
    run_detect(capture_manager, model)
  File "/home/pi/.local/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 31, in run_detect
    prediction = model.predict(frame)
  File "/home/pi/.local/lib/python3.7/site-packages/rpi_deep_pantilt/detect/ssd_mobilenet_v3_coco.py", line 282, in predict
    self.input_details[0]['index'], input_tensor)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 347, in set_tensor
    self._interpreter.SetTensor(tensor_index, value)
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 140, in SetTensor
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_SetTensor(self, i, value)
ValueError: Failed to convert value into readable tensor.

Error in Face Tracking

  • rpi-deep-pantilt version: 1.2.0
  • Python version: 3.7.3
  • TensorFlow version: 2.2.0
  • Operating System: Buster

Description

I followed the instructions in the README and in issue #37, and successfully ran the general detect command. However, when using the detect face command, the camera overlay appears but detection does not run. The failure log is attached below.

What I Did

(.venv2) pi@raspberrypi:~/pantilt $ rpi-deep-pantilt detect face
WARNING:root:Detecting labels: ('face',)
Traceback (most recent call last):
  File "/home/pi/pantilt/.venv2/bin/rpi-deep-pantilt", line 10, in <module>
    sys.exit(main())
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 172, in main
    cli()
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/rpi_deep_pantilt/cli.py", line 96, in detect
    run_stationary_detect(labels, model_cls, rotation)
  File "/home/pi/pantilt/.venv2/lib/python3.7/site-packages/rpi_deep_pantilt/detect/camera.py", line 99, in run_stationary_detect
    filtered_prediction = model.filter_tracked(
AttributeError: 'FaceSSD_MobileNet_V2' object has no attribute 'filter_tracked'
^CException ignored in: <module 'threading' from '/usr/lib/python3.7/threading.py'>
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 1281, in _shutdown
    t.join()
  File "/usr/lib/python3.7/threading.py", line 1032, in join
    self._wait_for_tstate_lock()
  File "/usr/lib/python3.7/threading.py", line 1048, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
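The traceback shows run_stationary_detect calling a filter_tracked method that FaceSSD_MobileNet_V2 does not define. Until that is fixed in the package, one possible stopgap is a monkey-patch; this is a sketch only, the signature is guessed from the truncated traceback, and the pass-through return may need to match whatever structure the COCO detector's filter_tracked returns:

from rpi_deep_pantilt.detect.facessd_mobilenet_v2 import FaceSSD_MobileNet_V2

def _filter_tracked(self, prediction, *args, **kwargs):
    # the face model detects a single class, so no label filtering is done;
    # verify the expected return shape against the COCO detector's version
    return prediction

FaceSSD_MobileNet_V2.filter_tracked = _filter_tracked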
