
motion-ai's Introduction

Motion Ã👁

An open-source software solution for situational awareness from a network of video and audio sources. Utilizing Home Assistant, addons, the LINUX Foundation Open Horizon edge fabric, and edge AI services, the system enables personal AI on low-cost devices; integrating object detection and classification into a dashboard of daily activity.

Status

Supports the arm64, arm, and amd64 architectures.

Tested with JetPack 4.5.1, Ubuntu 18.04, and Raspbian Buster.

Example

Quick Start

Start-to-finish takes about thirty (30) minutes with a broadband connection. There are options to consider; a non-executable example script is provided to specify commonly used options. Please edit the example script for your environment.
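For illustration only (the variable names below are hypothetical, not the actual script contents), such a script amounts to exporting commonly used options before running make:

# hypothetical options script; edit values for your environment
export MOTION_GROUP="motion"            # collection of devices
export MOTION_DEVICE="raspberrypi"      # MQTT identifier for publishing
export MOTION_CLIENT="raspberrypi"      # MQTT identifier for subscribing
export MOTION_TIMEZONE="America/Los_Angeles"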

The following two (2) sets of commands will install motion-ai on these types of hardware:

  • RaspberryPi Model 3B+ or 4 (arm); 2GB recommended
  • Ubuntu18.04 or Debian10 VM (amd64); 2GB, 2vCPU recommended
  • nVidia Jetson Nano (arm64|aarch64); 4GB required

The initial configuration presumes a locally attached camera on /dev/video0. Reboot the system after completion; for example:

sudo apt update -qq -y
sudo apt install -qq -y make git curl jq apt-utils ssh apparmor grub2-common network-manager
sudo touch /etc/default/grub
sudo mkdir -p /usr/share/hassio
sudo chmod 775 /usr/share/hassio
cd /usr/share/hassio
git clone https://github.com/dcmartin/motion-ai .
make
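To confirm the locally attached camera is visible to the operating system before proceeding, list the video devices; v4l2-ctl is provided by the v4l-utils package:

ls -l /dev/video*
sudo apt install -qq -y v4l-utils
v4l2-ctl --list-devices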

To install Home Assistant you will need both an architecture-dependent OS Agent and the Supervised Home Assistant package; for example, for ARM64:

wget https://github.com/home-assistant/os-agent/releases/download/1.2.2/os-agent_1.2.2_linux_aarch64.deb
sudo dpkg -i os-agent_1.2.2_linux_aarch64.deb

Then download and install the Supervised Home Assistant:

wget https://github.com/home-assistant/supervised-installer/releases/latest/download/homeassistant-supervised.deb
sudo dpkg --ignore-depends=docker-ce -i homeassistant-supervised.deb
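After the installer completes and the system reboots, the Home Assistant CLI can confirm the core is running (the same check referenced in the troubleshooting notes later in this document):

ha core info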

Post Quickstart

When the system reboots, install the official MQTT broker (aka core-mosquitto) and Motion Classic (aka motion-video0) add-ons using the Home Assistant Add-on Store (n.b. the Motion Classic add-on may be accessed by adding the repository https://github.com/motion-ai/addons to the Add-on Store).

Select, install, configure, and start each add-on (see below). When both add-ons are running, return to the command-line and start the AIs (see AIs below). After the MQTT and Motion Classic add-ons have started, run the make restart command to synchronize the Home Assistant configuration with the Motion Classic add-on; for example:

cd ~/motion-ai
make restart

User Experience

Dashboard

Once the system has started it will display a default view; note the image below is of a configured system:

Historical information, current status, and a device map of activity are also provided in the default dashboard.

Administrators

A more detailed interface is provided to administrators only, and includes both summary and detailed views for the system, including access to NetData and the motion add-on Web interface.

Administrators have access to all panels and dashboards, including the selected, overview (aka experimental), and per camera (see below). Notifications can be specified for both individual cameras as well as for all cameras.

Notifications & Alerts

Notifications appear in the side panel; alerts are sent to smartphone and smart-speakers when enabled and configured.


Add-ons

Install the MQTT and Motion Classic add-ons from the Add-on Store, then configure and start each; to install Motion Classic, add the repository https://github.com/motion-ai/addons to the Add-on Store.

The Motion Classic configuration includes many options, most of which typically do not need to be changed. The group is provided to segment a network of devices (e.g. indoor vs. outdoor); the device determines the MQTT identifier for publishing; the client determines the MQTT identifier for subscribing; the timezone should be local to the installation.

Note: No capital letters [A-Z], spaces, hyphens (-), or other special characters may be utilized for any of the following identifiers:

  • group - The collection of devices
  • device - The identifier for the hardware device
  • name - The name of the camera

The cameras section is a listing (n.b. hence the -) and provides information for both motion detection and the front-end Web interface. The name, type, and w3w attributes are required. The top, left, and icon attributes are optional and are used to locate the camera on the overview image. The width, height, and other attributes are optional and are used for motion detection.

Example configuration (subset)

...
group: motion
device: raspberrypi
client: raspberrypi
timezone: America/Los_Angeles
cameras:
  - name: local
    type: local
    w3w: []
    top: 50
    left: 50
    icon: webcam
    width: 640
    height: 480
    framerate: 10
    minimum_motion_frames: 30
    event_gap: 60
    threshold: 1000
  - name: network
    type: netcam
    w3w:
      - what
      - three
      - words
    icon: door
    netcam_url: 'rtsp://192.168.1.224/live'
    netcam_userpass: 'username:password'
    width: 640
    height: 360
    framerate: 5
    event_gap: 30
    threshold_percent: 2

AIs

Return to the command-line, change to the installation directory, and run the following commands to start the AIs; for example:

cd ~/motion-ai
./sh/yolo4motion.sh
./sh/face4motion.sh
./sh/alpr4motion.sh

These commands only need to be run once; the AIs will automatically restart whenever the system is rebooted.
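Since each AI service runs as a Docker container, standard Docker commands can confirm they are up; assuming the containers are named after their scripts:

docker ps --filter name=4motion
docker logs yolo4motion 2>&1 | tail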

Overview image

The overview image is used to display the location of camera icons specified in the add-on (n.b. top and left percentages). The mode may be local, indicating that a local image file should be utilized; the default is overview.jpg in the www/images/ directory. The other modes utilize the Google Maps API; they are:

  • hybrid
  • roadmap
  • satellite
  • terrain

The zoom value scales the images generated by Google Maps API; it does not apply to local images.
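As a sketch only (the attribute names below are illustrative and not confirmed against the add-on schema), an overview configuration selecting a Google Maps view might look like:

overview:
  mode: satellite
  zoom: 18
  apikey: 'GOOGLE_MAPS_API_KEY'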

Composition

The motion-ai solution is composed of two primary components:

  • Home Assistant add-ons
  • Open Horizon AI services

Data may be saved locally and processed to produce historical graphs as well as exported for analysis using other tools (e.g. time-series database InfluxDB and analysis front-end Grafana). Data may also be processed using Jupyter notebooks.

Supported architectures include:

CPU only

  • amd64 - Intel/AMD 64-bit virtual machines and devices
  • aarch64 - ARMv8 64-bit devices
  • armv7 - ARMv7 32-bit devices (e.g. RaspberryPi 3/4)

GPU accelerated

  • tegra - aarch64 with nVidia GPU
  • cuda - amd64 with nVidia GPU
  • coral - armv7 with Google Coral Tensor Processing Unit
  • ncs2 - armv7 with Intel/Movidius Neural Compute Stick v2

Installation

Installation is performed in five (5) steps; see detailed instructions. The software has been tested on the following devices:

  • RaspberryPi Model 3B+ and Model 4 (2 GB); Debian Buster
  • nVidia Jetson Nano and TX2; Ubuntu 18.04
  • VirtualBox VM; Ubuntu 18.04
  • Generic AMD64 w/ nVidia GPU; Ubuntu 18.04

Accelerated hardware 1: nVidia Jetson Nano (aka tegra)

Recommended components:

  1. nVidia Jetson Nano developer kit; 4GB required
  2. 4+ amp power-supply or another option
  3. High-endurance micro-SD card; minimum: 64 Gbyte
  4. One (1) jumper or female-female wire for enabling power-supply
  5. Fan; 40x20mm; cool heat-sink
  6. SSD disk; optional; recommended: 250+ Gbyte
  7. USB3/SATA cable and/or enclosure

Accelerated hardware 2: RaspberryPi 4 with Intel NCS2 (aka ncs2)

This configuration includes dual OLED displays to show annotation text and images, as well as a USB-attached camera (n.b. PlayStation3 PS/Eye camera). The Intel/NCS2 implementation is still in alpha and is not in the master branch.


What is edge AI?

The edge of the network is where connectivity is lost and privacy is challenged.

Low-cost computing (e.g. RaspberryPi, nVidia Jetson Nano, Intel NUC) as well as hardware accelerators (e.g. Google Coral TPU, Intel Movidius Neural Compute Stick v2) provide the opportunity to utilize artificial intelligence in the privacy and safety of a home or business.

To provide for multiple operational scenarios and use-cases (e.g. an elder's activities of daily living (ADL)), the platform is relatively agnostic toward AI models or hardware and more dependent on system availability for development and testing.

An AI's prediction quality is dependent on the variety, volume, and veracity of the training data (n.b. see Understanding AI); the underlying deep, convolutional, neural-networks (and other algorithms) must be trained using information that represents the scenario, use-case, and environment; better predictions come from better information.

The Motion Ã👁 system provides a personal AI incorporating a wide variety of artificial intelligence, machine learning, and statistical models, as well as a closed-loop learning cycle (n.b. see Building a Better Bot), increasing the volume, variety, and veracity of the corpus of knowledge.

Example: Age@Home

This system may be used to build solutions for various operational scenarios (e.g. monitoring the elderly to determine patterns of daily activity, and alerting care-givers and loved ones when aberrations occur); see the Age-At-Home project for more information; example below:


Changelog & Releases

Releases are based on Semantic Versioning, and use the format of MAJOR.MINOR.PATCH. In a nutshell, the version will be incremented based on the following:

  • MAJOR: Incompatible or major changes.
  • MINOR: Backwards-compatible new features and enhancements.
  • PATCH: Backwards-compatible bugfixes and package updates.

Author

David C Martin ([email protected])

Buy Me A Coffee

Contribute:

  • Let everyone know about this project
  • Test a netcam or local camera and let me know

Add motion-ai as upstream to your repository:

git remote add upstream https://github.com/dcmartin/motion-ai.git

Please make sure you keep your fork up to date by regularly pulling from upstream.

git fetch upstream
git merge upstream/master

Stargazers

Stargazers over time

CLOC

See CLOC.md

License

FOSSA Status

motion-ai's People

Contributors

dcmartin, finleyexp, fossabot, jurgenweber


motion-ai's Issues

Examples in Documentation after installing the Add on in HA

Hi, could you please provide us with a copy/paste example file that actually works and has all the necessary parameters defined in it? The examples provided seem fragmentary and incomplete. A full working example configuration file would be a great help to get started (just needing to provide correct settings for MQTT and URLs and passwords for the cameras)!

THANK YOU !!!!

Running on a separate machine from Home Assistant using NCS2

Hey there,
I just wanted to check and make sure before I got too 'into it', that this doesn't require the motion-ai portion to be installed on the same host as Home Assistant? I only run the basic install of HA on my Pi4, but all my services (MQTT, NodeRed, MariaDB, InfluxDB, etc) I run on a separate Ubuntu VM running on my Unraid server. I was planning on plugging an intel NCS2 into the server and passing it through to the VM. Is this type of setup supported as long as I have the appropriate addons installed in Home Assistant?

Thanks,
-MH

MQTT discovery of `motion-ai`

Add MQTT discovery capabilities for the topics generated by motion-ai.

AI service events

The AI services announce their service when launched as a retained message; service may be yolo4motion, alpr4motion, et al. The id component is the machine identifier, which becomes the accessible address.

  • service/<service>/<id>

Add-on events

The Motion Classic add-on produces a retained message on launch of the following form:

  • <group>/<device>/start

Camera events

Events are produced by motion-ai based on output from the various AIs; for example, yolo4motion annotations are processed and messages are produced on the following base topic:

  • <group>/<device>/<camera>/<event>

Where event may be one of:

  • annotated
  • detected
  • detected_entity
  • detected_person - unimplemented
  • detected_vehicle - unimplemented
  • detected_animal - unimplemented
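These topics may be watched with any MQTT client to verify the flow end-to-end; for example, using mosquitto_sub (the broker host is a placeholder; add -u and -P if the broker requires credentials):

mosquitto_sub -h homeassistant.local -v -t 'service/+/+' -t '+/+/start' -t '+/+/+/detected'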

Component types supported by MQTT discovery in HA include the following:

Other components in HA may be applicable.

External HomeAssistant instance

I already run HASS.io on Raspberry Pi 4. I do have Jetson Nano, where I'd like to install motion-ai and connect it to the HASS.io on Raspberry Pi.

Looking at the installation instructions, it looks like another home-assistant instance needs to be installed (locally) for motion-ai.

Can it be set up with a single home assistant instance?

Utilize Maria-DB for SQL store

Apparently utilizing MariaDB for the SQL store is much faster than the standard SQLite3 database.

I tried to set up homeassistant/recorder.yaml to utilize the MariaDB add-on, but it didn't work properly.
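For reference, a recorder configuration pointing at the MariaDB add-on generally takes this shape (a sketch; user, password, and database name are placeholders, and core-mariadb assumes the official add-on's hostname):

recorder:
  db_url: mysql://user:password@core-mariadb/homeassistant?charset=utf8mb4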

I tried to run MariaDB in a container (see sh/mariadb.sh), but HA throws errors trying to use the DB for some updates; IDK.

Any help appreciated.

Utilize Elyra.ai to orchestrate model building

Background

Existing capabilities from the Home Assistant Jupyter notebook add-on are insufficient for general-purpose AI model building; the Elyra.ai project is attempting to provide a more complete solution.

Proposal

Utilize Elyra.ai as a mechanism to execute a series of Jupyter notebooks that process ground truth (and potentially other aspects of model building) and generate a result which may be inspected for quality, e.g. a confusion matrix.

Expected workflow would be based upon availability of content organized into a hierarchical structure, e.g. a standard Web directory listing (n.b. httpd.apache.org/docs/2.0/mod/mod_dir.html). For example see nVidia DIGITS (https://docs.nvidia.com/deeplearning/digits/digits-user-guide/index.html)

  1. Create asset with human-name with collected metadata in a defined repository (e.g. Github)
  2. Define extrinsic asset components; content, code, etc.. from one or more repositories (e.g. Github, directory, S3, ..)
  3. Execute asset; either batch or dynamic; provide container composition of Elyra functionality
  4. Monitor asset execution until completion and provide state
  5. Update asset post-execution (e.g. pull-request to Github repository)

Results from execution will be utilized by other services to enable CI/CD and other processes.

Examples

Confusion matrix

nVidia DIGITS model build

Watson classifier heatmap

Close feedback loop

  1. Close feedback loop from front-end (end-user interaction) to annotation capture and curation platform.
    a. Enable labelstud.io as additional Home Assistant add-on [X]
    b. Enable min.io as additional Home Assistant add-on [X]
    c. Enable end-user feedback through HA persistent_notification with link to yes/no, including all relevant metadata [X]
    d. Capture feedback from MQTT topic, store object in minio and store metadata in labelstudio add-ons

  2. Present captured feedback for end-user interaction
    a. review & approve
    b. inspect, create, update, delete putative annotations

  3. Present training results
    a. review & approve
    b. inspect, delete training results

  4. Deploy updated trained models
    a. Add modelmanager service to control OH edge-sync-service
    b. Add hzn::ess and hzn::modmgr SDK components to base containers
    c. Modify AI containers to utilize hzn::modmgr SDK for model weights, labels, etc..

Benchmark Coral vs NCS2 vs Nano

Do you have a recommendation on which hardware works best for this setup? Coral vs NCS2 vs Nano
Can you create a benchmark and update documentation allowing people to buy for the performance they need?

Send email when detection occurs

In addition to persistent_notifications, add an e-mail option.

Send e-mail to one address containing information about the detection event and a link to the GIF stored in the Media folder using the external URL access for the MAI site (e.g. Nabu Casa secured).
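A sketch of one way this might be wired up, using Home Assistant's smtp notify platform plus an automation triggered by the detection topic (addresses, server, and topic are placeholders, not part of MAI today):

notify:
  - platform: smtp
    name: detection_email
    server: smtp.example.com
    sender: motion-ai@example.com
    recipient: caregiver@example.com

automation:
  - alias: E-mail on detection
    trigger:
      - platform: mqtt
        topic: motion/+/+/detected
    action:
      - service: notify.detection_email
        data:
          title: Motion detection
          message: '{{ trigger.topic }}: {{ trigger.payload }}'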

Install script fails at installing home assistant

When installing motion AI I get the following error:
I tested it on a RPI3 and an Ubuntu PC.

[info] Install supervisor Docker container
invalid reference format
Problem installing Home Assistant; check with "ha core info" command

Multiple, remote, system integration

MAI should provide a mechanism to integrate with a remote installation of MAI, i.e. a separate installation of MAI on a non-local network.

MAI currently publishes JSON payloads on MQTT topics for the events associated with annotations, detections, and the detection of specific entities (n.b. categories for person, vehicle, and animal are not (yet) published). This mechanism provides basic information to external consumers, e.g. another Home-Assistant system, but only when using a common MQTT broker.

  • Annotated :: group/device/camera/annotated
  • Detected :: group/device/camera/detected
  • Detected Entity :: group/device/camera/detected_entity

The JSON payload does not contain any image, only the attributes of the image and the corresponding information from the AI predictions.

As Home Assistant does not provide a mechanism to send MQTT to alternative brokers, an additional agent must subscribe to the appropriate topics (e.g. as above, but potentially others) and forward the information.

One potential mechanism is the Kuiper project, part of the EdgeX Foundry portfolio, which listens to MQTT and executes rules defined in SQL to generate JSON payloads to any specified MQTT broker.
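An alternative sketch (not something MAI provides today) is the Mosquitto broker's built-in bridging, which forwards selected topics to a remote broker; in mosquitto.conf, with the remote address as a placeholder:

connection motion-ai-bridge
address remote.example.com:1883
topic +/+/+/detected out 0
topic +/+/+/detected_entity out 0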
