jakowenko / double-take

Unified UI and API for processing and training images for facial recognition.

Home Page: https://hub.docker.com/r/jakowenko/double-take

License: MIT License

JavaScript 54.72% Dockerfile 0.38% Vue 44.60% SCSS 0.01% Shell 0.29%
frigate mqtt face-recognition compreface facebox deepstack home-assistant home-automation room-presence rekognition

double-take's Introduction


Double Take

Unified UI and API for processing and training images for facial recognition.

Why?

There's a lot of great open source software for facial recognition, but each behaves differently. Double Take was created to abstract away the complexities of the detection services and combine them into an easy-to-use UI and API.

Features

Supported Architecture

  • amd64
  • arm64
  • arm/v7

Supported Detectors

Supported NVRs

Integrations

Subscribe to Frigate's MQTT topics and process images for analysis.

mqtt:
  host: localhost

frigate:
  url: http://localhost:5000

When the frigate/events topic is updated, the API begins to process the snapshot.jpg and latest.jpg images from Frigate's API. These images are passed from the API to the configured detector(s) until a match is found that meets the configured requirements. To improve the chances of finding a match, processing of the images repeats until the number of retries is exhausted or a match is found.
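The retry loop described above can be sketched as follows. This is an illustrative Python sketch, not Double Take's actual implementation; fetch_image and run_detectors are hypothetical stubs standing in for the Frigate snapshot request and the configured detectors.

```python
import time


def fetch_image(event_id, attempt):
    # Hypothetical stand-in for requesting Frigate's latest.jpg/snapshot.jpg.
    return f"frame-{attempt}"


def run_detectors(image):
    # Hypothetical stand-in for posting the image to each configured detector.
    # For illustration, pretend a confident match appears on the third frame.
    matched = image == "frame-3"
    return [{"match": matched, "confidence": 66.07 if matched else 0}]


def process_event(event_id, attempts=10, delay=0, confidence=60):
    """Repeat until a result meets the confidence threshold or retries run out."""
    for attempt in range(1, attempts + 1):
        for result in run_detectors(fetch_image(event_id, attempt)):
            if result["match"] and result["confidence"] >= confidence:
                return {"attempts": attempt, "match": result}
        time.sleep(delay)  # optional pause between detection loops
    return {"attempts": attempts, "match": None}
```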

When the frigate/+/person/snapshot topic is updated the API will process that image with the configured detector(s). It is recommended to increase the MQTT snapshot size in the Frigate camera config.

cameras:
  front-door:
    mqtt:
      timestamp: False
      bounding_box: False
      crop: True
      quality: 100
      height: 500

If a match is found the image is saved to /.storage/matches/<filename>.

Trigger automations / notifications when images are processed.

If the MQTT integration is configured within Home Assistant, then sensors will automatically be created.

Notification Automation

This notification will work for both matches and unknown results. The message can be customized with any of the attributes from the entity.

alias: Notify
trigger:
  - platform: state
    entity_id: sensor.double_take_david
  - platform: state
    entity_id: sensor.double_take_unknown
condition:
  - condition: template
    value_template: '{{ trigger.to_state.state != trigger.from_state.state }}'
action:
  - service: notify.mobile_app
    data:
      message: |-
        {% if trigger.to_state.attributes.match is defined %}
          {{trigger.to_state.attributes.friendly_name}} is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.match.confidence}}% by {{trigger.to_state.attributes.match.detector}}:{{trigger.to_state.attributes.match.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% elif trigger.to_state.attributes.unknown is defined %}
          unknown is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.unknown.confidence}}% by {{trigger.to_state.attributes.unknown.detector}}:{{trigger.to_state.attributes.unknown.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% endif %}
      data:
        attachment:
          url: |-
            {% if trigger.to_state.attributes.match is defined %}
              http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% elif trigger.to_state.attributes.unknown is defined %}
               http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% endif %}
        actions:
          - action: URI
            title: View Image
            uri: |-
              {% if trigger.to_state.attributes.match is defined %}
                http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% elif trigger.to_state.attributes.unknown is defined %}
                 http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% endif %}
mode: parallel
max: 10

MQTT

Publish results to double-take/matches/<name> and double-take/cameras/<camera>. The number of results will also be published to double-take/cameras/<camera>/person and will reset back to 0 after 30 seconds.

Errors from the API will be published to double-take/errors.

mqtt:
  host: localhost

double-take/matches/david

{
  "id": "1623906078.684285-5l9hw6",
  "duration": 1.26,
  "timestamp": "2021-06-17T05:01:36.030Z",
  "attempts": 3,
  "camera": "living-room",
  "zones": [],
  "match": {
    "name": "david",
    "confidence": 66.07,
    "match": true,
    "box": { "top": 308, "left": 1018, "width": 164, "height": 177 },
    "type": "latest",
    "duration": 0.28,
    "detector": "compreface",
    "filename": "2f07d1ad-9252-43fd-9233-2786a36a15a9.jpg",
    "base64": null
  }
}
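The payload above can be consumed from any MQTT client, e.g. inside an on_message callback. A minimal Python sketch that extracts the interesting fields from such a message body (field names taken from the example):

```python
import json

# Abbreviated copy of the double-take/matches/<name> payload shown above.
payload = json.loads("""{
  "id": "1623906078.684285-5l9hw6",
  "camera": "living-room",
  "attempts": 3,
  "match": {
    "name": "david",
    "confidence": 66.07,
    "detector": "compreface",
    "filename": "2f07d1ad-9252-43fd-9233-2786a36a15a9.jpg"
  }
}""")

match = payload["match"]
summary = f'{match["name"]} seen on {payload["camera"]} @ {match["confidence"]}% via {match["detector"]}'
```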

double-take/cameras/back-door

{
  "id": "ff894ff3-2215-4cea-befa-43fe00898b65",
  "duration": 4.25,
  "timestamp": "2021-06-17T03:19:55.695Z",
  "attempts": 5,
  "camera": "back-door",
  "zones": [],
  "matches": [
    {
      "name": "david",
      "confidence": 100,
      "match": true,
      "box": { "top": 286, "left": 744, "width": 319, "height": 397 },
      "type": "manual",
      "duration": 0.8,
      "detector": "compreface",
      "filename": "dcb772de-d8e8-4074-9bce-15dbba5955c5.jpg",
      "base64": null
    }
  ],
  "misses": [],
  "unknowns": [],
  "counts": { "person": 1, "match": 1, "miss": 0, "unknown": 0 }
}

Notify Services

notify:
  gotify:
    url: http://localhost:8080
    token:

API Images

Match images are saved to /.storage/matches and can be accessed via http://localhost:3000/api/storage/matches/<filename>.

Training images are saved to /.storage/train and can be accessed via http://localhost:3000/api/storage/train/<name>/<filename>.

Latest images are saved to /.storage/latest and can be accessed via http://localhost:3000/api/storage/latest/<name|camera>.jpg.

Query Parameter   Description                      Default
box               Show bounding box around faces   false
token             Access token
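Assembling an image URL with these query parameters can be sketched as below; match_image_url is a hypothetical helper for illustration, not part of Double Take.

```python
from urllib.parse import urlencode


def match_image_url(base, filename, token=None, box=False):
    # Builds <base>/api/storage/matches/<filename> with optional box/token params.
    params = {}
    if box:
        params["box"] = "true"
    if token:
        params["token"] = token
    query = f"?{urlencode(params)}" if params else ""
    return f"{base}/api/storage/matches/{filename}{query}"
```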

UI

The UI is accessible via http://localhost:3000.

  • Matches: /
  • Train: /train
  • Config: /config
  • Access Tokens: /tokens (if authentication is enabled)

Authentication

Enable authentication to password protect the UI. This is recommended if running Double Take behind a reverse proxy which is exposed to the internet.

auth: true

API

Documentation can be viewed on Postman.

Usage

Docker Compose

version: '3.7'

volumes:
  double-take:

services:
  double-take:
    container_name: double-take
    image: jakowenko/double-take
    restart: unless-stopped
    volumes:
      - double-take:/.storage
    ports:
      - 3000:3000

Configuration

Configurable options are saved to /.storage/config/config.yml and are editable via the UI at http://localhost:3000/config. Default values do not need to be specified in configuration unless they need to be overwritten.

auth

# enable authentication for ui and api (default: shown below)
auth: false

token

# if authentication is enabled
# age of access token in api response and mqtt topics (default: shown below)
# expressed in seconds or as a time span string in zeit/ms format
# https://github.com/vercel/ms
token:
  image: 24h
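For illustration, a simple span like 24h converts to seconds as below. The project delegates this to the ms package; span_to_seconds is a hypothetical helper that only covers plain s/m/h/d units.

```python
import re

UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}


def span_to_seconds(value):
    """Interpret a bare number as seconds, or parse strings like '24h'."""
    if isinstance(value, (int, float)):
        return value
    m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([smhd])", value.strip())
    if not m:
        raise ValueError(f"unsupported span: {value!r}")
    return float(m.group(1)) * UNITS[m.group(2)]
```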

mqtt

# enable mqtt subscribing and publishing (default: shown below)
mqtt:
  host:
  username:
  password:
  client_id:

  tls:
    # cert chains in PEM format: /path/to/client.crt
    cert:
    # private keys in PEM format: /path/to/client.key
    key:
    # optionally override the trusted CA certificates: /path/to/ca.crt
    ca:
    # if true the server will reject any connection which is not authorized with the list of supplied CAs
    reject_unauthorized: false

  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    #  mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: double-take/matches
    # mqtt topic where matches are published by camera name
    cameras: double-take/cameras

detect

# global detect settings (default: shown below)
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 60
    # hours to keep match images until they are deleted
    purge: 168
    # minimum area in pixels to consider a result a match
    min_area: 10000

  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a name as unknown
    confidence: 40
    # hours to keep unknown images until they are deleted
    purge: 8
    # minimum area in pixels to keep an unknown result
    min_area: 0
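One plausible reading of how the confidence and min_area thresholds above interact, as a hedged sketch; classify and its argument names are illustrative, not Double Take's internals.

```python
def classify(result, match_conf=60, unknown_conf=40, min_area=10000):
    """Bucket a detector result into match / unknown / miss."""
    area = result["box"]["width"] * result["box"]["height"]
    if result["confidence"] >= match_conf and area >= min_area:
        return "match"
    if result["confidence"] >= unknown_conf:
        return "unknown"
    return "miss"
```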

frigate

# frigate settings (default: shown below)
frigate:
  url:

  # if double take should send matches back to frigate as a sub label
  # NOTE: requires frigate 0.11.0+
  update_sub_labels: false

  # stop the processing loop if a match is found
  # if set to false all image attempts will be processed before determining the best match
  stop_on_match: true

  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 0

  # object labels that are allowed for facial recognition
  labels:
    - person

  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 10
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 0

  image:
    # height of frigate image passed for facial recognition
    height: 500

  # only process images from specific cameras
  cameras:
    # - front-door
    # - garage

  # only process images from specific zones
  zones:
    # - camera: garage
    #   zone: driveway

  # override frigate attempts and image per camera
  events:
    # front-door:
    #   attempts:
    #     # number of times double take will request a frigate latest.jpg for facial recognition
    #     latest: 5
    #     # number of times double take will request a frigate snapshot.jpg for facial recognition
    #     snapshot: 5
    #     # process frigate images from frigate/<camera-name>/person/snapshot topic
    #     mqtt: false
    #     # add a delay expressed in seconds between each detection loop
    #     delay: 1

    #   image:
    #     # height of frigate image passed for facial recognition (only if using default latest.jpg and snapshot.jpg)
    #     height: 1000
    #     # custom image that will be used in place of latest.jpg
    #     latest: http://camera-url.com/image.jpg
    #     # custom image that will be used in place of snapshot.jpg
    #     snapshot: http://camera-url.com/image.jpg

cameras

# camera settings (default: shown below)
cameras:
  front-door:
    # apply masks before processing image
    # masks:
    #   # list of x,y coordinates to define the polygon of the zone
    #   coordinates:
    #     - 1920,0,1920,328,1638,305,1646,0
    #   # show the mask on the final saved image (helpful for debugging)
    #   visible: false
    #   # size of camera stream used in resizing masks
    #   size: 1920x1080

    # override global detect variables per camera
    # detect:
    #   match:
    #     # save match images
    #     save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
    #     base64: false
    #     # minimum confidence needed to consider a result a match
    #     confidence: 60
    #     # minimum area in pixels to consider a result a match
    #     min_area: 10000

    #   unknown:
    #     # save unknown images
    #     save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
    #     base64: false
    #     # minimum confidence needed before classifying a match name as unknown
    #     confidence: 40
    #     # minimum area in pixels to keep an unknown result
    #     min_area: 0

    # snapshot:
    #   # process any jpeg encoded mqtt topic for facial recognition
    #   topic:
    #   # process any http image for facial recognition
    #   url:

detectors

# detector settings (default: shown below)
detectors:
  compreface:
    url:
    # recognition api key
    key:
    # number of seconds before the request times out and is aborted
    timeout: 15
    # minimum required confidence that a recognized face is actually a face
    # value is between 0.0 and 1.0
    det_prob_threshold: 0.8
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # comma-separated slugs of face plugins
    # https://github.com/exadel-inc/CompreFace/blob/master/docs/Face-services-and-plugins.md
    # face_plugins: mask,gender,age
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

  rekognition:
    aws_access_key_id: !secret aws_access_key_id
    aws_secret_access_key: !secret aws_secret_access_key
    aws_region:
    collection_id: double-take
    # require opencv to find a face before processing with detector
    opencv_face_required: true
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

  deepstack:
    url:
    key:
    # number of seconds before the request times out and is aborted
    timeout: 15
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

  facebox:
    url:
    # number of seconds before the request times out and is aborted
    timeout: 15
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

opencv

# opencv settings (default: shown below)
# docs: https://docs.opencv.org/4.6.0/d1/de5/classcv_1_1CascadeClassifier.html
opencv:
  scale_factor: 1.05
  min_neighbors: 4.5
  min_size_width: 30
  min_size_height: 30

schedule

# schedule settings (default: shown below)
schedule:
  # disable recognition if conditions are met
  disable:
    # - days:
    #     - monday
    #     - tuesday
    #   times:
    #     - 20:00-23:59
    #   cameras:
    #     - office
    # - days:
    #     - tuesday
    #     - wednesday
    #   times:
    #     - 13:00-15:00
    #     - 18:00-20:00
    #   cameras:
    #     - living-room
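A disable rule like the commented example above can be evaluated as sketched below; the field names mirror the config, but is_disabled and its logic are a simplified illustration, not the project's scheduler.

```python
from datetime import datetime


def is_disabled(rules, camera, now):
    """Return True if any rule disables recognition for this camera right now."""
    day = now.strftime("%A").lower()  # e.g. "monday"
    hhmm = now.strftime("%H:%M")
    for rule in rules:
        # A missing key means the rule applies to all cameras/days.
        if camera not in rule.get("cameras", [camera]):
            continue
        if day not in rule.get("days", [day]):
            continue
        for span in rule.get("times", []):
            start, end = span.split("-")
            if start <= hhmm <= end:  # HH:MM strings compare chronologically
                return True
    return False
```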

notify

# notify settings (default: shown below)
notify:
  gotify:
    url:
    token:
    priority: 5

    # only notify from specific cameras
    # cameras:
    #   - front-door
    #   - garage

    # only notify from specific zones
    # zones:
    #   - camera: garage
    #     zone: driveway

time

# time settings (default: shown below)
time:
  # defaults to iso 8601 format with support for token-based formatting
  # https://github.com/moment/luxon/blob/master/docs/formatting.md#table-of-tokens
  format:
  # time zone used in logs
  timezone: UTC

logs

# log settings (default: shown below)
# options: silent, error, warn, info, http, verbose, debug, silly
logs:
  level: info

ui

# ui settings (default: shown below)
ui:
  # base path of ui
  path:

  pagination:
    # number of results per page
    limit: 50

  thumbnails:
    # value between 0-100
    quality: 95
    # value in pixels
    width: 500

  logs:
    # number of lines displayed
    lines: 500

telemetry

# telemetry settings (default: shown below)
# self hosted version of plausible.io
# 100% anonymous, used to help improve project
# no cookies and fully compliant with GDPR, CCPA and PECR
telemetry: true

Storing Secrets

Note: If using one of the Home Assistant Add-ons then the default Home Assistant /config/secrets.yaml file is used.

mqtt:
  host: localhost
  username: mqtt
  password: !secret mqtt_password

detectors:
  compreface:
    url: localhost:8000
    key: !secret compreface_key

The secrets.yml file contains the corresponding value assigned to the identifier.

mqtt_password: <password>
compreface_key: <api-key>
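Resolving !secret references can be illustrated with a simple text substitution; resolve_secrets is only a sketch of the idea, not Double Take's actual config loader.

```python
import re


def resolve_secrets(config_text, secrets):
    """Replace each '!secret <name>' with the value from the secrets mapping."""
    def sub(match):
        name = match.group(1)
        if name not in secrets:
            raise KeyError(f"missing secret: {name}")
        return secrets[name]

    return re.sub(r"!secret\s+(\w+)", sub, config_text)
```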

Development

Run Local Containers

Service   URL
UI        localhost:8080
API       localhost:3000
MQTT      localhost:1883
# start development containers
./.develop/docker up

# remove development containers
./.develop/docker down

Build Local Image

./.develop/build

Donations

If you would like to make a donation to support development, please use GitHub Sponsors.

double-take's People

Contributors

alfiegerner, bigbangus, jakowenko, konturn, pkulak, pospielov


double-take's Issues

save_unknown also saves images with high confidence

Maybe there should be two env vars, one save_unknown and one save_all_faces or something.
Right now save_unknown also includes matches with 90% confidence in the train/manage GUI, even though I have that setting at about 70-75.

Mqtt snapshots and Config cameras

Hello @jakowenko, not sure if you are noticing this too, but after installing the 0.6 release I am seeing a few odd things:

  • MQTT subscription for snapshots is not honoring the cameras present in the config YAML. Instead it seems to process every entity sent by Frigate, maybe because of 'frigate/cameras/+/person/snapshots'.
  • Config has a minimum match confidence of 95, but the UI does not treat a recognition of 99%, or anything greater than 95%, as a match. Only at 95.xx% is the tag set to green; otherwise it remains red.
  • Updated HA to the latest version, and it seems like Double Take might have stopped publishing sensor data via the API.

As always, thank you for your time and effort!

Need to configure cameras?

Hi,

Sorry if I am being thick, but I am a bit confused about whether all cameras are auto-detected, since on my first try at Double Take it seems to only focus on two of my four cameras in Frigate.

The image in the README.md has a YAML config example that also configures custom url and topic values for the cameras, but how needed this is, or when to use it, is not fully clear.

Somewhat related: I haven't fully understood whether the snapshots provided at Frigate's HTTP URL are the same (size-wise, etc.) as the ones sent via MQTT.

Timezone/Date format

Hi, thanks for this awesome project.

I found a couple of issues, as I'm based in Sweden:

Docs are missing for the environment variable 'TZ'.
Also, it would be nice to parametrize the date format, or send the Unix timestamp/UTC.

train/add/name not working

Hi,

Congrats on the new version, looking really nice.

Just coming back to play with this and I ran into an error with the train feature. I've used this functionality before with success, but I'm having issues with v.90.

I can see inside the Docker shell that storage is set up correctly, i.e. within the Docker OS I can see files at /.storage/train/alex. But when I try to load this via a POST call:

curl --location --request POST 'localhost:3010/api/train/add/alex'

I get 0 files queued:

info: alex: queuing 0 file(s) for training
info: alex: training complete in 5.21 sec

This is starting with a clean (deleted) database.db.

Any idea what I might be missing?

Ability to define minimum box size

Fantastic project, which slotted perfectly into my Frigate + HA setup. Thank you.

I'm using DeepStack for the facial recognition here. I noticed that sometimes Frigate picks up a person quite far away and sends that image over to double take, even after playing with the min/max area filters.

Because of this, the face box for facial recognition is very small and for me, produces results with false high confidence matches despite not much face being present. Examples below:

[example screenshots]

Can I set the minimum face 'box size' or set anything else to help results?

Thanks!

Only latest and beta available at dockerhub + potential bug related to compreface

Hi,

I noticed that only latest and beta are available on Docker Hub, not specific versions/releases.

For example, my double-take LXC container has crashed, and when setting things up again from scratch I'm experiencing a bug(?) with 0.5.1 and CompreFace. Before reporting it as an actual bug I'd like to check with an older version, since I know it has worked before, to rule out a configuration error on my side. Not having all the tags on Docker Hub makes it harder to roll back to a specific version to test this.

The thing I'm experiencing is that I get this error in the logs when processing images:

compreface process error: Cannot read property 'map' of undefined

feat: webgui for training unmatched/matched faces

This is a pretty big feature, but I wanted to share the idea, and I might be able to help out a bit as well since I'm a web developer myself. Anyhow, here's the idea described at an epic level.

It would be awesome to have a web interface for browsing the files stored under storage/matches that lets you "verify" or "dismiss" a matched photo. If you verify a photo it will be added to the training database for that person, and if you click dismiss it will be untrained (if we got a false positive for someone else's face).

Also, it would be awesome to have all photos with detected faces below the configured CONFIDENCE moved to some kind of "unmatched" directory so you can easily tag the photos for training.

Configuration via YAML instead of ENV variables

I've been working on updating the API to accept configuration variables via a YAML file rather than passing them directly to the container as environment variables. Wanted to get some feedback, but here is what a more complex config would look like.

server:
  port: 3000

mqtt:
  host: localhost
  password: test
  topics:
    matches: double-take/matches
    frigate: frigate/events

detectors:
  compreface:
    url: http://localhost:8000
    key: xxx-xxx-xxx-xxx-xxx
  deepstack:
    url: http://localhost:8001

frigate:
  url: http://localhost:4000
  image:
    height: 500
  attempts:
    latest: 15
    snapshot: 0
  cameras:
    - office
  zones:
    - name: zone-name
      camera: office

confidence:
  match: 50
  unknown: 30

save:
  matches: true
  unknown: true

purge:
  matches: 48
  unknown: 12

Usage of Age / Gender etc. from Compreface

Hello David,
Thank you very much for sharing your great project!
I think it is a must-have for Smart Home enthusiasts...

What do you think about adding support for the new plugins from Compreface like Age and Gender recognition?

I would like to read your thoughts...

Keep up helping the community :D

Trouble with tokens - reverse proxy

I have a bit of a strange issue that I've noticed with auth, reverse proxy, and tokens.

I have my HA notification using the following URLs for both preview and image links, just like the documentation.
https://double-take.mydomain.com/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}

The preview image will show up in the notification, but clicking the link leads to an error page displaying {"error":"Unauthorized"}

Loading the UI, finding the same image, and comparing the links shows that the last part of the token is different.

For instance, the URL for the image I get from the notification would have something like:
...token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb3V0ZSI6InN0b3JhZ2UiLCJpYXQiOjE2Mjk4MzE0MTgsImV4cCI6MTYyOTgzNTAxOH0.huZHokRVXgSyfYGdYULEDtntf-Hvt1BHgg7JUdZDw64
and the URL for the image I get from the double-take UI would have something like:
...token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb3V0ZSI6InN0b3JhZ2UiLCJpYXQiOjE2Mjk4NDk0ODcsImV4cCI6MTYyOTg1MzA4N30.77ji-R4ILNos6ooQ7EpWqheliN-5hI9MnuNKKLdwPDc

It seems weird that the image will show fine in the notification preview, but not actually load when tapping on the image link.

Any way to deal with self signed certs on recognize endpoint

Trying to send a snap from a camera and I cannot get it to work for the life of me...

http://docker.home.lab:8203/api/recognize?url=https://10.0.9.40/snap.jpeg

Getting this error:

processing double-take: 2b290cf1-b466-46a7-9774-642e8e724be8
url validation error: self signed certificate
response:
{
  id: '2b290cf1-b466-46a7-9774-642e8e724be8',
  duration: 0.03,
  timestamp: '6/10/2021, 8:06:59 PM',
  attempts: 0,
  camera: 'double-take',
  zones: [],
  matches: []
}
6/10/2021, 8:06:59 PM
done processing double-take: 2b290cf1-b466-46a7-9774-642e8e724be8 in 0.03 sec

The camera also has an HTTP endpoint, but I'm not sure why it doesn't pass validation...
http://docker.home.lab:8203/api/recognize?url=http://10.0.9.40/snap.jpeg

Error:

processing double-take: 0f60d94e-0169-4fc5-a269-4adc7b04515a
url validation error: Request failed with status code 400
response:
{
  id: '0f60d94e-0169-4fc5-a269-4adc7b04515a',
  duration: 0.01,
  timestamp: '6/10/2021, 8:10:13 PM',
  attempts: 0,
  camera: 'double-take',
  zones: [],
  matches: []
}
6/10/2021, 8:10:13 PM
done processing double-take: 0f60d94e-0169-4fc5-a269-4adc7b04515a in 0.01 sec

Camera is a Ubnt Doorbell camera

Train tab gets stuck on Training in Progress if CompreFace is down

Steps to reproduce:

  • Stop CompreFace (for me, it needs to be restarted after a reboot, even if I run it with the -d option on docker-compose)
  • Select a person profile in Train tab
  • Add a photo (+)

Now you get a screen like this any time you open the Train tab:
[screenshot: Train tab stuck on "Training in Progress"]

Double-Take Version:
1.0.0-167758f:Beta

Compreface Version:
0.5.1

For any more info, happy to share :)

docs: Node-Red example / save match to home assistant media-directory

It would be very nice to have a documented example of how to send notifications with an attached photo, as shown in the GitHub readme.
From what I understand you can fetch the image from {FRIGATE_URL}/clips/{msg.payload.camera}/{msg.payload.id}.jpg. Is this the preferred way, or do you store the photo to Home Assistant first somehow?

min_area_match doesn't seem to be working

I am getting a lot of "matches" for faces that are really small (20x20). I have the default set:

objects:
  face:
    min_area_match: 10000

but [example image] these keep being matched...

Also, unrelated... does anyone find that DeepStack will take giant leaps at guesses, while CompreFace always has super certainty in its guesses?

Changing Attempts Timing/Delay

Curious about the way double-take is grabbing snapshots/latest files from the API. A few things I do want to mention:

When trying to grab latest files and grabbing snapshots from MQTT, double-take does this in parallel. What happens is that Frigate will use a snapshot that might be older than the match found from the latest.jpg processing thread. Meaning if there are multiple cameras, and double-take is used for presence detection, I could potentially be matched from the snapshot thread in a place I am no longer in. I decided to turn off the MQTT snapshot process for this reason.

The other thing I noticed is the attempts value. When grabbing latest images from Frigate, it does so sequentially without any delay/timing. From the logs, double-take makes 20 attempts in 1-2 seconds and doesn't find a match. But in the Frigate event, my face might not show up until 3-4 seconds after the event starts, so it misses my face.

This has to do with FPS: as I am using 5 FPS on the camera, if the attempts try to grab 20 frames in 1-2 seconds it will be analyzing the same images over and over.

Wondering if we can add a delay between attempts for this reason.

I also noticed that switching to a slower model actually helps, as the delay between attempts is greater.

Local train folder has 0 images, can't connect to MQTT

I have this running in Unraid from Docker Hub, but when I try to train it using the API, I get an error: connection refused 127.0.0.1.

I assigned the variables in Docker, including the key MQTT_HOST with the value being my MQTT broker IP, as well as the port, user, and password.

My logs say MQTT is attempting to connect.

When attempting to train from a Frigate snapshot, I get a 404. This leads me to believe it cannot reach Frigate, but my logs are not giving me any errors like MQTT is.

Also, I have the appdata shared on Unraid, but cannot write to the "train" folder.

Can I try the beta version with the webgui? LOL

Frigate Error - Connection already opened

After I finally got a Double Take instance working, thanks to Jako, I now run into this scenario.
My Frigate Docker container inside Home Assistant OS seems frozen after some time (2-3 hours); looking at the container logs I can see a lot of messages like:


2021-08-03T11:21:16Z {'REMOTE_ADDR': '127.0.0.1', 'REMOTE_PORT': '40420', 'HTTP_HOST': 'ccab4aaf-frigate', (hidden keys: 24)} failed with OperationalError
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/flask_sockets.py", line 40, in __call__
    handler, values = adapter.match()
  File "/usr/local/lib/python3.8/dist-packages/werkzeug/routing.py", line 1945, in match
    raise NotFound()
werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/gevent/pywsgi.py", line 999, in handle_one_response
    self.run_application()
  File "/usr/local/lib/python3.8/dist-packages/geventwebsocket/handler.py", line 87, in run_application
    return super(WebSocketHandler, self).run_application()
  File "/usr/local/lib/python3.8/dist-packages/gevent/pywsgi.py", line 945, in run_application
    self.result = self.application(self.environ, self.start_response)
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2464, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python3.8/dist-packages/flask_sockets.py", line 48, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2450, in wsgi_app
    response = self.handle_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1867, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.8/dist-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.8/dist-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 1948, in full_dispatch_request
    rv = self.preprocess_request()
  File "/usr/local/lib/python3.8/dist-packages/flask/app.py", line 2242, in preprocess_request
    rv = func()
  File "/opt/frigate/frigate/http.py", line 89, in _db_connect
    database.connect()
  File "/usr/local/lib/python3.8/dist-packages/peewee.py", line 3061, in connect
    raise OperationalError('Connection already opened.')
peewee.OperationalError: Connection already opened.


This problem seems related to Frigate and Double Take, as I can read in this issue: blakeblackshear/frigate#920.

@jakowenko What do you think about it?
Thanks so much

deepstack process error

I'm having an issue lately, getting the same error after frigate recognizes a person:

2021-05-02T13:49:43.842Z
processing door: 1619963382.771798-7j16g5
deepstack process error: Cannot read property 'map' of undefined
Cannot read property 'duration' of undefined
deepstack process error: Cannot read property 'map' of undefined

It was running great, until it broke... :)

unauthorised error using API

hi there,

I have enabled authentication on the UI which I understand is also meant to enable authentication on the API.

I have validated that the HA entities for double_take_name have the token attribute in them, and that HA is passing token={{trigger.to_state.attributes.token}} into the image URL, but I always get an 'unauthorised' error.

I have tried manually pulling the attributes from the HA sensor and recreating the URL using filename and token, but that comes back with 'unauthorised' as well.

what is the best way to troubleshoot?
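To rule out a templating problem, it can help to rebuild the URL by hand from the sensor attributes and compare it character-by-character with what HA sends. A minimal sketch, assuming a hypothetical base URL and route (the actual API path may differ):

```javascript
// Hypothetical helper that rebuilds the image URL from the HA sensor
// attributes described above. The base URL and the /api/storage/matches
// route are assumptions for illustration, not the project's confirmed API.
const buildImageUrl = (baseUrl, attributes) =>
  `${baseUrl}/api/storage/matches/${encodeURIComponent(attributes.filename)}` +
  `?token=${encodeURIComponent(attributes.token)}`;

console.log(
  buildImageUrl('http://localhost:3000', {
    filename: 'front-door.jpg',
    token: 'abc123',
  })
);
// → http://localhost:3000/api/storage/matches/front-door.jpg?token=abc123
```

If the hand-built URL works in a browser but the templated one does not, the token attribute is likely stale or being URL-mangled by the template.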

Ability to delete Person

Currently you can add a new person, but I don't think there is a way to delete a user.

I originally set the names as 'firstname lastname'; the space caused some problems in HA, so I recreated them as 'firstname_lastname', but I don't seem to be able to delete the original names. While you could force '_' where names contain spaces, I think the ability to delete would be more valuable.

objects.face.min_area_match

David, hi!
In config.yml there is a parameter: objects.face.min_area_match: 15000.
When Frigate captures images, Double Take compares them all in a row. Could you make Double Take compare only those that fit the specified "box" parameters?
Why compare images that do not match what is specified in config.yml?
For example, here Double Take makes 3 unnecessary comparisons when the "box" is smaller than objects.face.min_area_match: 15000.
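The requested pre-filter could be as simple as comparing the bounding-box area against the configured threshold before sending the image to any detector. A sketch, assuming a hypothetical box shape with `width`/`height` fields (not Double Take's actual internal representation):

```javascript
// Sketch of the requested pre-filter: skip a detection when Frigate's
// bounding box is smaller than objects.face.min_area_match, instead of
// sending it to the detectors. Field names here are assumptions.
const MIN_AREA_MATCH = 15000;

const shouldProcess = (box) => box.width * box.height >= MIN_AREA_MATCH;

console.log(shouldProcess({ width: 200, height: 100 })); // 20000 px² → true
console.log(shouldProcess({ width: 100, height: 100 })); // 10000 px² → false
```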

image

bug: require restart of server to be able to train images from storage

If you add some images to storage/train/TAG and call the API /train/add/TAG, it doesn't pick up any files until the application restarts. I haven't checked the code, but it seems it only scans/adds the files from that directory to the database as untrained when the application starts up. Maybe it should scan for new files, so we can add them while the application is running without having to restart.

How much do you train?

Not an issue, but at the start I trained every photo, even the tiny small ones, and have now started retraining. Does detection get better if I only train the pictures where the face is clearly visible in the frame (front and side) and skip the small ones?

[FR] Snapshot from Camera Directly

Hello @jakowenko , Hope you doing well.
What do you think about getting the snapshot from the camera's snapshot URL directly instead of via MQTT and Frigate?
Most of the time people use the camera's sub-stream for object detection in Frigate and the high-res stream for recording. Frigate decodes the sub-stream and posts a low-res snapshot to MQTT, which makes face recognition and training less accurate.

So what if we grabbed the image directly from the camera on some trigger and let Double Take use that for face recognition or training?
It could work like Frigate does: provide camera-level configuration and honor the camera URL if given, otherwise use the Frigate topic for the snapshot. Either one or the other, but probably not both.

Benefits,

  • Double Take would be decoupled from Frigate and could then be used with BlueIris or any other surveillance system.

How do you feel about this?

Thank you again for your time and effort!

Question: How to start RE-Train for new images in training-Folder?

I am filling the train images folder with new images.
How do I start retraining of all folders?
Or of a single folder, after copying images into it?

If I switch the dropdown to a certain "person/folder", the sync button is still inactive.
Do I have to restart the whole container for that?

face-retrain

I pulled the latest beta, but it still shows 10x... instead of 10.2?
image

Feature Request: Move image when trained and update name

Hello,

first of all, thank you for your amazing work!

I would like to propose this function: when save.unknown is true in the config (the default) and I train an unknown image in the matches section, it would be great if the image were automatically moved/copied to .storage/train/{matchname}.

It would also be nice if a "modified" (trained/untrained) image were re-evaluated, so that unknown images are not unknown again, and known images are corrected with the right name.

I'm using the latest beta as of now.

What do you think about this?

/tmp filling up inside container - system does not purge automatically

I'm noticing some behaviour specific to the /tmp folder of the double-take container. I noticed it because the directory suddenly grows to tens of gigabytes and the OS runs out of disk space.

Is this normal? Some of the files I see in this dir:

1629476245.068447-s3lhd2-latest-4d0196c0-0698-4dc9-8560-c2f250f6fd2c.jpg fece6499-e89a-45fb-b34f-56df5c27bec7.jpg
1629476245.068447-s3lhd2-latest-4d6c54eb-ebe4-4ce8-986d-932f624e42c3.jpg fed7133a-879e-4d71-8746-54b6e83338c0-mqtt-3a8f2fd5-4de7-45b8-9409-09eb1af130d7.jpg
1629476245.068447-s3lhd2-latest-54723bc1-79eb-4453-90d7-2babd1831194.jpg fedcda88-94f0-4980-8f75-a6042cdd5cad-mqtt-0c00db20-50c0-458f-b996-e6cdd0eb7dd6.jpg

bash-5.0# ls -l | wc -l
114848

I don't think this should be the case / be that large. I could remove the files manually, I think, but last time I stopped the container and removed/rebuilt the image to get rid of the issue.

Setup: 6 cams running at 5 fps / 1080p resolution, purge set to 168. (FYI, I didn't have this issue before; I've been using it for some time.) Double Take v0.10.1.

Views anyone?

Feature Request: Compreface Mask (Covid19) recognition.

Hello there!

I just read in the new features from Compreface 0.6.0:

Mask recognition plugin. We added a new plugin that returns if the person wears a mask and if yes - if it's worn correctly

I know that 99% don't need this feature, but if we have this info available, why not have double-take read it?

Have fun!

status code 401

latest version:

21-08-07 08:50:50 error: Request failed with status code 401
21-08-07 08:50:50 info: arco: queuing 0 file(s) for training
21-08-07 08:50:50 info: arco: training complete in 0.04 sec

I get this error and cannot train.
Manually uploading an image works, though.

beta:
yesterday i installed the beta version and i get this error:

21-08-06 18:56:25 error: Error: save url error: Request failed with status code 401
at createError (/double-take/api/node_modules/axios/lib/core/createError.js:16:15)
at settle (/double-take/api/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/double-take/api/node_modules/axios/lib/adapters/http.js:260:11)
at IncomingMessage.emit (events.js:412:35)
at endReadableNT (internal/streams/readable.js:1317:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21)
21-08-06 18:56:25 info: arco: queuing 0 file(s) for training
21-08-06 18:56:25 info: arco: training complete in 0 sec

config:

# Double Take
auth: true

mqtt:
  host: 10.0.102.100
  username: ***
  password: ***

frigate:
  url: http://10.0.102.100:5000
  image:
    height: 500
  attempts:
    latest: 10
    snapshot: 0
  cameras:
    - achtertuin
    - oprit
    - deurbel

detectors:
  compreface:
    url: http://10.0.102.100:8000
    key: ***

Home Assistant addon

Would it be possible to make a Home Assistant addon? Since it's based on Docker, it should be an easy task?

Use Frigate mask definitions

Great project!! Thank you for all the work.

It would be great if we could copy / paste motion masks from our frigate setup.

Or maybe there is something already similar.

My problem is that face detection is running against my TV as a second person.

mqtt topic state

Hi! Very cool project! Could you add the following:

  1. a status MQTT topic during start and operation, for example, state = online

Thank you for paying attention to my suggestions!
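A common MQTT pattern for this is a retained availability topic plus a last-will message, so the broker itself flips the state to "offline" if the process dies. A sketch of the connect options (the topic name is an assumption, not an existing Double Take topic):

```javascript
// Sketch of a status topic using an MQTT last-will message. With mqtt.js,
// the broker publishes the retained will payload if the client disconnects
// ungracefully; the client publishes "online" itself after connecting:
//   const client = mqtt.connect('mqtt://localhost', connectOptions);
//   client.on('connect', () =>
//     client.publish(STATUS_TOPIC, 'online', { retain: true }));
const STATUS_TOPIC = 'double-take/available'; // hypothetical topic name

const connectOptions = {
  will: { topic: STATUS_TOPIC, payload: 'offline', retain: true },
};

console.log(connectOptions.will.topic); // double-take/available
```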

Everything seems to work but Double Take did not catch snapshot

Hi,
Thanks for your great project!
I have configured Frigate (v1.13-0.8.4, Coral TPU) on my Home Assistant OS (v2021.7.4) and a VM with Double Take (v0.9.1) and CompreFace (v0.5.1).
The training part of Double Take works great, and for some days so did the Frigate MQTT snapshot integration, but for the last few days the Frigate part has not been working anymore...
I have tested all the possible permutations :-)

  • Fresh new installation of Home Assistant -> Nothing
  • Changing all the parameters of DoubleTake config -> Nothing
  • Changing all the parameters of Frigate config -> Nothing
  • Clean install of MQTT Mosquito on HomeAssistant -> Nothing

I know it's not easy to understand why this happens only to me, but is there a way to debug what happens between Frigate and Double Take?

I have attached my Frigate and DoubleTake config.
Thanks so much for any possible help


file: frigate.yml

mqtt:
  host: 192.168.0.40
  user: *****
  password: *****

logger:
  # Optional: default log level (default: shown below)
  default: info
  # Optional: module by module log level configuration
  logs:
    frigate.mqtt: debug

cameras:
  bunny:
    ffmpeg:
      inputs:
        - path: rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov
          roles:
            - detect
            - clips
    width: 240
    height: 160
    fps: 5
    objects:
      track:
       - person
    snapshots:
      # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
      # This value can be set via MQTT and will be updated in startup based on retained value
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: False
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: False
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: original size)
      height: 175
      # Optional: Camera override for retention settings (default: global values)
      retain:
        # Required: Default retention days (default: shown below)
        default: 2
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: False
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: False
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 500
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      required_zones: []
    rtmp:
      # Required: Enable the live stream (default: True)
      enabled: False
detectors:
  coral:
    type: edgetpu


file: doubletake config

# Double Take
server:
  port: 3000

mqtt:
  host: 192.168.0.40
  username: *****
  password: *****
  topics:
    frigate: frigate/events
    matches: double-take/matches
    cameras: double-take/cameras

confidence:
  match: 60
  unknown: 10

objects:
  face:
    min_area_match: 10000

save:
  matches: true
  unknown: true

purge:
  matches: 168
  unknown: 8

frigate:
  url: http://192.168.0.40:5000
  image:
    height: 500
  attempts:
    latest: 10
    snapshot: 0
  cameras:
    #- ufficio
    - sala
    - bunny
    
detectors:
  compreface:
    url: http://192.168.0.96:8000
    key:  # key from recognition service in created app

time:
  format: F
  timezone: Europe/Rome

use snapshot image from the mqqt message instead of fetching from the api

Currently the snapshot is fetched from Frigate by posting to 0.0.0.0:PORT/recognize from the MQTT client, but since Frigate publishes the snapshot's binary image with the MQTT message, maybe we could get better performance by just using the data we already received with the message?

Or is there another reason to fetch the image again with axios? Maybe I'm missing something.

Question: set matched to unknown

Hi, thanks for this project, really loving it.

Just wondering: I have instances where an unknown person (a delivery driver) is matched as a family member. How can I train the image as unknown? Do I just create a new folder/name 'unknown', which I guess is like creating an identity called unknown? Or is there a way to train the model to move it to the true unknown category?

I'm using compreface.

Cheers!

Mass learn and unlearn

As it's sometimes easy to click learn on too many things, I noticed I clicked learn on some hits that should not be hits.

Would it be possible to implement "unlearn / forget" and relearn? (If, for example, I use DeepStack and also install Facebox, I could click a button and it would train Facebox with all the images already learned.)

Get registered persons from DeepStack

Hi! Is it possible to use the trained persons from DeepStack in Double Take? I registered the persons and faces over API in DeepStack. Can I somehow retrieve the information in Double Take?

recent match found, even if new faces appear in sequential following frames

When a camera detects a match, processing gets paused, even if the following snapshots of the same event contain new faces.
Testing this with a friend today, it first matched my face, and when I told him to get in front of the camera as well, it didn't detect him. Testing the latest snapshot from Frigate manually in CompreFace detects both his face and mine, though his is below the confidence threshold, so it should still be saved since I run Double Take with save_unknown=true.

The logs show "paused processing CAMERA_NAME, recent match found", but I think it should keep processing, as new faces can appear.

[FR] Add Zone support from Frigate

Would it be possible to only pass images to Deepstack when the motion from Frigate is within a Zone?

Use Case:

I have a camera looking down my drive.
I have a Motion mask on the road as I am not bothered about cars / people in the road.
I then have a zone this side (drive) of the footpath / sidewalk.

I have a zone defined as:
drive_zone_0
under "drive" camera

I am not bothered / interested in Face Detection of people walking up and down the street. But I would like to run Face Detection on people who walk down my drive towards the house. ( drive_zone_0 )

Question: Does mobile train work with iphone (take photo or upload from library)?

It doesn't matter how I ingest the photo to train with: DeepStack confirms success, and Facebox says no face detected.

What am I missing?

No errors in the facebox container. It just won't detect/train.

I am starting it within docker-compose.

facebox:
    image: machinebox/facebox
    container_name: facebox
    restart: always
    ports:
      - 8000:8080
    environment:
      - "MB_KEY=<redacted>"
    networks:
      - intproxy

Problem retraining from UI

Hi,

I have some faces wrongly allocated. I can select them in the UI, pick the correct person, and select the tick to re-designate them, as in previous versions, but the logs appear to show that no images are being selected for retraining. E.g., with 3 images selected by this process, the logs show 0:

info: testName: queuing 0 file(s) for training
info: testName: training complete in 3.81 sec

So all these images remain in my matched tab with incorrect names.

Not sure if relevant but also seeing lots of these in the logs:

error: connect ECONNREFUSED 127.0.0.1:80

mqtt messages

David, hello!
Thank you for your project and for answering our questions quickly. Could you add a few more things:

  1. error message output over MQTT, for example when the parameters in config.yml are incorrect or no detectors are specified;
  2. a "reset" button in the Double Take web UI.
    Right now all of this can only be done through Docker/Portainer, which is very inconvenient.

Processed events with no matches produce a lot of missed images

@erikarenhill, curious to hear your thoughts on this as well.

I've been testing some of the recent code changes with untrained detectors. One thing I've noticed is that if SAVE_UNKNOWN is set to true, then a lot of images will be produced for each event. Each retry per detector could produce an image if a person is found. For example, if I'm running compreface and deepstack and the snapshot and latest retries are set to 10 each, then I could in theory have 40 images saved for just a single event.
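The worst case in the paragraph above is just detectors multiplied by total retries across both image types:

```javascript
// Worst-case count of saved unknown images for a single event: every retry
// of every image type, for every detector, can produce one saved image.
const maxSavedImages = (detectors, snapshotRetries, latestRetries) =>
  detectors * (snapshotRetries + latestRetries);

console.log(maxSavedImages(2, 10, 10)); // 40 images for one event
```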

Maybe just take a couple of the best results from the unknown images instead of saving every single one to display in the UI.

Screen Shot 2021-04-21 at 2 55 17 PM
