
skrashevich / double-take


This project forked from jakowenko/double-take


Unified UI and API for processing and training images for facial recognition.

Home Page: https://hub.docker.com/r/skrashevich/double-take

License: MIT License

Shell 0.28% JavaScript 54.88% Vue 39.25% Dockerfile 0.97% SCSS 0.01% HTML 0.90% TypeScript 0.28% Go 3.44%

double-take's Introduction


Community-owned resources:

Worldwide Discord server

Frigate/DoubleTake CIS Region Telegram chat 🇺🇦🇰🇿🇧🇾🇷🇺🇺🇳 make love, not war

Double Take

Unified UI and API for processing and training images for facial recognition.

Why?

There's a lot of great open source software for facial recognition, but each package behaves differently. Double Take was created to abstract away the complexities of the detection services and combine them into an easy-to-use UI and API.

Features

Supported Architecture

  • amd64
  • arm64

Supported Detectors

  • CompreFace
  • DeepStack
  • CodeProject.AI Server (aiserver)
  • Facebox
  • Amazon Rekognition

Supported NVRs

  • Frigate

Installation

Docker

docker run -d -v $(pwd)/.double-take:/.storage -p 3000:3000 skrashevich/double-take:latest
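
Once the container is up, the UI and API listen on port 3000. A quick way to confirm the service is responding, assuming curl is available on the host:

# check that Double Take answers on port 3000
curl -I http://localhost:3000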

Docker Compose

version: '3.7'

volumes:
  double-take:

services:
  double-take:
    container_name: double-take
    image: skrashevich/double-take
    restart: unless-stopped
    volumes:
      - double-take:/.storage
    ports:
      - 3000:3000
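
To bring the stack up with this file (using the Docker Compose plugin; older installs use docker-compose instead):

# start double-take in the background, from the directory containing docker-compose.yml
docker compose up -d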

Docker (Windows)

To run the Double Take application in Docker on Windows, follow the instructions below:

  1. Install Docker Desktop on your Windows system if it is not already installed.

  2. Open Command Prompt as an administrator.

  3. Pull the Double Take Docker image with the command:

    docker pull skrashevich/double-take:latest
    
  4. Determine the location you wish to use for the configuration folder, for example: C:\Users\YourUsername\double-take-config.

  5. Run the Docker command to start the Double Take container, replacing the default configuration folder location with your new location:

    docker run -d -v C:\Users\YourUsername\double-take-config:/.storage -p 3000:3000 skrashevich/double-take:latest 
    

Make sure that the C:\Users\YourUsername\double-take-config directory exists and you have the necessary permissions for that folder. If the folder does not exist, create it before running the Docker command.
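
For example, from the same administrator Command Prompt:

mkdir C:\Users\YourUsername\double-take-config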

  6. If all went well, Double Take should now be up and running inside a Docker container. You can check the application by visiting http://localhost:3000.

Integrations

Subscribe to Frigate's MQTT topics and process images for analysis.

mqtt:
  host: localhost

frigate:
  url: http://localhost:5000

When the frigate/events topic is updated, the API begins to process the snapshot.jpg and latest.jpg images from Frigate's API. These images are passed from the API to the configured detector(s) until a match is found that meets the configured requirements. To improve the chances of finding a match, the processing of the images will repeat until the number of retries is exhausted or a match is found.

When the frigate/+/person/snapshot topic is updated, the API will process that image with the configured detector(s). It is recommended to increase the MQTT snapshot size in the Frigate camera config.

cameras:
  front-door:
    mqtt:
      timestamp: False
      bounding_box: False
      crop: True
      quality: 100
      height: 500

If a match is found the image is saved to /.storage/matches/<filename>.

Trigger automations / notifications when images are processed.

If the MQTT integration is configured within Home Assistant, then sensors will automatically be created.

Notification Automation

This notification will work for both matches and unknown results. The message can be customized with any of the attributes from the entity.

alias: Notify
trigger:
  - platform: state
    entity_id: sensor.double_take_david
  - platform: state
    entity_id: sensor.double_take_unknown
condition:
  - condition: template
    value_template: '{{ trigger.to_state.state != trigger.from_state.state }}'
action:
  - service: notify.mobile_app
    data:
      message: |-
        {% if trigger.to_state.attributes.match is defined %}
          {{trigger.to_state.attributes.friendly_name}} is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.match.confidence}}% by {{trigger.to_state.attributes.match.detector}}:{{trigger.to_state.attributes.match.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% elif trigger.to_state.attributes.unknown is defined %}
          unknown is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.unknown.confidence}}% by {{trigger.to_state.attributes.unknown.detector}}:{{trigger.to_state.attributes.unknown.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% endif %}
      data:
        attachment:
          url: |-
            {% if trigger.to_state.attributes.match is defined %}
              http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% elif trigger.to_state.attributes.unknown is defined %}
               http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% endif %}
        actions:
          - action: URI
            title: View Image
            uri: |-
              {% if trigger.to_state.attributes.match is defined %}
                http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% elif trigger.to_state.attributes.unknown is defined %}
                 http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% endif %}
mode: parallel
max: 10

MQTT

Publish results to double-take/matches/<name> and double-take/cameras/<camera>. The number of results will also be published to double-take/cameras/<camera>/person and will reset back to 0 after 30 seconds.

Errors from the API will be published to double-take/errors.

mqtt:
  host: localhost

double-take/matches/david

{
  "id": "1623906078.684285-5l9hw6",
  "duration": 1.26,
  "timestamp": "2021-06-17T05:01:36.030Z",
  "attempts": 3,
  "camera": "living-room",
  "zones": [],
  "match": {
    "name": "david",
    "confidence": 66.07,
    "match": true,
    "box": { "top": 308, "left": 1018, "width": 164, "height": 177 },
    "type": "latest",
    "duration": 0.28,
    "detector": "compreface",
    "filename": "2f07d1ad-9252-43fd-9233-2786a36a15a9.jpg",
    "base64": null
  }
}

double-take/cameras/back-door

{
  "id": "ff894ff3-2215-4cea-befa-43fe00898b65",
  "duration": 4.25,
  "timestamp": "2021-06-17T03:19:55.695Z",
  "attempts": 5,
  "camera": "back-door",
  "zones": [],
  "matches": [
    {
      "name": "david",
      "confidence": 100,
      "match": true,
      "box": { "top": 286, "left": 744, "width": 319, "height": 397 },
      "type": "manual",
      "duration": 0.8,
      "detector": "compreface",
      "filename": "dcb772de-d8e8-4074-9bce-15dbba5955c5.jpg",
      "base64": null
    }
  ],
  "misses": [],
  "unknowns": [],
  "counts": { "person": 1, "match": 1, "miss": 0, "unknown": 0 }
}
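
To watch these payloads while testing, any MQTT client will do; for example, with the mosquitto-clients tools installed (host and topic prefix as configured above):

# print every double-take topic and its payload as messages arrive
mosquitto_sub -h localhost -t 'double-take/#' -v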

Notify Services

notify:
  gotify:
    url: http://localhost:8080
    token:

notify:
  telegram:
    token:
    chat_id: "12345678"

Note: chat_id must be in quotes.

API Images

Match images are saved to /.storage/matches and can be accessed via http://localhost:3000/api/storage/matches/<filename>.

Training images are saved to /.storage/train and can be accessed via http://localhost:3000/api/storage/train/<name>/<filename>.

Latest images are saved to /.storage/latest and can be accessed via http://localhost:3000/api/storage/latest/<name|camera>.jpg.

Query Parameter   Description                       Default
box               Show bounding box around faces    false
token             Access token
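
As a quick sketch, a match image with the bounding box drawn can be fetched with both parameters (replace <filename> with a real file from /.storage/matches; the token is only needed when authentication is enabled):

curl -o match.jpg 'http://localhost:3000/api/storage/matches/<filename>?box=true&token=<token>'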

UI

The UI is accessible via http://localhost:3000.

  • Matches: /
  • Train: /train
  • Config: /config
  • Access Tokens: /tokens (if authentication is enabled)

Authentication

Enable authentication to password protect the UI. This is recommended if running Double Take behind a reverse proxy which is exposed to the internet.

auth: true

API

Documentation can be viewed here.

Configuration

Configurable options are saved to /.storage/config/config.yml and are editable via the UI at http://localhost:3000/config. Default values do not need to be specified in configuration unless they need to be overwritten.

auth

# enable authentication for ui and api (default: shown below)
auth: false

token

# if authentication is enabled
# age of access token in api response and mqtt topics (default: shown below)
# expressed in seconds or a string describing a time span (zeit/ms)
# https://github.com/vercel/ms
token:
  image: 24h

mqtt

# enable mqtt subscribing and publishing (default: shown below)
mqtt:
  host:
  username:
  password:
  client_id:

  tls:
    # cert chains in PEM format: /path/to/client.crt
    cert:
    # private keys in PEM format: /path/to/client.key
    key:
    # optionally override the trusted CA certificates: /path/to/ca.crt
    ca:
    # if true the server will reject any connection which is not authorized with the list of supplied CAs
    reject_unauthorized: false

  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    # mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: double-take/matches
    # mqtt topic where matches are published by camera name
    cameras: double-take/cameras

detect

# global detect settings (default: shown below)
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 60
    # hours to keep match images until they are deleted
    purge: 168
    # minimum area in pixels to consider a result a match
    min_area: 10000

  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a name as unknown
    confidence: 40
    # hours to keep unknown images until they are deleted
    purge: 8
    # minimum area in pixels to keep an unknown result
    min_area: 0

frigate

# frigate settings (default: shown below)
frigate:
  url:

  # if double take should send matches back to frigate as a sub label
  # NOTE: requires frigate 0.11.0+
  update_sub_labels: false

  # stop the processing loop if a match is found
  # if set to false all image attempts will be processed before determining the best match
  stop_on_match: true

  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 0

  # object labels that are allowed for facial recognition
  labels:
    - person

  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 10
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 0

  image:
    # height of frigate image passed for facial recognition
    height: 500

  # only process images from specific cameras
  cameras:
    # - front-door
    # - garage

  # only process images from specific zones
  zones:
    # - camera: garage
    #   zone: driveway

  # override frigate attempts and image per camera
  events:
    # front-door:
    #   attempts:
    #     # number of times double take will request a frigate latest.jpg for facial recognition
    #     latest: 5
    #     # number of times double take will request a frigate snapshot.jpg for facial recognition
    #     snapshot: 5
    #     # process frigate images from frigate/<camera-name>/person/snapshot topic
    #     mqtt: false
    #     # add a delay expressed in seconds between each detection loop
    #     delay: 1

    #   image:
    #     # height of frigate image passed for facial recognition (only if using default latest.jpg and snapshot.jpg)
    #     height: 1000
    #     # custom image that will be used in place of latest.jpg
    #     latest: http://camera-url.com/image.jpg
    #     # custom image that will be used in place of snapshot.jpg
    #     snapshot: http://camera-url.com/image.jpg

  # custom time delay for the mqtt home assistant device tracker
  # device_tracker_timeout determines how long to wait before publishing a
  # 'not_home' message when no person is recognized
  # expressed in minutes (default: shown below)
  device_tracker_timeout: 30

cameras

# camera settings (default: shown below)
cameras:
  front-door:
    # apply masks before processing image
    # masks:
    #   # list of x,y coordinates to define the polygon of the zone
    #   coordinates:
    #     - 1920,0,1920,328,1638,305,1646,0
    #   # show the mask on the final saved image (helpful for debugging)
    #   visible: false
    #   # size of camera stream used in resizing masks
    #   size: 1920x1080

    # override global detect variables per camera
    # detect:
    #   match:
    #     # save match images
    #     save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
    #     base64: false
    #     # minimum confidence needed to consider a result a match
    #     confidence: 60
    #     # minimum area in pixels to consider a result a match
    #     min_area: 10000

    #   unknown:
    #     # save unknown images
    #     save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
    #     base64: false
    #     # minimum confidence needed before classifying a match name as unknown
    #     confidence: 40
    #     # minimum area in pixels to keep an unknown result
    #     min_area: 0

    # snapshot:
    #   # process any jpeg encoded mqtt topic for facial recognition
    #   topic:
    #   # process any http image for facial recognition
    #   url:

detectors

# detector settings (default: shown below)
detectors:
  compreface:
    url:
    # recognition api key
    key:
    # number of seconds before the request times out and is aborted
    timeout: 15
    # minimum required confidence that a recognized face is actually a face
    # value is between 0.0 and 1.0
    det_prob_threshold: 0.8
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # comma-separated slugs of face plugins
    # https://github.com/exadel-inc/CompreFace/blob/master/docs/Face-services-and-plugins.md
    # face_plugins: mask,gender,age
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

  rekognition:
    aws_access_key_id: !secret aws_access_key_id
    aws_secret_access_key: !secret aws_secret_access_key
    aws_region:
    collection_id: double-take
    # require opencv to find a face before processing with detector
    opencv_face_required: true
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

  deepstack:
    url:
    key:
    # number of seconds before the request times out and is aborted
    timeout: 15
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

  aiserver:
    url:
    # number of seconds before the request times out and is aborted
    timeout: 15
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

  facebox:
    url:
    # number of seconds before the request times out and is aborted
    timeout: 15
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

opencv

# opencv settings (default: shown below)
# docs: https://docs.opencv.org/4.6.0/d1/de5/classcv_1_1CascadeClassifier.html
opencv:
  scale_factor: 1.05
  min_neighbors: 4.5
  min_size_width: 30
  min_size_height: 30

schedule

# schedule settings (default: shown below)
schedule:
  # disable recognition if conditions are met
  disable:
    # - days:
    #     - monday
    #     - tuesday
    #   times:
    #     - 20:00-23:59
    #   cameras:
    #     - office
    # - days:
    #     - tuesday
    #     - wednesday
    #   times:
    #     - 13:00-15:00
    #     - 18:00-20:00
    #   cameras:
    #     - living-room

notify

# notify settings (default: shown below)
notify:
  gotify:
    url:
    token:
    priority: 5

    # only notify from specific cameras
    # cameras:
    #   - front-door
    #   - garage

    # only notify from specific zones
    # zones:
    #   - camera: garage
    #     zone: driveway

time

# time settings (default: shown below)
time:
  # defaults to iso 8601 format with support for token-based formatting
  # https://github.com/moment/luxon/blob/master/docs/formatting.md#table-of-tokens
  format:
  # time zone used in logs
  timezone: UTC

logs

# log settings (default: shown below)
# options: silent, error, warn, info, http, verbose, debug, silly
logs:
  level: info
  sql: false # trace sql queries

ui

# ui settings (default: shown below)
ui:
  # base path of ui
  path:

  pagination:
    # number of results per page
    limit: 50

  thumbnails:
    # value between 0-100
    quality: 95
    # value in pixels
    width: 500

  logs:
    # number of lines displayed
    lines: 500

telemetry

# telemetry settings (default: shown below)
# self hosted version of plausible.io
# 100% anonymous, used to help improve project
# no cookies and fully compliant with GDPR, CCPA and PECR
telemetry: true

Storing Secrets

Note: If using one of the Home Assistant Add-ons then the default Home Assistant /config/secrets.yaml file is used.

mqtt:
  host: localhost
  username: mqtt
  password: !secret mqtt_password

detectors:
  compreface:
    url: localhost:8000
    key: !secret compreface_key

The secrets.yml file contains the corresponding value assigned to the identifier.

mqtt_password: <password>
compreface_key: <api-key>

Development

Run Local Containers

Service   Address
UI        localhost:8080
API       localhost:3000
MQTT      localhost:1883
# start development containers
./.develop/docker up

# remove development containers
./.develop/docker down

Build Local Image

./.develop/build


double-take's People

Contributors

alfiegerner, bagsik, bigbangus, code-factor, dependabot[bot], jakowenko, konturn, leccelecce, marq24, myxor, pkulak, pospielov, skrashevich


double-take's Issues

[BUG] Pictures in portrait mode are not rendered correctly. Maybe causing an issue with training

I moved from the original double-take to yours and found that all the pictures coming from my phone (Samsung Galaxy Note 20 Ultra) in portrait mode are displayed in landscape mode in the UI.

I thought it was not a problem, but when I was trying CodeProject.AI the logs showed no face detected on these images. I then added one picture taken in landscape mode and CodeProject.AI detected the faces.

Version of Double Take
1.13.11.3

Expected behavior
Portrait photo should be sent/viewed as portrait


Hardware

  • amd64 + GPU
  • Qnap NAS OS
  • Chrome
  • skrashevich/double-take:latest

Does the notify automation still work? Can't get it to work properly

From the docs I used the notify automation below, but it's not working.

alias: Notify
trigger:
  - platform: state
    entity_id: sensor.double_take_david
  - platform: state
    entity_id: sensor.double_take_unknown
condition:
  - condition: template
    value_template: '{{ trigger.to_state.state != trigger.from_state.state }}'
action:
  - service: notify.mobile_app
    data:
      message: |-
        {% if trigger.to_state.attributes.match is defined %}
          {{trigger.to_state.attributes.friendly_name}} is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.match.confidence}}% by {{trigger.to_state.attributes.match.detector}}:{{trigger.to_state.attributes.match.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% elif trigger.to_state.attributes.unknown is defined %}
          unknown is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.unknown.confidence}}% by {{trigger.to_state.attributes.unknown.detector}}:{{trigger.to_state.attributes.unknown.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec
        {% endif %}
      data:
        attachment:
          url: |-
            {% if trigger.to_state.attributes.match is defined %}
              http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% elif trigger.to_state.attributes.unknown is defined %}
               http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
            {% endif %}
        actions:
          - action: URI
            title: View Image
            uri: |-
              {% if trigger.to_state.attributes.match is defined %}
                http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% elif trigger.to_state.attributes.unknown is defined %}
                 http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true&token={{trigger.to_state.attributes.token}}
              {% endif %}
mode: parallel
max: 10

[BUG] Can't get it started: I get 'address already in use' but it's not

I can't get the container started; I get the error Error: listen EADDRINUSE: address already in use :::3000.

config

mqtt:
  host: 192.168.2.16:1883
  username: dtake
  password:
  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    # mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: double-take/matches
    # mqtt topic where matches are published by camera name
    cameras: double-take/cameras

frigate:
  url: http://192.168.2.144:5000
  update_sub_labels: true
  cameras:
    - voordeur
  labels:
    - person

detectors:
  compreface:
    url: http://192.168.2.16:8452
    # recognition api key
    key:
    # number of seconds before the request times out and is aborted
    timeout: 180
    # minimum required confidence that a recognized face is actually a face
    # value is between 0.0 and 1.0
    det_prob_threshold: 0.8
    # require opencv to find a face before processing with detector
    opencv_face_required: false

cameras:
  voordeur:
    detect:
      match:
        # save match images
        save: true
        # include base64 encoded string in api results and mqtt messages
        # options: true, false, box
        base64: false
        # minimum confidence needed to consider a result a match
        confidence: 90
        # hours to keep match images until they are deleted
        purge: 168
        # minimum area in pixels to consider a result a match 10000
        min_area: 6000
      unknown:
        # save unknown images
        save: true
        # include base64 encoded string in api results and mqtt messages
        # options: true, false, box
        base64: false
        # minimum confidence needed before classifying a name as unknown
        confidence: 40
        # hours to keep unknown images until they are deleted
        purge: 8
        # minimum area in pixels to keep an unknown result
        min_area: 0

compose:

version: '3.7'

volumes:
  double-take:

services:
  double-take:
    container_name: double-take
    image: skrashevich/double-take:latest
    #jakowenko/double-take
    restart: unless-stopped
    volumes:
      - /usr/share/hassio/homeassistant/double-take:/.storage
    ports:
      - 3580:3000
Log error:
info: Double Take v1.13.9
node:events:491
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE: address already in use :::3000
at Server.setupListenHandle [as _listen2] (node:net:1463:16)
at listenInCluster (node:net:1511:12)
at Server.listen (node:net:1599:7)
at Object.module.exports.start (/double-take/api/server.js:23:52)
Emitted 'error' event on Server instance at:
at Server.incomingRequest (/double-take/api/node_modules/@opentelemetry/instrumentation-http/build/src/http.js:280:33)
at emitErrorNT (node:net:1490:8)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
code: 'EADDRINUSE',
errno: -98,
syscall: 'listen',
address: '::',
port: 3000
}

[FEAT] Occupancy entity

In my humble opinion, an entity's state should correspond with the actual state. If I look at Home Assistant entities (sun.sun, binary_sensor.door_state, energy, voltage, temperature, humidity), they always show the actual state.

Frigate has moved its binary sensors to occupancy, like binary_sensor.cameraname_person_occupancy, and it is true only if a person is detected, which is awesome! (if the person moves out of the camera view, the state changes to false)

double-take creates the entity sensor.double_take_personname with the value of the last camera that recognized that person.

My idea is to set the entity sensor.double_take_personname to None/False a few seconds after recognition (ideally, with tracking of the recognized person leaving the camera view) or to create an entity like binary_sensor.cameraname_personname_occupancy.

Snapshots cropping issue

I apologize if I am not reporting the issue properly. I recently started using Frigate, DoubleTake, and Compreface with a single camera, a Hikvision DS-2CD2387G2. I am not sure where the issue is. I am getting low-resolution event images in DoubleTake for snapshots and mqtt. The "latest" images seem to have consistently high resolution. The snapshot seems to always be cropped to a person's height, while "latest" shows the entire area. The snapshot images are also pixelated, probably because they have been resized. I tried disabling "snapshot" in Frigate but it did not change anything. Crop is set to False. Image files in frigate/media/clips are all of consistent size: about 2MB for *.jpg and 20MB for *clean.png.

Version of Double Take
1.13.10 and 1.13.9

Expected behavior
I would expect to see images with the resolution as they have been produced by the camera.

Hardware

  • Frigate, DoubleTake, and Compreface are all running in Docker/Ubuntu as a Proxmox VM.

New release using codeproject

How do I get your new release on Home Assistant using CodeProject.AI? I'm getting errors using the current version.

23-05-14 17:59:09 info: processing frontyard: 8de6b3c8-410e-44be-9a23-5a0e7b3a476c
23-05-14 17:59:09 info: processing frontyard: 1684101483.656355-l1wiog
23-05-14 17:59:09 error: TypeError: aiserver process error: Cannot read properties of undefined (reading 'recognize')
at module.exports.recognize (/double-take/api/src/util/detectors/actions/index.js:4:24)
at Object.module.exports.process (/double-take/api/src/util/process.util.js:160:28)
at Object.module.exports.start (/double-take/api/src/util/process.util.js:137:28)
at module.exports.polling (/double-take/api/src/util/process.util.js:55:36)
at runMicrotasks ()
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Promise.all (index 0)
at async module.exports.start (/double-take/api/src/controllers/recognize.controller.js:152:7)
23-05-14 17:59:09 info: done processing frontyard: 8de6b3c8-410e-44be-9a23-5a0e7b3a476c in 0.01 sec
23-05-14 17:59:09 info: {
id: '8de6b3c8-410e-44be-9a23-5a0e7b3a476c',
duration: 0.01,
timestamp: '2023-05-14T21:59:09.769Z',
attempts: 1,
camera: 'frontyard',
zones: [],
counts: { person: 0, match: 0, miss: 0, unknown: 0 },
matches: [],
misses: [],
unknowns: []

[FEAT] Apply zone restrictions to MQTT config

Is your feature request related to a problem? Please describe.
I have set some zones in Frigate, with accompanying configuration in Double Take to only process events within those zones.
However, it seems that mqtt images ignore this processing.

Describe the solution you'd like
I'd like my zones config in Double Take to be honoured, i.e. don't process an mqtt image if it's not in the right zone.

Additional context
I am going to guess that the problem here is that mqtt publishes by Frigate just include the JPG image, and don't contain any metadata such as zones, so you can't tell. I am wondering whether the receipt of an mqtt message within the start and end period of an event could be used to essentially say, ignore this mqtt message from Frigate as it's arrived during an event outside a zone?

[FEAT] Deepface

Is your feature request related to a problem? Please describe.
Any chance of integrating Deepface as a detector?

Requests on Jakowenko for the integration are:
jakowenko#196
jakowenko#305

[FEAT] Enable/Disable face detection via MQTT / HA Addon

Is your feature request related to a problem? Please describe.
I'm using AWS Rekognition as it gives much better results for me than CompreFace (lots of false positives there even with good training images). That of course generates some cost when I submit images for face detection.

Describe the solution you'd like
It'd help reduce costs if I could dynamically enable/disable the face detection from Home Assistant, either by publishing to MQTT or by using an MQTT-based switch in HA.
That way I could use HA to only enable face detection when I actually need it, for example if someone just came home, which is currently my only use case. This way I could avoid detecting faces when everyone is already home and someone just comes up to my door or if no-one is home of course.
Would it be feasible to add an MQTT topic where we could post to enable/disable face detection in double take either completely, or based on recognition service / camera? For me it'd be good enough if I could just have a simple on/off toggle to disable or enable it completely.

[BUG] Unable to run the development containers

Describe the bug
When running ./.develop/docker up, the api container exits early with sh: 1: exec: nodemon: not found

Version of Double Take
1.13.10-SHA7


Hardware

  • Architecture or platform: intel
  • OS: macOS Ventura

[BUG] config requires property "detectors"

Describe the bug
When loading the config, I get the error message 'config requires property "detectors"'

Version of Double Take
v1.13.4

Expected behavior
Load DT

Hardware
Docker container

Additional context
I can jump onto the UI for Frigate, Double Take and CodeProject
Config:

# Double Take
# Learn more at https://github.com/skrashevich/double-take/#configuration

mqtt:
  host: 192.168.1.198
  username: mqtt-user
  password: XXXXX

frigate:
  url: http://192.168.1.175:5000
  update_sub_labels: True

# detector settings (default: shown below)
detectors:
  aiserver:
    url: http://192.168.1.175:32168
    # number of seconds before the request times out and is aborted
    timeout: 15
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage


[BUG] Time Issue - Out By 1 Hour

Describe the bug
I am in the UK, and currently we have the time set to summer time (GMT+1).
I have DT as an add-on in Home Assistant.
Home Assistant is set to the correct time zone (HA entities are reporting the correct time when I look).
Frigate is running in a docker container in an LXC on Proxmox and I have TZ set for Frigate to 'Europe/London'.
Home Assistant is running in the same Proxmox but in a VM.
The issue I am encountering is that the time in DT is out by an hour; is there a configurable option to bring this in line for the HA addon install?

Version of Double Take
1.13.9

Expected behavior
When an event occurs, the recorded date/time is the correct date/time

Screenshots
Frigate event and the same event from DT (screenshots omitted).
Notice the 'CreatedAt' time in DT states 12:26 but the event time recorded in Frigate states 13:23 (I can handle the 3 minute issue)

{
  "detector": "compreface",
  "duration": 0.59,
  "name": "unknown",
  "confidence": 52.82,
  "match": false,
  "box": {
    "top": 33,
    "left": 211,
    "width": 48,
    "height": 48
  },
  "checks": [
    "confidence too low: 52.82 < 85"
  ],
  "createdAt": "2023-06-27T12:26:44.190Z",
  "updatedAt": null
}

[BUG] Gotify ?

Describe the bug
Maybe not a bug? I am trying to send notifications to my Gotify server. I am not using Home Assistant.

Version of Double Take
1.13.11.3

Expected behavior
I want to receive a notification every time a face is detected by Double Take (identified or not).

Hardware
LXC Container on Proxmox on Docker
Frigate/Double Take/Coral AI Server
Docker Image : skrashevich/double-take:latest

Additional context

Declared in the config file like this:

notify:
  gotify:
    url: !secret gotify_url
    token: !secret gotify_key

[FEAT] Recognize "objects" too

Is your feature request related to a problem? Please describe.
Now I see that double take subscribes only to frigate/+/person events in mqtt, but I would like to recognize my different cats with custom models.

Describe the solution you'd like
I'd like to subscribe to different topics and use different models to recognize objects other than just faces.

Additional context
I have 4 cats, each a different color. I'd love to have an album to check they are fine when outside the home for more than one day.

[BUG] Frigate MQTT jpg and frigate/events

Describe the bug
It looks like the mqtt topic frigate/events doesn't receive any mqtt jpgs.
We can change it to frigate/+/person/snapshot, and then the mqtt image is processed properly. However, in this case Double Take doesn't get any mqtt events and doesn't pull any "latest" or "snapshot" from frigate. Also, in this case it does not update any sub_label in frigate.

Version of Double Take
X.X.X-SHA7

Expected behavior
To be able to receive the mqtt jpg and also process the frigate API snapshot and latest with a sub_label update. Maybe subscribe to both topics?


Hardware

  • Architecture or platform: amd64
  • OS: Ubuntu
  • Browser (if applicable) n.a
  • Docker image (if applicable) HA Addon


Deepstack

Hello, and nice work.
I installed Double Take and DeepStack as add-ons on Home Assistant.
Frigate has its own machine.
Do I need to configure DeepStack in Home Assistant or not?
Thanks

[BUG] Don't understand the `frigate.attempts.mqtt` setting

Describe the bug
This is more of a config question than a bug. I was wondering if frigate publishes the actual image to MQTT and DT picks it up. If true, I was hoping that setting frigate.attempts.mqtt to true and setting the snapshot and latest settings to 0 would mean that DT takes the image published to MQTT and does not try to download it from frigate. I am not even sure I understand the functionality, but I was just playing around with settings. I think DT is reporting that it can't even find a person, whereas Frigate using AI Server found the person and published the event. I even removed the min_area setting, without luck. However, if I set attempts.latest and attempts.snapshot to the default or anything other than 0, it works. So I'm not sure what is going on here. IIRC, I remember seeing only MQTT labels in the UI thumbnail list in previous versions, but now none of them are MQTT labels.

If this is not how the integration is supposed to work, then we can ignore this and move on.

Here is my config:

---
auth: true

mqtt:
  host: m
  username: USERNAME
  password: PASSWORD

frigate:
  url: http://f:5000
  update_sub_labels: true
  min_area: 4096
  labels:
    - person
  image:
    height: 720
  attempts:
    latest: 0
    snapshot: 0
    mqtt: true
    delay: 2
  
logs:
  level: silly
  
detectors:
  deepstack:
    url: http://ds:5000
    timeout: 15
    opencv_face_required: true
  aiserver:
    url: http://cpai:5000
    timeout: 60
    opencv_face_required: true

Version of Double Take
1.13.11.8

Expected behavior
maybe pick up the image from MQTT and process it?

Screenshots
Here is my log output:

home_dt.1.m308n5oqkwkk@potter    | verbose: Incoming event from frigate: {"before": {"id": "1700787361.207806-4z62ld", "camera": "garage", "frame_time": 1700787361.207806, "snapshot": null, "label": "person", "sub_label": null, "top_score": 0.0, "false_positive": true, "start_time": 1700787361.207806, "end_time": null, "score": 0.92138671875, "box": [869, 412, 1086, 950], "area": 116746, "ratio": 0.4033457249070632, "region": [681, 407, 1261, 987], "stationary": false, "motionless_count": 0, "position_changes": 0, "current_zones": [], "entered_zones": [], "has_clip": false, "has_snapshot": false, "attributes": {}, "current_attributes": []}, "after": {"id": "1700787361.207806-4z62ld", "camera": "garage", "frame_time": 1700787361.331878, "snapshot": {"frame_time": 1700787361.331878, "box": [880, 397, 1093, 941], "area": 115872, "region": [683, 390, 1267, 974], "score": 0.92578125, "attributes": []}, "label": "person", "sub_label": null, "top_score": 0.923583984375, "false_positive": false, "start_time": 1700787361.207806, "end_time": null, "score": 0.92578125, "box": [880, 397, 1093, 941], "area": 115872, "ratio": 0.3915441176470588, "region": [683, 390, 1267, 974], "stationary": false, "motionless_count": 0, "position_changes": 1, "current_zones": [], "entered_zones": [], "has_clip": true, "has_snapshot": false, "attributes": {}, "current_attributes": []}, "type": "new"}
home_dt.1.m308n5oqkwkk@potter    | info: processing garage: 1700787361.207806-4z62ld
home_dt.1.m308n5oqkwkk@potter    | info: done processing garage: 1700787361.207806-4z62ld in 0 sec
home_dt.1.m308n5oqkwkk@potter    | info: {
home_dt.1.m308n5oqkwkk@potter    |   id: '1700787361.207806-4z62ld',
home_dt.1.m308n5oqkwkk@potter    |   duration: 0,
home_dt.1.m308n5oqkwkk@potter    |   timestamp: '2023-11-24T00:56:01.720Z',
home_dt.1.m308n5oqkwkk@potter    |   attempts: 0,
home_dt.1.m308n5oqkwkk@potter    |   camera: 'garage',
home_dt.1.m308n5oqkwkk@potter    |   zones: [],
home_dt.1.m308n5oqkwkk@potter    |   counts: { person: 0, match: 0, miss: 0, unknown: 0 },
home_dt.1.m308n5oqkwkk@potter    |   matches: [],
home_dt.1.m308n5oqkwkk@potter    |   misses: [],
home_dt.1.m308n5oqkwkk@potter    |   unknowns: [],
home_dt.1.m308n5oqkwkk@potter    |   token: '********'
home_dt.1.m308n5oqkwkk@potter    | }
home_dt.1.m308n5oqkwkk@potter    | verbose: Event type: frigate
home_dt.1.m308n5oqkwkk@potter    | verbose: FRIGATE.URL: http://f:5000; FRIGATE.UPDATE_SUB_LABELS: true; best.length: 0
home_dt.1.m308n5oqkwkk@potter    | verbose: Incoming event from frigate: {"before": {"id": "1700787361.207806-4z62ld", "camera": "garage", "frame_time": 1700787361.331878, "snapshot": {"frame_time": 1700787361.331878, "box": [880, 397, 1093, 941], "area": 115872, "region": [683, 390, 1267, 974], "score": 0.92578125, "attributes": []}, "label": "person", "sub_label": null, "top_score": 0.923583984375, "false_positive": false, "start_time": 1700787361.207806, "end_time": null, "score": 0.92578125, "box": [880, 397, 1093, 941], "area": 115872, "ratio": 0.3915441176470588, "region": [683, 390, 1267, 974], "stationary": false, "motionless_count": 0, "position_changes": 1, "current_zones": [], "entered_zones": [], "has_clip": true, "has_snapshot": false, "attributes": {}, "current_attributes": []}, "after": {"id": "1700787361.207806-4z62ld", "camera": "garage", "frame_time": 1700787361.53252, "snapshot": {"frame_time": 1700787361.331878, "box": [880, 397, 1093, 941], "area": 115872, "region": [683, 390, 1267, 974], "score": 0.92578125, "attributes": []}, "label": "person", "sub_label": null, "top_score": 0.923583984375, "false_positive": false, "start_time": 1700787361.207806, "end_time": null, "score": 0.86376953125, "box": [923, 424, 1126, 900], "area": 96628, "ratio": 0.4264705882352941, "region": [704, 370, 1300, 966], "stationary": false, "motionless_count": 3, "position_changes": 1, "current_zones": ["driveway"], "entered_zones": ["driveway"], "has_clip": true, "has_snapshot": true, "attributes": {}, "current_attributes": []}, "type": "update"}
home_dt.1.m308n5oqkwkk@potter    | info: processing garage: 1700787361.207806-4z62ld
home_dt.1.m308n5oqkwkk@potter    | info: done processing garage: 1700787361.207806-4z62ld in 0 sec
home_dt.1.m308n5oqkwkk@potter    | info: {
home_dt.1.m308n5oqkwkk@potter    |   id: '1700787361.207806-4z62ld',
home_dt.1.m308n5oqkwkk@potter    |   duration: 0,
home_dt.1.m308n5oqkwkk@potter    |   timestamp: '2023-11-24T00:56:01.991Z',
home_dt.1.m308n5oqkwkk@potter    |   attempts: 0,
home_dt.1.m308n5oqkwkk@potter    |   camera: 'garage',
home_dt.1.m308n5oqkwkk@potter    |   zones: [ 'driveway' ],
home_dt.1.m308n5oqkwkk@potter    |   counts: { person: 0, match: 0, miss: 0, unknown: 0 },
home_dt.1.m308n5oqkwkk@potter    |   matches: [],
home_dt.1.m308n5oqkwkk@potter    |   misses: [],
home_dt.1.m308n5oqkwkk@potter    |   unknowns: [],
home_dt.1.m308n5oqkwkk@potter    |   token: '********'
home_dt.1.m308n5oqkwkk@potter    | }
home_dt.1.m308n5oqkwkk@potter    | verbose: Event type: frigate
home_dt.1.m308n5oqkwkk@potter    | verbose: FRIGATE.URL: http://f:5000; FRIGATE.UPDATE_SUB_LABELS: true; best.length: 0
home_dt.1.m308n5oqkwkk@potter    | verbose: Incoming event from frigate: {"before": {"id": "1700787361.207806-4z62ld", "camera": "garage", "frame_time": 1700787361.53252, "snapshot": {"frame_time": 1700787361.331878, "box": [880, 397, 1093, 941], "area": 115872, "region": [683, 390, 1267, 974], "score": 0.92578125, "attributes": []}, "label": "person", "sub_label": null, "top_score": 0.923583984375, "false_positive": false, "start_time": 1700787361.207806, "end_time": null, "score": 0.86376953125, "box": [923, 424, 1126, 900], "area": 96628, "ratio": 0.4264705882352941, "region": [704, 370, 1300, 966], "stationary": false, "motionless_count": 3, "position_changes": 1, "current_zones": ["driveway"], "entered_zones": ["driveway"], "has_clip": true, "has_snapshot": true, "attributes": {}, "current_attributes": []}, "after": {"id": "1700787361.207806-4z62ld", "camera": "garage", "frame_time": 1700787365.281423, "snapshot": {"frame_time": 1700787361.331878, "box": [880, 397, 1093, 941], "area": 115872, "region": [683, 390, 1267, 974], "score": 0.92578125, "attributes": []}, "label": "person", "sub_label": null, "top_score": 0.923583984375, "false_positive": false, "start_time": 1700787361.207806, "end_time": 1700787369.686066, "score": 0.748046875, "box": [1108, 297, 1186, 549], "area": 19656, "ratio": 0.30952380952380953, "region": [985, 267, 1305, 587], "stationary": false, "motionless_count": 2, "position_changes": 1, "current_zones": ["driveway"], "entered_zones": ["driveway"], "has_clip": true, "has_snapshot": true, "attributes": {}, "current_attributes": []}, "type": "end"}

Hardware

  • Architecture or platform: amd64
  • OS: Ubuntu Server
  • Browser (if applicable): Firefox
  • Docker image (if applicable): skrashevich/double-take:latest

Additional context

[BUG] Error in log -> Not implemented: HTMLCanvasElement.prototype.getContext (without installing the canvas npm package)

Describe the bug
In the log there is output 'Not implemented: HTMLCanvasElement.prototype.getContext'...

23-08-27 09:46:06 info: processing fri_front: 032382f2-6a1a-44d3-86dc-2c7d166e8793
23-08-27 09:46:06 error: Error: Not implemented: HTMLCanvasElement.prototype.getContext (without installing the canvas npm package)
    at module.exports (/double-take/api/node_modules/jsdom/lib/jsdom/browser/not-implemented.js:9:17)
    at HTMLCanvasElementImpl.getContext (/double-take/api/node_modules/jsdom/lib/jsdom/living/nodes/HTMLCanvasElement-impl.js:42:5)
    at HTMLCanvasElement.getContext (/double-take/api/node_modules/jsdom/lib/jsdom/living/generated/HTMLCanvasElement.js:131:58)
    at Object.Module.imread (/double-take/api/src/util/opencv/lib.js:9180:24)
    at Object.module.exports.faceCount (/double-take/api/src/util/opencv/index.js:68:20)
    at async Object.module.exports.start (/double-take/api/src/util/process.util.js:136:45)
    at async module.exports.polling (/double-take/api/src/util/process.util.js:55:25)
    at async Promise.all (index 0)
    at async module.exports.start (/double-take/api/src/controllers/recognize.controller.js:152:7) null
23-08-27 09:46:06 error: opencv error:  Cannot read properties of null (reading 'drawImage')
23-08-27 09:46:06 error: Error: Not implemented: HTMLCanvasElement.prototype.getContext (without installing the canvas npm package)
    at module.exports (/double-take/api/node_modules/jsdom/lib/jsdom/browser/not-implemented.js:9:17)
    at HTMLCanvasElementImpl.getContext (/double-take/api/node_modules/jsdom/lib/jsdom/living/nodes/HTMLCanvasElement-impl.js:42:5)
    at HTMLCanvasElement.getContext (/double-take/api/node_modules/jsdom/lib/jsdom/living/generated/HTMLCanvasElement.js:131:58)
    at Object.Module.imread (/double-take/api/src/util/opencv/lib.js:9180:24)
    at Object.module.exports.faceCount (/double-take/api/src/util/opencv/index.js:68:20)
    at async Object.module.exports.start (/double-take/api/src/util/process.util.js:136:45)
    at async module.exports.polling (/double-take/api/src/util/process.util.js:55:25)
    at async Promise.all (index 0)
    at async module.exports.start (/double-take/api/src/controllers/recognize.controller.js:152:7) null
23-08-27 09:46:06 error: opencv error:  Cannot read properties of null (reading 'drawImage')

Version of Double Take
1.13.11

Expected behavior
"clean log"

Hardware

  • Architecture or platform: amd64
  • OS: Ubuntu
  • Docker image: skrashevich/double-take:latest

additional info
I am running compreface & codeproject-ai as detectors and I have 'opencv_face_required' set to true for both... my opencv settings are the default:

opencv:
  scale_factor: 1.05
  min_neighbors: 4.5
  min_size_width: 40
  min_size_height: 40

[BUG] Unable to delete images from Train Menu

Describe the bug
I use Compreface, and whilst I can train and untrain images, I am unable to delete from the interface.

Version of Double Take
1.13.11.1

Expected behavior
Expect the image to be deleted

Additional context
log:
error: TypeError: db.query is not a function
at module.exports.delete (/double-take/api/src/controllers/storage.controller.js:145:8)
at newFn (/double-take/api/node_modules/express-async-errors/index.js:16:20)
at Layer.handle [as handle_request] (/double-take/api/node_modules/express/lib/router/layer.js:95:5)
at next (/double-take/api/node_modules/express/lib/router/route.js:144:13)
at /double-take/api/src/middlewares/index.js:67:3
at newFn (/double-take/api/node_modules/express-async-errors/index.js:16:20)
at Layer.handle [as handle_request] (/double-take/api/node_modules/express/lib/router/layer.js:95:5)
at next (/double-take/api/node_modules/express/lib/router/route.js:144:13)
at module.exports.jwt (/double-take/api/src/middlewares/index.js:9:14)
at newFn (/double-take/api/node_modules/express-async-errors/index.js:16:20)

[BUG] (Support) Sub_Label not updating

Describe the bug
I can't seem to get the label to update; I'm hoping it's a simple oversight or that I'm overthinking the problem. I don't see anything in my Frigate/Double Take logs that suggests there is a problem.

Version of Double Take
1.13.10-SHA7
skrashevich/double-take:latest

Frigate:
0.12.1-367D724

Hardware

  • OS: Unraid

[BUG] update_sub_labels: true doesn't work

Hello, I'm on the latest version of double take and the latest version of frigate (tried the latest frigate beta and it's the same).

I didn't change my config, and update sub labels doesn't work anymore :(

Any idea why?

[FEAT] Auto train matches

Is your feature request related to a problem? Please describe.
It would be nice if there was an option to auto train face matches. So far I've never had any false positives with compreface-gpu so it would remove some manual labor if there was some automation to learning faces.

Describe the solution you'd like
A flag to activate auto training.
A minimum match percentage option that allows auto training. For example, you might have a match at 95% but only want to auto train at 99%.

Additional context
Before self hosting facial recognition I had a Google Nest subscription, and the nest system had this feature.

[BUG] Error copy jpg files inside .storage during processing image

Describe the bug
Error copying jpg files inside .storage while DoubleTake is processing an image.
ENOENT: no such file or directory, copyfile './.storage/matches/397fa834-e9b7-4941-9eb2-2ef5ee541b56.jpg' -> './.storage/latest/unknown.jpg'

Version of Double Take
1.13.11.8-SHA7

Expected behavior
A successful copy of latest.jpg to unknown.jpg? I don't know.

docker-compose.yml:

version: '3.7'

volumes:
  double-take:

services:
  double-take:
    container_name: double-take
    image: skrashevich/double-take:latest
    restart: unless-stopped
    volumes:
      - /opt/doubletake/storage:/.storage
    ports:
      - 3000:3000

Hardware

  • amd64
  • Ubuntu
  • Docker image: skrashevich/double-take:latest


[BUG] Sublabels not returned in Frigate/[ERR_HTTP_HEADERS_SENT]

Describe the bug
Sub_labels are not showing up in Frigate, but in Double Take I can see the matching without any issues.
Another issue happens at the same time in the logs.
With the same configuration of frigate and double-take tested on version 1.13.10, it works fine.

2023-09-18 18:50:33 error: Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
2023-09-18 18:50:33     at new NodeError (node:internal/errors:387:5)
2023-09-18 18:50:33     at ServerResponse.setHeader (node:_http_outgoing:644:11)
2023-09-18 18:50:33     at ServerResponse.header (/double-take/api/node_modules/express/lib/response.js:794:10)
2023-09-18 18:50:33     at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:174:12)
2023-09-18 18:50:33     at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
2023-09-18 18:50:33     at ServerResponse.json (/double-take/api/node_modules/express/lib/response.js:278:15)
2023-09-18 18:50:33     at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:162:21)
2023-09-18 18:50:33     at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
2023-09-18 18:50:33     at /double-take/api/src/app.js:46:38
2023-09-18 18:50:33     at newFn (/double-take/api/node_modules/express-async-errors/index.js:16:20)

Version of Double Take
sha-f7b7bca (1.13.11.3)

Expected behavior
Sub-labels should arrive in frigate


Hardware

  • Architecture or platform: x64
  • OS: Windows 10
  • Browser: Chrome
  • Docker image: skrashevich/double-take:latest, skrashevich/double-take:v1.13.11.3


[FEAT] Configurable Device Tracker Reset Duration

Feature Request: Configurable Device Tracker Reset Duration

Problem:
Currently, the device tracker in Double Take sets the "home" state for 30 minutes before changing back to "away." While this default behavior works for many scenarios, there are cases, such as a doorbell application, where users often return within a shorter timeframe, making the 30-minute duration less suitable.

Solution:
I propose adding a configuration option in Double Take that allows users to set a custom duration for the device tracker to reset its state. Specifically, it would be beneficial to have the ability to configure the duration after which the device tracker state transitions from "home" to "away" after the last facial recognition detection. For example, having an option to set the reset duration to 10 seconds or any other desired timeframe would greatly enhance the software's flexibility and adaptability to various use cases.

Additional Context:
In my specific use case, the doorbell application, users typically return within a minute. To address this, I have implemented a workaround in Home Assistant by setting the MQTT topic homeassistant/sensor/double-take/joeblack2k/ "camera" state to "none" after 10 seconds when there are no new detections.

However, having this functionality integrated into Double Take itself (setting the camera to none after a configurable number of seconds/minutes) would streamline the process and make the user experience more intuitive and efficient. (screenshot included)

Thank you for considering this feature request. I believe this customization option would significantly improve the usability of Double Take for users with specific use cases like mine.

[BUG] cannot write response headers after response is sent

Describe the bug
The following error is repeatedly logged:

23-10-30 03:45:31 error: Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
at new NodeError (node:internal/errors:387:5)
at ServerResponse.setHeader (node:_http_outgoing:644:11)
at ServerResponse.header (/double-take/api/node_modules/express/lib/response.js:794:10)
at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:174:12)
at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
at ServerResponse.json (/double-take/api/node_modules/express/lib/response.js:278:15)
at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:162:21)
at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
at /double-take/api/src/app.js:46:38
at newFn (/double-take/api/node_modules/express-async-errors/index.js:16:20)

Version of Double Take
latest (I think this error does not happen in docker tag 1.13.11.4 or 1.13)

Expected behavior
I am thinking this might be causing some unknown issues and that it should not happen.

[FEAT] Can we get some of the plugins from Compreface working in double take

Is your feature request related to a problem? Please describe.
I'd like to see gender and age, for example. Not sure if it would interfere with Double Take, but with a few extra lines we should be able to work around that and still show these items.

Describe the solution you'd like
See: https://github.com/exadel-inc/CompreFace/blob/master/docs/Face-services-and-plugins.md#face-plugins

It seems the plugins can be activated simply by passing them as a comma-separated list; a sketch follows.
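
As a sketch only: if Double Take passed these through, it might look like an extra option on the CompreFace detector block (the face_plugins key is hypothetical for Double Take; the plugin names come from the CompreFace docs linked above):

detectors:
  compreface:
    url: http://localhost:8000
    key: <recognition-api-key>
    # hypothetical pass-through of CompreFace face plugins
    face_plugins: age,gender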

[BUG] Filtering in matches view doesn't work

Describe the bug
When I'm in the Matches view, I want to filter for all unknown matches (among other selections).
No matter what I select in the filter section, I always see all the photos (trained, untrained, matched, unmatched,...)

Version of Double Take
1.13.10

Expected behavior
I would expect to select only certain photos, depending on my selection in the filters.

Screenshots
image

Hardware

  • amd64
  • Ubuntu 22.04
  • Firefox
  • skrashevich/double-take:latest

[FEAT] Add Zones Option to Detectors

Is your feature request related to a problem? Please describe.
I would like to use zones within a detector. Similar to how notify has the option.

    # only notify from specific zones
    # zones:
    #   - camera: garage
    #     zone: driveway

Describe the solution you'd like
I've been playing with Rekognition, and it's been the best option for me for high-quality matches. But the costs can get high. Ideally, I would love to use zones to limit Rekognition to my Frigate porch zone while still using a cheaper option like AIServer for my other zones (front yard, driveway, etc.); see the sketch below.
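
A sketch of the requested behavior, extending the existing per-detector cameras filter with a hypothetical zones list (nothing below is a current setting):

detectors:
  rekognition:
    # hypothetical: only send images from this camera/zone pair
    zones:
      - camera: front-door
        zone: porch
  aiserver:
    url: http://localhost:32168
    # all other cameras/zones fall through to the cheaper detector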

Notes:

[Feature Request] Training Folder Download

Discussed in #142

Originally posted by Dvalin21 October 25, 2023
Would it be possible to add a backup or download button? Something simple that would just allow you to download your training folders to your computer. I find myself in a situation where this would be amazing: Frigate, CompreFace, and Double Take are all installed in a Proxmox LXC container.
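
Until a download button exists, one workaround sketch: use a bind mount instead of a named volume in docker-compose, so the training folders are plain directories on the host that can be copied anywhere (this assumes the default /.storage layout, where training images live under train/<name>):

services:
  double-take:
    container_name: double-take
    image: skrashevich/double-take
    restart: unless-stopped
    volumes:
      # host path instead of a named volume; training folders appear
      # under ./double-take/train/<name>/ on the host
      - ./double-take:/.storage
    ports:
      - 3000:3000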

[BUG] Can't run on port 1883

I don't know why I can't use the default port 1883 for MQTT; only 8883 works. I don't want to change all my other devices, since everything runs on 1883. How can I make your fork work on 1883? Thanks for your work and time :)
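
For reference, the fork's mqtt config accepts an explicit port (a user config later on this page sets it); a minimal sketch:

mqtt:
  host: 192.168.1.10  # example broker address
  port: 1883          # set the port explicitly instead of relying on the default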

[BUG] Crash in logs

Describe the bug
Large crash dump in the log, from main.go (see below).

Version of Double Take
v1.3.10


Hardware

  • OS: Docker
  • Browser (if applicable) Brave

Additional context

2023/07/08 14:48:46 daemon started
SIGILL: illegal instruction
PC=0x94bb11 m=3 sigcode=2
signal arrived during cgo execution
instruction bytes: 0xc5 0xf9 0xef 0xc0 0x48 0x89 0xe5 0x41 0x57 0x41 0x56 0x41 0x55 0x41 0x54 0x53

goroutine 5 [syscall]:
runtime.cgocall(0x8b51c0, 0xc000057d78)
runtime/cgocall.go:157 +0x5c fp=0xc000057d50 sp=0xc000057d18 pc=0x40c4bc
github.com/Kagami/go-face._Cfunc_facerec_init(0x2130350)
_cgo_gotypes.go:203 +0x49 fp=0xc000057d78 sp=0xc000057d50 pc=0x6a9fc9
github.com/Kagami/go-face.NewRecognizer({0x7ffd02ae3ece?, 0xd15740?})
github.com/Kagami/[email protected]/face.go:68 +0x67 fp=0xc000057de0 sp=0xc000057d78 pc=0x6aa467
github.com/leandroveronezi/go-recognizer.(*Recognizer).Init(0xc000180060, {0x7ffd02ae3ece, 0x16})
github.com/leandroveronezi/[email protected]/recognizer.go:50 +0x7e fp=0xc000057e08 sp=0xc000057de0 pc=0x6ca81e
main.worker()
github.com/skrashevich/double-take/recognizer/main.go:172 +0xb5 fp=0xc000057fc0 sp=0xc000057e08 pc=0x8b2495
main.looper()
github.com/skrashevich/double-take/recognizer/main.go:154 +0x19 fp=0xc000057fe0 sp=0xc000057fc0 pc=0x8b2399
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc000057fe8 sp=0xc000057fe0 pc=0x46f561
created by main.main
github.com/skrashevich/double-take/recognizer/main.go:137 +0x991

goroutine 1 [runnable]:
runtime.gopark(0xc0000700c0?, 0xc000070120?, 0xeb?, 0xe9?, 0xc0000f5ac8?)
runtime/proc.go:381 +0xd6 fp=0xc0000f5a58 sp=0xc0000f5a38 pc=0x43fd16
runtime.chansend(0xc0001840c0, 0xc0000f5b58, 0x1, 0x4760c6?)
runtime/chan.go:259 +0x42e fp=0xc0000f5ae0 sp=0xc0000f5a58 pc=0x40e46e
runtime.chansend1(0x47601c?, 0x1486dbae728?)
runtime/chan.go:145 +0x1d fp=0xc0000f5b10 sp=0xc0000f5ae0 pc=0x40e01d
runtime.sigenable(0x1)
runtime/signal_unix.go:202 +0x65 fp=0xc0000f5b58 sp=0xc0000f5b10 pc=0x4520a5
os/signal.signal_enable(0xe86cdd64?)
runtime/sigqueue.go:223 +0x73 fp=0xc0000f5b78 sp=0xc0000f5b58 pc=0x46bfd3
os/signal.enableSignal(...)
os/signal/signal_unix.go:49
os/signal.Notify.func1(0xcdda80?)
os/signal/signal.go:145 +0x73 fp=0xc0000f5ba0 sp=0xc0000f5b78 pc=0x6cbcd3
os/signal.Notify(0xc000070060, {0xc000180000, 0x3, 0xc0000f5d00?})
os/signal/signal.go:165 +0x196 fp=0xc0000f5c18 sp=0xc0000f5ba0 pc=0x6cbb56
github.com/sevlyar/go-daemon.ServeSignals()
github.com/sevlyar/[email protected]/signal.go:33 +0x1af fp=0xc0000f5d40 sp=0xc0000f5c18 pc=0x6ce62f
main.main()
github.com/skrashevich/double-take/recognizer/main.go:139 +0x996 fp=0xc0000f5f80 sp=0xc0000f5d40 pc=0x8b2236
runtime.main()
runtime/proc.go:250 +0x207 fp=0xc0000f5fe0 sp=0xc0000f5f80 pc=0x43f8e7
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc0000f5fe8 sp=0xc0000f5fe0 pc=0x46f561

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:381 +0xd6 fp=0xc000044fb0 sp=0xc000044f90 pc=0x43fd16
runtime.goparkunlock(...)
runtime/proc.go:387
runtime.forcegchelper()
runtime/proc.go:305 +0xb0 fp=0xc000044fe0 sp=0xc000044fb0 pc=0x43fb50
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc000044fe8 sp=0xc000044fe0 pc=0x46f561
created by runtime.init.6
runtime/proc.go:293 +0x25

goroutine 3 [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:381 +0xd6 fp=0xc000045780 sp=0xc000045760 pc=0x43fd16
runtime.goparkunlock(...)
runtime/proc.go:387
runtime.bgsweep(0x0?)
runtime/mgcsweep.go:278 +0x8e fp=0xc0000457c8 sp=0xc000045780 pc=0x42c02e
runtime.gcenable.func1()
runtime/mgc.go:178 +0x26 fp=0xc0000457e0 sp=0xc0000457c8 pc=0x4214e6
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc0000457e8 sp=0xc0000457e0 pc=0x46f561
created by runtime.gcenable
runtime/mgc.go:178 +0x6b

goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc00006c000?, 0xdfa260?, 0x1?, 0x0?, 0x0?)
runtime/proc.go:381 +0xd6 fp=0xc000045f70 sp=0xc000045f50 pc=0x43fd16
runtime.goparkunlock(...)
runtime/proc.go:387
runtime.(*scavengerState).park(0x127c560)
runtime/mgcscavenge.go:400 +0x53 fp=0xc000045fa0 sp=0xc000045f70 pc=0x429f53
runtime.bgscavenge(0x0?)
runtime/mgcscavenge.go:628 +0x45 fp=0xc000045fc8 sp=0xc000045fa0 pc=0x42a525
runtime.gcenable.func2()
runtime/mgc.go:179 +0x26 fp=0xc000045fe0 sp=0xc000045fc8 pc=0x421486
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc000045fe8 sp=0xc000045fe0 pc=0x46f561
created by runtime.gcenable
runtime/mgc.go:179 +0xaa

goroutine 18 [finalizer wait]:
runtime.gopark(0x1a0?, 0x127ce20?, 0xe0?, 0x24?, 0xc000044770?)
runtime/proc.go:381 +0xd6 fp=0xc000044628 sp=0xc000044608 pc=0x43fd16
runtime.runfinq()
runtime/mfinal.go:193 +0x107 fp=0xc0000447e0 sp=0xc000044628 pc=0x420527
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc0000447e8 sp=0xc0000447e0 pc=0x46f561
created by runtime.createfing
runtime/mfinal.go:163 +0x45

goroutine 6 [chan send, locked to thread]:
runtime.gopark(0x1?, 0x2?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:381 +0xd6 fp=0xc000046ea0 sp=0xc000046e80 pc=0x43fd16
runtime.chansend(0xc000184000, 0xc000046f8f, 0x1, 0x2?)
runtime/chan.go:259 +0x42e fp=0xc000046f28 sp=0xc000046ea0 pc=0x40e46e
runtime.chansend1(0xc000000002?, 0xc000046f98?)
runtime/chan.go:145 +0x1d fp=0xc000046f58 sp=0xc000046f28 pc=0x40e01d
runtime.ensureSigM.func1()
runtime/signal_unix.go:1015 +0x15a fp=0xc000046fe0 sp=0xc000046f58 pc=0x46793a
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc000046fe8 sp=0xc000046fe0 pc=0x46f561
created by runtime.ensureSigM
runtime/signal_unix.go:987 +0xbd

goroutine 7 [syscall]:
runtime.notetsleepg(0x0?, 0x0?)
runtime/lock_futex.go:236 +0x34 fp=0xc0000477a0 sp=0xc000047768 pc=0x414614
os/signal.signal_recv()
runtime/sigqueue.go:152 +0x2f fp=0xc0000477c0 sp=0xc0000477a0 pc=0x46be0f
os/signal.loop()
os/signal/signal_unix.go:23 +0x19 fp=0xc0000477e0 sp=0xc0000477c0 pc=0x6cc499
runtime.goexit()
runtime/asm_amd64.s:1598 +0x1 fp=0xc0000477e8 sp=0xc0000477e0 pc=0x46f561
created by os/signal.Notify.func1.1
os/signal/signal.go:151 +0x2a

rax 0x7f4838000b50
rbx 0x7f4838000b20
rcx 0x7f4838000020
rdx 0x7f4838000b50
rdi 0x7f4838000b50
rsi 0x2130350
rbp 0x7f4838000b50
rsp 0x7f484604a810
r8 0x3
r9 0x6e
r10 0x0
r11 0x7f48380008d0
r12 0x2130350
r13 0xa
r14 0xc0000076c0
r15 0x100
rip 0x94bb11
rflags 0x10206
cs 0x33
fs 0x0
gs 0x0
2023/07/08 17:30:34 - - - - - - - - - - - - - - -

[FEAT] Additional NVR Options

I would love to see additional NVR options besides Frigate. I am using Scrypted. I can already publish MQTT or send webhooks from the app; I just cannot figure out how to get those working with Double Take. I think others already have this working, so it may just be a matter of figuring out the correct configuration for Scrypted.
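
For what it's worth, Double Take only subscribes to Frigate-shaped MQTT topics (the startup logs further down this page show frigate/events and frigate/+/person/snapshot), so a minimal sketch, assuming Scrypted can be made to publish Frigate-compatible event JSON to a matching topic:

mqtt:
  host: localhost
  topics:
    # topic Double Take watches for Frigate-style event JSON
    frigate: frigate/events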

Home Assistant Install

I just found this repo after realizing that the jakowenko one hasn't been updated in a year. Can I run this as a Home Assistant add-on, or do I need to install it on a separate Linux box with Docker? I tried adding this to the add-on store and received an error. Should I remove the old version first? Does this one come bundled with CompreFace, or do I need a separate instance of it running? Thanks for the help!

[BUG] Not receiving Snapshots, latest, or even MQTT jpg to DT

Describe the bug
Receiving errors after install

Version of Double Take
v1.13.11.3

Expected behavior
Expected to receive snapshots from MQTT: snapshot and latest
"""
23-09-30 21:03:41 error: AxiosError: stream error: timeout of 5000ms exceeded
at RedirectableRequest.handleRequestTimeout (/double-take/api/node_modules/axios/dist/node/axios.cjs:3053:16)
at RedirectableRequest.emit (node:events:513:28)
at Timeout.<anonymous> (/double-take/api/node_modules/follow-redirects/index.js:179:12)
at listOnTimeout (node:internal/timers:559:17)
at processTimers (node:internal/timers:502:7)
23-09-30 21:04:19 error: Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
at new NodeError (node:internal/errors:387:5)
at ServerResponse.setHeader (node:_http_outgoing:644:11)
at ServerResponse.header (/double-take/api/node_modules/express/lib/response.js:794:10)
at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:174:12)
at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
at ServerResponse.json (/double-take/api/node_modules/express/lib/response.js:278:15)
at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:162:21)
at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
at /double-take/api/src/app.js:46:38
at newFn (/double-take/api/node_modules/express-async-errors/index.js:16:20)
"""
Hardware

  • Architecture or platform: [Proxmox/lxc]
  • OS: [Debian 11]
  • Browser (if applicable) [Chrome]
  • Docker image (if applicable) [skrashevich/double-take:latest]


[FEAT] ai server license plate recognition?

Is your feature request related to a problem? Please describe.
I have recently started testing CodeProject.AI Server for face recognition with Double Take and noticed that it supports LPR (license plate recognition). Any chance Double Take would support this in the future as well?

Describe the solution you'd like
Support added for LPR

Additional context

image

[BUG] Impossible to use API while running as HA addon

Describe the bug
Introduction of ipfilter in HA_ADDON mode breaks API access.

Version of Double Take
1.13.11.3

Expected behavior
Even with the listening port (3000 or otherwise) enabled, requests from anywhere other than ingress get an access-denied message, which breaks API access (e.g. for custom automation that attempts to provide another source of images). Please revert this behavior, or make it dependent on the listening port being enabled, or otherwise configurable.

[BUG] Error and never sending MQTT message when face gets matched - "TypeError: MQTT: recognize error: Cannot read properties of undefined"

Thanks a lot for your great work and taking up the ball :)

Describe the bug
After uploading a known face, it gets matched correctly. However, after that an error is logged and the MQTT message never seems to be sent:

Getting error:
error: TypeError: MQTT: recognize error: Cannot read properties of undefined (reading 'DEVICE_TRACKER_TIMEOUT')

Also, the MQTT topic does not get created, though this is expected, as the MQTT message is never sent.

23-11-13 20:24:48 info: processing manual: a3f9755a-844d-4829-989a-a29e4a71f268
23-11-13 20:24:48 info: Access granted to IP address: ::ffff:127.0.0.1
23-11-13 20:24:48 info: Access granted to IP address: ::ffff:127.0.0.1
23-11-13 20:24:48 info: done processing manual: a3f9755a-844d-4829-989a-a29e4a71f268 in 0.23 sec
23-11-13 20:24:48 info: {
  id: 'a3f9755a-844d-4829-989a-a29e4a71f268',
  duration: 0.23,
  timestamp: '2023-11-13T19:24:48.445Z',
  attempts: 1,
  camera: 'manual',
  zones: [],
  counts: { person: 1, match: 1, miss: 0, unknown: 0 },
  matches: [
    {
      name: 'ufo',
      confidence: 100,
      match: true,
      box: [Object],
      type: 'manual',
      duration: 0.2,
      detector: 'aiserver',
      filename: 'ccaf237e-1df6-4a42-a299-1e1503ce402a.jpg'
    }
  ],
  misses: [],
  unknowns: []
}
23-11-13 20:24:48 error: TypeError: MQTT: recognize error: Cannot read properties of undefined (reading 'DEVICE_TRACKER_TIMEOUT')
    at /double-take/api/src/util/mqtt.util.js:280:29
    at Array.forEach (<anonymous>)
    at module.exports.recognize (/double-take/api/src/util/mqtt.util.js:239:13)
    at module.exports.start (/double-take/api/src/controllers/recognize.controller.js:185:10)
23-11-13 20:24:48 info: undefined
23-11-13 20:24:49 info: Access granted to IP address: ::ffff:172.30.32.2

Version of Double Take
v1.13.11.6.1 from 12 November 2023

Expected behavior
After manually uploading a picture of a trained / known face via the UI, when it is recognised, an MQTT message should be sent to the MQTT broker (and, in this case, a sensor created in HA).

Screenshots
N/A

Hardware

  • Architecture or platform: [amd64]
  • OS: [Home Assistant OS - Linux]

Additional context
Detection is working well so far when manually uploading trained faces. Training is working well.

Please see my config (comments removed here for clarity):

# Double Take
mqtt:
  host: 192.168.128.235
  username: !secret mqtt_username
  password: !secret mqtt_password
  port: 1883
  client_id: double-take

  topics:
    homeassistant: homeassistant
    matches: double-take/matches
    cameras: double-take/cameras

detectors:
  aiserver:
    url: http://192.168.128.235:32168
    timeout: 15
    opencv_face_required: false

detect:
  match:
    save: true
    base64: false
    confidence: 60
    purge: 168
    min_area: 1000

  unknown:
    save: true
    base64: false
    confidence: 40
    purge: 8
    min_area: 0
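
One guess from the stack trace: mqtt.util.js reads DEVICE_TRACKER_TIMEOUT from a settings object that appears to be undefined with this config. If the fork exposes that timeout as a config key, setting it explicitly might sidestep the undefined read; the key below is hypothetical and unverified against the fork's schema:

# hypothetical key; exact name and section unverified
device_tracker_timeout: 30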

Also, there is an old issue with the original double-take, but I found nothing there that would help me: jakowenko#169

Timeout when connecting to AIserver

I don't really know whether this is a bug in Double Take, but I'm hoping it is possible to get some more debugging info.

I run a Raspberry Pi 4 with CodeProject.AI Server as a Docker image; Double Take is set up plain and simple:

  double-take:
    container_name: double-take
    image: skrashevich/double-take
#    image: jakowenko/double-take
    restart: unless-stopped
    volumes:
      - /home/pi/.config/appdata/double-take:/.storage
    ports:
      - 3000:3000

I'm running the latest version, 1.13.5:

Piece of logging:

info: Double Take v1.13.5
info: MQTT: connected
info: MQTT: subscribed to frigate/events, frigate/+/person/snapshot
info: Double Take v1.13.5
info: MQTT: connected
info: MQTT: subscribed to frigate/events, frigate/+/person/snapshot

It times out when trying to connect to the AIserver. In the config:

  aiserver:
    url: http://192.168.21.9:32168
    # number of seconds before the request times out and is aborted
    timeout: 15
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage

In the same config it does connect to CompreFace.

Both Double Take and AIServer are running on the same Docker host. It doesn't matter whether I use the published port or the internal IP of the Docker container.

On the docker host I see the port 32168 being open.

Config of the aiserver:

  codeproject.ai:
    container_name: codeproject.ai
    image: codeproject/ai-server:rpi64
    restart: unless-stopped
    volumes:
      - ${DOCKERCONFDIR}/codeproject.ai/data:/etc/codeproject/ai
      - ${DOCKERCONFDIR}/codeproject.ai/modules:/app/modules
    ports:
      - 32168:32168

Any thoughts on where to start troubleshooting? I tried running the ping command inside the double-take container, but the executable isn't there (docker exec -it double-take /bin/bash).

P.S. In AIServer I installed the face processing module and the object-detection modules too.

Difference between this and the old doubletake

Many thanks for taking this on; I wasn't aware there was an active fork.

Is there an active changelog for this one? Are the changes mainly bug fixes, or has functionality also been added? Very interested in this!

[BUG] Error 502 in the Home Assistant proxy addon

For the past couple of days, the following error has been shown in the log:

}
error: Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
    at new NodeError (node:internal/errors:387:5)
    at ServerResponse.setHeader (node:_http_outgoing:644:11)
    at ServerResponse.header (/double-take/api/node_modules/express/lib/response.js:794:10)
    at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:174:12)
    at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
    at ServerResponse.json (/double-take/api/node_modules/express/lib/response.js:278:15)
    at ServerResponse.send (/double-take/api/node_modules/express/lib/response.js:162:21)
    at ServerResponse.res.send (/double-take/api/src/middlewares/respond.js:41:18)
    at /double-take/api/src/app.js:46:38
    at newFn (/double-take/api/node_modules/express-async-errors/index.js:16:20)

Using v1.13.11.4

[BUG] MQTT connection not possible on other port

Describe the bug
Connection to MQTT server not possible if not on default port

Version of Double Take
since c94d769

Expected behavior
Since the refactoring of the MQTT connection, it is no longer possible to connect to the server when the port isn't the default.

[BUG] ui -> path setting broken w/ v1.13.11

Describe the bug

After upgrading to v1.13.11 while running Docker Compose, I started getting these 404 errors.
Screenshot 2023-08-17 at 6 53 47 PM

I tracked it back to the ui: -> path config setting.

ui:
  path: /matches

It looks like the configured path isn't being added to the API requests made from the UI.
Screenshot 2023-08-17 at 6 58 05 PM

I confirmed that removing ui: -> path from my config fixed loading. But this is problematic long term, as I use nginx as an auth proxy_pass in front of it.

Version of Double Take
1.13.11

Expected behavior
I expect the ui -> path setting not to break the web interface.

Hardware

  • Architecture or platform: x86_64, Intel(R) Celeron(R) N5095A @ 2.00GHz
  • OS: Ubuntu
  • Browser (if applicable): chrome
  • Docker image (if applicable) skrashevich/double-take:latest

Additional context
Nope.

[BUG] exec /bin/bash: exec format error

Describe the bug
Home Assistant add-on fails to start with an exec /bin/bash: exec format error message in the Logs tab.

It was working for a couple of weeks before, without any changes in configuration. The only thing that could have changed is the Home Assistant version. After noticing the problem I upgraded to the latest versions of the OS and Core, but it didn't help. Reinstalling the add-on also didn't solve the issue.

Version of Double Take
1.13.11

Expected behavior
Application starts.

Hardware

  • Architecture or platform: raspberry4
  • OS: Home Assistant
  • Docker image (if applicable): skrashevich/double-take:latest

[BUG] Error loading Double Take on v1.13.11

Describe the bug
Double Take fails to load, including previously recognised images.

Version of Double Take
1.13.11

Expected behavior
Interface opens with no errors

Screenshots
IMG_6474

When I click on Config in DT, I see the following:
IMG_6475

Hardware

  • HA running in a VM on Proxmox

Additional context
HA is supervisor 2023.8.2
OS is 10.4

[BUG] Aiserver Face Processing, no face found in image

Creating an issue here based on the recommendation of Mr. S. Krashevich on the CodeProject.AI discussion forum. I hope I have submitted all the information correctly; please let me know if anything is missing. It's my first time posting on GitHub, so the formatting might be off a little. Please correct me if I have done something wrong!

CodeProject.AI Question Link

Describe the bug
I currently have the CUDA-enabled version of the CodeProject.AI server running in a Docker container, alongside Frigate and Double Take. In the config section of Double Take, AIServer shows as green, so it can be reached, and it does find faces. The problem is that results are very random: most often the Double Take matches page shows a red bounding box on the face, lists aiserver, gives a 0 confidence level, and labels the face unknown, even when it is super clear and in the database. Clicking the refresh button does nothing. In CPAI's logs, I'm noticing a lot of:
Face Processing: No Face found in image
in red.
For the snapshots that had this problem, I downloaded the image directly from the matches page, went to the CPAI project explorer manually, and submitted the image for recognition, and it does an excellent job. It's just that when Double Take submits automatically, it randomly (though not always) fails to find a face in the image...

Note: after typing all of this out, I'm wondering whether CPAI is running out of GPU memory as well. Please let me know if you think that's likely.

Version of Double Take
v1.13.10
I'm using the latest pull.

Expected behavior
The expected behavior is for it to identify every face.


Hardware

  • Architecture or platform:
    Intel i5-8500 with an NVIDIA GeForce GTX 750 (1 GB) and 8 GB RAM.
  • OS: Ubuntu 20.04.6 LTS 64-bit
  • Browser Firefox
  • Docker image skrashevich/double-take:latest

Additional context

My Double-take config file:
# Double Take
# Learn more at https://github.com/skrashevich/double-take/#configuration
# frigate settings (default: shown below)
frigate:
  url: <url>

  # if double take should send matches back to frigate as a sub label
  # NOTE: requires frigate 0.11.0+
  update_sub_labels: true

  # stop the processing loop if a match is found
  # if set to false all image attempts will be processed before determining the best match
  stop_on_match: true

  # ignore detected areas so small that face recognition would be difficult
  # quadrupling the min_area of the detector is a good start
  # does not apply to MQTT events
  min_area: 0

  # object labels that are allowed for facial recognition
  labels:
    - person

  attempts:
    # number of times double take will request a frigate latest.jpg for facial recognition
    latest: 10
    # number of times double take will request a frigate snapshot.jpg for facial recognition
    snapshot: 10
    # process frigate images from frigate/+/person/snapshot topics
    mqtt: true
    # add a delay expressed in seconds between each detection loop
    delay: 0

  image:
    # height of frigate image passed for facial recognition
    height: 500

  # only process images from specific cameras
  cameras:
     - CAM1
     - CAM2
    # - garage

  # only process images from specific zones
  #zones: []
    # - camera: garage
    #   zone: driveway

  # override frigate attempts and image per camera
  #events: []
    # front-door:
    #   attempts:
    #     # number of times double take will request a frigate latest.jpg for facial recognition
    #     latest: 5
    #     # number of times double take will request a frigate snapshot.jpg for facial recognition
    #     snapshot: 5
    #     # process frigate images from frigate/<camera-name>/person/snapshot topic
    #     mqtt: false
    #     # add a delay expressed in seconds between each detection loop
    #     delay: 1

    #   image:
    #     # height of frigate image passed for facial recognition (only if using default latest.jpg and snapshot.jpg)
    #     height: 1000
    #     # custom image that will be used in place of latest.jpg
    #     latest: http://camera-url.com/image.jpg
    #     # custom image that will be used in place of snapshot.jpg
    #     snapshot: http://camera-url.com/image.jpg
    
# detector settings (default: shown below)
detectors:
  aiserver:
    url: <url>
    # number of seconds before the request times out and is aborted
    timeout: 20
    # require opencv to find a face before processing with detector
    opencv_face_required: false
    # only process images from specific cameras, if omitted then all cameras will be processed
    # cameras:
    #   - front-door
    #   - garage
# enable mqtt subscribing and publishing (default: shown below)
mqtt:
  host: <host>
  username: <username>
  password: <password>
  # client_id: frigate

  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    #  mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: double-take/matches
    # mqtt topic where matches are published by camera name
    cameras: double-take/cameras
# camera settings (default: shown below)
detect:
  match:
    # save match images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed to consider a result a match
    confidence: 65
    # hours to keep match images until they are deleted
    purge: 168
    # minimum area in pixels to consider a result a match
    min_area: 2000

  unknown:
    # save unknown images
    save: true
    # include base64 encoded string in api results and mqtt messages
    # options: true, false, box
    base64: false
    # minimum confidence needed before classifying a match name as unknown
    confidence: 40
    # hours to keep unknown images until they are deleted
    purge: 8
    # minimum area in pixels to keep an unknown result
    min_area: 0
cameras:
  CAM1:
    # apply masks before processing image
     masks:
    #   # list of x,y coordinates to define the polygon of the zone
       coordinates:
         - 1920,0,1920,328,1638,305,1646,0
    #   # show the mask on the final saved image (helpful for debugging)
       visible: true
    #   # size of camera stream used in resizing masks
       size: 1920x1080

    # override global detect variables per camera
     detect:
       match:
    #     # save match images
         save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
         base64: true
    #     # minimum confidence needed to consider a result a match
         confidence: 60
    #     # minimum area in pixels to consider a result a match
         min_area: 2000

       unknown:
    #     # save unknown images
         save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
         base64: true
    #     # minimum confidence needed before classifying a match name as unknown
         confidence: 40
    #     # minimum area in pixels to keep an unknown result
         min_area: 0
         
  CAM2:
    # apply masks before processing image
     masks:
    #   # list of x,y coordinates to define the polygon of the zone
       coordinates:
          - 0,0,2688,0,2688,1520,0,1520
    #   # show the mask on the final saved image (helpful for debugging)
       visible: true
    #   # size of camera stream used in resizing masks
       size: 2688x1520

    # override global detect variables per camera
     detect:
       match:
    #     # save match images
         save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
         base64: false
    #     # minimum confidence needed to consider a result a match
         confidence: 65
    #     # minimum area in pixels to consider a result a match
         min_area: 2000

       unknown:
    #     # save unknown images
         save: true
    #     # include base64 encoded string in api results and mqtt messages
    #     # options: true, false, box
         base64: false
    #     # minimum confidence needed before classifying a match name as unknown
         confidence: 40
    #     # minimum area in pixels to keep an unknown result
         min_area: 0
notify:
  telegram:
    token: <token>
    chat_id: <id>
    
     

Then my frigate config file: 

mqtt:
  # Optional: Enable mqtt server (default: shown below)
  enabled: True
  # Required: host name
  host: <url>
  # Optional: port (default: shown below)
  port: 1883
  # Optional: topic prefix (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  topic_prefix: frigate
  # Optional: client id (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  client_id: frigate
  # Optional: user
  # NOTE: MQTT user can be specified with an environment variables that must begin with 'FRIGATE_'.
  #       e.g. user: '{FRIGATE_MQTT_USER}'
  user: <username>
  # Optional: password
  # NOTE: MQTT password can be specified with an environment variables that must begin with 'FRIGATE_'.
  #       e.g. password: '{FRIGATE_MQTT_PASSWORD}'
  password: <password>
  

# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  # Required: name of the detector
  detector_name:
    # Required: type of the detector
    # Frigate provided types include 'cpu', 'edgetpu', and 'openvino' (default: shown below)
    # Additional detector types can also be plugged in.
    # Detectors may require additional configuration.
    # Refer to the Detectors configuration page for more information.
    type: cpu

# Optional: Database configuration
database:
  # The path to store the SQLite DB (default: shown below)
  path: /media/frigate/frigate.db


# Optional: logger verbosity settings
logger:
  # Optional: Default log verbosity (default: shown below)
  default: info
  # Optional: Component specific logger overrides
  logs:
    frigate.event: debug

# Optional: ffmpeg configuration
# More information about presets at https://docs.frigate.video/configuration/ffmpeg_presets
ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning -threads 2
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: preset-vaapi
  # Optional: global input args (default: shown below)
  input_args: preset-rtsp-generic
  # Optional: global output args
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -threads 2 -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: preset-record-generic
    # Optional: output args for rtmp streams (default: shown below)
    rtmp: preset-rtmp-generic

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
  #width: 1280
  # Optional: height of the frame for the input with the detect role (default: shown below)
  #height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 5
  # Optional: enables detection for the camera (default: True)
  enabled: True
  # Optional: Number of frames without a detection before Frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25
  # Optional: Configuration for stationary object tracking
  stationary:
    # Optional: Frequency for confirming stationary objects (default: shown below)
    # When set to 0, object detection will not confirm stationary objects until movement is detected.
    # If set to 10, object detection will run to confirm the object still exists on every 10th frame.
    interval: 0
    # Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
    threshold: 50
    # Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
    # This can help with false positives for objects that should only be stationary for a limited amount of time.
    # It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
    # car at the default.
    # WARNING: Setting these values overrides default behavior and disables stationary object tracking.
    #          There are very few situations where you would want it disabled. It is NOT recommended to
    #          copy these values from the example config into your config unless you know they are needed.
    max_frames:
      # Optional: Default for all object types (default: not set, track forever)
      default: 3000
      # Optional: Object specific values
      objects:
        person: 1000

# Optional: Object configuration
# NOTE: Can be overridden at the camera level
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
    - eye glasses
    - bottle
    - cup
    - chair
    - desk
    - laptop
    - mouse
    - keyboard
    - cell phone
    - book
    - scissors
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  #mask: 0,0,1000,0,1000,200,0,200
  # Optional: filters to reduce false positives for specific object types
  #filters:
    #person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
      #min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
      #max_area: 100000
      # Optional: minimum width/height of the bounding box for the detected object (default: 0)
      #min_ratio: 0.5
      # Optional: maximum width/height of the bounding box for the detected object (default: 24000000)
      #max_ratio: 2.0
      # Optional: minimum score for the object to initiate tracking (default: shown below)
      #min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
      #threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
      #mask: 0,0,1000,0,1000,200,0,200

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  # WARNING: If recording is disabled in the config, turning it on via
  #          the UI or MQTT later will have no effect.
  enabled: true
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 60
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 30
    # Optional: Mode for retention. Available options are: all, motion, and active_objects
    #   all - save all recording segments regardless of activity
    #   motion - save all recordings segments with any detected motion
    #   active_objects - save all recording segments with active/moving objects
    # NOTE: this mode only applies when the days setting above is greater than 0
    mode: all
  # Optional: Event recording settings
  events:
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 5
    # Optional: Number of seconds after the event to include (default: shown below)
    post_capture: 5
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 10
      # Optional: Mode for retention. (default: shown below)
      #   all - save all recording segments for events regardless of activity
      #   motion - save all recordings segments for events with any detected motion
      #   active_objects - save all recording segments for event with active/moving objects
      #
      # NOTE: If the retain mode for the camera is more restrictive than the mode configured
      #       here, the segments will already be gone by the time this mode is applied.
      #       For example, if the camera retain mode is "motion", the segments without motion are
      #       never stored, so setting the mode to "all" here won't bring them back.
      mode: motion
      # Optional: Per object retention days
      objects:
        person: 15

# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.2.0)
go2rtc:
  streams:
    CAM1:
      - <url>
    CAM2:
      - <url>

# Optional: in-feed timestamp style configuration
# NOTE: Can be overridden at the camera level
timestamp_style:
  # Optional: Position of the timestamp (default: shown below)
  #           "tl" (top left), "tr" (top right), "bl" (bottom left), "br" (bottom right)
  position: "tl"
  # Optional: Format specifier conform to the Python package "datetime" (default: shown below)
  #           Additional Examples:
  #             german: "%d.%m.%Y %H:%M:%S"
  format: "%d/%m/%Y %H:%M:%S"
  # Optional: Color of font
  color:
    # All Required when color is specified (default: shown below)
    red: 255
    green: 255
    blue: 255
  # Optional: Line thickness of font (default: shown below)
  thickness: 2
  # Optional: Effect of lettering (default: shown below)
  #           None (No effect),
  #           "solid" (solid background in inverse color of font)
  #           "shadow" (shadow for font)
  effect: "solid"

# Required
cameras:
  # Required: name of the camera
  CAM1:
    # Optional: Enable/Disable the camera (default: shown below).
    # If disabled: config is used but no live stream and no capture etc.
    # Events/Recordings are still viewable.
    enabled: True
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        # Required: the path to the stream
        # NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
        - path: rtsp://127.0.0.1:8554/CAM1
          input_args: preset-rtsp-restream
          # Required: list of roles for this stream. valid values are: detect,record,rtmp
          # NOTICE: In addition to assigning the record and rtmp roles,
          # they must also be enabled in the camera config.
          roles:
            - detect
            #- record
            - rtmp
         

    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60

    # Optional: zones for this camera
    #zones:
      # Required: name of the zone
      # NOTE: This must be different than any camera names, but can match with another zone on another
      #       camera.
      #front_steps:
        # Required: List of x,y coordinates to define the polygon of the zone.
        # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
        #coordinates: 545,1077,747,939,788,805
        # Optional: List of objects that can trigger this zone (default: all tracked objects)
        #objects:
         # - person
        # Optional: Zone level object filters.
        # NOTE: The global and camera filters are applied upstream.
        #filters:
          #person:
            #min_area: 5000
            #max_area: 100000
            #threshold: 0.7

    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: false
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: false
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 500
      # Optional: jpeg encode quality (default: shown below)
      #quality: 70
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      #required_zones: []

    # Optional: Configuration for how camera is handled in the GUI.
    ui:
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 0
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

  # Required: name of the camera
  CAM2:
    # Optional: Enable/Disable the camera (default: shown below).
    # If disabled: config is used but no live stream and no capture etc.
    # Events/Recordings are still viewable.
    enabled: True
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      
      inputs:
        # Required: the path to the stream
        # NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
        - path: rtsp://127.0.0.1:8554/CAM2
          input_args: preset-rtsp-restream
          # Required: list of roles for this stream. valid values are: detect,record,rtmp
          # NOTICE: In addition to assigning the record and rtmp roles,
          # they must also be enabled in the camera config.
          
          roles:
            - detect
            #- record
            - rtmp

    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60

    # Optional: zones for this camera
    #zones:
      # Required: name of the zone
      # NOTE: This must be different than any camera names, but can match with another zone on another
      #       camera.
      #front_steps:
        # Required: List of x,y coordinates to define the polygon of the zone.
        # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
        #coordinates: 545,1077,747,939,788,805
        # Optional: List of objects that can trigger this zone (default: all tracked objects)
        #objects:
         # - person
        # Optional: Zone level object filters.
        # NOTE: The global and camera filters are applied upstream.
        #filters:
          #person:
            #min_area: 5000
            #max_area: 100000
            #threshold: 0.7

    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: false
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: false
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 500
      # Optional: jpeg encode quality (default: shown below)
      quality: 100
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      #required_zones: []

    # Optional: Configuration for how camera is handled in the GUI.
    ui:
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 1
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True
  
# Optional
ui:
  # Optional: Set the default live mode for cameras in the UI (default: shown below)
  live_mode: mse
  # Optional: Set a timezone to use in the UI (default: use browser local time)
  timezone: Asia/Colombo
  # Optional: Use an experimental recordings / camera view UI (default: shown below)
  use_experimental: False
  # Optional: Set the time format used.
  # Options are browser, 12hour, or 24hour (default: shown below)
  time_format: 12hour
  # Optional: Set the date style for a specified length.
  # Options are: full, long, medium, short
  # Examples:
  #    short: 2/11/23
  #    medium: Feb 11, 2023
  #    full: Saturday, February 11, 2023
  # (default: shown below).
  date_style: full
  # Optional: Set the time style for a specified length.
  # Options are: full, long, medium, short
  # Examples:
  #    short: 8:14 PM
  #    medium: 8:15:22 PM
  #    full: 8:15:22 PM Mountain Standard Time
  # (default: shown below).
  time_style: medium
  # Optional: Ability to manually override the date / time styling to use strftime format
  # https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
  # possible values are shown above (default: not set)
  strftime_fmt: "%Y/%m/%d %H:%M"

# Optional: Telemetry configuration
telemetry:
  # Optional: Enable the latest version outbound check (default: shown below)
  # NOTE: If you use the HomeAssistant integration, disabling this will prevent it from reporting new versions
  version_check: True

    


A snippet of the log from CPAI:


15:40:58:Face Processing: Queue request for Face Processing command 'recognize' (...64c617) took 279ms
15:40:58:Face Processing: No face found in image
15:40:58:Face Processing: Queue request for Face Processing command 'recognize' (...173772) took 345ms
15:41:00:Face Processing: No face found in image
15:41:00:Face Processing: Queue request for Face Processing command 'recognize' (...97254c) took 208ms
15:41:00:Face Processing: No face found in image
15:41:00:Face Processing: Queue request for Face Processing command 'recognize' (...c95ead) took 279ms
15:41:00:Face Processing: No face found in image
15:41:00:Face Processing: Queue request for Face Processing command 'recognize' (...ce7a76) took 238ms
15:41:01:Face Processing: No face found in image
15:41:01:Face Processing: Queue request for Face Processing command 'recognize' (...9b4a6b) took 200ms
15:41:04:Face Processing: No face found in image
15:41:04:Face Processing: Queue request for Face Processing command 'recognize' (...25b123) took 192ms
15:41:04:Face Processing: No face found in image
15:41:04:Face Processing: Queue request for Face Processing command 'recognize' (...8268b4) took 316ms
15:41:04:Face Processing: No face found in image
15:41:04:Face Processing: Queue request for Face Processing command 'recognize' (...1037b8) took 388ms
15:41:05:Face Processing: No face found in image
15:41:05:Face Processing: Queue request for Face Processing command 'recognize' (...63d37e) took 194ms
15:41:05:Face Processing: No face found in image
15:41:05:Face Processing: Queue request for Face Processing command 'recognize' (...861c7e) took 169ms
15:41:08:Face Processing: No face found in image
15:41:08:Face Processing: Queue request for Face Processing command 'recognize' (...45158a) took 218ms
15:41:08:Face Processing: No face found in image
15:41:08:Face Processing: Queue request for Face Processing command 'recognize' (...d54edc) took 207ms
15:41:11:Face Processing: No face found in image
15:41:11:Face Processing: Queue request for Face Processing command 'recognize' (...5fb51d) took 217ms
15:41:15:Face Processing: No face found in image



Build error: doesn't build on my ARM64 system (Raspberry Pi 4)

➜ double-take git:(beta) uname -a
Linux docker 5.15.61-v8+ #1579 SMP PREEMPT Fri Aug 26 11:16:44 BST 2022 aarch64 GNU/Linux

Error:

➜  double-take git:(beta) docker build -f ./.build/Dockerfile -t double-take .
[+] Building 55.7s (28/39)
 => [internal] load build definition from Dockerfile                                                                                                       0.1s
 => => transferring dockerfile: 2.88kB                                                                                                                     0.0s
 => [internal] load .dockerignore                                                                                                                          0.0s
 => => transferring context: 118B                                                                                                                          0.0s
 => resolve image config for docker.io/docker/dockerfile-upstream:master-labs                                                                              0.9s
 => CACHED docker-image://docker.io/docker/dockerfile-upstream:master-labs@sha256:c1838e7edf8678c5a4e18c7bdc5c35070f377efed0fe8ce48ec000bc1ba02e50         0.0s
 => [internal] load metadata for docker.io/library/node:16                                                                                                 0.5s
 => [internal] load metadata for gcr.io/distroless/nodejs16-debian11:latest                                                                                0.5s
 => CACHED [stage-4 1/4] FROM gcr.io/distroless/nodejs16-debian11@sha256:221337292c9fcb2614697d135ff5d176d95fb22730bd6ea117ba593abdd5e491                  0.0s
 => [internal] load build context                                                                                                                          0.1s
 => => transferring context: 22.29kB                                                                                                                       0.0s
 => https://sqlite.org/2023/sqlite-amalgamation-3410000.zip                                                                                                0.7s
 => [frontend-builder  1/10] FROM docker.io/library/node:16@sha256:241f152c0dc9d3efcbd6a4426f52dc50fa78f3a63cff55b2419dc2bf48efe705                        0.0s
 => CACHED [build  2/15] RUN rm -f /etc/apt/apt.conf.d/docker-clean   && echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' >/etc/apt/apt.conf.d/k  0.0s
 => CACHED [build  3/15] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked --mount=type=cache,target=/var/lib/apt,sharing=locked <<EOT (apt -y   0.0s
 => CACHED [build  4/15] WORKDIR /double-take/api                                                                                                          0.0s
 => CACHED [build  5/15] COPY /api/package.json .                                                                                                          0.0s
 => CACHED [frontend-builder  2/10] WORKDIR /build                                                                                                         0.0s
 => CACHED [frontend-builder  3/10] RUN apt -y update && apt install -y --no-install-recommends curl bash unzip                                            0.0s
 => CACHED [frontend-builder  4/10] RUN curl -fsSL https://bun.sh/install | bash                                                                           0.0s
 => CACHED [frontend-builder  5/10] COPY /frontend/package.json .                                                                                          0.0s
 => CACHED [frontend-builder  6/10] RUN --mount=type=cache,target=/root/.npm bun install --cache-dir=/root/.npm/_buncache                                  0.0s
 => CACHED [frontend-builder  7/10] COPY /frontend/src ./src                                                                                               0.0s
 => CACHED [frontend-builder  8/10] COPY /frontend/public ./public                                                                                         0.0s
 => CACHED [frontend-builder  9/10] COPY /frontend/.env.production /frontend/vue.config.js /frontend/vite.config.js /frontend/.eslintrc.js /frontend/inde  0.0s
 => CACHED [frontend-builder 10/10] RUN --mount=type=cache,target=/root/.npm npm run build                                                                 0.0s
 => ERROR [build  6/15] RUN --mount=type=cache,target=/root/.npm npm install                                                                              52.8s
 => CACHED [better-sqlite3-builder 2/5] ADD https://sqlite.org/2023/sqlite-amalgamation-3410000.zip /tmp/                                                  0.0s
 => CACHED [better-sqlite3-builder 3/5] WORKDIR /build                                                                                                     0.0s
 => CACHED [better-sqlite3-builder 4/5] RUN --mount=type=cache,target=/root/.npm <<EOT (#!/bin/bash...)                                                    0.0s
 => CANCELED [better-sqlite3-builder 5/5] RUN npm install --build-from-source --install-links better-sqlite3@'^8.2.0' --sqlite3="/src/sqlite"             52.9s
------
 > [build  6/15] RUN --mount=type=cache,target=/root/.npm npm install:
#14 33.09 npm WARN deprecated [email protected]: Use your platform's native performance.now() and performance.timeOrigin.
#14 52.43 npm ERR! code 1
#14 52.43 npm ERR! path /double-take/api/node_modules/canvas
#14 52.44 npm ERR! command failed
#14 52.44 npm ERR! command sh -c -- node-pre-gyp install --fallback-to-build --update-binary
#14 52.44 npm ERR! Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --update-binary --module=/double-take/api/node_modules/canvas/build/Release/canvas.node --module_name=canvas --module_path=/double-take/api/node_modules/canvas/build/Release --napi_version=8 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v93' (1)
#14 52.44 npm ERR! node-pre-gyp info it worked if it ends with ok
#14 52.44 npm ERR! node-pre-gyp info using [email protected]
#14 52.44 npm ERR! node-pre-gyp info using [email protected] | linux | arm64
#14 52.44 npm ERR! node-pre-gyp http GET https://github.com/Automattic/node-canvas/releases/download/v2.11.2/canvas-v2.11.2-node-v93-linux-glibc-arm64.tar.gz
#14 52.45 npm ERR! node-pre-gyp ERR! install response status 404 Not Found on https://github.com/Automattic/node-canvas/releases/download/v2.11.2/canvas-v2.11.2-node-v93-linux-glibc-arm64.tar.gz
#14 52.45 npm ERR! node-pre-gyp WARN Pre-built binaries not installable for canvas@2.11.2 and node@16.20.0 (node-v93 ABI, glibc) (falling back to source compile with node-gyp)
#14 52.45 npm ERR! node-pre-gyp WARN Hit error response status 404 Not Found on https://github.com/Automattic/node-canvas/releases/download/v2.11.2/canvas-v2.11.2-node-v93-linux-glibc-arm64.tar.gz
#14 52.45 npm ERR! gyp info it worked if it ends with ok
#14 52.45 npm ERR! gyp info using node-gyp@9.1.0
#14 52.45 npm ERR! gyp info using node@16.20.0 | linux | arm64
#14 52.45 npm ERR! gyp info ok
#14 52.45 npm ERR! gyp info it worked if it ends with ok
#14 52.45 npm ERR! gyp info using node-gyp@9.1.0
#14 52.45 npm ERR! gyp info using node@16.20.0 | linux | arm64
#14 52.45 npm ERR! gyp info find Python using Python version 3.7.3 found at "/usr/bin/python3"
#14 52.45 npm ERR! gyp http GET https://nodejs.org/download/release/v16.20.0/node-v16.20.0-headers.tar.gz
#14 52.45 npm ERR! gyp http 200 https://nodejs.org/download/release/v16.20.0/node-v16.20.0-headers.tar.gz
#14 52.45 npm ERR! gyp http GET https://nodejs.org/download/release/v16.20.0/SHASUMS256.txt
#14 52.45 npm ERR! gyp http 200 https://nodejs.org/download/release/v16.20.0/SHASUMS256.txt
#14 52.45 npm ERR! gyp info spawn /usr/bin/python3
#14 52.45 npm ERR! gyp info spawn args [
#14 52.45 npm ERR! gyp info spawn args   '/usr/local/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',
#14 52.45 npm ERR! gyp info spawn args   'binding.gyp',
#14 52.45 npm ERR! gyp info spawn args   '-f',
#14 52.45 npm ERR! gyp info spawn args   'make',
#14 52.45 npm ERR! gyp info spawn args   '-I',
#14 52.45 npm ERR! gyp info spawn args   '/double-take/api/node_modules/canvas/build/config.gypi',
#14 52.45 npm ERR! gyp info spawn args   '-I',
#14 52.45 npm ERR! gyp info spawn args   '/usr/local/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',
#14 52.45 npm ERR! gyp info spawn args   '-I',
#14 52.45 npm ERR! gyp info spawn args   '/root/.cache/node-gyp/16.20.0/include/node/common.gypi',
#14 52.45 npm ERR! gyp info spawn args   '-Dlibrary=shared_library',
#14 52.46 npm ERR! gyp info spawn args   '-Dvisibility=default',
#14 52.46 npm ERR! gyp info spawn args   '-Dnode_root_dir=/root/.cache/node-gyp/16.20.0',
#14 52.46 npm ERR! gyp info spawn args   '-Dnode_gyp_dir=/usr/local/lib/node_modules/npm/node_modules/node-gyp',
#14 52.46 npm ERR! gyp info spawn args   '-Dnode_lib_file=/root/.cache/node-gyp/16.20.0/<(target_arch)/node.lib',
#14 52.46 npm ERR! gyp info spawn args   '-Dmodule_root_dir=/double-take/api/node_modules/canvas',
#14 52.46 npm ERR! gyp info spawn args   '-Dnode_engine=v8',
#14 52.46 npm ERR! gyp info spawn args   '--depth=.',
#14 52.46 npm ERR! gyp info spawn args   '--no-parallel',
#14 52.46 npm ERR! gyp info spawn args   '--generator-output',
#14 52.46 npm ERR! gyp info spawn args   'build',
#14 52.46 npm ERR! gyp info spawn args   '-Goutput_dir=.'
#14 52.46 npm ERR! gyp info spawn args ]
#14 52.46 npm ERR! Package pangocairo was not found in the pkg-config search path.
#14 52.46 npm ERR! Perhaps you should add the directory containing `pangocairo.pc'
#14 52.46 npm ERR! to the PKG_CONFIG_PATH environment variable
#14 52.46 npm ERR! No package 'pangocairo' found
#14 52.46 npm ERR! gyp: Call to 'pkg-config pangocairo --libs' returned exit status 1 while in binding.gyp. while trying to load binding.gyp
#14 52.46 npm ERR! gyp ERR! configure error
#14 52.46 npm ERR! gyp ERR! stack Error: `gyp` failed with exit code: 1
#14 52.46 npm ERR! gyp ERR! stack     at ChildProcess.onCpExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:284:16)
#14 52.46 npm ERR! gyp ERR! stack     at ChildProcess.emit (node:events:513:28)
#14 52.46 npm ERR! gyp ERR! stack     at Process.ChildProcess._handle.onexit (node:internal/child_process:293:12)
#14 52.46 npm ERR! gyp ERR! System Linux 5.15.61-v8+
#14 52.46 npm ERR! gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "configure" "--fallback-to-build" "--update-binary" "--module=/double-take/api/node_modules/canvas/build/Release/canvas.node" "--module_name=canvas" "--module_path=/double-take/api/node_modules/canvas/build/Release" "--napi_version=8" "--node_abi_napi=napi" "--napi_build_version=0" "--node_napi_label=node-v93"
#14 52.46 npm ERR! gyp ERR! cwd /double-take/api/node_modules/canvas
#14 52.46 npm ERR! gyp ERR! node -v v16.20.0
#14 52.46 npm ERR! gyp ERR! node-gyp -v v9.1.0
#14 52.46 npm ERR! gyp ERR! not ok
#14 52.46 npm ERR! node-pre-gyp ERR! build error
#14 52.46 npm ERR! node-pre-gyp ERR! stack Error: Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --update-binary --module=/double-take/api/node_modules/canvas/build/Release/canvas.node --module_name=canvas --module_path=/double-take/api/node_modules/canvas/build/Release --napi_version=8 --node_abi_napi=napi --napi_build_version=0 --node_napi_label=node-v93' (1)
#14 52.46 npm ERR! node-pre-gyp ERR! stack     at ChildProcess.<anonymous> (/double-take/api/node_modules/@mapbox/node-pre-gyp/lib/util/compile.js:89:23)
#14 52.46 npm ERR! node-pre-gyp ERR! stack     at ChildProcess.emit (node:events:513:28)
#14 52.46 npm ERR! node-pre-gyp ERR! stack     at maybeClose (node:internal/child_process:1100:16)
#14 52.46 npm ERR! node-pre-gyp ERR! stack     at Process.ChildProcess._handle.onexit (node:internal/child_process:304:5)
#14 52.46 npm ERR! node-pre-gyp ERR! System Linux 5.15.61-v8+
#14 52.46 npm ERR! node-pre-gyp ERR! command "/usr/local/bin/node" "/double-take/api/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build" "--update-binary"
#14 52.46 npm ERR! node-pre-gyp ERR! cwd /double-take/api/node_modules/canvas
#14 52.46 npm ERR! node-pre-gyp ERR! node -v v16.20.0
#14 52.46 npm ERR! node-pre-gyp ERR! node-pre-gyp -v v1.0.10
#14 52.46 npm ERR! node-pre-gyp ERR! not ok
#14 52.47
#14 52.47 npm ERR! A complete log of this run can be found in:
#14 52.47 npm ERR!     /root/.npm/_logs/2023-04-14T14_04_54_796Z-debug-0.log
------
process "/bin/sh -c npm install" did not complete successfully: exit code: 1

Any pointers on what I can do to fix this?
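For anyone hitting the same wall: the key line in the log is the `pkg-config pangocairo --libs` failure. There is no pre-built `canvas` binary for node-v93/linux/arm64 (hence the 404 above), so npm falls back to compiling node-canvas from source, which requires the Cairo/Pango development headers. A minimal fix sketch, assuming a Debian-based builder stage (the package names are the Debian/Ubuntu ones documented by node-canvas; the base image and stage name here are illustrative, not taken from the project's actual Dockerfile):

```
# Hypothetical builder-stage addition (illustrative base image and stage name):
# install the native libraries node-canvas needs to compile from source.
FROM node:16-bullseye AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
      build-essential \
      pkg-config \
      libcairo2-dev \
      libpango1.0-dev \
      libjpeg-dev \
      libgif-dev \
      librsvg2-dev \
  && rm -rf /var/lib/apt/lists/*
# ...the rest of the stage (COPY, npm install, etc.) follows as before.
```

With `libpango1.0-dev` installed, `pkg-config pangocairo --libs` should resolve and the source compile can proceed; on Alpine-based images the equivalent packages would be `cairo-dev`, `pango-dev`, `jpeg-dev`, `giflib-dev`, and `librsvg-dev`.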
