roflcoopter / viseron

Self-hosted, local only NVR and AI Computer Vision software. With features such as object detection, motion detection, face recognition and more, it gives you the power to keep an eye on your home, office or any other place you want to monitor.

License: MIT License

Dockerfile 0.31% Python 77.21% Shell 0.18% CMake 0.08% HTML 0.26% CSS 0.09% JavaScript 0.21% TypeScript 21.60% Mako 0.06%
coral cuda darknet edgetpu face-recognition google-coral hacktoberfest hardware-acceleration ip-camera license-plate-recognition motion-detection network-video-capture network-video-recorder nvr object-detection rtsp surveillance tensorflow viseron yolo

viseron's People

Contributors

danielperna84, developideas, l-maia, magicmonkey, olekenneth, roflcoopter, twodarek


viseron's Issues

[FR] MQTT subs to forcibly trigger camera, reload config, etc.

Similar to my signal handler FR, this would have Viseron subscribe to one or a few topics so I could send a message and have Viseron trigger a normal recording (i.e. the triggered recording would have a length equal to the sum of the lookback and timeout values in the recorder section).

Other topics that would be very helpful for Viseron to subscribe to would be a way to reload the config and maybe even forcibly run object detection (i.e. simulate a motion trigger).

Piggybacking on this FR - having Viseron publish when it sees motion (with the message containing data such as the bounding box coordinates, area and frame count) would make it possible to have a third party tool watch the camera feed and draw bounding boxes around the image as Viseron does its thing.
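A minimal sketch of how such a command subscription might be dispatched. The topic names and handler behavior here are hypothetical, not Viseron's actual API:

```python
# Sketch of an MQTT command dispatcher for the requested features.
# Topic names and handlers are illustrative assumptions, not Viseron's API.
def make_dispatcher(handlers):
    """Return a callback that maps command topics to handler functions."""
    def on_message(topic, payload):
        handler = handlers.get(topic)
        if handler is None:
            return f"ignored unknown topic: {topic}"
        return handler(payload)
    return on_message

# Hypothetical command topics:
dispatch = make_dispatcher({
    "viseron/cam01/trigger_recording": lambda payload: "recording triggered",
    "viseron/config/reload": lambda payload: "config reloaded",
})
```

In a real integration, `on_message` would be wired to the MQTT client's message callback and the handlers would call into the recorder and config loader.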

[FR] utility to take a .mp4 and .yaml and very verbosely debug

This is something that would really help configure a system.

I have many, many recordings which should absolutely be tripping the motion and object detectors but for one reason or another, aren't. The request is for a command line utility which can take an .mp4, a camera "label" (so it knows which settings to apply) and an (optional) config.yaml and output the following:

  • individual pictures (camname-0.jpg, camname-1.jpg, etc., numbers are frame numbers) with the motion, zones, masks, and objects coloured/overlaid (kind of like you do now, but for every single frame that is processed according to the config.yaml)
  • verbose (debug) logs on stdout
  • verbose logs which state what it would send to the MQTT server (maybe optional flag to actually talk to the MQTT server)

This blob of information would allow people to adjust parameters and re-test again and again without having to wave in front of the camera or drive by.

One of the command line options might also be an "offset" or "skip first x frames" into the file, so you can tweak which images the motion detector sees. The default would be an offset of zero, but you could re-run the test and see if skipping the first x frames results in different output.
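The offset option above could be sketched as a simple frame filter, assuming the debug tool iterates decoded frames in order (the function name and frame representation are illustrative):

```python
# Minimal sketch of the requested "skip first x frames" option, assuming the
# debug tool iterates decoded frames as (frame_number, frame) pairs.
def frames_with_offset(frames, offset=0):
    """Yield (frame_number, frame) pairs, skipping the first `offset` frames."""
    for number, frame in enumerate(frames):
        if number < offset:
            continue
        yield number, frame

# Example with dummy stand-in frames:
frames = ["f0", "f1", "f2", "f3"]
selected = list(frames_with_offset(frames, offset=2))
```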

object_detection logging doesn't appear to work

I've got both motion detection and object detection logging set to debug. I get lots of motion logs, but so far no object detection ones.

config snippet:

logging:
  level: info

object_detection:
  type: darknet
  interval: 2
  logging:
    level: debug

...

motion_detection:
  interval: 2
  trigger: true
  timeout: true
  width: 640
  height: 360
  area: 1000
  frames: 3
  logging:
    level: debug

Log snippet:

[2020-09-14 10:10:33] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 1, area: 1200.5
[2020-09-14 10:10:37] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 1, area: 2645.5
[2020-09-14 10:10:39] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 2, area: 1552.0
[2020-09-14 10:10:43] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 1, area: 3750.0
[2020-09-14 10:11:01] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 1, area: 4719.0
[2020-09-14 10:11:03] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 2, area: 4960.0
[2020-09-14 10:11:05] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 3, area: 1454.5
[2020-09-14 10:11:05] [lib.motion.cam03        ] [DEBUG   ] - Motion has ended
[2020-09-14 10:11:09] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 1, area: 2650.0
[2020-09-14 10:11:11] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 2, area: 4262.5
[2020-09-14 10:11:13] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 3, area: 1907.0
[2020-09-14 10:11:13] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 4, area: 1871.5
[2020-09-14 10:11:13] [lib.recorder.cam03      ] [INFO    ] - Starting recorder
[2020-09-14 10:11:13] [lib.recorder.cam03      ] [INFO    ] - Folder already exists
[2020-09-14 10:11:15] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 5, area: 3953.0
[2020-09-14 10:11:17] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 6, area: 4338.0
[2020-09-14 10:11:19] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 7, area: 4457.0
[2020-09-14 10:11:21] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 8, area: 5615.5
[2020-09-14 10:11:23] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 9, area: 8847.0
[2020-09-14 10:11:25] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 10, area: 9902.0
[2020-09-14 10:11:27] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 11, area: 9362.5
[2020-09-14 10:11:29] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 12, area: 9030.5
[2020-09-14 10:11:31] [lib.motion.cam03        ] [DEBUG   ] - Motion frames: 13, area: 8749.5

Incorrect FPS reported?

I'm not sure if this is an issue with the detection of my camera or if this is reporting bitrate instead of FPS?

[2020-09-04 10:42:08] [lib.camera.front_yard ] [INFO ] - Resolution: 640x480 @ 180000 FPS
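The 180000 figure is almost certainly a bogus value reported by the camera stream rather than a real frame rate. A defensive consumer could clamp implausible values to a fallback; this is a sketch under that assumption, with hypothetical names:

```python
# Sketch of a defensive FPS sanity check: if the stream reports an implausible
# frame rate (like the 180000 above), fall back to a configurable default.
def sane_fps(reported_fps, fallback=30, lo=1, hi=240):
    """Return reported_fps if it is plausible, otherwise the fallback."""
    if lo <= reported_fps <= hi:
        return reported_fps
    return fallback
```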

Date and Time not correct

The date and time don't seem to be correct in the container; it appears to be UTC time. I have added /etc/localtime to the docker run command.

I think the tzdata package is missing from the container. After installing it in the container, the local date and time worked.

I have also added TZ=Europe/Amsterdam as a variable to the docker run command, but I am not sure if this is also necessary.
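A quick way to check the UTC-vs-local behavior described above from inside the container is Python's zoneinfo, which (like the container itself) needs OS tzdata present to resolve zone names:

```python
# Demonstrates the UTC-vs-local offset at issue: converting an aware UTC
# datetime to Europe/Amsterdam. zoneinfo needs the OS tzdata database, which
# is exactly the package reported missing from the container.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

utc_time = datetime(2020, 7, 1, 12, 0, tzinfo=timezone.utc)
local_time = utc_time.astimezone(ZoneInfo("Europe/Amsterdam"))
# Amsterdam observes CEST (UTC+2) in July, so local_time is 14:00.
```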

no motion detection after updating to 1.5.0 (area: 2.0)

Ever since updating to 1.5.0 and playing with my motion percentage, I don't see any motion detection at all.

logging:
  level: INFO

motion_detection:
  interval: 1
  trigger_detector: true
  timeout: true
  max_timeout: 600
  width: 960
  height: 720
  area: 2.0
  frames: 2
  logging:
    level: DEBUG

The motion detectors don't seem to fire; I was originally aiming for ~10%, but when I noticed nothing was working, I reduced it to 2% and still nothing. 2% of a 960x720 image is pretty small, isn't it? (I also had a 640x480 image originally but wanted to bump it up a bit so the object detection had more to look at)

[FR] - Zoning

Based on a camera's view, it would be great (probably when you get to the UI) to be able to create zones on the camera image. This way you can exclude certain areas, or dial in sensitivity in others.

object_detection: suppression not supported

Looks like the suppression key isn't used yet:

extra keys not allowed @ data['object_detection']['labels'][5]['suppression']

Config example:

object_detection:
  type: darknet
  interval: 2
  logging:
    level: debug
  labels:
    - label: person
      confidence: 0.5
    - label: bicycle
      confidence: 0.6
    - label: bird
      confidence: 0.2
      suppression: 0.01

[FR] implement signal handler to re-read config file

Presently the only way to update the config is to quit and re-load. SIGHUP might be handy to re-read the config file. I'm also noticing that ^C to quit stops all the recorders/etc. but doesn't actually quit until I ^C 3 or 4 times.

MQTT Connection Errors Non-Obvious

I had a typo in my username, which resulted in the MQTT connection not working. However, this wasn't apparent from the logs until I enabled DEBUG-level logging:

[2020-10-11 10:47:46] [lib.mqtt    ] [DEBUG   ] - MQTT connected with result code 5, message repeated 66 times

Looking at the library's source code (I don't see any docs that explain this), it looks like anything non-zero is an error. It would be great to have an ERROR-level message that gives more context when the MQTT connection is not working.
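The result codes in question are the MQTT 3.1.1 CONNACK return codes defined by the spec; result code 5 means "not authorized", which matches the username typo. A sketch of mapping them to readable messages:

```python
# MQTT 3.1.1 CONNACK return codes (from the MQTT specification). A sketch of
# how a client could surface result code 5 as a human-readable error.
CONNACK_CODES = {
    0: "connection accepted",
    1: "refused: unacceptable protocol version",
    2: "refused: identifier rejected",
    3: "refused: server unavailable",
    4: "refused: bad user name or password",
    5: "refused: not authorized",
}

def describe_connack(return_code):
    """Translate a CONNACK return code into a log-friendly message."""
    return CONNACK_CODES.get(return_code, f"unknown result code {return_code}")
```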

Missing License

The LICENSE file in the repo is empty. Please add a license so we know under what terms we can use this. Thanks!

never-ending recording

I had a strange event this morning.

That picture (a snapshot provided by Viseron) was from a little over an hour ago. There is no movement there, and the info log just showed that the recording kept getting retriggered. I had to quit Viseron to get the recording to stop (and, interestingly, also to be able to view the 4.4+ GB .mp4; it appears the header is missing information until the recorder stops, which is suboptimal).

How do I help debug this? My config file looks like this. The confidence levels and area are both intentionally "low" because it's hardly detecting anything at the moment (car goes by, me waving in front of the camera, etc.) and I'm debugging that. I have SecuritySpy watching these same cameras and performing its motion/object detection as well.

logging:
  level: info

object_detection:
  type: darknet
  interval: 6
  labels:
    - label: person
      confidence: 0.6
    - label: bicycle
      confidence: 0.6
    - label: car
      confidence: 0.6
    - label: truck
      confidence: 0.6
    - label: motorbike
      confidence: 0.6
    - label: bird
      confidence: 0.6
    - label: cat
      confidence: 0.6
    - label: dog
      confidence: 0.6

motion_detection:
  interval: 1
  trigger: true
  timeout: true
  width: 640
  height: 360
  area: 100
  frames: 3

recorder:
  lookback: 15
  timeout: 15
  retain: 31
  folder: /recordings

mqtt:
  broker: 192.168.x.x
  port: 1883

cameras:
  - name: cam01
    mqtt_name: cam01
    host: 192.168.x.x
    port: 554
    path: /user=...

(repeat for other cameras)

better understanding of motion detection

Not sure if this should be an Issue or perhaps brought up elsewhere (a forum, Reddit or HA?)

I'm trying to tune the motion detection system, but the logging is creating more questions than answers. :-)

[2020-09-10 09:34:36] [lib.motion              ] [DEBUG   ] - Motion frames: 32, area: 285.5
[2020-09-10 09:34:36] [lib.motion              ] [DEBUG   ] - Motion frames: 9, area: 189.0
[2020-09-10 09:34:36] [lib.motion              ] [DEBUG   ] - Motion frames: 25, area: 161.0
[2020-09-10 09:34:36] [lib.motion              ] [DEBUG   ] - Motion frames: 164, area: 562.0
[2020-09-10 09:34:37] [lib.motion              ] [DEBUG   ] - Motion frames: 3, area: 136.0

Is the area value in the same units as in the motion_detection heading in the config file? What about the frames? At 20 fps, 164 frames is over eight seconds of video, yet I see these messages sometimes a few times per second. How can that be?

Is it possible to have the motion detector spit out images with bounding boxes of where it's seeing motion (similar to how the object classifier can spit out an image with a bounding box, labelling what it identified along with the confidence level)? This would make tuning the system a lot easier, especially with the ability to send a signal or MQTT message to reload the config file.
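One plausible reading of `area`/`frames` semantics (an assumption, not confirmed by the docs) is a consecutive-frame counter: it increments while each analyzed frame's motion area meets the threshold and resets otherwise. A sketch:

```python
# Sketch of assumed consecutive-frame motion semantics: the counter grows
# while each analyzed frame has area >= area_threshold and resets otherwise,
# triggering once frames_required consecutive frames qualify.
def motion_trigger(areas, area_threshold, frames_required):
    """Return the index of the frame at which motion triggers, or None."""
    consecutive = 0
    for index, area in enumerate(areas):
        if area >= area_threshold:
            consecutive += 1
            if consecutive >= frames_required:
                return index
        else:
            consecutive = 0
    return None
```

Under this model, a sequence like 1200, 900, 1500, 1600, 1700 with threshold 1000 and 3 required frames triggers only on the third qualifying frame after the reset.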

[FR] how to supply new object detection models

At present, Viseron pulls in the object models when Docker creates the image. Would it be possible to specify a second directory (something like -v /host/path/models/custom:/models/darknet/custom or somesuch) so that instances could include customized models for their specific application? Could the model library/libraries be re-read with a signal or MQTT topic subscription, to update models without restarting Viseron?
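The directory side of this could be as simple as scanning the bind-mounted path for model files at startup or on reload. This is a sketch; the mount path comes from the request above and the suffix list is an assumption:

```python
# Sketch of discovering custom model files in a bind-mounted directory
# (e.g. /models/darknet/custom from the request). Suffixes are assumptions.
from pathlib import Path

def discover_models(directory, suffixes=(".weights", ".cfg", ".tflite")):
    """Return sorted model filenames in directory, or [] if it is absent."""
    root = Path(directory)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.suffix in suffixes)
```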

RPi4

Thought I'd try to fire up this docker on a Pi4. All of the initialization works fine, then I get the errors below from ffmpeg. Any thoughts?

ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -c:v h264_mmal -rtsp_transport tcp -i rtsp://admin:[email protected]:554/Stream/Channels/102/ -f rawvideo -pix_fmt nv12 pipe:1

mmal: mmal_vc_port_info_set: failed to set port info (2:0): EINVAL
mmal: mmal_vc_port_set_format: mmal_vc_port_info_set failed 0x8e4bf0 (EINVAL)
mmal: mmal_port_disable: port vc.ril.video_decode:in:0(H264)(0x8e4bf0) is not enabled
mmal: mmal_port_disable: port vc.ril.video_decode:out:0(0x8e2f00) is not enabled
mmal: mmal_port_disable: port vc.ril.video_decode:ctr:0(0x8e48e0) is not enabled

MQTT sensor/[cam_name]/state has empty set

I'm using MQTT Explorer to watch what's going on on the MQTT side of things.

I've noticed that when objects are detected, the detections list is either "null" or an empty set "[ ]".

Is this expected? I thought perhaps it would show the names of the objects detected and perhaps their confidence level.

Support Parsing MJPEG Streams

I’ve got an older camera that only has MJPEG streams. Wondering if you have any idea how to get it working in your configurations, or if I need to basically re-write all of the ffmpeg arguments.

I’ll admit I’m not all too familiar with the vast array of ffmpeg arguments...

[FR] Video in temp folder

Hello, I am using Viseron and I like it very much. I like that you make a motion detection snapshot and video, but for me the video is unusable with automations. I use folder_watcher in Home Assistant, and when the video file is created an automation sends it to me. The problem is that the video I get is still blank: I need to set a waiting time, decide how long the recording will be, and wait for it to finish before I can send it to my phone. What I need, and what I think would be good to have, is for the in-progress video to live in some temp folder and be moved to the actual folder after the recording ends. I think many DVR programs use the temp approach as a safety measure.
Current situation:
/recordings/date/camera/actual_file.mp4 - zero bytes at the creation of the file, growing until the recording ends
What I mean in the FR:
/temp/current_record.mp4 -> /recordings/date/camera/actual_file.mp4 - with full size and ready to be sent, copied, downloaded or anything else
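The requested behavior is the classic write-to-temp-then-rename pattern: record into a temporary file, then move it into the watched folder in one atomic step so folder_watcher only ever sees finished files. A sketch (paths and function name are illustrative):

```python
# Sketch of the temp-then-move pattern requested above: write the finished
# recording to a temp file, then os.replace() it into the final folder.
import os
import tempfile

def finalize_recording(data: bytes, final_path: str) -> str:
    """Write data to a temp file, then atomically move it to final_path."""
    os.makedirs(os.path.dirname(final_path), exist_ok=True)
    # Create the temp file in the destination directory so os.replace() is
    # an atomic rename on the same filesystem, not a slow cross-device copy.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(final_path))
    with os.fdopen(fd, "wb") as tmp:
        tmp.write(data)
    os.replace(tmp_path, final_path)  # atomic on POSIX
    return final_path
```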

Queue is full

Hello, with the latest beta I have a problem: Viseron is stopping after a few hours... Reverted to the :latest docker image, and there everything is OK.

[2020-10-05 19:57:22] [lib.camera.koridor      ] [WARNING ] - object_decoder_queue queue is full. Removing oldest entry
[2020-10-05 19:57:22] [lib.camera.koridor      ] [ERROR   ] - Unable to decode frame. FFMPEG pipe seems broken, message repeated 2 times
[2020-10-05 19:57:22] [lib.camera.koridor      ] [ERROR   ] - Unable to decode frame. FFMPEG pipe seems broken, message repeated 3 times
[2020-10-05 19:57:22] [lib.camera.koridor      ] [WARNING ] - motion_decoder_queue queue is full. Removing oldest entry

[DOC] Better explanation of motion/object detection pipeline

I would suggest improving the documentation, or at least giving some hints here, on how motion and object detection work inside Viseron.
Some open questions trying to understand the configuration parameters:

  1. Is object detection applied to a frame only if motion has been detected? Is it applied to the whole frame, or just the area where motion was found?
  2. Looking at the interval parameter of motion detection, I understand that it is not applied to all the frames coming from the camera; some of them are skipped, right?
  3. If the first guess is right (point 1), why is there an interval parameter in object detection? Is it applied periodically, or to all the frames where motion is detected?
  4. Still related to point 1: if motion is detected, is the whole frame resized to model_width x model_height and passed, for example, to the EdgeTPU? The Coral, I remember, is limited to 300x300. Does that mean my 2592x1944 frame is resized down to a handful of pixels, hoping to detect something? I hope not... :-)
  5. I'm not sure I understand when the recording stops when timeout is false; how is it related to timeout in the recorder section?

Sorry for the multiple questions, but I would like to understand better how Viseron works, in order to contribute to its development.

[FR] allow points to define a "cutout" of area(s) NOT to trigger on

I love the zone implementation, I wonder if a "mask" implementation could be similar, where you define points to enclose an area you want no detection to occur. I thought perhaps it could be done with another 'zone' definition with the object detection confidence set to 100.0, but I'm not sure if this introduces "priority" issues. e.g. you make a zone that covers the entire image, then cut out a rectangle with confidence 100.0 -- the first zone would "hit" but the second one would never hit, which isn't quite what we want.

Also, what happens if you define overlapping zones - do they both fire if an object is detected in the overlapped area? What happens if I name two zones the same, does the resultant zone equal the areas covered by both? If that were the case then this is perhaps a solved problem, although in an ugly way.

[DOC] object_detection and motion_detection intervals

Two questions:

  1. interval is how often to run the specified detection. Does that mean that all the frames between the last run and this run will be analyzed, or only a few frames around this point in time will be analyzed? e.g. if I have motion_detection:interval: 1.0 and my camera is running at 30fps then the motion detection interval will look at the last 30 frames every second to find motion, or only the last frames: x number of frames to see if any motion occurred?

I believe that I understand clearly that with motion_detection:trigger: true then object_detection will not run unless motion is detected. Otherwise, object detection will run every interval seconds (with the same question about how many frames are analyzed). Is that the case?

  2. The note under interval for object_detection states "For optimal performance this should be the same as the motion detection interval." Is this only true when motion_detection:trigger is false?
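The "every N frame(s)" figures in Viseron's own debug logs (e.g. "Running object detection at 1s interval, every 25 frame(s)") suggest the interval is converted to a frame stride: one frame is analyzed every `interval * fps` frames, rather than all frames since the last run. A sketch under that assumption:

```python
# Sketch of the assumed interval-to-stride conversion: at interval seconds
# and fps frames per second, one frame is analyzed every round(interval*fps)
# frames. This is inferred from the debug logs, not confirmed by the docs.
def detection_stride(interval_seconds, fps):
    """Frames between analyzed frames for a given interval and frame rate."""
    return max(1, round(interval_seconds * fps))
```

With a 25 fps camera and a 1-second interval this yields 25, matching the log line quoted above.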

clue for super long/never ending recordings

I noticed this occurring periodically in my logs, even though the motion ended several minutes earlier:

[2020-09-15 15:27:21] [lib.recorder.cam02      ] [ERROR   ] - Timed out
[2020-09-15 15:28:25] [lib.recorder.cam02      ] [ERROR   ] - Timed out

I haven't seen the usual Stopping recording in: x messages, although I do see it for other cameras that have triggered and correctly stopped after the errant camera has been triggered.

motion_detected MQTT not triggering although log shows motion

I'm not sure if I'm just misunderstanding the MQTT implementation or not. I have a camera with a big old spider web in front of it which should be generating lots of motion events. At least I see SecuritySpy going nuts with it so I assume Viseron should also be seeing it.

I see homeassistant/binary_sensor/cam05/motion_detected/config with the following value:

{
  "name": "cam05 motion_detected",
  "state_topic": "homeassistant/binary_sensor/cam05/motion_detected/state",
  "value_template": "{{ value | upper }}",
  "availability_topic": "viseron/lwt",
  "payload_available": "alive",
  "payload_not_available": "dead",
  "json_attributes_topic": "homeassistant/binary_sensor/cam05/motion_detected/state"
}

I have mosquitto_sub -h localhost -t homeassistant/binary_sensor/cam05/motion_detected/state running in a shell on the mqtt server.

I see the following in the docker log:

[2020-09-13 20:25:07] [lib.motion.cam05        ] [DEBUG   ] - Motion frames: 141, area: 3617.0
[2020-09-13 20:25:11] [lib.motion.cam05        ] [DEBUG   ] - Motion frames: 142, area: 5486.5
[2020-09-13 20:25:15] [lib.motion.cam05        ] [DEBUG   ] - Motion frames: 143, area: 5007.5
[2020-09-13 20:25:19] [lib.motion.cam05        ] [DEBUG   ] - Motion frames: 144, area: 2571.0

and finally, the relevant bits of my config.yaml:

motion_detection:
  interval: 2
  trigger: true
  timeout: true
  width: 640
  height: 360
  area: 1000
  frames: 3
  logging:
    level: debug

cameras:
  - name: cam05
    mqtt_name: cam05
    host: 192.168.x.x
    port: 554
    path: /x
    zones:
      - name: garage_side
        points:
        - x: 0
          y: 0
        - x: 1919
          y: 0
        - x: 1919
          y: 1079
        - x: 0
          y: 1079

With area: 1000 and frames: 3, shouldn't the log I showed above be enough to trigger a motion event? I'm wondering if the "frames" number in the log is just the count of motion-detected frames since the start of the docker instance, rather than "I found 3 frames in a row with area 5123". That is, the log doesn't help me see how many consecutive frames it found with the respective areas.

Support Coral on NUC

I am trying to take a look at your software, but I have a problem with a NUC and Coral.

My config:


# See the README for the full list of configuration options.
cameras:
  - name: Backyard
    host: 192.168.1.50
    port: 554
    username: admin
    password: foobar
    path: /cam/realmonitor?channel=4&subtype=0
    motion_detection:
      interval: 1
      trigger: true
    object_detection:
      type: edgetpu
      interval: 1
      labels:
        - label: person
          confidence: 0.9
        - label: dog
          confidence: 0.9
        - label: car
          confidence: 0.9

# MQTT is optional
mqtt:
 broker: 192.168.1.100
 port: 1883
 username: mqtt
 password: foobar

logging:
  level: debug

I am getting this error:

Traceback (most recent call last):
  File "viseron.py", line 7, in <module>
    from lib.config import ViseronConfig
  File "/src/viseron/lib/config/__init__.py", line 118, in <module>
    VALIDATED_CONFIG = VISERON_CONFIG_SCHEMA(raw_config)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 272, in __call__
    return self._compiled([], data)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 594, in validate_dict
    return base_validate(path, iteritems(data), out)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 432, in validate_mapping
    raise er.MultipleInvalid(errors)
voluptuous.error.MultipleInvalid: extra keys not allowed @ data['cameras'][0]['object_detection']['type']

I am running docker using this cmd line :

# docker run --rm -v /usr/share/hassio/share/viseron/recordings:/recordings -v /usr/share/hassio/share/viseron/config:/config -v /etc/localtime:/etc/localtime:ro -v /dev/bus/usb:/dev/bus/usb --privileged --name viseron --device /dev/dri roflcoopter/viseron:latest

Thank you in advance.

[FR] MJPEG stream of the cameras

Frigate exposes several endpoints; I found the realtime MJPEG video (https://github.com/blakeblackshear/frigate#camera_name) very useful for:

  • realtime debugging of the detection engine's behavior; does Viseron update the MQTT camera when nothing (motion or object) is detected?
  • I personally use the MJPEG stream to show realtime video of the cameras on a Chromecast-like device (a Google Nest Hub Max); it is very useful because there is no lag or delay; HA camera management is a nightmare from this point of view...

A note: Frigate warns that when the MJPEG stream is used the CPU load spikes but this is fine for my personal type of usage.

I hope this fits in your project idea.

Interval cannot be a float

For the interval option under motion_detection and object_detection, the documentation states it can be a float.
However, setting it to a float value gives the following error:

Traceback (most recent call last):
  File "viseron.py", line 7, in <module>
    from lib.config import ViseronConfig
  File "/src/viseron/lib/config/__init__.py", line 118, in <module>
    VALIDATED_CONFIG = VISERON_CONFIG_SCHEMA(raw_config)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 272, in __call__
    return self._compiled([], data)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 594, in validate_dict
    return base_validate(path, iteritems(data), out)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 432, in validate_mapping
    raise er.MultipleInvalid(errors)
voluptuous.error.MultipleInvalid: expected int for dictionary value @ data['cameras'][0]['motion_detection']['interval']

or

Traceback (most recent call last):
  File "viseron.py", line 7, in <module>
    from lib.config import ViseronConfig
  File "/src/viseron/lib/config/__init__.py", line 118, in <module>
    VALIDATED_CONFIG = VISERON_CONFIG_SCHEMA(raw_config)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 272, in __call__
    return self._compiled([], data)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 594, in validate_dict
    return base_validate(path, iteritems(data), out)
  File "/usr/local/lib/python3.6/dist-packages/voluptuous/schema_builder.py", line 432, in validate_mapping
    raise er.MultipleInvalid(errors)
voluptuous.error.MultipleInvalid: expected int for dictionary value @ data['cameras'][0]['object_detection']['interval']

I might be able to create a pull request somewhere next week.

[DOC] does "area" scale with image size?

Just looking for clarification on motion_detection: area - is this in square pixels? is it normalized somehow?

e.g. with height: 300, width: 300 and area: 1000 the area is approximately 1.1% of the image. With height: 640, width: 360 and the same area: 1000, that area is now only 0.43% of the image.

If this is true, would it make better sense to specify area in percent?
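The numbers in the question can be checked directly: the same absolute `area` covers a different fraction of the frame at different resolutions.

```python
# Worked numbers from the question above: an absolute pixel area as a
# percentage of the (resized) motion detection frame.
def area_percent(area, width, height):
    """Percentage of a width x height frame covered by `area` pixels."""
    return 100.0 * area / (width * height)

small = area_percent(1000, 300, 300)   # ~1.11 % of a 300x300 frame
larger = area_percent(1000, 640, 360)  # ~0.43 % of a 640x360 frame
```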

Docker container consumes all available memory and gets killed - "exited with code 137"

I have a Privileged Debian LXC in Proxmox where I am trying to stand up viseron using Docker.
Starting off with CPU before I battle GPU passthrough (which I've done for Plex and Shinobi in the past).

But if I start the container, its memory usage simply grows until it is killed by the OS (at 8 GB).

Running top inside the container just shows the Python task slowly growing to the memory limit before the container is killed - it takes 5 minutes or so. The logs show Viseron seems to just be waiting for something to detect.

services:
  viseron:
    image: roflcoopter/viseron:dev
    container_name: viseron
    volumes:
      - /mnt/videos:/recordings
      - ./:/config
      - /etc/localtime:/etc/localtime:ro
viseron     | [2020-09-09 17:18:00] [lib.camera.front_porch_camera] [INFO    ] - Resolution: 2304x1296 @ 180000 FPS
viseron     | [2020-09-09 17:18:00] [lib.camera.front_porch_camera] [DEBUG   ] - FFMPEG decoder command: ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -rtsp_transport tcp -i rtsp://rudders:[email protected]:554/h264Preview_01_main -f rawvideo -pix_fmt nv12 pipe:1
viseron     | [2020-09-09 17:18:00] [lib.nvr.front_porch_camera] [DEBUG   ] - <lib.config.ViseronConfig object at 0x7fc24b1c0b70>
viseron     | [2020-09-09 17:18:00] [lib.camera.front_porch_camera] [DEBUG   ] - Starting capture process
viseron     | [2020-09-09 17:18:00] [lib.camera.front_porch_camera] [DEBUG   ] - Starting decoder thread
viseron     | [2020-09-09 17:18:00] [lib.recorder            ] [DEBUG   ] - Initializing ffmpeg recorder
viseron     | [2020-09-09 17:18:00] [lib.recorder            ] [DEBUG   ] - FFMPEG encoder command: ffmpeg -hide_banner -loglevel panic -f rawvideo -pix_fmt nv12 -s:v <width>x<height> -r <fps> -i pipe:0 -y <file>
viseron     | [2020-09-09 17:18:00] [lib.nvr.front_porch_camera] [DEBUG   ] - NVR thread initialized
viseron     | [2020-09-09 17:18:00] [lib.nvr.front_porch_camera] [DEBUG   ] - Waiting for first frame
viseron     | [2020-09-09 17:18:00] [root                    ] [INFO    ] - Initialization complete
viseron     | [2020-09-09 17:18:03] [lib.camera.front_porch_camera] [DEBUG   ] - Running object detection at 1s interval, every 180000 frame(s)
viseron     | [2020-09-09 17:18:03] [lib.camera.front_porch_camera] [DEBUG   ] - Running motion detection at 1s interval, every 180000 frame(s)
viseron     | [2020-09-09 17:18:07] [lib.nvr.front_porch_camera] [DEBUG   ] - First frame received
viseron     | [2020-09-09 17:18:07] [lib.nvr.front_porch_camera] [DEBUG   ] - Objects: []
viseron exited with code 137

cameras:
  - name: Front Porch Camera
    mqtt_name: viseron_front_porch
    host: 192.168.x.xx
    port: 554
    username: #####
    password: ######
    path: /h264Preview_01_main
    motion_detection:
      interval: 1
      trigger: false
    object_detection:
      interval: 1
      labels:
        - label: person
          confidence: 0.9
        - label: cat
          confidence: 0.8

# MQTT is optional
mqtt:
  broker: 192.168.0.252
  port: 1883
  username: hamqtt
  password: hamqtt


recorder:
  lookback: 10
  timeout: 10
  retain: 7
  folder: /recordings

logging:
  level: debug

[FR] include camera name in recording filename

On multiple-camera systems we don't presently have any way of knowing which camera has been recorded to disk. I really like the directory approach you have. Could we perhaps add a level to it, instead of recordings/date/file, perhaps recordings/camera/date/file? Ideally the order of the directory (camera/date or date/camera) would be in the config file, as would the directory and snapshot/video filename templates.
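The configurable path idea could be sketched as a template string with substitution fields; the template syntax and field names here are hypothetical, not an existing Viseron option:

```python
# Sketch of a configurable recording path template that includes the camera
# name. The {camera}/{date}/{time} fields are hypothetical, not Viseron's.
from datetime import datetime

def recording_path(template, camera, timestamp):
    """Expand a path template for one recording."""
    return template.format(
        camera=camera,
        date=timestamp.strftime("%Y-%m-%d"),
        time=timestamp.strftime("%H.%M.%S"),
    )

path = recording_path(
    "/recordings/{camera}/{date}/{time}.mp4",
    "cam01",
    datetime(2020, 9, 15, 15, 27, 21),
)
```

Swapping the field order in the template gives the date/camera vs camera/date layouts mentioned above without any code change.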

Error starting decoder pipe! mp3float Header missing

Hello,

I've just started playing with this, and I'm having an issue I don't know how to fix... And I'm sure this would also help others in the future 👍

With docker image viseron :

[lib.nvr.salon           ] [DEBUG   ] - NVR thread initialized
[lib.nvr.salon           ] [DEBUG   ] - Waiting for first frame
[root                    ] [INFO    ] - Initialization complete
[lib.camera.salon        ] [ERROR   ] - Error starting decoder pipe! [mp3 @ 0x55838c023740] Header missing

[lib.camera.salon        ] [ERROR   ] - Error starting decoder pipe! [mp3 @ 0x562193204740] Header missing

[lib.camera.salon        ] [ERROR   ] - Error starting decoder pipe! [mp3 @ 0x55ae76efd740] Header missing

[lib.camera.salon        ] [ERROR   ] - Error starting decoder pipe! [mp3 @ 0x562dc7756740] Header missing

[lib.camera.salon        ] [INFO    ] - Succesful reconnection!
[lib.camera.salon        ] [DEBUG   ] - Running object detection at 1s interval, every 25 frame(s)
[lib.camera.salon        ] [DEBUG   ] - Running motion detection at 1s interval, every 25 frame(s)
[lib.nvr.salon           ] [DEBUG   ] - First frame received

With the docker image viseron-vaapi:

[lib.nvr.salon           ] [DEBUG   ] - NVR thread initialized
[lib.nvr.salon           ] [DEBUG   ] - Waiting for first frame
[root                    ] [INFO    ] - Initialization complete
[lib.camera.salon        ] [ERROR   ] - Error starting decoder pipe! [mp3float @ 0x561b137d2d00] Header missing

[lib.camera.salon        ] [ERROR   ] - Error starting decoder pipe! [mp3float @ 0x55f7ed259d00] Header missing

Using vaapi, it endlessly repeats the last line and never actually starts.
Also, I've noticed that the decoder is mp3float instead of mp3.

After some googling I found the option -c:a mp3, but I'm not sure where to put it (in the Viseron config), or whether it would help.

As a debugging step I attempted the following, and with or without the mp3 codec specified I get mostly the same result (replace mp3 with mp3float):

docker exec -it prod_viseron_1 ffmpeg -hide_banner -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -rtsp_transport tcp -c:a mp3 -i rtsp://root:[email protected]:8554/unicast -f null /dev/null
[mp3 @ 0x5622b4438c40] Header missing
Input #0, rtsp, from 'rtsp://root:[email protected]:8554/unicast':
  Metadata:
    title           : LIVE555 Streaming Media v2020.03.06
    comment         : LIVE555 Streaming Media v2020.03.06
  Duration: N/A, start: 1600894569.567244, bitrate: N/A
    Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1280x720, 25 fps, 25 tbr, 90k tbn, 50 tbc
    Stream #0:1: Audio: mp3, 44100 Hz, mono, s16p, 64 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> wrapped_avframe (native))
  Stream #0:1 -> #0:1 (mp3 (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Too many packets buffered for output stream 0:1.32:22.77 bitrate=N/A speed=N/A    
Conversion failed!

Additional info: I have an i3-7100T CPU, /dev/dri/renderD128 exists, and /dev/dri is mounted as a device through Docker.

I can't pinpoint what's wrong in my setup.
Can you help me? 👍
Thanks!

ZeroDivisionError

I'm getting this exception:

viseron    | [2020-10-11 08:25:38] [lib.camera.front_porch  ] [DEBUG   ] - Running motion detection at 1.0s interval, every 0 frame(s)
viseron    | Exception in thread Thread-7:
viseron    | Traceback (most recent call last):
viseron    |   File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
viseron    |     self.run()
viseron    |   File "/usr/lib/python3.6/threading.py", line 864, in run
viseron    |     self._target(*self._args, **self._kwargs)
viseron    |   File "/src/viseron/lib/camera.py", line 305, in capture_pipe
viseron    |     if motion_frame_number % motion_decoder_interval_calculated == 0:
viseron    | ZeroDivisionError: integer division or modulo by zero

Given the code, it looks like somehow the FPS is being calculated in a way that makes this turn out to be zero.
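Assuming the interval in seconds is multiplied by the detected stream FPS to get a frame divisor, an FPS of zero makes the modulo divide by zero. A hedged guard could look like this (names are illustrative, not Viseron's actual code):

```python
def frames_per_interval(interval_seconds, fps):
    """Convert a detection interval in seconds into a frame-count divisor,
    failing loudly if the FPS could not be determined and clamping to at
    least 1 so `frame_number % divisor` can never divide by zero."""
    if fps <= 0:
        raise ValueError("Could not determine stream FPS; check the camera stream")
    return max(1, round(interval_seconds * fps))

# frames_per_interval(1, 25)    -> 25 (run every 25th frame)
# frames_per_interval(0.01, 25) -> 1  (clamped, never 0)
```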

If I set my height, width, and fps in my config file, I get a slightly more useful error:

[2020-10-11 08:33:57] [lib.camera.front_porch] [ERROR   ] - Unable to decode frame. FFMPEG pipe seems broken, message repeated 2 times

It looks like a sanity check on the stream needs to happen sooner to provide a more useful message.

What constitutes "motion"

It sounds silly, but I would like to understand a bit more about what Viseron considers "motion".

Is it strictly drawing a bounding box around any changed pixels, calculating the area relative to the image area, and triggering object detection if it's big enough, or is there more to it?
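For reference, the textbook frame-differencing approach can be sketched like this (a simplification, not necessarily Viseron's exact implementation, which appears to work with contour areas):

```python
import numpy as np

def motion_area(prev_frame, frame, pixel_threshold=25):
    """Fraction of pixels whose absolute difference from the previous
    (grayscale) frame exceeds pixel_threshold -- a crude motion score
    relative to the whole frame area."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > pixel_threshold
    return changed.sum() / changed.size

prev = np.zeros((4, 4), dtype=np.uint8)
cur = prev.copy()
cur[0, 0] = 200          # one pixel changed out of 16
# motion_area(prev, cur) -> 0.0625
```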

Here is why I'm asking:

[2020-09-30 19:59:51] [lib.nvr.cam02.object] [DEBUG   ] - Objects: [], message repeated 599 times
[2020-09-30 19:59:56] [lib.nvr.cam01.object] [DEBUG   ] - Objects: [], message repeated 3 times
[2020-09-30 19:59:57] [lib.nvr.cam02.object] [DEBUG   ] - Objects: [], message repeated 2 times
[2020-09-30 19:59:59] [lib.nvr.cam01.object] [DEBUG   ] - Objects: [], message repeated 5 times
[2020-09-30 20:00:00] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: []
[2020-09-30 20:00:00] [lib.nvr.cam01.object] [DEBUG   ] - Objects: [], message repeated 2 times
[2020-09-30 20:00:02] [lib.nvr.cam02.object] [DEBUG   ] - Objects: [], message repeated 5 times
[2020-09-30 20:00:30] [lib.nvr.cam01.object] [DEBUG   ] - Objects: [], message repeated 32 times
[2020-09-30 20:02:19] [lib.nvr.cam02.object] [DEBUG   ] - Objects: [], message repeated 118 times
[2020-09-30 20:02:36] [lib.nvr.cam03.object] [DEBUG   ] - Objects: [], message repeated 22 times
[2020-09-30 20:02:58] [lib.nvr.cam02.object] [DEBUG   ] - Objects: [], message repeated 3 times
[2020-09-30 20:03:14] [lib.nvr.cam05.object] [DEBUG   ] - Objects: [], message repeated 3 times
[2020-09-30 20:14:09] [lib.nvr.cam02.object] [DEBUG   ] - Objects: [], message repeated 528 times

Clearly the motion detector is going nuts (it has been windy these last few days, which is also causing LOOOOONG recordings), and I can't really increase the area threshold, because then it won't detect the smaller, more important motion events that would find objects when object detection runs.

If it is a simple "bounding box around ANY changes" -- would it be worth considering some kind of threshold or sensitivity knob in the motion_detector? Something used in addition to the bounding box that gives something like "percentage of pixels within the bounding box which changed by more than x%"?

In a similar vein, would it be possible to rename area to area_min and add an area_max, which would help prevent things like spider webs dancing over the entire frame from constantly triggering? The real saving grace is the object detection, which declines to record if nothing interesting is in the frame.
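The requested band could be a trivial filter on each contour's relative area; area_min and area_max here are the hypothetical options from this request:

```python
def passes_area_filter(area, area_min=0.0, area_max=1.0):
    """True if a contour's relative area falls inside the configured band.
    area_min/area_max are hypothetical config options, not existing ones."""
    return area_min <= area <= area_max

contour_areas = [0.0005, 0.02, 0.9]  # relative areas of detected contours
kept = [a for a in contour_areas if passes_area_filter(a, 0.001, 0.5)]
# kept -> [0.02]: tiny noise and the whole-frame "spider web" are dropped
```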

One thing which I don't know how to approach is the issue where there's an object of interest in the scene but it is not moving. An example would be the motion detector triggering because a bush moved in the wind, or the reflection of clouds blowing across the sky constantly triggering LONG recordings because there's a car parked in the scene. The object detector is right, there's absolutely a car there, and the motion detector is right because yes, something moved in the scene, but the object detected wasn't what was moving.

[FR] better logging

Right now, the debug logging is a little confusing:

[2020-09-10 09:28:26] [lib.nvr.cam01           ] [DEBUG   ] - Not recording, pausing object detector
[2020-09-10 09:28:26] [lib.motion              ] [DEBUG   ] - Motion frames: 25, area: 294.0
[2020-09-10 09:28:26] [lib.motion              ] [DEBUG   ] - Motion frames: 1, area: 284.5
[2020-09-10 09:28:27] [lib.motion              ] [DEBUG   ] - Motion frames: 3, area: 203.0
[2020-09-10 09:28:27] [lib.nvr.cam04           ] [DEBUG   ] - Motion detected! Starting object detector
[2020-09-10 09:28:27] [lib.motion              ] [DEBUG   ] - Motion frames: 21, area: 285.0
[2020-09-10 09:28:27] [lib.motion              ] [DEBUG   ] - Motion frames: 7, area: 338.0
[2020-09-10 09:28:27] [lib.motion              ] [DEBUG   ] - Motion frames: 26, area: 355.0

It's great that lib.nvr.cam01 includes the camera name, but the lib.motion sources don't say which camera they are processing when multiple cameras could be feeding frames to the motion detector.
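Assuming the motion detector is instantiated per camera, one way to get this is to put the camera name into the logger name the same way the lib.nvr loggers apparently do (a sketch, not Viseron's actual code):

```python
import logging

class MotionDetector:
    """Sketch: give each camera's motion detector its own child logger,
    so log lines read lib.motion.<camera> like the lib.nvr ones do."""

    def __init__(self, camera_name):
        self._logger = logging.getLogger(f"lib.motion.{camera_name}")

    def report(self, frames, area):
        self._logger.debug("Motion frames: %s, area: %s", frames, area)

det = MotionDetector("cam01")
# det._logger.name -> "lib.motion.cam01"
```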

[FR] Multiple redundant detectors

I'm actually using Frigate (on a Coral USB) + Deepstack (on CPU) + Home Assistant to get mobile notifications with a snapshot and video of detected people on my cameras. I combine Frigate with Deepstack because TFLite models raise too many false positives. The Coral is great for keeping CPU load negligible but is very limited in detector capability/tuning, and using the CPU just to confirm the Coral detections keeps CPU usage low. Combining multiple detectors (as a pipeline) makes false positives disappear for me.

At some point you mention "multiple detectors" in the future plans for Viseron: would it be possible to run multiple detectors on the same event/image to confirm its goodness?
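The confirm-pipeline idea can be sketched as running a cheap detector first and keeping only detections the more expensive detectors agree on. This is a sketch with stand-in detector callables; a real implementation would also match boxes by IoU, not just by label:

```python
def confirmed_detections(frame, primary, confirmers):
    """Run `primary` on the frame, then keep only detections whose label
    every detector in `confirmers` also reports on the same frame."""
    detections = primary(frame)
    for confirm in confirmers:
        confirmed_labels = {d["label"] for d in confirm(frame)}
        detections = [d for d in detections if d["label"] in confirmed_labels]
    return detections

# Stand-in detectors: a fast one that over-triggers, a slower one to confirm.
fast = lambda frame: [{"label": "person", "confidence": 0.9},
                      {"label": "cat", "confidence": 0.6}]
slow = lambda frame: [{"label": "person", "confidence": 0.95}]
# confirmed_detections(None, fast, [slow]) keeps only the "person" detection
```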

v1.5.0: AttributeError: 'list' object has no attribute 'rel_contours'

I'm seeing this pop up in the log. It doesn't cause the system to stop, but it doesn't look like it should be happening either:

Exception in thread Thread-13:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/src/viseron/lib/nvr.py", line 502, in run
    self.camera.resolution,
  File "/src/viseron/lib/nvr.py", line 54, in publish_image
    self.config.motion_detection.area,
  File "/src/viseron/lib/helpers.py", line 136, in draw_contours
    for relative_contour, area in zip(contours.rel_contours, contours.contour_areas):
AttributeError: 'list' object has no attribute 'rel_contours'

object_detection + MQTT - Expose number of detected objects

First of all, thanks for all the excellent work on Viseron, it works impressively well with CUDA.

The binary sensors in Home Assistant showing whether a person, car, etc. was detected are great, but it would be even better if they could report how many of that object were detected.

For my use case, I want to be notified when a car enters my driveway, but also when a second car enters (assuming the first car is parked for hours).

Would it be possible to expose counts of each identified label?
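Per-label counts could be derived from the detection list that is already published; here is a sketch, where the MQTT topic layout is hypothetical, not Viseron's actual scheme:

```python
from collections import Counter
import json

def label_count_payloads(camera, detections):
    """Turn a detection list into per-label count messages.
    The topic layout below is made up for illustration."""
    counts = Counter(d["label"] for d in detections)
    return {
        f"viseron/{camera}/object_detected/{label}/count": json.dumps({"count": n})
        for label, n in counts.items()
    }

payloads = label_count_payloads(
    "driveway",
    [{"label": "car"}, {"label": "car"}, {"label": "person"}],
)
# payloads["viseron/driveway/object_detected/car/count"] -> '{"count": 2}'
```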

motion with no object detection is resetting the recording timeout

This is a capture from 1.5.0-vaapi:latest that occurred just a few moments ago:

[2020-09-30 12:45:58] [lib.nvr.cam03           ] [INFO    ] - Stopping recording in: 9
[2020-09-30 12:46:01] [lib.nvr.cam03.object] [DEBUG   ] - Objects: [], message repeated 2 times
[2020-09-30 12:46:01] [lib.nvr.cam03.object    ] [DEBUG   ] - Objects: []
[2020-09-30 12:46:02] [lib.nvr.cam03           ] [INFO    ] - Stopping recording in: 8
[2020-09-30 12:46:03] [lib.nvr.cam03.object    ] [DEBUG   ] - Objects: []
[2020-09-30 12:46:03] [lib.camera.cam03] [ERROR   ] - Unable to decode frame. FFMPEG pipe seems broken, message repeated 4 times
[2020-09-30 12:46:08] [lib.camera.cam03        ] [ERROR   ] - Restarting frame pipe
[2020-09-30 12:46:08] [lib.recorder.cam03      ] [ERROR   ] - Timed out
[2020-09-30 12:46:08] [lib.camera.cam03        ] [ERROR   ] - Successful reconnection!
[2020-09-30 12:46:14] [lib.nvr.cam03.object] [DEBUG   ] - Objects: [], message repeated 7 times
[2020-09-30 12:46:15] [lib.nvr.cam03           ] [INFO    ] - Stopping recording in: 14
[2020-09-30 12:46:15] [lib.nvr.cam03.object] [DEBUG   ] - Objects: [], message repeated 2 times
[2020-09-30 12:46:16] [lib.nvr.cam03           ] [INFO    ] - Stopping recording in: 13

Now I see that there was a streaming error (very common with cheap cameras, though UDP transport seems even worse), but nowhere in the logs does the system indicate that it found anything that should restart the timeout countdown timer.

Setting up streams from motion eye (Config help)

Hey, might be an obvious one.
Description of the streaming URL from MotionEye:

Streaming URL provides MJPEG streaming. It can be used as a source for other applications that deal with video streams and know how to handle MJPEGs, or it can be used as the src attribute of an HTML tag.

I'm given the url in the format http://192.168.1.111:8082

Initial config for testing:

# See the README for the full list of configuration options.
cameras:
  - name: front_room
    host: 192.168.1.111
    port: 6082
    path: /
    width: 640
    height: 360
    fps: 15
    motion_detection:
      interval: 1
      trigger: false
    object_detection:
      interval: 1
      labels:
        - label: person
          confidence: 0.8

Then I get

[2020-09-04 08:20:01] [lib.camera.front_room ] [ERROR ] - Error starting decoder pipe! rtsp://None:[email protected]:6082/: Invalid data found when processing input

in the docker logs

Do you have any advice for setting this up?

debugging RTSP that ffmpeg isn't happy with

I have a bunch of cameras, regular old h264/265 ONVIF cameras from MarvioTech. They work great.

I bought a camera doorbell which has really crappy software. VLC can play the stream fine with rtsp://admin:[email protected]:554/onvif1. I believe I have it set up correctly in Viseron, but this is what I get in the debug log on startup:

[2020-09-10 09:47:50] [lib.camera.cam09        ] [DEBUG   ] - FFMPEG decoder command: ffmpeg -hide_banner -loglevel panic -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts -stimeout 5000000 -use_wallclock_as_timestamps 1 -vsync 0 -hwaccel vaapi -vaapi_device /dev/dri/renderD128 -rtsp_transport tcp -i rtsp://admin:[email protected]:554/onvif1 -f rawvideo -pix_fmt nv12 pipe:1
[2020-09-10 09:47:51] [lib.camera.cam09        ] [ERROR   ] - Error starting decoder pipe! [rtsp @ 0x55e6c75d7780] Nonmatching transport in server reply rtsp://admin:[email protected]:554/onvif1: Invalid data found when processing input

I'll see if I can get ffmpeg to dump some frames outside of Viseron, but is there anything more specific I can do to help debug this?
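"Nonmatching transport in server reply" often indicates the camera refused the transport ffmpeg asked for (TCP, per the decoder command above). One way to narrow it down outside Viseron is to try both transports with a minimal ffmpeg command; the helper below only builds the command lists (the URL is the one from this issue), so run them yourself with subprocess.run or from a shell:

```python
def ffmpeg_probe_cmd(url, transport):
    """Build an ffmpeg command that decodes ~5 seconds of the stream and
    discards the frames, for testing one RTSP transport in isolation."""
    return [
        "ffmpeg", "-hide_banner", "-loglevel", "info",
        "-rtsp_transport", transport,   # "tcp" or "udp"
        "-i", url,
        "-t", "5",                      # stop after ~5 seconds
        "-f", "null", "-",              # throw the decoded frames away
    ]

for transport in ("tcp", "udp"):
    cmd = ffmpeg_probe_cmd("rtsp://admin:[email protected]:554/onvif1", transport)
    print(" ".join(cmd))
```

If UDP works where TCP fails, that points at the doorbell's RTSP server mishandling TCP-interleaved setup rather than at Viseron.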

object detection: identifying a label that isn't configured?

config.yaml snippet:

object_detection:
  type: darknet
  interval: 1
  logging:
    level: DEBUG
  labels:
    - label: person
      confidence: 0.5
    - label: bicycle
      confidence: 0.6
    - label: car
      confidence: 0.6
    - label: truck
      confidence: 0.6
    - label: motorbike
      confidence: 0.6
    - label: bird
      confidence: 0.5
    - label: cat
      confidence: 0.5
    - label: dog
      confidence: 0.5

and in the logs I noticed this, but I haven't been able to recreate it:

[2020-09-27 23:34:52] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.517, 'rel_width': 0.057, 'rel_height': 0.164, 'rel_x1': 0.772, 'rel_y1': 0.632, 'rel_x2]
[2020-09-27 23:35:13] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.524, 'rel_width': 0.068, 'rel_height': 0.181, 'rel_x1': 0.781, 'rel_y1': 0.627, 'rel_x2]
[2020-09-27 23:35:51] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.53, 'rel_width': 0.139, 'rel_height': 0.267, 'rel_x1': 0.736, 'rel_y1': 0.589, 'rel_x2']
[2020-09-27 23:35:55] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.543, 'rel_width': 0.058, 'rel_height': 0.156, 'rel_x1': 0.769, 'rel_y1': 0.635, 'rel_x2]
[2020-09-27 23:36:05] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.536, 'rel_width': 0.134, 'rel_height': 0.257, 'rel_x1': 0.736, 'rel_y1': 0.589, 'rel_x2]
[2020-09-27 23:36:13] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.546, 'rel_width': 0.058, 'rel_height': 0.159, 'rel_x1': 0.767, 'rel_y1': 0.637, 'rel_x2]
[2020-09-27 23:36:17] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.623, 'rel_width': 0.058, 'rel_height': 0.147, 'rel_x1': 0.769, 'rel_y1': 0.644, 'rel_x2]
[2020-09-27 23:36:19] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.543, 'rel_width': 0.058, 'rel_height': 0.158, 'rel_x1': 0.769, 'rel_y1': 0.635, 'rel_x2]
[2020-09-27 23:36:31] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.574, 'rel_width': 0.058, 'rel_height': 0.156, 'rel_x1': 0.769, 'rel_y1': 0.637, 'rel_x2]
[2020-09-27 23:36:33] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.626, 'rel_width': 0.056, 'rel_height': 0.158, 'rel_x1': 0.769, 'rel_y1': 0.635, 'rel_x2]
[2020-09-27 23:36:35] [lib.nvr.cam02.object    ] [DEBUG   ] - Objects: [{'label': 'tie', 'confidence': 0.503, 'rel_width': 0.056, 'rel_height': 0.161, 'rel_x1': 0.769, 'rel_y1': 0.635, 'rel_x2]

I am definitely not interested in finding ties; I have no mention of tie anywhere in my setup, but I know it's one of the labels in the models bundled with the image.

What caused this? Is it a feature or a bug?

[FR] support for PCIe based EdgeTPU devices

There are a couple PCIe/M.2 based devices from Coral.ai along with their USB version. Outside of the US it's very difficult to obtain the USB one, but the PCIe/M.2 ones are ubiquitous.

This feature request is to support these other models as well. It looks like you interact with them through a device driver and a /dev file, and it appears to be the same hardware doing the heavy lifting, so I am hopeful that the interface you use would be similar as well.

I have one of these coming this week (for a different project) - I'd be happy to run any testing you may need.

[FR] - Unraid (Docker) community App

This one was on the reddit thread.

Unraid is an OS that provides a GUI for a PC acting as a NAS, but it also includes VMs and Docker containers.

There is a community app store (similar to the Synology NAS docker repository) where you can download an 'app' (docker) and specify variables (volumes, ports, env etc) inside the GUI.

App store here:
To create a template info here:

The benefit of this is that it checks for updates, and the container can be updated easily via the GUI.
HTH

how to debug "out of memory"

I just saw this exception when trying out the new (20200913) roflcoopter/viseron:latest image:

[2020-09-13 16:32:41] [lib.motion.cam04        ] [DEBUG   ] - Motion frames: 1, area: 1363.5
Exception in thread Thread-46:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/src/viseron/lib/camera.py", line 213, in capture_pipe
    pipe = self.pipe(stderr=True, single_frame=True)
  File "/src/viseron/lib/camera.py", line 190, in pipe
    bufsize=10 ** 8,
  File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.6/subprocess.py", line 1295, in _execute_child
    restore_signals, start_new_session, preexec_fn)
OSError: [Errno 12] Cannot allocate memory

I didn't change anything from the earlier version other than setting the default logging level to info and the motion detection logging level to debug.
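For scale: the traceback shows each decoder pipe being opened with bufsize=10 ** 8, i.e. roughly a 100 MB Python-side buffer per camera process, and on hosts with strict overcommit settings the fork in _execute_child itself can fail with ENOMEM. A rough, illustrative back-of-envelope:

```python
BUFSIZE = 10 ** 8  # from the traceback: bufsize=10 ** 8 per decoder pipe

def pipe_buffer_mb(cameras):
    """Upper bound on pipe-buffer memory across all camera decoder pipes."""
    return cameras * BUFSIZE / 1024 ** 2

print(f"5 cameras -> ~{pipe_buffer_mb(5):.1f} MB of pipe buffers alone")
# before counting raw frames, detector models, or the forked processes
```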

[FR] Masks

I can't see any existing FR for this, and it isn't on the current to-do list, so I'm requesting this basic but useful feature: masking of the frame to suppress motion and/or object detection in specific areas of the frame. This should save computational power too.
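A minimal sketch of how masking could work: zero out the masked regions before the motion detector ever sees the frame. Rectangles are used here to stay dependency-light; real NVRs typically support polygon masks (e.g. via OpenCV's fillPoly):

```python
import numpy as np

def apply_masks(frame, masks):
    """Zero out rectangular regions (x1, y1, x2, y2) so the motion
    detector sees no pixel changes there.  Polygon masks would work the
    same way, with a filled-polygon mask instead of slices."""
    masked = frame.copy()
    for x1, y1, x2, y2 in masks:
        masked[y1:y2, x1:x2] = 0
    return masked

frame = np.full((10, 10), 255, dtype=np.uint8)
masked = apply_masks(frame, [(0, 0, 5, 5)])
# the top-left 5x5 block is zeroed; the rest of the frame is untouched
```

Masking before detection also skips the masked pixels in every downstream stage, which is where the computational savings come from.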
