abhitronix / vidgear

A High-performance cross-platform Video Processing Python framework powerpacked with unique trailblazing features :fire:

Home Page: https://abhitronix.github.io/vidgear

License: Apache License 2.0

Python 98.76% Shell 1.24%
opencv multithreading python video-processing ffmpeg youtube video-stabilization video framework twitch


vidgear's Issues

Error With YouTube Live

I'm running the example code to use a live YouTube stream as a video source, but am receiving an error:

"YouTube Mode is enabled and the input YouTube Url is invalid!"

I'm using the sample code verbatim, and only changed the YouTube URL to a live stream (https://youtu.be/Nk4HAS-HOr4) instead of the Rick Roll video.

I'm pretty sure all of my packages are up to date, including youtube-dl.

Camera_num option in PiCamera

Question

I'm using a Raspberry Pi Compute module with two RPI cameras. I'm using the example code for the Raspberry Pi camera and I'm trying to use the camera_num option from here:

class picamera.PiCamera(camera_num=0, stereo_mode='none', stereo_decimate=False, resolution=None, framerate=None, sensor_mode=0, led_pin=None, clock_mode='reset', framerate_range=None)

I tried putting it in

stream = PiGear(camera_num=1, resolution=...

and also in the options dictionary, but both times it raises:
'PiCamera' object has no attribute 'camera_num'.

Is this supported? If I use the PiCamera module directly this works.
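For what it's worth, PiGear appears to apply user options by setting attributes on an already-constructed PiCamera object, which would explain the error: camera_num is a constructor-only argument and never reaches the constructor. A minimal pure-Python sketch of that pitfall (FakePiCamera and FakePiGear are stand-ins, not the real classes):

```python
class FakePiCamera:
    """Stand-in for picamera.PiCamera: camera_num is consumed at construction."""
    def __init__(self, camera_num=0):
        self._camera_num = camera_num  # fixed once the camera is opened

class FakePiGear:
    """Stand-in wrapper: builds the camera first, then applies options."""
    def __init__(self, **options):
        self.camera = FakePiCamera()          # camera_num defaults to 0 here
        for key, value in options.items():
            setattr(self.camera, key, value)  # too late to select the camera

gear = FakePiGear(camera_num=1)
# The attribute gets set, but the camera was already opened as camera 0:
print(gear.camera._camera_num)  # -> 0
```

Until PiGear exposes camera_num directly, driving the second camera through picamera itself (as you already do) looks like the workaround.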

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

I'm trying to synchronize the two Raspberry Pi Cameras. If I use the PiCamera library and thread them, they are still slightly out of sync. I'm hoping to use this library to simply write a timestamp to a log file for each video, and then I'll attempt to sync them offline. The built-in stereoscopic option in the PiCamera library doesn't support anything over about 640x480, and I'm looking to record at 1920x1080 on each camera.
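The offline-sync idea can be sketched without any camera code: each capture loop appends one monotonic timestamp per frame, and afterwards the constant offset between the two logs is estimated. A minimal sketch (frame_offset and the sample logs are hypothetical, not vidgear API):

```python
import statistics

def frame_offset(log_a, log_b):
    """Estimate the constant time offset (seconds) between two per-frame
    timestamp logs of equal frame count, using the median for robustness."""
    return statistics.median(b - a for a, b in zip(log_a, log_b))

# Two hypothetical per-frame timestamp logs; camera B lags camera A by 20 ms.
log_a = [0.000, 0.033, 0.066, 0.100]
log_b = [0.020, 0.053, 0.086, 0.120]

print(round(frame_offset(log_a, log_b), 3))  # -> 0.02
```

In the real capture loops, each timestamp would come from time.monotonic() right after stream.read() returns a frame.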

Your Environment

  • VidGear version: 0.1.5
  • Branch:
  • Python version: 3.7
  • pip version: 3
  • Operating System and version: Raspbian

Optional

[Proposal] Can NetGear Client receive input from multiple Servers?

[Enhancement] NetGear Client can receive input from multiple Servers and show them in a Grid View

Detailed Description

The same concept is shown here: https://www.pyimagesearch.com/2019/04/15/live-video-streaming-over-network-with-opencv-and-imagezmq/
Besides the frame, the NetGear Server could send other information, e.g. frame type, camera name; something like: server.send(frame, dict(msg_type=2, context='camera continue frame')) and the NetGear Client could receive the data like this: frame, extra_data = client.recv()
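The proposed send(frame, dict(...)) / recv() pair boils down to framing metadata next to the payload. A minimal pure-Python sketch of such framing (pack and unpack are hypothetical names; NetGear's actual wire format may differ):

```python
import json
import struct

def pack(frame_bytes, meta):
    """Prefix a length-framed JSON metadata header to the raw frame bytes."""
    header = json.dumps(meta).encode("utf-8")
    return struct.pack("!I", len(header)) + header + frame_bytes

def unpack(payload):
    """Split a packed message back into (frame_bytes, meta)."""
    (hlen,) = struct.unpack("!I", payload[:4])
    meta = json.loads(payload[4:4 + hlen].decode("utf-8"))
    return payload[4 + hlen:], meta

msg = pack(b"\x00\x01\x02", {"msg_type": 2, "camera": "cam-1"})
frame, meta = unpack(msg)
print(meta["camera"])  # -> cam-1
```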

How to read the length of a file in frames and how to jump to a specific frame

Hi.
I want to use vidgear for the webcam and also for file loading.
For the webcam or YouTube everything runs well; the problem is when I load a video file.
My program is based on frame position, so I use code like this:

video_pos = int(cap.get(cv2.CAP_PROP_POS_FRAMES))

but I get the error
AttributeError: 'CamGear' object has no attribute 'get'. How can I fix it?

Another question is how I can jump to a specific frame.

Thanks for the answer
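If CamGear keeps its underlying cv2.VideoCapture in a .stream attribute (an assumption for v0.1.5), then get/set can be called on that object instead of on CamGear itself. Sketched here with stub classes so it runs without OpenCV:

```python
# If CamGear exposes the raw capture as `.stream` (an assumption), then
# `camgear.stream.get(cv2.CAP_PROP_POS_FRAMES)` reads the frame position and
# `camgear.stream.set(cv2.CAP_PROP_POS_FRAMES, n)` seeks to frame n.

CAP_PROP_POS_FRAMES = 1  # stand-in for cv2.CAP_PROP_POS_FRAMES

class StubCapture:
    """Minimal stand-in for cv2.VideoCapture's get/set interface."""
    def __init__(self):
        self.props = {CAP_PROP_POS_FRAMES: 0.0}
    def get(self, prop):
        return self.props[prop]
    def set(self, prop, value):
        self.props[prop] = float(value)
        return True

class StubCamGear:
    """Wrapper that, like CamGear, exposes the raw capture as `.stream`."""
    def __init__(self):
        self.stream = StubCapture()

cam = StubCamGear()
cam.stream.set(CAP_PROP_POS_FRAMES, 120)         # jump to frame 120
print(int(cam.stream.get(CAP_PROP_POS_FRAMES)))  # -> 120
```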

Question

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

Your Environment

  • VidGear version:
  • Branch:
  • Python version:
  • pip version:
  • Operating System and version:

Optional

I want to get the live stream from YouTube, and for that I have used OpenCV along with the vidgear package. But while running the code, I am getting the following error. I am sure that there is no problem with the URL. I have tried pafy and streamlink; both gave results, but after a few frames the stream would get stuck, and I want sequential frames without any pause.

import cv2
from vidgear.gears import CamGear

stream = CamGear(source="https://www.youtube.com/watch?v=VIk_6OuYkSo", y_tube=True, time_delay=1, logging=True).start() # YouTube Video URL as input

while True:

    frame = stream.read()
    if frame is None:
        break

    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(30)
    if key == ord("q"):
        break

cv2.destroyAllWindows()
stream.stop()

'NoneType' object has no attribute 'extension'
Traceback (most recent call last):
File "C:\Users\CamfyVision\AppData\Local\Programs\Python\Python36\lib\site-packages\vidgear\gears\camgear.py", line 120, in __init__
print('Extension: {}'.format(_source.extension))
AttributeError: 'NoneType' object has no attribute 'extension'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "DronrStream.py", line 4, in
stream = CamGear(source="https://www.youtube.com/watch?v=VIk_6OuYkSo", y_tube =True, time_delay=1, logging=True).start() # YouTube Video URL as input
File "C:\Users\CamfyVision\AppData\Local\Programs\Python\Python36\lib\site-packages\vidgear\gears\camgear.py", line 125, in __init__
raise ValueError('YouTube Mode is enabled and the input YouTube Url is invalid!')

Video isn't written when packaging the program with PyInstaller

I use PyInstaller to package my app. I create a one-file, windowed application, and I use a path function I wrote myself.
https://github.com/Seraphli/TestField/blob/master/Examples/logging_and_tqdm/util.py#L4-L43
I create a config file in subfolder config and that is fine.
I then pass the absolute path to WriteGear and it creates the video file. But after closing the WriteGear, the video file has 0kb size. Seems the video is not written into it.

# import libraries
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2
import time

options = {'top': 40, 'left': 0, 'width': 100,
           'height': 100}  # define dimensions of screen w.r.t to given monitor to be captured

output_params = {"-vcodec": "libx264", "-crf": 0,
                 "-preset": "fast"}  # define (Codec,CRF,preset) FFmpeg tweak parameters for writer

stream = ScreenGear(monitor=1,
                    logging=True).start()  # Open Live Screencast on current monitor

writer = WriteGear(output_filename='Output.mp4', compression_mode=True,
                   logging=True,
                   **output_params)  # Define writer with output filename 'Output.mp4'

# infinite loop
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        # if True break the infinite loop
        break

    # {do something with frame here}
    # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write a modified frame to writer
    writer.write(frame)

    # Show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed break out
        break

    # delay of about 0.1 seconds
    time.sleep(0.1)

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream
writer.close()
# safely close writer

I created a one-file app for this small demo.
https://drive.google.com/file/d/1Wgd4bm9bzK079TL73XfEuwGxsZpLcExt/view?usp=sharing

[Proposal] WebRTC Real-time video streaming with vidgear

Detailed Description

WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs. The WebRTC components have been optimized to best serve this purpose. This proposal is to test whether it is possible to add WebRTC python support to transfer video frames from source using webRTC in real-time and bring this implementation under vidgear's multithreaded API environment.

Context

Our goal through this proposal is to test whether it is possible to add WebRTC python support to transfer video frames from source using webRTC in real-time and if possible, then implement this with vidgear multithreaded API.

Your Environment

  • VidGear version: all
  • Branch: Development
  • Python version: all
  • pip version: not applicable
  • Operating System and version: all

Any Other Important Information

Helpful Resource/Library : Aiortc is a library for Web Real-Time Communication (WebRTC) and Object Real-Time Communication (ORTC) in Python. It is built on top of asyncio, Python's standard asynchronous I/O framework. Source: https://github.com/aiortc/aiortc

How can we make frame transmission faster?

I want to use this project's NetGear. When I use a 10x10 pixel frame, the sending time is 0.0065s per frame. For 600x600 it is 0.095s, and for 1000x1000 it is 0.26s. This is too slow; I want a real-time display of the transmitted frames. What can we do to speed up the transmission?

I tried to use asyncio to improve the transmission speed, but I failed. Here is my code:

import asyncio
import os
import cv2

async def send_img(server):

    # print('send the number is: ', queue.qsize())
    while True:
        if not queue.empty():
            frame = queue.get()
            print('sending....')
            frame = cv2.resize(frame, (640, 480))  # note: cv2.resize() requires a target size
            server.send(frame)
        await asyncio.sleep(0)  # without an await, this loop starves the event loop


def main():
    print(os.getpid())
    pid = os.fork()
    if pid == 0:
        detection()
    else:

        options = {'copy': False, 'track': False}
        # change following IP address '192.168.1.xxx' with yours
        server = NetGear(address='192.168.10.189', port='5454', protocol='tcp', pattern=0, receive_mode=False,
                         logging=True, **options)  # Define netgear server at your system IP address.

        loop = asyncio.get_event_loop()
        loop.run_until_complete(send_img(server))

This is a code segment; can anyone give some suggestions?
Thank you
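One likely culprit in the snippet above: the while True loop never awaits, so the event loop has no chance to run anything else. A minimal sketch of the awaiting producer/consumer shape, with plain lists and strings standing in for the detector and for server.send():

```python
import asyncio

async def producer(queue, n):
    """Stand-in for the detection process: put n fake frames on the queue."""
    for i in range(n):
        await queue.put(f"frame-{i}")
    await queue.put(None)  # sentinel: no more frames

async def sender(queue, sent):
    """Stand-in for server.send(): drain the queue, awaiting between frames."""
    while True:
        frame = await queue.get()  # yields to the event loop instead of spinning
        if frame is None:
            break
        sent.append(frame)

async def main():
    queue = asyncio.Queue()
    sent = []
    await asyncio.gather(producer(queue, 3), sender(queue, sent))
    return sent

print(asyncio.run(main()))  # -> ['frame-0', 'frame-1', 'frame-2']
```

Note that asyncio only helps overlap waiting, not raw serialization work; for CPU-bound payloads, reducing frame size before sending matters more.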

How to set framerate with ScreenGear

Question

I would like to set the frame rate with which the ScreenGear module acquires the frames from my monitor.
How can I do it?

Other details:

If I grab a small frame, the video is slowed way down (i.e. I record for 10 seconds but the written video is 1 minute long), since it is able to grab many fps and they are written at the fixed fps of the WriteGear module (e.g. 25).

On the other hand, if I grab a large frame (e.g. my whole 4K monitor), the video is sped way up (i.e. I record for 30 seconds but the written video is 5 seconds long), since it is able to grab only a few fps because of the large images.

How can I manage this?
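One approach, assuming ScreenGear itself exposes no framerate option: throttle the capture loop to the same fixed fps that WriteGear uses, so wall-clock time and video time match. A minimal sketch (RateLimiter is a hypothetical helper, not vidgear API):

```python
import time

class RateLimiter:
    """Sleep just enough each iteration to hold a target frames-per-second."""
    def __init__(self, fps):
        self.interval = 1.0 / fps
        self.next_tick = time.monotonic()

    def wait(self):
        self.next_tick += self.interval
        delay = self.next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)

limiter = RateLimiter(fps=100)
start = time.monotonic()
for _ in range(10):
    # frame = stream.read(); writer.write(frame)  # real capture would go here
    limiter.wait()
elapsed = time.monotonic() - start
print(f"10 frames took {elapsed:.3f}s")
```

This fixes the slow-video case by discarding surplus grabs; the fast-video case (grabbing fewer fps than the writer expects) can only be fixed by lowering the writer's fps or shrinking the captured region.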

Set VideoGear's decoder.

Is there any way to set the video decoder for the VideoGear class? I've been looking through the docs but haven't found many details.
I would like to use the GPU to decode the video, in order to speed up the process a little.

How to set webcam resolution?

Thank you very much for this amazing work; it came just when I needed it!

I'm trying to set the resolution to 1080p but isn't working. My code looks like this:

Read the video files

options = {"hflip": True, "exposure_mode": "auto", "iso": 'auto', 
           "exposure_compensation": 'auto', "awb_mode": "horizon",
           "sensor_mode": 0} 

video1 = VideoGear(enablePiCamera = False, resolution= (1920, 1080),
                   framerate=24, time_delay=2, **options).start() 

How to Get Frames from a YouTube Live Stream

Hello
This code works with some YouTube videos,
but it does not work with these YouTube live streams:

https://www.youtube.com/watch?v=17Deeq8N2e4
https://www.youtube.com/watch?v=1y5dcfnv-Ss
https://www.youtube.com/watch?v=tbLXWVhu8-Q

  1. Method
    vPafy = pafy.new(videoUrl)
    play = vPafy.getbest(preftype="mp4")
    return play.url

  2. Method
    streams = streamlink.streams(videoUrl)
    return streams["best"].url

cap = cv.VideoCapture(videoUrl)

Neither method works.

The error looks like this:

[ERROR:0] global C:\projects\opencv-python\opencv\modules\videoio\src\cap.cpp (116) cv::VideoCapture::open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\videoio\src\cap_images.cpp:235: error: (-5:Bad argument) CAP_IMAGES: error, expected '0?[1-9][du]' pattern, got: https://manifest.googlevideo.com/api/manifest/hls_playlist/expire/1574433696/ei/QJ_XXfqBHNOTgQOBqpGwBw/ip/222.112.215.2/id/1EiC9bvVGnk.1/itag/96/source/yt_live_broadcast/requiressl/yes/ratebypass/yes/live/1/goi/160/sgoap/gir%3Dyes%3Bitag%3D140/sgovp/gir%3Dyes%3Bitag%3D137/hls_chunk_host/r5---sn-3u-bh2ll.googlevideo.com/playlist_type/DVR/initcwndbps/7760/mm/44/mn/sn-3u-bh2ll/ms/lva/mv/m/mvi/4/pl/23/dover/11/keepalive/yes/fexp/23842630/mt/1574412027/sparams/expire,ei,ip,id,itag,source,requiressl,ratebypass,live,goi,sgoap,sgovp,playlist_type/sig/ALgxI2wwRAIgG6xA4SgrD4PZGAfyup1jpL003-U3CQomrURDSKrCbZQCIElX0iQYvSGuZK-aoDsbY9Zv6SVTCNHOXGoUXhPCj0bN/lsparams/hls_chunk_host,initcwndbps,mm,mn,ms,mv,mvi,pl/lsig/AHylml4wRQIhAO1AS0iv1JaOu9igx-i3uGV-52UNCvd1Kd4Fu9SSC6OqAiAVjNjYYdr37w4Id111zdRsu8csAkIAfynBOk4SEO9f3w%3D%3D/playlist/index.m3u8 in function 'cv::icvExtractPattern'

Traceback (most recent call last):
File "D:/VideoFeedProcessing/VideoFeed/main.py", line 226, in
out = cv.VideoWriter('_output.avi', fourcc, 15, (ori_wid, ori_hei))
TypeError: must be real number, not tuple
[tcp @ 000001601c64fbc0] Connection to tcp://manifest.googlevideo.com:443 failed: Error number -138 occurred

video sending using multithreading

For context, I transfer video from a camera on a Raspberry Pi 3 to a local computer. The local network is only used for this purpose, so there is no congestion on the network; besides, I am connected by cable (LAN) on both the computer and the Raspberry Pi. The sending is slow, and I had an idea to increase the transfer speed by sending over two parallel connections.

Question

I have a question, to verify if it is possible or someone else has tried to do it.

In context, I transfer video from a camera on a Raspberry Pi 3 to a local computer. The local network is only used for this purpose, so there is no congestion on the network; besides, I am connected by cable (LAN) on both the computer and the Raspberry Pi.
The video I transfer has a size of 1920 x 1080.

Using the NetGear library I transfer the video correctly, but at an average speed of 3 FPS.

For my purposes I need the video at a higher speed, but without reducing its quality or resizing it.

server = NetGear(address='192.168.x.xxx', port='5454', protocol='tcp', pattern=1, receive_mode=False, logging=True, **options)  # Define netgear server at Server IP address
server2 = NetGear(...)  # same settings, different port

I tried to implement communication using 2 threads, where each thread sends a video frame to the computer, so that while one frame is being sent another thread can send the subsequent frame at the same time. In theory this could help me increase the FPS.

But what I have observed is that internally, even if you use threads, frames are only sent one by one; even with two sending objects, one will internally wait for the other to finish sending before sending its own.

For example, I implemented 3 threads to send video at the same time with consecutive frames, running the code for 10 seconds. Each thread could only send 10 frames, so 30 in total over ten seconds: 3 FPS.

Each thread is constantly checking on a corresponding video stack if there is a frame to send.

while True:
    if len(queue1) > 0:
        server.send(frame)
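A sketch of the multi-socket pattern being described: ZeroMQ sockets are not safe to share across threads, so each thread would own its own server (on its own port) and drain its own queue, with frames distributed round-robin. Here plain lists stand in for the NetGear servers:

```python
import queue
import threading

def worker(frames, sent, lock):
    """Drain one queue; each thread owns its own 'server' (here, a shared list)."""
    while True:
        frame = frames.get()
        if frame is None:  # sentinel: stop
            break
        with lock:
            sent.append(frame)

NUM_THREADS = 3
queues = [queue.Queue() for _ in range(NUM_THREADS)]
sent, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(q, sent, lock)) for q in queues]
for t in threads:
    t.start()

# Distribute 9 consecutive frames round-robin across the per-thread queues.
for i in range(9):
    queues[i % NUM_THREADS].put(i)
for q in queues:
    q.put(None)
for t in threads:
    t.join()

print(sorted(sent))  # -> [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

Whether this actually raises throughput depends on where the bottleneck is: if the network link itself is saturated, parallel sockets cannot help, but if per-send latency dominates, they can overlap.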

Acknowledgment

  • [*] A brief but descriptive Title of your issue.
  • [*] I have searched the issues for my issue and found nothing related or helpful.
  • [*] I have read the FAQ.
  • [*] I have read the Wiki.

Context

I mainly want to increase the amount of video frames I receive

In context, I transfer video from a camera on a Raspberry Pi 3 to a local computer. The local network is only used for this purpose, so there is no congestion on the network; besides, I am connected by cable (LAN) on both the computer and the Raspberry Pi.
The video I transfer has a size of 1920 x 1080.

Your Environment

  • VidGear version: 0.1.5
  • Branch:
  • Python version: 3.5
  • pip version: current version
  • Operating System and version: Ubuntu 16.04

Optional


Speed up video sending by compressing the image

<!--- Provide a general summary of the problem in the Title above --->
Accelerate video streaming by compressing the information.

Detailed Description

<!--- Provide a detailed description of the change or addition you are proposing --->
NetGear works correctly for sending video between clients and servers, but its speed could be improved in situations where high-quality video is handled or the network is saturated, by reducing the payload size through compression of the video being sent.

What I propose is to internally add a preprocessing step for the image, compressing it in JPEG format with a variable that indicates what quality to compress to; it could even be compressed as PNG.

Something like, before sending it, apply the compression:

encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
result, encimg = cv2.imencode('.jpg', img, encode_param)

and upon receiving it, decompress it:

decimg = cv2.imdecode(encimg, 1)

information here:
https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga292d81be8d76901bff7988d18d2b42ac
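The size-for-CPU tradeoff behind this proposal can be illustrated without OpenCV, using lossless zlib on a synthetic frame; in the real proposal, cv2.imencode with IMWRITE_JPEG_QUALITY plays the role of the compression-level knob:

```python
import zlib

# A synthetic, highly compressible "frame": a repeating byte pattern, ~1 MB raw.
row = bytes(range(256)) * 4   # 1024 bytes per row
frame = row * 1000            # ~1 MB raw payload

fast = zlib.compress(frame, level=1)   # cheaper CPU, bigger payload
small = zlib.compress(frame, level=9)  # more CPU, smaller payload

print(len(frame) > len(fast) >= len(small))  # -> True
```

For real camera frames, lossy JPEG shrinks the payload far more than lossless compression can, which is why the quality parameter matters.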

Context

<!--- Why is this change important to you? How would you use it? --->
I think it would be helpful for transmitting video at large sizes, where its weight in memory causes slow delivery, as well as in processes where image quality is not of great importance.
<!--- Will this change the existing VidGear APIs? How? --->
It would not affect them much, because the compression and decompression happen point to point, on send and on receive.
<!--- How can it benefit other users? --->
By helping them speed up the process of sending information.

Your Environment

<!--- Include as many relevant details as possible about the environment you worked in --->

  • VidGear version: v0.1.5
  • Branch: <!--- Master / Testing / Development / PyPi ---> Master
  • Python version: 3.5
  • pip version: 19.3.1
  • Operating system and version: Linux

Any Other Important Information

<!--- This is an example/screenshot that I want to share --->
mejora

The output video is slower than the input

Hi,

I don't know if I'm the only one experiencing this, but the output video after framewise processing using OpenCV is slower than the input video.

Basically the workflow is this:

  1. Read frame from stream;
  2. Apply OpenCV functions over it
  3. Write the frame back using Writer.

In my case, a one-minute video (4K 60FPS) ends up being two and a half minutes long.

Python 2 MultiServer Mode, ValueError: [ERROR]: Failed to connect address

I use the testing branch of vidgear to use Multiserver Mode.

I am running this part of the code on a Raspberry Pi 3B+ that transmits the video from a USB camera to a computer.

from vidgear.gears import NetGear
from vidgear.gears import CamGear
import cv2

#Open live video stream from device at index 0
stream = cv2.VideoCapture(0) 

#activate multiserver_mode
options = {'multiserver_mode': True, 'flag' : 0, 'copy' : False, 'track' : False}

#change following IP address '192.168.1.xxx' with Client's IP address and assign unique port address(for e.g 5566).
server = NetGear(address = '192.168.x.x', port = '5566', protocol = 'tcp',  pattern = 2, receive_mode = False, **options) # and keep rest of settings similar to Client

# infinite loop until [Ctrl+C] is pressed
while True:
	try: 
		(grabbed, frame) = stream.read()
		# read frames

		# check if frame is not grabbed
		if not grabbed:
			#if True break the infinite loop
			break

		# do something with frame and data(to be sent) here

		text = "Hello, I'm Server-1 at Port Address: 5566."

		# send frame and data through server
		server.send(frame, message = text)
	
	except KeyboardInterrupt:
		#break the infinite loop
		break

# safely close video stream.
stream.release()
# safely close server-1
server.close()

I get this error:
ValueError: [ERROR]: Failed to connect address: tcp://192.168.0.107:5566 and pattern: 2! Kindly recheck all parameters.

Tracking the problem, it originates in line 477:
self.msg_socket.bind(protocol+'://' + str(address) + ':' + str(port))

testing this code in python3 works fine.

Is the problem due to the python version?

[Proposal] Crop and Zoom feature for Stabilizer class

Detailed Description

A new parameter that can handle cropping and zooming frames, to reduce the black borders arising from stabilization, which are too noticeable.

Context

Currently the border_size parameter in the Stabilizer class only adds a border to the output frame to reduce the effect of warping during stabilization. This proposal is to additionally introduce a new parameter that can handle cropping and zooming frames to reduce the black borders (similar to the feature available in Adobe After Effects).

Your Environment

  • VidGear version: 0.1.6-dev
  • Branch: development
  • Python version: all 3+
  • pip version: all
  • Operating System and version: all

Enhancement: Real-time Video Stabilization in vidgear

Real-time Video Stabilization in vidgear

Introduction:

Video stabilization refers to a family of methods used to reduce the blurring and distortion associated with the motion of the camera. In other words, it compensates for any angular movement, equivalent to yaw, pitch, roll, and x and y translations of the camera.

A related problem is common in videos shot from mobile phones. The camera sensors in these phones contain what is known as an electronic rolling shutter. When taking a picture with a rolling-shutter camera, the image is not captured instantaneously. Instead, the camera captures the image one row of pixels at a time, with a small delay when going from one row to the next. Consequently, if the camera moves during capture, it will cause image distortions ranging from shear in the case of low-frequency motions (for instance, an image captured from a drone) to wobbly distortions in the case of high-frequency perturbations (think of a person walking while recording video). These distortions are especially noticeable in videos where the camera shake is independent across frames.

The ability to locate, identify, track, and stabilize objects at different poses and backgrounds is important in many real-time video applications. Object detection, tracking, alignment, and stabilization have been a research area of great interest in computer vision and pattern recognition due to the challenging nature of slightly differing objects such as faces, where algorithms should be precise enough to identify, track, and focus on one individual among the rest.

Real-Time Video Stabilization:

A few months back, while doing research for my humanoid, I experienced significant jitter in the output due to motion in the cameras/servos/platform, which caused tracked features to get lost along the way and thus resulted in false-positive movement of the humanoid's eyes. In order to eliminate this problem, I decided to implement a real-time video stabilizer. I studied and experimented with various methods published in research papers and online resources, and finally came to the conclusion that some state-of-the-art video stabilization methods can achieve quite a good visual effect, but they always cost a lot of time; on the other hand, existing real-time video stabilization methods cannot generate satisfactory results.

Goal:

Our goal is to implement real-time video stabilization for vidgear which can provide a good balance between stabilization and latency at expense of little to no computational power requirement thereby ideal for the raspberry pi too. Secondly, It must be implemented using OpenCV Computer Vision library for open-source considerations.

Resources:

  • Going through various methods published in various research papers and online resources, I think the Simple video stabilization using OpenCV by nghiaho12 works the best on my Raspberry Pi. It is less computationally expensive and there is a C++ implementation available for getting things started with.
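The core of that method is: accumulate per-frame transforms into a trajectory, smooth the trajectory, and apply the difference back to each frame as a correction. The smoothing step for one axis can be sketched in pure Python (smooth and the sample trajectory are illustrative):

```python
def smooth(trajectory, radius=2):
    """Box-filter a 1-D camera trajectory. In the full method, the stabilizing
    correction for frame i is smoothed[i] - trajectory[i], applied as a warp."""
    n = len(trajectory)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = trajectory[lo:hi]  # window shrinks near the ends
        out.append(sum(window) / len(window))
    return out

# A jittery x-translation trajectory (the cumulative sum of per-frame dx).
traj = [0.0, 2.0, 1.0, 3.0, 2.0, 4.0, 3.0, 5.0]
print([round(v, 2) for v in smooth(traj)])
# -> [1.0, 1.5, 1.6, 2.4, 2.6, 3.4, 3.5, 4.0]
```

The real implementation does this for dx, dy, and the rotation angle simultaneously, which is still cheap enough for a Raspberry Pi.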

TODO

  • Implement a Real-Time Video Stabilizer from scratch in python
  • No extra dependencies must be used beyond the existing ones
  • Must provide good stabilization and low latency with no extra resources
  • Merge stabilizer with VideoGear Class
  • Must be compatible with any video stream and able to perform at High FPS.

Enhancement: Multithreaded Live Screen Cast support in vidgear

Multithreaded Live Screen Cast

Introduction:

A screencast is a digital recording of computer screen output, also known as a video screen capture. The term screencast compares with the related term screenshot; whereas screenshot generates a single picture of a computer screen, a screencast is essentially a movie of the changes over time that a user sees on a computer screen.

Available Resources:

Python MSS:

MSS stands for Multiple Screen Shots; it is an ultra-fast, cross-platform, multiple-screenshots module in pure Python using ctypes. With MSS we can easily define an area of the computer screen, or an open window, and record the live screen.

Goal

Our goal is to implement live Screen Cast support in vidgear by building a high-level wrapper around Python MSS, with little to no added latency, in Python.

TODO

  • Implement Live Screen Cast support in vidgear
  • Prepare a Multi-Threaded wrapper around Python-MSS
  • Create new ScreenGear class from scratch to implement this feature
  • Make ScreenGear compatible with existing vidgear Classes.
  • Must provide higher framerate at low latency with fewer resources
  • Optimize overall performance
  • Fix related bugs

Dropped support for Python 2.7 legacy

Hello everyone,

VidGear currently supports both Python 3 and legacy Python 2.7. But support for Python 2.7 will be dropped in the upcoming major release, i.e. v0.1.6, as most of vidgear's critical dependencies have already migrated, or are in the process of migrating, their source code to Python 3. Therefore, this issue is a reminder for everyone using vidgear to start migrating to Python 3 as soon as possible.

WriteGear Bare-Minimum example (Non-Compression) not working

Description

  1. I followed the demo here: https://github.com/abhiTronix/vidgear/wiki/Non-Compression-Mode:-OpenCV#1-writegear-bare-minimum-examplenon-compression-mode

  2. Run the code

  3. The following error showed:

Compression Mode is disabled, Activating OpenCV In-built Writer!
InputFrame => Height:360 Width:640 Channels:1
FILE_PATH: /******/Output.mp4, FOURCC = 1196444237, FPS = 30.0, WIDTH = 640, HEIGHT = 360, BACKEND =
OpenCV: FFMPEG: tag 0x47504a4d/'MJPG' is not supported with codec id 7 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Warning: RGBA and 16-bit grayscale video frames are not supported by OpenCV yet, switch to `compression_mode` to use them!
Traceback (most recent call last):
  File "cam_demo.py", line 31, in <module>
    writer.write(gray)
  File "/Users/*****/lib/python3.7/site-packages/vidgear/gears/writegear.py", line 221, in write
    raise ValueError('All frames in a video should have same size')
ValueError: All frames in a video should have same size

Acknowledgment

  • A brief but descriptive Title of your issue
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.
  • I have read the Contributing Guidelines.

Environment

  • VidGear version: 0.1.5
  • Branch: PyPi
  • Python version: 3.7.3
  • pip version: 19.1.1
  • Operating System and version: macOS 10.14.3

Expected Behavior

Write frame to file.

Actual Behavior

Frames are written in different sizes.

Possible Fix

Could WriteGear specify the output frame size?

Steps to reproduce

(Write your steps here:)

See description.

Optional

FFmpeg backend

FFmpeg parameters for opencv capture

I have started looking through CamGear and am hoping it will allow more control of the FFmpeg backend.

Question

So, is it possible to launch an OpenCV capture object with an FFmpeg command?

The reason being, I would like to use cuvid / h264_cuvid (the hardware decoder).
(I know you can kind of do it with OPENCV_FFMPEG_CAPTURE_OPTIONS="video_codec;h264_cuvid|rtsp_transport;tcp",
but this doesn't give access to all the other FFmpeg features.)

I would also like to pipe that capture to another container (.mp4) with codec copy, so we don't waste resources,

and also potentially map it to another sink like an ffserver feed.

Regards Andrew
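For the decoder part, the environment-variable route mentioned above can at least be built programmatically; it must be set before cv2 is first imported, since OpenCV's FFmpeg backend reads it at load time. The helper name here is hypothetical:

```python
import os

def ffmpeg_capture_options(opts):
    """Render a dict of FFmpeg demuxer/decoder options into the
    'key;value|key;value' format OpenCV's FFmpeg backend expects."""
    return "|".join(f"{k};{v}" for k, v in opts.items())

# Must be set BEFORE `import cv2` for the backend to pick it up.
os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"] = ffmpeg_capture_options(
    {"video_codec": "h264_cuvid", "rtsp_transport": "tcp"}
)
print(os.environ["OPENCV_FFMPEG_CAPTURE_OPTIONS"])
# -> video_codec;h264_cuvid|rtsp_transport;tcp
```

The container-copy and ffserver-mapping parts are beyond what cv2.VideoCapture exposes; they would need a separate FFmpeg process (which is roughly what WriteGear's custom-command mode wraps on the writing side).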

How to synchronize between two cameras?

Thank you for that great repository.

Is it possible to have two cameras taking images synchronized to each other? Or will there be an offset due to the multithreading nature?

module 'zmq' has no attribute '0'

When I use NetGear, I get this error: module 'zmq' has no attribute '0'
I installed pyzmq==17.1.2
numpy==1.15.4
opencv-contrib-python==3.4.2.16
vidgear==0.1.5

And I run this server in a container based on ubuntu 16.04.

here is my code:

# import libraries
from vidgear.gears import NetGear
import cv2

stream = cv2.VideoCapture('hamilton_clip.mp4') #Open any video stream

options = {'flag': '0', 'copy': False, 'track': False}
#change following IP address '192.168.1.xxx' with yours
server = NetGear(address = '192.168.10.189', port = '5454', protocol = 'tcp',  pattern = 0, receive_mode = False, logging = True, **options) #Define netgear server at your system IP address.

# infinite loop until [Ctrl+C] is pressed
while True:
	try: 
		(grabbed, frame) = stream.read()
		# read frames

		# check if frame is not grabbed
		if not grabbed:
			#if True break the infinite loop
			break

		# do something with frame here

		# send frame to server
		server.send(frame)
	
	except KeyboardInterrupt:
		#break the infinite loop
		break

# safely close video stream
stream.release()
# safely close server
server.close()

It comes from the NetGear demo.

Replace print with a logging module

Detailed Description

Proposal to completely replace the print command with Python's logging module and add severity levels for dynamic usage.

Context

The proposal is to replace printed errors with Python's logging module. The logging library has a lot of useful features:

  • Easy to see where and when (even what line no.) a logging call is being made from.
  • You can log to files, sockets, pretty much anything, all at the same time.
  • You can differentiate your logging based on severity.

But print doesn't have any of these. Also, if vidgear is meant to be imported by other Python tools, it's bad practice to print things to stdout, since the user likely won't know where the print messages are coming from. With the logging module, a user can choose whether or not to propagate logging messages from vidgear.
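A minimal sketch of the library-side pattern being proposed (the logger name and messages are illustrative, not vidgear's actual ones): the library logs to a named logger, and the embedding application decides handlers, format, and severity:

```python
import io
import logging

# Library-style logger: named after the module, no handlers forced on users.
logger = logging.getLogger("vidgear.camgear")

# What an application embedding the library might do: attach its own handler.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("Enabling Threaded Queue Mode!")  # severity: diagnostic detail
logger.warning("Invalid source detected")      # severity: user should look

print(buf.getvalue().splitlines()[0])
# -> DEBUG:vidgear.camgear:Enabling Threaded Queue Mode!
```

An application that wants silence simply never attaches a handler, or raises the level; with print, neither is possible.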

Your Environment

  • VidGear version: All
  • Branch: Testing
  • Python version: All 3+
  • pip version: Latest
  • Operating System and version: All

Pulling Youtube Video

Copied and pasted your code from GitHub. The code fails to work. It seems to be returning None frames, similar to feeding cv2 with links generated using Pafy.

CODE:

from vidgear.gears import CamGear
import cv2

stream = CamGear(source='https://youtu.be/dQw4w9WgXcQ', y_tube=True, time_delay=1, logging=True).start() # YouTube Video URL as input

# infinite loop
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        #if True break the infinite loop
        break

    # do something with frame here

    cv2.imshow("Output Frame", frame)
    # Show output window

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        #if 'q' key-pressed break out
        break

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream.

OUTPUT:

python vidgear_test.py 
Title: Rick Astley - Never Gonna Give You Up (Video)
Extension: webm

pipenv config:

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]
numpy = "*"
opencv-python = "*"
jupyter = "*"
tensorflow = "*"
pillow = "*"
matplotlib = "*"
youtube-dl = "*"
pafy = "*"
vidgear = "*"

[requires]
python_version = "3.6"

Remove duplicate code to import MSS

Detailed Description

The code to import the MSS module is duplicated. You first try to import OS-specific code, and if no OS matched, you still do from mss import mss.
But the current code duplicates the mss.mss() factory, which already handles OS-specific imports for you :)

The current code simply doubles the checks, and the result would be the same in any case (when the OS is not handled).

The patch would be quite simple:

-			import platform			
-			if platform.system() == 'Linux':
-				from mss.linux import MSS as mss
-			elif platform.system() == 'Windows':
-				from mss.windows import MSS as mss
-			elif platform.system() == 'Darwin':
-				from mss.darwin import MSS as mss
-			else:
-				from mss import mss
+			from mss import mss

Are you interested in a PR?

BTW I am the MSS author, glad to see the module in use there 👍

Context

Your Environment

  • VidGear version: 0.1.5
  • Branch: all
  • Python version: all
  • pip version: all
  • Operating System and version: all

Any Other Important Information

WriteGear - use hardware encoder

WriteGear uses the libx264 or libx265 encoders. Is it possible to make use of existing hardware encoders, for example the h264_vaapi encoder on an Intel CPU? I ask because with libx264 my CPU saturates and the output video drops frames. In tests I have done outside of WriteGear, using the h264_vaapi encoder significantly reduces the CPU load.

How to recover from picamera I/O operation on closed file?

Question

Sometimes when fetching frames from PiGear in an infinite loop, my program would encounter this async exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/vidgear/gears/pigear.py", line 158, in update
    for stream in self.stream:
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/camera.py", line 1889, in capture_continuous
    if not encoder.wait(self.CAPTURE_TIMEOUT):
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/encoders.py", line 395, in wait
    self.stop()
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/encoders.py", line 419, in stop
    self._close_output()
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/encoders.py", line 349, in _close_output
    mo.close_stream(output, opened)
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/mmalobj.py", line 428, in close_stream
    stream.flush()
  File "/usr/local/lib/python3.7/dist-packages/picamera-1.13-py3.7.egg/picamera/array.py", line 237, in flush
    super(PiRGBArray, self).flush()
ValueError: I/O operation on closed file.

If it happens, there seems to be no obvious way for the caller program to know about it, and there is no instruction on how to recover from this exception. Is there any way to detect or recover from this situation?
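One workaround, pending a built-in fix, is a small watchdog that rebuilds the stream when reads stop yielding frames. This is a stdlib-only sketch: `FlakyStream` is a stand-in for PiGear (the real class would be stopped and restarted in `make_stream`), and the names `frames_with_restart`/`max_restarts` are illustrative, not vidgear API. Note the real failure above is an async exception in PiGear's capture thread; the sketch only shows the detect-and-recreate pattern.

```python
class FlakyStream:
    """Stand-in for PiGear: fails once, then works after a restart."""
    failures = 1                         # shared across restarts

    def __init__(self):
        self.frames = 3

    def read(self):
        if FlakyStream.failures > 0:
            FlakyStream.failures -= 1
            return None                  # imitates the dead-stream state
        if self.frames:
            self.frames -= 1
            return b"frame"
        return None                      # stream genuinely exhausted

def frames_with_restart(make_stream, max_restarts=2):
    """Yield frames; rebuild the stream whenever read() returns None."""
    restarts = 0
    stream = make_stream()
    while True:
        frame = stream.read()
        if frame is None:
            if restarts >= max_restarts:
                return                   # give up: stream is really done
            restarts += 1
            stream = make_stream()       # with PiGear: stop() old, start() new
            continue
        yield frame

got = list(frames_with_restart(FlakyStream))
print(len(got))
```

With PiGear, `make_stream` would be a function that calls `stop()` on the old instance and returns a freshly `start()`-ed one; a timestamp-based timeout can replace the `None` check if the stream keeps returning stale frames instead.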

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

Your Environment

  • VidGear version: 0.1.5
  • Branch: PyPi
  • Python version: 3.7.3
  • pip version: 18.1
  • Operating System and version: Raspbian Buster

How to add audio to WriteGear class?

Many thanks for your VidGear code. In real time, I am trying to process a
webcam video input using OpenCV (on Ubuntu), then write the output to a file using
WriteGear (no audio at this point), and all works fine.
The problem comes when I try to add the audio from the webcam to the output. I thought
inserting something simple like
-f alsa -ac 1 -i hw:0
so the ffmpeg command line looks like
ffmpeg -y -f rawvideo -vcodec rawvideo -s 1280x720 -pix_fmt gray -i - -f alsa -ac 1 -i hw:0 -vcodec libx264 -crf 0 -preset fast output.mp4
at the point you build the ffmpeg string cmd (line 318 in writegear) would do the trick, but that
simply throws an ffmpeg error. However I structure the ffmpeg command, I
can't get ffmpeg to accept the command string.
Is it possible to add audio to the WriteGear ffmpeg output?
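For reference, FFmpeg groups options per input: everything before an `-i` applies to that input only, so the ALSA options must sit immediately before `hw:0` and the raw-video options before the pipe input `-`. Whether WriteGear's internal command builder lets you inject a second input block this way is a separate question (its option handling may reorder flags); the sketch below only assembles the argument ordering FFmpeg itself accepts.

```python
# FFmpeg argument ordering: per-input option blocks, then output options.
video_in = ["-f", "rawvideo", "-vcodec", "rawvideo",
            "-s", "1280x720", "-pix_fmt", "gray", "-i", "-"]
audio_in = ["-f", "alsa", "-ac", "1", "-i", "hw:0"]   # ALSA opts precede hw:0
out_opts = ["-vcodec", "libx264", "-crf", "0", "-preset", "fast",
            "-acodec", "aac", "output.mp4"]           # assumption: AAC audio codec

cmd = ["ffmpeg", "-y"] + video_in + audio_in + out_opts
print(" ".join(cmd))
```

If WriteGear does not expose a way to add the second `-i` block, an alternative is to write video-only with WriteGear and mux the audio in a second FFmpeg pass.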

Support for rtmp broadcast

Is there any way I can implement rtmp broadcast using your api, NetGear for example.

Question

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

I already have an rtmp server running and a VLC rtmp subscriber. Now I need an rtmp publisher using your library or OpenCV.

Your Environment

  • VidGear version: 0.1.5
  • Branch:
  • Python version: 3.6
  • pip version: 19.1.1
  • Operating System and version: ubuntu 18.04

Optional

The solution I expect should be like this

from vidgear.gears import VideoGear
from vidgear.gears import NetGear

stream = VideoGear(source=0).start() 
options = {'flag': 0, 'copy': False, 'track': False}
server = NetGear(address = '127.0.0.1', port = '1935', protocol = 'rtmp',  pattern = 0, receive_mode = False, logging = True, **options) 

while True:
	try: 
		frame = stream.read()
		if frame is None:
			break
		server.send(frame)
	
	except KeyboardInterrupt:
		break

stream.stop()
server.close()

Multi-camera frame refresh issue

Description

I have four USB cameras connected to an RPi4. The while loop captures a fresh frame from each camera on every iteration. The issue: only 3 of the 4 cameras always capture a fresh frame. Sometimes the fourth camera does for a few frames and then it stops. I have no idea what's causing this.

Acknowledgment

  • A brief but descriptive Title of your issue
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.
  • I have read the Contributing Guidelines.

Environment

  • VidGear version: 0.14.0
  • Branch:
  • Python version: 3.7
  • pip version: 0.18.1
  • Operating System and version: Raspbian Buster

Expected Behavior

All four cameras should capture a fresh frame each loop

Actual Behavior

three of four cameras capture a fresh frame

Possible Fix

I know a lot of work-arounds, but no idea on a fix

Steps to reproduce

(Write your steps here:)

You'd likely need an RPi4 and these same ELP USB cameras. The odd thing is that sometimes the fourth camera works for the first few frames and then stops. Also, I've placed markers throughout the loop to make sure the variable holding the frame data is cleared each loop, yet somehow the Python script presents the same image over and over.

Optional

[Bug]: Assertion error in CamGear API during colorspace manipulation

Description

This bug directly affects colorspace manipulation in CamGear API. Due to this bug, the CamGear API currently exits itself with (-215:Assertion failed) !_src.empty() in function 'cvtColor' error with any colorspace because of the improper handling of threaded queue structure.

Acknowledgment

  • A brief but descriptive Title of your issue
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.
  • I have read the Contributing Guidelines.

Environment

  • VidGear version: 0.1.6-dev
  • Branch: Development
  • Python version: All
  • pip version: All
  • Operating System and version: All

Expected Behavior

No Assertion error while self-terminating.

Actual Behavior

Throws (-215:Assertion failed) !_src.empty() in function 'cvtColor' error on self-termination.

Code to reproduce

from vidgear.gears import CamGear
stream = CamGear(source='test.mp4', colorspace='COLOR_BGR2YUV', logging=True).start()
while True:
	frame = stream.read()
	# check if frame is None
	if frame is None:
		#if True break the infinite loop
		break
stream.stop()

On executing this function, It will self-terminate with (-215:Assertion failed) !_src.empty() in function 'cvtColor' error.

WriteGear from ScreenGear

Follow this code:

# IMPORT  
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2

# SHOW WINDOW
cv2.namedWindow('Output_Frame',cv2.WINDOW_NORMAL)

# SCREEN
options = {'top': 300, 'left': 300, 'width': 300, 'height': 200} 
stream = ScreenGear(monitor=1, logging=True, **options).start()

# WRITE
output_params = {"-input_framerate":25}
writer = WriteGear(output_filename = 'video_Screen2Write_example.mp4', logging = True, **output_params) 

# MAIN LOOP
while True: 
	
    # read frame from SCREEN
    frame = stream.read()
    if frame is None:
        break

    # show frame in window and WRITE
    cv2.imshow("Output_Frame", frame)
    writer.write(frame)

    # if 'q' then EXIT
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break


# CLOSE ALL
cv2.destroyAllWindows()
stream.stop()
writer.close()

After running for 30 seconds, press "q" to stop. Then open the saved video file and look at Properties -> Duration. It will be much more than 30 seconds, with Framerate = 25.
(In fact, during the writing process I see speed > 1.0x and fps > 25.)

I suppose the reason is that my capture loop runs faster than 25 fps, so it grabs screen frames at a higher rate than the declared framerate, generating a longer output video.


NOTE:
If I set output_params = {"-input_framerate": stream.framerate}, where stream = ScreenGear(monitor=1, logging=True, **options).start(),

I get:
AttributeError: 'ScreenGear' object has no attribute 'framerate'


How can I use WriteGear to record video from ScreenGear with temporal coherence at a fixed framerate?
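One workaround (a stdlib sketch of frame pacing, not a built-in WriteGear or ScreenGear feature; `paced_loop` and `TARGET_FPS` are illustrative names) is to throttle the capture loop so frames are produced at exactly the rate declared to the writer:

```python
import time

TARGET_FPS = 25
FRAME_INTERVAL = 1.0 / TARGET_FPS

def paced_loop(read_frame, handle_frame, duration=0.2):
    """Call read/handle at most TARGET_FPS times per second."""
    frames = 0
    start = time.monotonic()
    next_due = start
    while time.monotonic() - start < duration:
        now = time.monotonic()
        if now < next_due:
            time.sleep(next_due - now)   # wait out the rest of this frame slot
        frame = read_frame()             # e.g. stream.read()
        handle_frame(frame)              # e.g. writer.write(frame)
        frames += 1
        next_due += FRAME_INTERVAL       # schedule the next slot, no drift
    return frames

# Dummy read/handle callbacks stand in for ScreenGear/WriteGear here.
n = paced_loop(lambda: b"frame", lambda f: None)
print(n)
```

Advancing `next_due` by a fixed interval (instead of sleeping a fixed amount) keeps the average rate at 25 fps even when individual iterations are slow, so the written duration matches wall-clock time.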

Bug: Non-Blocking frame handling of Video Files in CamGear Class

Bug Insights

Currently, CamGear (or any common VidGear class) uses a separate thread to grab frames from the given source at a certain high speed, one after another, let's say 50fps (50 frames per second). Now imagine we are performing a heavy computational task on a video source where frames can only be processed at 5fps. Under these conditions, due to multi-threaded frame capturing, CamGear will keep cycling frames from the source at 50fps in the background even while no one is requesting them, and will thereby finally end up returning a NoneType frame at the output if the video file (as source) is of fixed length. This erroneous behavior also leads to the wrong frame being processed at any given instance, and to another undesired behavior called Frame Skipping.

Affected VideoCapture Streams::warning:

  • This bug is present in any video stream of fixed length, including network streams
  • This bug doesn't affect camera (H/W) devices, since their input is automatically managed by the IO resource handler

Code to Reproduce Bug:

This code calls the sleep function for a duration of 2 secs in each loop cycle to imitate a heavy computational task being performed, and counts the frame number that is being processed:

from vidgear.gears import CamGear
import cv2
import time

stream = CamGear(source='test.mp4').start()  # open any video file stream of fixed length

# infinite loop
frame_num = 0
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        # if True break the infinite loop
        print(frame_num)
        break

    # sleep for 2 seconds (imitating a heavy computational task)
    time.sleep(2)

    # show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed break out
        break

    frame_num += 1

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream

Current Behavior(Output)

Due to the above-mentioned bug, when this algorithm is run it will either exit almost immediately without processing each frame, or skipped frames will be visible, depending upon the length of the input video. The final frame_num value will be around 0~10 (when a 20-120 sec video is used), far fewer than the actual number of frames.

TODO

  • Implement a Blocking Mode in CamGear Class with a threaded queue
  • Use collections.deque() for performance consideration
  • Add a test for this Mode
  • Fix Bugs and robust testing of this implementation.
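The proposed Blocking Mode can be sketched with a bounded collections.deque guarded by a threading.Condition: the capture thread blocks when the queue is full instead of racing ahead and discarding frames, so the reader receives every frame in order. The class and method names below are illustrative, not CamGear's actual internals.

```python
import threading
from collections import deque

class ThreadedFrameQueue:
    """Bounded frame queue: producer blocks when full, consumer when empty."""

    def __init__(self, maxlen=96):
        self.buf = deque()
        self.maxlen = maxlen
        self.cv = threading.Condition()
        self.done = False

    def put(self, frame):                  # called from the capture thread
        with self.cv:
            self.cv.wait_for(lambda: len(self.buf) < self.maxlen)
            self.buf.append(frame)
            self.cv.notify_all()

    def close(self):                       # capture thread hit end of file
        with self.cv:
            self.done = True
            self.cv.notify_all()

    def get(self):                         # called from the processing loop
        with self.cv:
            self.cv.wait_for(lambda: self.buf or self.done)
            if not self.buf:
                return None                # stream exhausted -> NoneType frame
            frame = self.buf.popleft()
            self.cv.notify_all()
            return frame

q = ThreadedFrameQueue(maxlen=4)

def capture(n_frames=10):
    for i in range(n_frames):
        q.put(i)                           # stand-in for stream.read() frames
    q.close()

threading.Thread(target=capture).start()
frames = []
while (f := q.get()) is not None:
    frames.append(f)
print(frames)
```

Even with a slow consumer (the 2-second sleep above), the capture thread can run at most `maxlen` frames ahead, so no frame is skipped and the terminating `None` only arrives after every real frame has been delivered.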

Processing(opencv) and Streaming multiple IPCamera to the Client

Since I am new to this library, I would like some help/advice: how can I stream multiple connected IP cameras to a web client and process their video (e.g. face detection, motion detection, etc.)?

Your Environment

  • VidGear version: latest
  • Branch:
  • Python version: 3.6.8
  • pip version:
  • Operating System and version: rasperry pi 3 and pi4 (4gb)

Please help

How can I send this video to javascript

Question

I was trying to use a socket, but then found this library. My question is: how can I send this video to the web using JavaScript and this lib?

This is how I receive the image in my JavaScript (a WebSocket onmessage handler; the surrounding socket setup is omitted):

    socket.onmessage = function (msg) {
        var arrayBuffer = msg.data;
        var bytes = new Uint8Array(arrayBuffer);

        var image = document.getElementById('image');
        image.src = 'data:image/png;base64,' + encode(bytes);
    };

How do I use VidGear for stabilizing a series of images?

Question

Acknowledgment

  • A brief but descriptive Title of your issue.
  • I have searched the issues for my issue and found nothing related or helpful.
  • I have read the FAQ.
  • I have read the Wiki.

Context

Your Environment

  • VidGear version:
  • Branch:
  • Python version:
  • pip version:
  • Operating System and version:

Optional

Bug: sys.stderr.close() throws ValueError bug in WriteGear class

Bug Description:

The use of sys.stderr.close() at this line:

sys.stderr.close()

breaks stderr and throws ValueError: I/O operation on closed file at the output. This also results in hidden tracebacks, since tracebacks are written to stderr, thereby making it difficult to identify this bug.

Code to reproduce:

from vidgear.gears import WriteGear
import numpy as np
import sys

try:
	np.random.seed(0)
	# generate random data for 10 frames
	random_data = np.random.random(size=(10, 1080, 1920, 3)) * 255
	input_data = random_data.astype(np.uint8)
	writer = WriteGear("garbage.garbage")
	writer.write(input_data)
	writer.close()
except Exception as e:
	print(e)
finally:
	einfo = sys.exc_info()
	import traceback; print(''.join(traceback.format_exception(*einfo)))
	print('finally?')

This code results in a traceback that ultimately leads to ValueError: I/O operation on closed file at the output.

Enhancement: Real-Time Video Frame Transferring over the network through Messaging

Real-Time Video Frame Transferring over the network through Messaging

Messaging

PatternHierarchy

Messaging makes applications loosely coupled by communicating asynchronously, which also makes the communication more reliable because the two applications do not have to be running at the same time. Messaging makes the messaging system responsible for transferring data from one application to another, so the applications can focus on what data they need to share but not worry so much about how to share it.

Message Oriented protocols

Message Oriented protocols send data in distinct chunks or groups. The receiver of data can determine where one message ends and another begins. Message protocols are usually built over streams but there is one layer in between which takes care to separate each logical part from another. It parses input stream for you and gives you result only when the whole dataset arrives and not all states in between.
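The "one layer in between" can be sketched as length-prefixed framing over a byte stream: the sender prepends each message's size, and the receiver reads exactly that many bytes before handing the complete message up (a stdlib sketch of the concept, not vidgear or ZeroMQ code — ZeroMQ does this framing for you):

```python
import struct
from io import BytesIO

def send_msg(stream, payload: bytes):
    # Prefix each message with its 4-byte big-endian length.
    stream.write(struct.pack(">I", len(payload)) + payload)

def recv_msg(stream):
    # Read the header, then exactly len(payload) bytes: message boundaries
    # survive even though the underlying transport is a plain byte stream.
    header = stream.read(4)
    if len(header) < 4:
        return None                      # stream ended cleanly
    (size,) = struct.unpack(">I", header)
    return stream.read(size)

# BytesIO stands in for a TCP connection.
pipe = BytesIO()
send_msg(pipe, b"frame-0001")
send_msg(pipe, b"frame-0002")
pipe.seek(0)
print(recv_msg(pipe), recv_msg(pipe), recv_msg(pipe))
```

The receiver only ever sees whole messages, never partial ones, which is exactly the property that lets a messaging library deliver video frames as distinct units.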

Available Resources

  1. MQTT is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol that works on top of TCP/IP; Mosquitto is a lightweight message broker implementing it. It is designed for connections with remote locations where a "small code footprint" is required or the network bandwidth is limited. Its publish-subscribe messaging pattern requires a message broker.

  2. ZeroMQ (also spelled ØMQ, 0MQ or ZMQ) is a high-performance asynchronous brokerless messaging library, aimed at use in distributed or concurrent applications. It provides a message queue, but unlike message-oriented middleware, a ZeroMQ system can run without a dedicated message broker.

Since ZeroMQ outperformed MQTT in various tests and is also well-documented, I decided to go with ZeroMQ for the messaging implementation in vidgear.

Goal

Our goal is to implement real-time video frame transferring over the network in vidgear by implementing a high-level wrapper around PyZMQ, which contains Python bindings for ZeroMQ. This wrapper will provide both read and write functionality, and the read function will be multi-threaded for high-speed frame capturing with minimum latency and memory constraints.

TODO

  • Implement a new Netgear class: a high-level wrapper around ZeroMQ
  • Add both send() and recv() function for transferring frames
  • Make send() function multi-threaded and error-free with Threaded Queue Mode.
  • Add support for various possible messaging synchronous patterns
  • frame-transferring between server/client must be synchronized and ultrafast with minimum latency
  • Robustly handle the server and client end, even if any of them is started at a different instant.
  • The server end must be able to terminate the stream at the client(s) end automatically.
  • Server and client must be able to talk/send messages at any instant while transferring frames.

TLS Connection was non-properly terminated

Hi, I want to get a live stream from YouTube using my Raspberry Pi, but I got this output:

The TLS connection was non-properly terminated. The specified session has been invalidated for some reason.

Please help me

[Proposal] Can NetGear Client/Server can send/receive data with custom certificates?

Detailed Description

Without security mechanisms, I think someone in the middle could capture the frames that the Server sends to the Client. Can the NetGear Client/Server send/receive data with custom certificates?

Context

This proposal is to secure the connection between Server and Client with custom certificates. This gives us strong encryption on data and (as far as we know) unbreakable authentication. Stonehouse is the minimum you would use over public networks: it assures clients that they are speaking to an authentic server, while allowing any client to connect. More information can be found at these links:
https://github.com/zeromq/pyzmq/blob/master/examples/security/stonehouse.py
https://github.com/zeromq/pyzmq/blob/master/examples/security/generate_certificates.py

Your Environment

  • VidGear version: latest
  • Branch: PyPi
  • Python version: all
  • pip version: latest
  • Operating System and version: not applicable

Any Other Important Information

Not available

[Proposal] OSX environment support for Travis Cli tests

Detailed Description

VidGear as of now does not officially support macOS (OSX) systems/environments. Thereby, this proposal is for bringing official OSX environment support to VidGear by implementing an automated Travis CLI test environment for it (similar to Linux).

Context

This proposal aims at bringing this OSX Python support to VidGear by implementing automated Travis CLI pytests similar to the Linux environments, and fixing the bugs encountered, so that VidGear can work seamlessly on OSX systems too.

Your Environment

  • VidGear version: 0.16.0
  • Branch: Development
  • Python version: All
  • pip version: Latest
  • Operating System and version: Not applicable

New Feature Request: Bi-Directional Messaging

Hello!
Firstly, thank you very much for your work; VidGear is working really well with my project! I'm able to send one-way video and other information from a Raspberry Pi (server) to a second computer (client).

Looking at the docs, the "pattern" used to configure NetGear is zmq.PAIR by default, which is said to be bidirectional. Could this be used to send information back to the server?

For instance, the server could send the frame and a string of information over to the client, which could reply with confirmation that it has received the data, as well as any other information. In my case, it could send data back to a Raspberry Pi, which can be parsed to turn on an LED or drive a motor.

That would be super useful to my project, and may help others with theirs!

Thank you!

How to record color video?

Question

I tried the example in the documentation. The gray image can be recorded and the output video is fine.
But when I comment out the color-convert code, the video seems corrupt.

# import libraries
from vidgear.gears import ScreenGear
from vidgear.gears import WriteGear
import cv2


options = {'top': 40, 'left': 0, 'width': 100, 'height': 100} # define dimensions of screen w.r.t to given monitor to be captured

output_params = {"-vcodec":"libx264", "-crf": 0, "-preset": "fast"} #define (Codec,CRF,preset) FFmpeg tweak parameters for writer


stream = ScreenGear(monitor=1, logging=True, **options).start() #Open Live Screencast on current monitor 

writer = WriteGear(output_filename = 'Output.mp4', compression_mode = True, logging = True, **output_params) #Define writer with output filename 'Output.mp4' 

# infinite loop
while True:

    frame = stream.read()
    # read frames

    # check if frame is None
    if frame is None:
        # if True break the infinite loop
        break

    # {do something with frame here}
    # gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # write a modified frame to writer
    writer.write(frame)

    # Show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        # if 'q' key-pressed break out
        break

cv2.destroyAllWindows()
# close output window

stream.stop()
# safely close video stream
writer.close()
# safely close writer
