
python_video_stab's Introduction

Python Video Stabilization


Python video stabilization using OpenCV. Full searchable documentation here.

This module contains a single class (VidStab) used for video stabilization. This class is based on the work presented by Nghia Ho in SIMPLE VIDEO STABILIZATION USING OPENCV. The foundation code was found in a comment on Nghia Ho's post by the commenter with username koala.

[Input vs. stabilized output comparison GIFs]

Video used with permission from HappyLiving

Contents:

  1. Installation
  2. Basic Usage
  3. Advanced Usage

Installation

Please report an issue if you run into problems installing or running vidstab!

Install vidstab without installing OpenCV

If you've already built OpenCV with Python bindings on your machine, it is recommended to install vidstab without the PyPI versions of OpenCV. The opencv-python module can cause conflicts if OpenCV has already been built from source in your environment.

The below commands will install vidstab without OpenCV included.

From PyPi

pip install vidstab

From GitHub

pip install git+https://github.com/AdamSpannbauer/python_video_stab.git

Install vidstab & OpenCV

If you don't already have OpenCV installed, there are a couple of options.

  1. You can build OpenCV using one of the great online tutorials from PyImageSearch, LearnOpenCV, or OpenCV themselves. When building from source you have more options (e.g. platform optimization), but more responsibility. Once installed you can use the pip install command shown above.
  2. You can install a pre-built distribution of OpenCV from pypi as a dependency for vidstab (see command below)

The below commands will install vidstab with opencv-contrib-python as a dependency.

From PyPi

pip install vidstab[cv2]

From GitHub

 pip install -e git+https://github.com/AdamSpannbauer/python_video_stab.git#egg=vidstab[cv2]

Basic Usage

The VidStab class can be used as a command-line script or in your own custom Python code.

Using from command line

# Using defaults
python3 -m vidstab --input input_video.mov --output stable_video.avi
# Using a specific keypoint detector
python3 -m vidstab -i input_video.mov -o stable_video.avi -k GFTT

Using VidStab class

from vidstab import VidStab

# Using defaults
stabilizer = VidStab()
stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

# Using a specific keypoint detector
stabilizer = VidStab(kp_method='ORB')
stabilizer.stabilize(input_path='input_video.mp4', output_path='stable_video.avi')

# Using a specific keypoint detector and customizing keypoint parameters
stabilizer = VidStab(kp_method='FAST', threshold=42, nonmaxSuppression=False)
stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

Advanced Usage

Plotting frame to frame transformations

from vidstab import VidStab
import matplotlib.pyplot as plt

stabilizer = VidStab()
stabilizer.stabilize(input_path='input_video.mov', output_path='stable_video.avi')

stabilizer.plot_trajectory()
plt.show()

stabilizer.plot_transforms()
plt.show()
[Trajectory and transforms plots]

Using borders

from vidstab import VidStab

stabilizer = VidStab()

# black borders
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='stable_video.avi', 
                     border_type='black')
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='wide_stable_video.avi', 
                     border_type='black', 
                     border_size=100)

# filled in borders
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='ref_stable_video.avi', 
                     border_type='reflect')
stabilizer.stabilize(input_path='input_video.mov', 
                     output_path='rep_stable_video.avi', 
                     border_type='replicate')

[Example output GIFs: border_size=0, border_size=100, border_type='reflect', border_type='replicate']

Video used with permission from HappyLiving

Using Frame Layering

from vidstab import VidStab, layer_overlay, layer_blend

# init vid stabilizer
stabilizer = VidStab()

# use vidstab.layer_overlay for generating a trail effect
stabilizer.stabilize(input_path=INPUT_VIDEO_PATH,
                     output_path='trail_stable_video.avi',
                     border_type='black',
                     border_size=100,
                     layer_func=layer_overlay)


# create custom overlay function
# here we use vidstab.layer_blend with custom alpha
#   layer_blend will generate a fading trail effect with some motion blur
def layer_custom(foreground, background):
    return layer_blend(foreground, background, foreground_alpha=.8)

# use custom overlay function
stabilizer.stabilize(input_path=INPUT_VIDEO_PATH,
                     output_path='blend_stable_video.avi',
                     border_type='black',
                     border_size=100,
                     layer_func=layer_custom)
[Example output GIFs: layer_func=layer_overlay and layer_func=layer_blend]

Video used with permission from HappyLiving

Automatic border sizing

from vidstab import VidStab, layer_overlay

stabilizer = VidStab()

stabilizer.stabilize(input_path=INPUT_VIDEO_PATH,
                     output_path='auto_border_stable_video.avi', 
                     border_size='auto',
                     # frame layering to show performance of auto sizing
                     layer_func=layer_overlay)

Stabilizing a frame at a time

The VidStab.stabilize_frame() method can accept numpy arrays to allow stabilization processing a frame at a time. This allows pre/post-processing of each frame to be stabilized; see the examples below.

Simplest form

import cv2
from vidstab import VidStab

stabilizer = VidStab()
vidcap = cv2.VideoCapture('input_video.mov')

while True:
    grabbed_frame, frame = vidcap.read()

    if frame is not None:
        # Perform any pre-processing of frame before stabilization here
        pass

    # Pass frame to stabilizer even if frame is None
    # stabilized_frame will be an all black frame until iteration 30
    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,
                                                  smoothing_window=30)
    if stabilized_frame is None:
        # There are no more frames available to stabilize
        break

    # Perform any post-processing of stabilized frame here

Example with object tracking

import os
import cv2
from vidstab import VidStab, layer_overlay, download_ostrich_video

# Download test video to stabilize
if not os.path.isfile("ostrich.mp4"):
    download_ostrich_video("ostrich.mp4")

# Initialize object tracker, stabilizer, and video reader
object_tracker = cv2.TrackerCSRT_create()
stabilizer = VidStab()
vidcap = cv2.VideoCapture("ostrich.mp4")

# Initialize bounding box for drawing rectangle around tracked object
object_bounding_box = None

while True:
    grabbed_frame, frame = vidcap.read()

    # Pass frame to stabilizer even if frame is None
    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame, border_size=50)

    # If stabilized_frame is None then there are no frames left to process
    if stabilized_frame is None:
        break

    # Draw rectangle around tracked object if tracking has started
    if object_bounding_box is not None:
        success, object_bounding_box = object_tracker.update(stabilized_frame)

        if success:
            (x, y, w, h) = [int(v) for v in object_bounding_box]
            cv2.rectangle(stabilized_frame, (x, y), (x + w, y + h),
                          (0, 255, 0), 2)

    # Display stabilized output
    cv2.imshow('Frame', stabilized_frame)

    key = cv2.waitKey(5)

    # Select ROI for tracking and begin object tracking
    # Non-zero frame indicates stabilization process is warmed up
    if stabilized_frame.sum() > 0 and object_bounding_box is None:
        object_bounding_box = cv2.selectROI("Frame",
                                            stabilized_frame,
                                            fromCenter=False,
                                            showCrosshair=True)
        object_tracker.init(stabilized_frame, object_bounding_box)
    elif key == 27:
        break

vidcap.release()
cv2.destroyAllWindows()

Working with live video

The VidStab class can also process live video streams. The underlying video reader is cv2.VideoCapture (see the OpenCV documentation). The relevant snippet from that documentation for stabilizing live video is:

Its argument can be either the device index or the name of a video file. Device index is just the number to specify which camera. Normally one camera will be connected (as in my case). So I simply pass 0 (or -1). You can select the second camera by passing 1 and so on.

The input_path argument of the VidStab.stabilize method can accept integers that will be passed directly to cv2.VideoCapture as a device index. You can also pass a device index to the --input argument for command line usage.

One notable difference between live feeds and video files is that webcam footage does not have a definite end point. The options for ending a live video stabilization are to set the max length using the max_frames argument or to manually stop the process by pressing the Esc key or the Q key. If max_frames is not provided then no progress bar can be displayed for live video stabilization processes.

Example

from vidstab import VidStab

stabilizer = VidStab()
stabilizer.stabilize(input_path=0,
                     output_path='stable_webcam.avi',
                     max_frames=1000,
                     playback=True)
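The device index can also be passed straight to the command-line interface via --input; a minimal sketch, assuming the default webcam at index 0:

# Stabilize the default webcam feed (device index 0)
python3 -m vidstab --input 0 --output stable_webcam.avi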

Transform file writing & reading

Generating and saving transforms to file

import numpy as np
from vidstab import VidStab, download_ostrich_video

# Download video if needed
download_ostrich_video(INPUT_VIDEO_PATH)

# Generate transforms and save to TRANSFORMATIONS_PATH as csv (no headers)
stabilizer = VidStab()
stabilizer.gen_transforms(INPUT_VIDEO_PATH)
np.savetxt(TRANSFORMATIONS_PATH, stabilizer.transforms, delimiter=',')

File at TRANSFORMATIONS_PATH is of the form shown below. The 3 columns represent delta x, delta y, and delta angle respectively.

-9.249733913760086068e+01,2.953221378387767970e+01,-2.875918912994855636e-02
-8.801434576214279559e+01,2.741942225927152776e+01,-2.715232319470826938e-02

Reading and using transforms from file

The example below reads a file of transforms and applies them to an arbitrary video. The transform file is of the form shown in the section above.

import numpy as np
from vidstab import VidStab

# Read in csv transform data, of form (delta x, delta y, delta angle):
transforms = np.loadtxt(TRANSFORMATIONS_PATH, delimiter=',')

# Create stabilizer and supply numpy array of transforms
stabilizer = VidStab()
stabilizer.transforms = transforms

# Apply stabilizing transforms to INPUT_VIDEO_PATH and save to OUTPUT_VIDEO_PATH
stabilizer.apply_transforms(INPUT_VIDEO_PATH, OUTPUT_VIDEO_PATH)

python_video_stab's People

Contributors

adamspannbauer, suslikv


python_video_stab's Issues

Refactor into more maintainable units

The current code is overly complex and will be hard to maintain as new features are added.

Break the functionality up into smaller, more easily testable units.

Auto set min border size to fully capture video

Find how to set the minimum border size needed so frames are never cut off by the transforms.

This feature is not a requirement for the updated PyPI release; investigate how difficult it is and add it to the release if it keeps on schedule.

Changing task from investigating to implementing.
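A rough sketch of one possible estimate (an assumption about the approach, not the library's implementation): take the largest absolute x/y correction in the generated transforms as a lower bound for border_size. Rotation corrections would add a bit more on top of this.

import numpy as np
from vidstab import VidStab

stabilizer = VidStab()
stabilizer.gen_transforms('input_video.mov')

# Columns of stabilizer.transforms are (dx, dy, dangle); the largest absolute
# x/y shift gives a lower bound on the border needed so content is never cut off.
min_border_size = int(np.ceil(np.abs(stabilizer.transforms[:, :2]).max()))
print(min_border_size)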

Command Line Parameters vs. Python Interface

Hey,
is there a specific reason why some parameters of the "stabilize" method (like smoothing_window) aren't accessible from the command line?
Also, it would be nice to have a progress bar when calling stabilize(...) using the VidStab class.

Cheers
Sebastian

Fix delta angle plot issues around degrees/radians

Appreciate the work and the progress, can't wait to see the upcoming features, like the auto border.

I can't figure out what unit the rotation is in; it says "degrees" on the axis, but that doesn't seem to be true. E.g. in the following screenshot:

[screenshot of the transforms plot]

The image is rotated 5° counterclockwise, which corresponds to 0.0872 radians. The graph displays -0.18. What's going on here?
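If the stored angle deltas are actually radians (an assumption based on the numbers in this report), converting them before inspecting or plotting is straightforward; a minimal sketch:

import numpy as np
from vidstab import VidStab

stabilizer = VidStab()
stabilizer.gen_transforms('input_video.mov')

# Hypothetical workaround: if the third transform column (delta angle) is
# stored in radians, convert it to degrees before looking at it.
angles_deg = np.degrees(stabilizer.transforms[:, 2])
print(angles_deg[:5])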

No output video

Describe the bug
No output video file created.
The following script does not generate an output video file, but the plots are shown (and are correct).

Provide version info
What version of Python are you running? Python 3.7.3
What version of OpenCV are you running? cv2.__version__ is '4.1.0'
What version of vidstab are you running? The latest one
OS: Windows 10

Provide error message
No error message shown

Provide code snippet
import matplotlib.pyplot as plt
from vidstab import VidStab
in_video = r'C:\Desktop\4.avi'
out_video = r'‪C:\Desktop\4_stab.avi'
stabilizer = VidStab()
stabilizer.stabilize(input_path=in_video, output_path=out_video)
stabilizer.plot_trajectory()
plt.show()
stabilizer.plot_transforms()
plt.show()

Are you able to provide the video?
Any video.

The quality of stabilization with vidstab is poor compared to the original implementation by Nghia Ho

Hello Adam. While experimenting with the vidstab library, I saw that vidstab introduces even more jitter to the output. Take this test video by Nghia Ho for example: hippo.mp4.

For this test video, if you compare the vidstab output side by side with the output of Nghia Ho's implementation, you can see large fluctuations in the vidstab output that are not present in the other.

Code to reproduce

from vidstab.VidStab import VidStab
import cv2

stabilizer = VidStab()
vidcap = cv2.VideoCapture('hippo.mp4')

while True:
    grabbed_frame, frame = vidcap.read()

    # Pass frame to stabilizer even if frame is None
    # stabilized_frame will be an all black frame until iteration 30
    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame,
                                                  smoothing_window=30)
    if stabilized_frame is None:
        # There are no more frames available to stabilize
        break

    cv2.imshow('Compare', stabilized_frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# close output window
cv2.destroyAllWindows()
vidcap.release()

Trajectories

Thanks for your project!
As I understand from reading your code, you build the trajectories of video frames using only dx and dy (i.e. H[0, 2] and H[1, 2]).
I am trying to build the trajectory of frames using their center point (x, y). The new homogeneous coordinates of this point in the next frame are p' = H * (x, y, 1)^T, and the Euclidean coordinates are then (p'[0] / p'[2], p'[1] / p'[2]). But this gives me very strange plots. Could you please explain where my mistake is?
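For reference, a minimal sketch of the projection described above, with an illustrative 3x3 matrix (vidstab itself estimates a 2x3 affine transform, so the bottom row would be [0, 0, 1] and the perspective divide is a no-op):

import numpy as np

# Example frame size and an illustrative transform (not actual vidstab output).
h_frame, w_frame = 480, 640
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])

# Project the frame center through H and convert back from homogeneous coords.
center = np.array([w_frame / 2, h_frame / 2, 1.0])
p = H @ center
x, y = p[0] / p[2], p[1] / p[2]
print(x, y)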

release blog post

write announcement blog post.... likely wait on logo commissioning

AttributeError: GFTT not a supported detector

Recreation:

  • install in fresh python virtualenv using (error occurs for both python 2.7 & 3.6.5):

pip install -e git+https://github.com/AdamSpannbauer/python_video_stab.git#egg=vidstab[cv2]

  • run readme/gen_example_output.py

Error:

Traceback (most recent call last):
  File "site-packages/imutils/feature/factories.py", line 81, in FeatureDetector_create
    detr = _DETECTOR_FACTORY[detector.upper()]
KeyError: 'GFTT'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "gen_example_output.py", line 6, in <module>
    stabilizer = VidStab()
  File "vidstab/vidstab/VidStab.py", line 56, in __init__
    blockSize=3)
  File "site-packages/imutils/feature/factories.py", line 86, in FeatureDetector_create
    raise AttributeError("{} not a supported detector".format(detector))
AttributeError: GFTT not a supported detector

8uC1 or 8uC3

When I run the test video I don't get any errors, but when I pass my own video it gives this error: "error: (-210) Both input images must have either 8uC1 or 8uC3 type in function cv::estimateRigidTransform".

I am using Python 3 and OpenCV 3.1.4.
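The error means cv2.estimateRigidTransform is being given frames that are not 8-bit with 1 or 3 channels. A hedged workaround sketch (assuming, for example, 4-channel or 16-bit source frames) is to convert each frame yourself and stabilize frame by frame:

import cv2
import numpy as np
from vidstab import VidStab

stabilizer = VidStab()
vidcap = cv2.VideoCapture('my_video.mov')  # hypothetical problem video

while True:
    grabbed_frame, frame = vidcap.read()

    if frame is not None:
        # Drop an alpha channel and force 8-bit depth before stabilizing.
        if frame.ndim == 3 and frame.shape[2] == 4:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR)
        if frame.dtype != np.uint8:
            frame = cv2.convertScaleAbs(frame)

    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame)
    if stabilized_frame is None:
        break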

Throw exception if input video file doesn't exist.

Current behavior

When the input video path doesn't exist, VidStab throws IndexError: pop from an empty deque (see error message in #70 for example)

Desired behavior:

If the input video doesn't exist VidStab should throw an exception notifying the user that video doesn't exist.
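Until that lands, a hedged workaround sketch is to check the path yourself before calling stabilize:

import os
from vidstab import VidStab

input_path = 'input_video.mov'
if not os.path.isfile(input_path):
    raise FileNotFoundError('Input video does not exist: {}'.format(input_path))

stabilizer = VidStab()
stabilizer.stabilize(input_path=input_path, output_path='stable_video.avi')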

videostab on realtime videos

I am trying to apply this algorithm to a real-time camera feed with the help of the OpenCV library, but I am unable to use cv2.imshow for the input or output video. Does the VidStab class not support real-time video?
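Real-time streams are supported via stabilize_frame and device-index input (see "Working with live video" above). A minimal sketch showing both the raw and stabilized frames, assuming the default camera at index 0:

import cv2
from vidstab import VidStab

stabilizer = VidStab()
vidcap = cv2.VideoCapture(0)  # default camera; adjust index as needed

while True:
    grabbed_frame, frame = vidcap.read()

    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame)
    if stabilized_frame is None:
        break

    if frame is not None:
        cv2.imshow('Input', frame)
    # Stabilized output lags by the smoothing window and starts as black frames.
    cv2.imshow('Stabilized', stabilized_frame)

    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

vidcap.release()
cv2.destroyAllWindows()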

OpenCV dependency install

(added from mobile)

Figure out best route to manage opencv dependency.

Mark it as an install enhancement to install opencv-python from PyPI (see fuzzywuzzy's python-Levenshtein enhancement for an example)? This installation can cause issues if the user already has OpenCV built on their machine.

installing vidstab from pypi or github

I'm having trouble installing vidstab with pip. This is the error message:

Collecting git+https://github.com/AdamSpannbauer/python_video_stab.git
Cloning https://github.com/AdamSpannbauer/python_video_stab.git to c:\users\henoc\appdata\local\temp\pip-req-build-cb124koo
Requirement already satisfied: numpy in c:\python34\lib\site-packages (from vidstab==0.1.6) (1.14.5)
Collecting pandas (from vidstab==0.1.6)
Using cached https://files.pythonhosted.org/packages/08/01/803834bc8a4e708aedebb133095a88a4dad9f45bbaf5ad777d2bea543c7e/pandas-0.22.0.tar.gz
Could not find a version that satisfies the requirement numpy==1.9.3 (from versions: 1.10.4, 1.11.0, 1.11.1rc1, 1.11.1, 1.11.2rc1, 1.11.2, 1.11.3, 1.12.0b1, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.1rc1, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.15.0rc1)
No matching distribution found for numpy==1.9.3

I'm using python 3.4.3, opencv 3.4.1, numpy 1.14.5 and pip 10.0.1.

the paper of the algorithm

This work is awesome. Could you share the details of this algorithm, or the paper it is based on?

vidstab installation

Every time I try to run this command I get the error below. Can you help me understand what I am doing wrong?

(videostab) rijuta@rijuta-HP-Notebook:~/PycharmProjects/python_video_stab-master$ python3 -m vidstab -i input_ostrich.gif -o stable_video1.avi -k GFTT
Progress bar is based on cv2.CAP_PROP_FRAME_COUNT which may be inaccurate

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/rijuta/PycharmProjects/python_video_stab-master/vidstab/__main__.py", line 32, in <module>
    stabilizer.stabilize(input_path=args['input'], output_path=args['output'])
  File "/home/rijuta/PycharmProjects/python_video_stab-master/vidstab/VidStab.py", line 310, in stabilize
    show_progress=True)
  File "/home/rijuta/PycharmProjects/python_video_stab-master/vidstab/VidStab.py", line 174, in gen_transforms
    self.transforms = np.array(self._raw_transforms + (self.smoothed_trajectory - self.trajectory))
  File "/home/rijuta/.virtualenvs/videostab/lib/python3.5/site-packages/pandas/core/generic.py", line 1607, in __array_wrap__
    return self._constructor(result, **d).__finalize__(self)
  File "/home/rijuta/.virtualenvs/videostab/lib/python3.5/site-packages/pandas/core/frame.py", line 379, in __init__
    copy=copy)
  File "/home/rijuta/.virtualenvs/videostab/lib/python3.5/site-packages/pandas/core/frame.py", line 536, in _init_ndarray
    return create_block_manager_from_blocks([values], [columns, index])
  File "/home/rijuta/.virtualenvs/videostab/lib/python3.5/site-packages/pandas/core/internals.py", line 4859, in create_block_manager_from_blocks
    mgr = BlockManager(blocks, axes)
  File "/home/rijuta/.virtualenvs/videostab/lib/python3.5/site-packages/pandas/core/internals.py", line 3282, in __init__
    self._verify_integrity()
  File "/home/rijuta/.virtualenvs/videostab/lib/python3.5/site-packages/pandas/core/internals.py", line 3498, in _verify_integrity
    len(self.items), tot_items))
AssertionError: Number of manager items must equal union of block items
# manager items: 1, # tot_items: 0

stabilization of defocused video

I am trying to stabilize a defocused video that I captured with my mobile camera. However, this error pops up whenever the transformation starts. I have downloaded some defocused videos from the Internet; it works fine on them, but not on videos recorded with my camera.

File "/home/rijuta/.virtualenvs/videoanalytics/local/lib/python2.7/site-packages/vidstab/VidStab.py", line 297, in stabilize
show_progress=True)
File "/home/rijuta/.virtualenvs/videoanalytics/local/lib/python2.7/site-packages/vidstab/VidStab.py", line 155, in gen_transforms
self._gen_trajectory(input_path=input_path, show_progress=show_progress)
File "/home/rijuta/.virtualenvs/videoanalytics/local/lib/python2.7/site-packages/vidstab/VidStab.py", line 105, in _gen_trajectory
for i, matched in enumerate(status):
TypeError: 'NoneType' object is not iterable
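One likely cause (an assumption, not confirmed by the traceback alone) is that no keypoints are found in heavily defocused frames, so the optical-flow status comes back as None. A hedged thing to try is a more permissive detector configuration:

from vidstab import VidStab

# Illustrative values only: a low FAST threshold may still find keypoints in
# blurry frames; tune for your footage.
stabilizer = VidStab(kp_method='FAST', threshold=10, nonmaxSuppression=False)
stabilizer.stabilize(input_path='defocused_video.mp4', output_path='stable_video.avi')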

migrate assets to AWS

The repo is bloated with GIFs/videos for docs/tests. Move all of these items to an S3 bucket and refactor as needed.

Non-free algorithms like SIFT and SURF causing tests to fail.

See opencv/opencv-python#126. As of OpenCV 3.4.3, the patented algorithms such as SIFT and SURF are hidden behind the OPENCV_ENABLE_NONFREE=1 flag. They are no longer included in the Python bindings which is causing the tests to fail on Travis.

Possible options are:

  • Remove SIFT and SURF from the tests
  • Add try-except logic in kp_method initialization to raise a warning when SIFT and SURF are used.
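A minimal sketch of the second option, assuming standard OpenCV factory names (this is not the library's actual initialization code):

import warnings
import cv2

def create_detector(kp_method='GFTT'):
    # Hypothetical helper: fall back to GFTT with a warning when the
    # non-free SIFT/SURF detectors are unavailable in the OpenCV build.
    if kp_method in ('SIFT', 'SURF'):
        try:
            return getattr(cv2.xfeatures2d, kp_method + '_create')()
        except (AttributeError, cv2.error):
            warnings.warn(kp_method + ' requires OpenCV built with '
                          'OPENCV_ENABLE_NONFREE=1; falling back to GFTT.')
    return cv2.GFTTDetector_create()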

ModuleNotFoundError: No module named 'progress'

I get this error when I use python -m pip install vidstab.

...Collecting progress (from vidstab)
  Using cached https://files.pythonhosted.org/packages/e9/ff/7871f3736dc6707435b2a2f217c46b5a5bc6ea7e0a9a443cd69146a1afd1/progress-1.4.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\Bernard\AppData\Local\Temp\pip-install-w0qzry3z\progress\setup.py", line 5, in <module>
        import progress
    ModuleNotFoundError: No module named 'progress'

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\Bernard\AppData\Local\Temp\pip-install-w0qzry3z\progress\

update logo

The current logo is copy/pasted clipart from Google Images. Commission a new version of the logo from a friend, Fiverr, etc.

Update imutils req to >= 0.5.2

The imutils keypoint factory had a bug that caused issues if users didn't have OpenCV installed with contrib. The default detector for vidstab (cv2.goodFeaturesToTrack) was one of the detectors that could run into this error (see #4 & #46).

This issue was recently fixed in imutils 0.5.2; to avoid issues the imutils requirements will be updated to imutils>=0.5.2.

IndexError: pop from an empty deque

/usr/local/lib/python3.6/dist-packages/vidstab/general_utils.py:57: UserWarning: No progress bar will be shown. (Unable to grab frame count & no max_frames provided.)
  warnings.warn('No progress bar will be shown. (Unable to grab frame count & no max_frames provided.)')
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/vidstab/__main__.py", line 64, in <module>
    cli_stabilizer(args)
  File "/usr/local/lib/python3.6/dist-packages/vidstab/main_utils.py", line 119, in cli_stabilizer
    playback=args['playback'])
  File "/usr/local/lib/python3.6/dist-packages/vidstab/VidStab.py", line 485, in stabilize
    bar = self._init_trajectory(smoothing_window, max_frames, show_progress=show_progress)
  File "/usr/local/lib/python3.6/dist-packages/vidstab/VidStab.py", line 157, in _init_trajectory
    self._process_first_frame()
  File "/usr/local/lib/python3.6/dist-packages/vidstab/VidStab.py", line 137, in _process_first_frame
    _, _ = self.frame_queue.read_frame(array=array)
  File "/usr/local/lib/python3.6/dist-packages/vidstab/frame_queue.py", line 43, in read_frame
    return self._append_frame(frame, pop_ind)
  File "/usr/local/lib/python3.6/dist-packages/vidstab/frame_queue.py", line 51, in _append_frame
    self.i = self.inds.popleft()
IndexError: pop from an empty deque

It works for some videos, but for others it gives this error.

Pass stabilized video frame for further processing

Hi,
I want to pass the stabilized video on for further processing, such as sending it to an object detector. Is there a way to do this? I ask because I only see the stabilized video being output to a file.
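This is what VidStab.stabilize_frame() is for (see "Stabilizing a frame at a time" above). A minimal sketch, where detect_objects() is a hypothetical stand-in for any downstream step:

import cv2
from vidstab import VidStab

stabilizer = VidStab()
vidcap = cv2.VideoCapture('input_video.mov')

while True:
    grabbed_frame, frame = vidcap.read()

    stabilized_frame = stabilizer.stabilize_frame(input_frame=frame)
    if stabilized_frame is None:
        break

    # detections = detect_objects(stabilized_frame)  # hypothetical downstream step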

update docs

include:

  • example of applying transforms from file
  • auto border sizing example (frame layering to show results?)
  • possibly others

Not working for me....

Hi, I cannot run the program (it may be because I am an inexperienced user).

Using from command line

Using defaults

python3 -m vidstab --input input_video.mov --output stable_video.avi

'vidstab' is a package and cannot be directly executed

Then, as a class

ImportError: cannot import name 'vidstab_utils'

Br

picamera FileNotFoundError

Below quote taken from @seventheefs comment on #76

My second question is about using your code with picamera. Somehow I'm unable to make it work with it; I tried the -i -1 parameter, but it just keeps displaying FileNotFoundError: -1 does not exist. I'm using OpenCV 3.4.2 with a Picamera rev 1.3.
