
mugen's People

Contributors

dependabot[bot], scherroman, tartrsn, tirkarthi


mugen's Issues

Building wheel for Pillow (setup.py) ... error

When I run pip install -e mugen in anaconda I receive this message:

Building wheel for Pillow (setup.py) ... error

"The headers or library files could not be found for zlib,
a required dependency when compiling Pillow from source."

Feature request - Video source selection based on time

Consider this as another 'would be cool if...'

I have multiple folders of video sources... I want to have mugen use source A for the first minute, source B for the second minute, source C for the 3rd minute... etc. So the generated video wouldn't use anything from Source A after minute 1...

My workaround (I'm still using older mugen code for this, since no regen in new_mugen yet) is to generate multiple videos, and then edit the json specs into a single file, so that it'll make a new json with the desired source separation intact. And then regen a new video that does the above. Hacky but it works for now.
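A minimal sketch of what such time-based source selection might look like, assuming a hypothetical schedule format. SOURCE_SCHEDULE and pick_source_pool are illustrative names, not mugen API:

```python
# Hypothetical: map output-video time windows (in seconds) to source pools,
# so nothing from pool A is used after its window closes.
SOURCE_SCHEDULE = [
    (0, 60, "sources_a"),     # first minute: pool A only
    (60, 120, "sources_b"),   # second minute: pool B only
    (120, 180, "sources_c"),  # third minute: pool C only
]

def pick_source_pool(current_time, schedule=SOURCE_SCHEDULE):
    """Return the source pool whose window contains current_time."""
    for start, end, pool in schedule:
        if start <= current_time < end:
            return pool
    return schedule[-1][2]  # past the last window: fall back to the last pool
```

The generation loop would call pick_source_pool with the running output time before sampling a clip, instead of sampling from all sources at once.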

the progress_bar issue

Thanks for the code firstly, but I constantly get this error: TypeError: write_videofile() got an unexpected keyword argument 'progress_bar'. I don't know how to solve this issue.
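For context: newer moviepy releases removed the progress_bar keyword from write_videofile (progress output moved to a logger parameter), so this error usually means the installed moviepy is newer than the one mugen expects. A defensive sketch that only passes the keyword when the installed version still accepts it (write_with_progress is an illustrative helper, not part of mugen):

```python
import inspect

def write_with_progress(clip, filename, **kwargs):
    """Pass progress_bar only if this moviepy's write_videofile accepts it."""
    params = inspect.signature(clip.write_videofile).parameters
    if "progress_bar" in params:
        kwargs.setdefault("progress_bar", True)
    clip.write_videofile(filename, **kwargs)
```

Pinning moviepy to the version listed in mugen's requirements is the simpler fix; the shim above just illustrates how the two APIs could be bridged.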

Getting started trouble

I ran these commands to get started, but when I try anything mugen complains about a missing 'bin' module. Any ideas?

mkdir 3_try
cd 3_try
git clone https://github.com/scherroman/mugen.git
cd mugen
conda env create -f environment.yml
everything downloads...
source activate mugen
cd src/bin/
python cli.py --help
I receive this error:
Traceback (most recent call last):
File "cli.py", line 9, in <module>
import bin.constants as cli_c
ModuleNotFoundError: No module named 'bin'

Stuck at 45/46

am I doing something wrong here?
It looks like I configured it in such a way that there's one too few video events available, but I'm not sure if I just set it up badly.

I'm stuck in a loop that I tried to capture below (sorry for the duplicate sub-progress-bar) and it just does the sub-task that has 398 items over and over again.
I've been here for 10 minutes, but it feels like I've been here all year. send for help.

(mugen) bash-3.2$ mugen create -a bright-lights.wav -vn LostInTranslation.mkv -ss -es 1/8 -aem onsets -bm weak_beats -v output.mkv

Weights
------------
output: 100.00%

Analyzing audio...

Events:
[<EventList 0-44 (45), type: Onset, selected: False>]

Generating music video from video segments and audio...
 98%|███████████████████████████████████▏| 45/46 [02:13<00:01,  1.94s/it]

t:   0%|                               | 0/398 [00:00<?, ?it/s, now=None]
t:  93%|██████████████████▋ | 372/398 [00:03<00:00, 111.91it/s, now=None]

Creating mugen conda environment fails

Creating the mugen conda environment with "conda env create -f environment.yml" fails while collecting tesserocr with "ImportError: No module named Cython.Distutils".

Option to label segments visually

I know new_mugen doesn't have 'recreate' yet, or even the json file, like 'original' mugen, but before I forget, because I realized how much time this would save me...

How I review my generated video now: make a video, watch the video, realize I dislike a number of random clips used, and want to replace them, and now I have to recreate the video with some changes. So I have to figure out which clips they are. If I save segments, I can go in and find the videos manually, and this works... but it's annoying, I have to look at hundreds of thumbnails sometimes, and pick out the ones I dislike and try again...

It would be nice to have a way to generate 'debugging' videos where the clips are labeled with what # they are, so I could just pause the video and note the number, and then recreate telling it to replace that # (or #s)? And once I'm happy with the resulting videos, recreate without that debugging and the final video will be pristine, but no need to keep saving segments over and over?

Adding a text overlay in the upper corner (or whatever location) with the clip number should be a fairly trivial tweak, right?
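Burning the number in should indeed be a small tweak (moviepy's TextClip composited over each segment could render the overlay). The recreate step could then be a simple swap by clip number; a hypothetical sketch of that bookkeeping (label_segments and replace_segments are illustrative names, not mugen API):

```python
import random

def label_segments(segments):
    """Pair each segment with the number that would be burned into the video."""
    return [(i, seg) for i, seg in enumerate(segments)]

def replace_segments(segments, disliked, candidates, rng=random):
    """Recreate the segment list, swapping out only the noted clip numbers."""
    out = list(segments)
    for i in disliked:
        out[i] = rng.choice(candidates)
    return out
```

With this, you pause the debug video, note the numbers, and regenerate with only those indices replaced, no need to keep saving every segment.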

Make text detection optional

Mugen should exclude text detection if tesserocr is not pip installed.
If tesserocr is pip installed, Mugen should use text detection by default.
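A common pattern for this kind of soft dependency is a guarded import; a sketch (the filter name below is illustrative, not necessarily mugen's):

```python
# Import tesserocr lazily and degrade gracefully when it isn't installed.
try:
    import tesserocr
    TEXT_DETECTION_AVAILABLE = True
except ImportError:
    tesserocr = None
    TEXT_DETECTION_AVAILABLE = False

def video_filters(defaults):
    """Include the text-detection filter only when tesserocr is importable."""
    filters = list(defaults)
    if TEXT_DETECTION_AVAILABLE:
        filters.append("not_has_text")  # illustrative filter name
    return filters
```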

ModuleNotFoundError: No module named 'numpy'

I cannot get it to install; I've tried running the code on Windows 7 and Ubuntu 16.04 and get the same error:

K:\mugen>conda env create -f environment.yml
Fetching package metadata ...........
Solving package specifications: .
Collecting cython>=0.25.2
Using cached Cython-0.26-cp36-none-win_amd64.whl
Collecting moviepy>=0.2.3.2
Using cached moviepy-0.2.3.2-py2.py3-none-any.whl
Collecting librosa>=0.5.0
Using cached librosa-0.5.1.tar.gz
Collecting Pillow>=3.4.2
Using cached Pillow-4.2.1-cp36-cp36m-win_amd64.whl
Collecting numpy>=1.12.0
Using cached numpy-1.13.1-cp36-none-win_amd64.whl
Collecting pysrt>=1.1.1
Using cached pysrt-1.1.1.tar.gz
Collecting tqdm>=4.10.0
Using cached tqdm-4.15.0-py2.py3-none-any.whl
Collecting decorator>=4.0.11
Using cached decorator-4.1.2-py2.py3-none-any.whl
Collecting dill>=0.2.7.1
Using cached dill-0.2.7.1.tar.gz
Collecting imageio==2.1.2 (from moviepy>=0.2.3.2)
Using cached imageio-2.1.2.zip
Collecting audioread>=2.0.0 (from librosa>=0.5.0)
Using cached audioread-2.1.5.tar.gz
Collecting scipy>=0.13.0 (from librosa>=0.5.0)
Using cached scipy-0.19.1.tar.gz
Collecting scikit-learn>=0.14.0 (from librosa>=0.5.0)
Using cached scikit_learn-0.19.0-cp36-cp36m-win_amd64.whl
Collecting joblib>=0.7.0 (from librosa>=0.5.0)
Using cached joblib-0.11-py2.py3-none-any.whl
Collecting six>=1.3 (from librosa>=0.5.0)
Using cached six-1.10.0-py2.py3-none-any.whl
Collecting resampy>=0.1.2 (from librosa>=0.5.0)
Using cached resampy-0.1.5.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\SamM\AppData\Local\Temp\pip-build-18o9g51f\resampy\setup.py", line 6, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in C:\Users\SamM\AppData\Local\Temp\pip-build-18o9g51f\resampy\

CondaValueError: pip returned an error.

How to speedup?

Hello, is it possible to speed up music video generation?
Maybe by using the GPU instead of the CPU?
Thank you.

Feature request - Photos instead of video

It would be great if there was a way to import photos / images as well as videos and cut on the beat.

Is this something that has been considered for the project?

No module named 'dill'

Error message:

File "cli.py", line 10, in <module>
import bin.utility as cli_util
File "/home/grzana/Desktop/mugen-master/src/bin/utility.py", line 8, in <module>
import mugen.paths as paths
File "/home/grzana/Desktop/mugen-master/src/mugen/__init__.py", line 5, in <module>
from mugen.video.video_filters import VideoFilter
File "/home/grzana/Desktop/mugen-master/src/mugen/video/video_filters.py", line 4, in <module>
import mugen.video.detect as v_detect
File "/home/grzana/Desktop/mugen-master/src/mugen/video/detect.py", line 15, in <module>
from mugen.video.segments.VideoSegment import VideoSegment
File "/home/grzana/Desktop/mugen-master/src/mugen/video/segments/VideoSegment.py", line 8, in <module>
from mugen.video.segments.Segment import Segment
File "/home/grzana/Desktop/mugen-master/src/mugen/video/segments/Segment.py", line 11, in <module>
from mugen.mixins.Persistable import Persistable
File "/home/grzana/Desktop/mugen-master/src/mugen/mixins/Persistable.py", line 1, in <module>
from dill import dill
ImportError: cannot import name 'dill'

Using Ubuntu 18.04.
I installed the dev env version, but the same error happens with every environment type.
First I had error #17, but I fixed that, and now I get this error.

No module named 'scripts.cli'

(base) abcdeMacBook-Pro:mugen abc$ mugen
Traceback (most recent call last):
File "/Users/abc/anaconda3/bin/mugen", line 11, in <module>
load_entry_point('mugen', 'console_scripts', 'mugen')()
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 476, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
return ep.load()
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2318, in load
return self.resolve()
File "/Users/abc/anaconda3/lib/python3.5/site-packages/pkg_resources/__init__.py", line 2324, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
ImportError: No module named 'scripts.cli'

Chronological order

Testing the create code and it looks good so far.

One thing I noticed is that you are only passing a desired duration in getting a clip, and not 'start location' (ie where the new segment will go in the new video). The time the clip is going in is valuable for some potential filters/options.

One obvious example (and looking at the code I realized to implement it now would require cloning/extending a lot of code, which is why I'm posting now, when you could easily change this):
I have Music Video A (and matching audio for A as audio source), and a variety of Videos in directory B.... currently, I can weight A and B directory, so create a video that using as much of A as I like, and splices in clips from B... but the video clips of A will be random. I don't want random, I want sequential as in the original A. (so essentially, the new music video will be A+clips from B on the beat, weighted in as desired). There needs to be a way to say "for A, don't go random, get clip starting at time X (the current time in video generating), for duration, but for B/etc, random is fine."

Or perhaps, differently, I want to grab clips sequentially from all, so that early clips are early from the videos and later clips later... and not have a clip from an ending of one of these videos too early in my new generated video.

I see an argument along the lines of "chronological" or just "order", with options being similar to weighting, per source, where the choices are

  • "strict" (use exact new video time for a clip of duration),
  • "loose" (pick a time relatively close to [new video time/total new video time] relative to [selected video length] for video's duration.) (example for clarity: We're in minute 2 of a 5 minute new video... pick a clip from video X somewhere roughly about 2/5 of the way into that video X, for duration Y) [Hopefully that's clear, and you see why that's useful - think story unfolding images, where random loses that arrow of direction...
  • "random" the current default

So in my above example: with -video-sources A.mp4 OtherDirectory/B/ -vw 4 6 -order strict random, the resulting video would be like A 40% of the time, but 60% of the time it would pull clips from some video in B.

Or perhaps another good example, I have 2+ music videos for the same song, and I want to meld them together equally: -v A.mp4 B.mp4 -order strict strict

For the loose usage: imaginary B and C videos are all time lapse video (ala Timescape, your example), or other sequential events... We don't want to jump around randomly for those, we want to show clips in an orderly, but still randomish way.... telling a story visually, and showing items out of order (5,1,4,2 as random) isn't as good as some order (1,2,4,5) -v A.mp4 B.mp4 C.mp4 -order random loose loose.
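A sketch of how the three modes might map onto start-time selection within a single source. pick_start and its parameters are illustrative names, not mugen API:

```python
import random

def pick_start(order, elapsed, total, source_duration, clip_duration,
               jitter=0.1, rng=random):
    """Pick a clip start time in a source according to an order mode.

    "strict": map the output-video time directly onto the source timeline.
    "loose":  pick near the proportional position, with some jitter.
    "random": the current default behavior.
    """
    max_start = max(source_duration - clip_duration, 0)
    if order == "strict":
        return min(elapsed, max_start)
    if order == "loose":
        # Proportional position: 2 minutes into a 5-minute output maps to
        # roughly 2/5 of the way into the source.
        center = (elapsed / total) * source_duration
        offset = rng.uniform(-jitter, jitter) * source_duration
        return min(max(center + offset, 0), max_start)
    return rng.uniform(0, max_start)  # "random"
```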

Container ?

Description

I would like to run mugen in a container.

Rationale

Because it would be so much faster and simpler to run. It could also open up ways to scale the service (scale up a Kubernetes deployment to 20 pods, each of the 20 pods processes 2 seconds of the video, and in the end everything is put back together and returned to the request issuer).

Alternatives

running conda in a Virtual Machine

Additional context
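A hypothetical starting point, assuming the repository's environment.yml and the mugen entry point (a sketch, not an official image):

```dockerfile
# Illustrative only: build mugen's conda environment inside a container.
FROM continuumio/miniconda3
WORKDIR /app
COPY . .
RUN conda env create -f environment.yml
# Run the mugen CLI inside the created "mugen" environment
ENTRYPOINT ["conda", "run", "-n", "mugen", "mugen"]
```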

Feature Request - Source time boundaries

Awesome job so far... adding items I've brainstormed and/or desired as I played with this more and more...

The ability to provide a time boundary for where to pull video from out of a source... so I might want to pull from Video X, but only between minutes 5 and 20, and not from the first 5 minutes or after minute 20. Some way to specify this, per video (a filename.times file for each video; if it exists, use it?), with multiple limits as file lines:
00:05:00-00:10:00
00:12:30-00:20:00

Why: without this, the only way to keep unwanted sections out is to create a smaller 'preclipped' file, or to review the picked clips and reject the ones wrongly chosen.
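A sketch of parsing such a filename.times file and checking candidate clip times against it (parse_boundaries and in_bounds are illustrative names):

```python
def parse_boundaries(lines):
    """Parse 'HH:MM:SS-HH:MM:SS' lines into (start, end) second pairs."""
    def to_seconds(stamp):
        h, m, s = (int(part) for part in stamp.split(":"))
        return h * 3600 + m * 60 + s

    ranges = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        start, end = (to_seconds(part) for part in line.split("-"))
        ranges.append((start, end))
    return ranges

def in_bounds(time, ranges):
    """True if `time` (seconds) falls inside any allowed range."""
    return any(start <= time <= end for start, end in ranges)
```

The sampler would then retry (or restrict its draw) until in_bounds accepts the candidate start time for that source.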

Feature suggestions

Playing with this... and having fun imagining uses for it.

Some suggestions:

  • Make Text detection optional
  • Add some visual beat option, or maybe a beat visualization layer?
  • allow for slight offset for beat/video sync (so that you can adjust to correct for visual/auditory lag) (ie appear to change just on the beat)
  • allow setting specific videos to only certain time windows (so you could ensure Video X is only at end, or Video Y won't be used after halfway point, etc...)
  • order video clips so that they are chronological (ie pick from beginning of videos at start, then pick later as song progresses...)
  • overlapping clips - similar to the way the Cup Song works, adding new non fullscreen clips to the beat...
  • stay nearby - pick the next clip nearby from the last one, for at least X clips, before switching video sources
  • ken burns the clips (zoom and pan)
  • leave clip audio intact (or mix with song?)

Weighting Controls

Allow the user to apply percentage weights to videos/set of videos, to control how often a video is sampled from for the music video.

i.e. Use a series (26 episodes) 50% of the time, and use the movie 50% of the time.

Currently, every video input is sampled equally often.
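One way to sketch this: split each group's percentage evenly across its members, then draw sources with those weights (expand_weights and choose_source are illustrative names, not mugen API):

```python
import random

def expand_weights(groups):
    """Split each group's weight evenly among its members.

    groups: list of (members, weight) pairs, e.g. a 26-episode series at
    weight 50 and a single movie at weight 50.
    """
    members, weights = [], []
    for group, weight in groups:
        share = weight / len(group)
        for member in group:
            members.append(member)
            weights.append(share)
    return members, weights

def choose_source(sources, weights, rng=random):
    """Sample one source according to its weight."""
    return rng.choices(sources, weights=weights, k=1)[0]
```

With a 26-episode series and a movie both at 50%, each episode ends up at about 1.9% while the movie keeps its full 50%, matching the series-vs-movie split described above.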
