
distributed-video-transcoding's Introduction

Distributed Multi-bitrate Video Transcoding

Distributed Multi-bitrate Video Transcoding on CentOS / Ubuntu / SUSE / Red Hat (Bash Scripts)

Multi-bitrate video processing requires a lot of computing power and time to process a full movie. Several open source video transcoding and processing tools are freely available on Linux, such as libav-tools, ffmpeg, mencoder, and HandBrake. However, none of these tools easily supports parallel processing across multiple machines.

After some research, I found an amazing solution designed by Dustin Kirkland, based on Ubuntu Juju and avconv. However, our requirement was a little different from Dustin's: we needed to convert a single video into multiple bitrates and formats such as 3gp and flv, and upload the results to one or more CDNs (such as Akamai or Tata). We also wanted to build the solution on top of CentOS and ffmpeg. So I decided to develop a simple, scalable, parallel, multi-bitrate video transcoding system myself. Here is my solution.

The algorithm is the same as in Dustin's solution, with some changes:

  1. Upload the file to FTP. After a successful upload, CallUploadScript (a pure-ftpd feature) calls a script that:
    • syncs the file to all nodes (SSH encryption is disabled to speed up the transfer);
    • divides the video duration by the number of nodes available for processing and adds each segment's start time and length to a MySQL queue table (a rough sketch follows below this list);
    • updates the video's duration, file path, and filename, and the number of nodes available for transcoding, in MySQL.
  2. Transcode nodes pick jobs from the queue.
  3. Each node processes its segment of the video and raises a flag when done.
  4. The master waits for all the done flags, and then any master worker picks up the job of concatenating the results.
  5. Upload the converted files to one or more CDNs.
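
A rough sketch of the splitting step, for illustration only: the queue table name, column names, database name, credentials, and node count below are assumptions, not necessarily what transcoding.sql actually creates.

    #!/bin/bash
    # Illustrative split-and-queue sketch; schema details are assumed.
    VIDEO="$1"
    DB_IP="192.168.1.10"   # placeholder database host
    NODES=4                # number of transcode nodes available

    # Total duration in seconds, via ffprobe (listed in the prerequisites)
    DURATION=$(ffprobe -v error -show_entries format=duration \
               -of default=noprint_wrappers=1:nokey=1 "$VIDEO")

    # Each node gets an equal share of the duration
    SEGMENT=$(echo "$DURATION / $NODES" | bc -l)

    # One queue row per node: file, start offset, segment length
    for i in $(seq 0 $((NODES - 1))); do
        START=$(echo "$i * $SEGMENT" | bc -l)
        mysql -h "$DB_IP" -u transcode -ptranscode transcoding -e \
          "INSERT INTO queue (file, start_time, length) VALUES ('$VIDEO', $START, $SEGMENT);"
    done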

Fault Tolerance

To make this process tolerant of node failures, I have written a small script, checkNodeFailed.sh, which checks for failed nodes and tries to reassign their jobs to another node. It needs to be run from cron every minute (see installation step 12).
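
The real logic ships as checkNodeFailed.sh; the snippet below only illustrates the idea of reassigning stale jobs, and its table and column names are hypothetical.

    #!/bin/bash
    # Illustration only -- the actual implementation is checkNodeFailed.sh.
    DB_IP="192.168.1.10"   # placeholder database host

    # Put jobs back into the queue if the node working on them has not
    # reported progress for more than two minutes (hypothetical schema).
    mysql -h "$DB_IP" -u transcode -ptranscode transcoding -e \
      "UPDATE queue SET node = NULL, status = 'pending'
        WHERE status = 'processing'
          AND updated_at < NOW() - INTERVAL 2 MINUTE;"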

Prerequisites:

  1. bc
  2. nproc
  3. ffmpeg
  4. mysql
  5. mysql-server (for the master node)
  6. mplayer
  7. rsync
  8. Passwordless SSH login (see the example at the end of this list)
  9. NFS server and client
  10. supervisord
  11. ffprobe
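
For prerequisite 8, passwordless SSH can be set up from the master node roughly like this (node1, node2, ... are placeholder hostnames):

    # ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # for node in node1 node2 node3; do ssh-copy-id "root@$node"; done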

Installation:

  1. Install ffmpeg (follow the installation instructions for your distribution)

  2. Download and copy all scripts (.sh files) to the /srv directory

  3. Change the file permissions of the scripts to 755
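
    For example, assuming the scripts now live in /srv:

    # chmod 755 /srv/*.sh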

  4. Install Pure-FTPD and set the CallUploadScript directive to yes in the /etc/pure-ftpd.conf file
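
    After editing, you can confirm the directive; the output should look like this (exact spacing in the stock config varies by distribution):

    # grep -i '^CallUploadScript' /etc/pure-ftpd.conf
    CallUploadScript             yes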

  5. Create a test FTP user and set a password

    # useradd -m ftptest; passwd ftptest

  6. Run the commands below to modify the pure-ftpd init script

    # sed -i 's#start() {#start() {\n\t/usr/sbin/pure-uploadscript -B -r /srv/CallUpload.sh#g' /etc/init.d/pure-ftpd

    # sed -i 's#stop() {#stop() {\n\tkillall -9 pure-uploadscript#g' /etc/init.d/pure-ftpd

  7. Restart the pure-ftpd service

  8. Make sure to change the database IP (the DB_IP variable) in all three scripts
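
    If DB_IP is assigned at the top of each script, a single sed call can update all of them (the IP below is a placeholder):

    # sed -i 's/^DB_IP=.*/DB_IP="192.168.1.10"/' /srv/*.sh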

  9. Install mysql-server and import the SQL file 'transcoding.sql'. Create a 'transcode' user whose password is the same as the username, and make sure the user can connect from all of the worker nodes.
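
    For example (the database name 'transcoding' is an assumption based on the SQL file name; adjust it to whatever transcoding.sql actually creates):

    # mysql -u root -p < transcoding.sql
    # mysql -u root -p -e "GRANT ALL ON transcoding.* TO 'transcode'@'%' IDENTIFIED BY 'transcode'; FLUSH PRIVILEGES;"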

  10. Export the /srv directory over NFS and mount it on all nodes with the NFS client option "lookupcache=none"
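
    For example, on the master node (the subnet is a placeholder):

    # echo '/srv 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
    # exportfs -ra

    And on each worker node ('master' is a placeholder hostname):

    # mount -t nfs -o lookupcache=none master:/srv /srv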

  11. On all servers, install supervisord and copy supervisord.conf from the download directory to /etc/supervisord.conf, then restart the supervisord service.
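
    For example, on a SysV-init system (use systemctl on systemd-based distributions):

    # cp supervisord.conf /etc/supervisord.conf
    # service supervisord restart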

  12. Add an every-minute cron entry for the checkNodeFailed.sh script: */1 * * * * /srv/checkNodeFailed.sh
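
    One way to add the entry non-interactively to root's crontab:

    # (crontab -l 2>/dev/null; echo '*/1 * * * * /srv/checkNodeFailed.sh') | crontab -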

  13. To check the status of jobs, you can use the dashboard. Copy the frontend folder to your Apache DocumentRoot; in my case that is /var/www/html/

    # cp -a frontend/ /var/www/html/

distributed-video-transcoding's People

Contributors

patademahesh


distributed-video-transcoding's Issues

Split issues

Hi,

I tested the script and I'm wondering whether you have noticed that a simple split with -ss (start time) and -t (length) produces audio issues in AAC streams when the segments are concatenated.

For example, a simple split at second 3 (segment one starting at time 0, segment two starting at second 3, each with a length of 3 seconds) is not frame- or sync-accurate.

When I looked at the log files, I saw these errors during the concatenation step:
TS -830103474726726784, next:2959489 st:0 invalid dropping
PTS -830103474726720768, next:2959489 invalid dropping st:0
[mp4 @ 0x7fde9a035400] pts has no value
DTS -830103474726723840, next:2992856 st:0 invalid dropping
PTS -830103474726708736, next:2992856 invalid dropping st:0
[mp4 @ 0x7fde9a035400] pts has no value
DTS -830103474726720768, next:3026223 st:0 invalid dropping
PTS -830103474726714752, next:3026223 invalid dropping st:0
DTS -830103474726717824, next:3059590 st:0 invalid dropping
[mp4 @ 0x7fde9a035400] pts has no value
DTS -
....cut

Have you noticed that?

I think a real split-and-stitch function requires a lot more work to make this behave perfectly. My idea is to first extract the complete AAC stream, then the video frames (ideally transmuxing the video segments at keyframe positions), then encode the video segments, then encode the audio, and finally mux everything back together. A sketch of a keyframe-based split is shown at the end of this issue.

I'm using ffmpeg (latest git); maybe the concat in avconv works better or differently?

What do you think? Am I overlooking something?
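
For reference, a keyframe-friendly split can be done with ffmpeg's segment muxer, and the pieces re-joined with the concat demuxer. This is not what the repository's scripts currently do; the filenames and segment length below are illustrative.

    # Cut at keyframes without re-encoding, so segment boundaries stay clean.
    # In the distributed pipeline each segment would be transcoded on its own
    # node before the concat step.
    ffmpeg -i input.mp4 -c copy -f segment -segment_time 3 -reset_timestamps 1 seg%03d.mp4

    # Stitch the segments back together without re-encoding.
    printf "file '%s'\n" seg*.mp4 > list.txt
    ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4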

Cloud Computing

This script is gorgeous!

Is there any way to use it for computing on Amazon EC2 cloud services?

Regards, Sascha

AWS Lambda version?

Would it be possible, practical, and economical to use this method with the AWS Lambda service? One could theoretically use hundreds of processes and make this really fast and scalable.

Trying to get in touch regarding a security issue

Hey there!

I'd like to report a security issue but cannot find contact instructions on your repository.

If not a hassle, might you kindly add a SECURITY.md file with an email, or another contact method? GitHub recommends this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.

Thank you for your consideration, and I look forward to hearing from you!

(cc @huntr-helper)
