leandromoreira / ffmpeg-libav-tutorial

FFmpeg libav tutorial - learn how media works from basic to transmuxing, transcoding and more. Translations: 🇺🇸 🇨🇳 🇰🇷 🇪🇸 🇻🇳 🇧🇷

Home Page: https://github.com/leandromoreira/ffmpeg-libav-tutorial

License: BSD 3-Clause "New" or "Revised" License


ffmpeg-libav-tutorial's Introduction

🇨🇳 🇰🇷 🇪🇸 🇻🇳 🇧🇷

license

I was looking for a tutorial/book that would teach me how to start to use FFmpeg as a library (a.k.a. libav) and then I found the "How to write a video player in less than 1k lines" tutorial. Unfortunately it was deprecated, so I decided to write this one.

Most of the code in here will be in C, but don't worry: you can easily understand and apply it to your preferred language. FFmpeg libav has lots of bindings for many languages, like Python and Go, and even if your language doesn't have one, you can still use it through FFI (here's an example with Lua).

We'll start with a quick lesson about what video, audio, codecs and containers are, then move to a crash course on how to use the FFmpeg command line, and finally we'll write code. Feel free to skip directly to the section Learn FFmpeg libav the Hard Way.

Some people used to say that Internet video streaming is the future of traditional TV; in any case, FFmpeg is something worth studying.

Table of Contents

Intro

video - what you see!

If you have a series of images and change them at a given frequency (let's say 24 images per second), you will create an illusion of movement. In summary, this is the very basic idea behind a video: a series of pictures / frames running at a given rate.

Contemporary illustration (1886)

audio - what you hear!

Although a muted video can express a variety of feelings, adding sound to it brings more pleasure to the experience.

Sound is the vibration that propagates as a wave of pressure, through the air or any other transmission medium, such as a gas, liquid or solid.

In a digital audio system, a microphone converts sound to an analog electrical signal, then an analog-to-digital converter (ADC), typically using pulse-code modulation (PCM), converts the analog signal into a digital signal.

audio analog to digital

Source

codec - shrinking data

CODEC is an electronic circuit or software that compresses or decompresses digital audio/video. It converts raw (uncompressed) digital audio/video to a compressed format or vice versa. https://en.wikipedia.org/wiki/Video_codec

But if we chose to pack millions of images into a single file and call it a movie, we might end up with a huge file. Let's do the math:

Suppose we are creating a video with a resolution of 1080 x 1920 (height x width), that we'll spend 3 bytes per pixel (the smallest addressable point on a screen) to encode the color (24-bit color, which gives us 16,777,216 different colors), and that this video runs at 24 frames per second and is 30 minutes long.

toppf = 1080 * 1920 //total_of_pixels_per_frame
cpp = 3 //cost_per_pixel
tis = 30 * 60 //time_in_seconds
fps = 24 //frames_per_second

required_storage = tis * fps * toppf * cpp

This video would require approximately 250.28GB of storage or 1.19 Gbps of bandwidth! That's why we need to use a CODEC.
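The pseudocode above can be checked with a tiny plain-C helper (a sketch; the 250.28GB figure comes from dividing the byte count by 1024³):

```c
#include <stdint.h>

/* Raw (uncompressed) storage cost of a video, in bytes:
 * seconds * fps * pixels_per_frame * bytes_per_pixel */
uint64_t raw_video_bytes(uint64_t width, uint64_t height,
                         uint64_t bytes_per_pixel,
                         uint64_t fps, uint64_t seconds)
{
    uint64_t pixels_per_frame = width * height;
    return seconds * fps * pixels_per_frame * bytes_per_pixel;
}
```

For 1920x1080, 3 bytes per pixel, 24 fps and 30 minutes this yields 268,738,560,000 bytes, i.e. roughly 250 GiB.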

container - a comfy place for audio and video

A container or wrapper format is a metafile format whose specification describes how different elements of data and metadata coexist in a computer file. https://en.wikipedia.org/wiki/Digital_container_format

A single file that contains all the streams (mostly the audio and video) and also provides synchronization and general metadata, such as title and resolution.

Usually we can infer the format of a file by looking at its extension: for instance a video.webm is probably a video using the container webm.

container
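As a toy illustration of that rule of thumb, here's a naive extension check in plain C; keep in mind that real demuxers probe the file's actual bytes rather than trusting the name:

```c
#include <string.h>

/* Guess the container from the file extension.
 * A naive sketch: "video.webm" -> "webm". */
const char *guess_container(const char *filename)
{
    const char *dot = strrchr(filename, '.');
    return dot ? dot + 1 : "unknown";
}
```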

FFmpeg - command line

A complete, cross-platform solution to record, convert and stream audio and video.

To work with multimedia we can use the AMAZING tool/library called FFmpeg. Chances are you already know/use it directly or indirectly (do you use Chrome?).

It has a command line program called ffmpeg, a very simple yet powerful binary. For instance, you can convert from mp4 to the container avi just by typing the following command:

$ ffmpeg -i input.mp4 output.avi

We just made a remuxing here, which is converting from one container to another one. Technically FFmpeg could also be doing a transcoding but we'll talk about that later.

FFmpeg command line tool 101

FFmpeg's documentation does a great job of explaining how it works.

# you can also look for the documentation using the command line

ffmpeg -h full | grep -A 10 -B 10 avoid_negative_ts

To make things short, the FFmpeg command line program expects the following argument format to perform its actions ffmpeg {1} {2} -i {3} {4} {5}, where:

  1. global options
  2. input file options
  3. input url
  4. output file options
  5. output url

Parts 2, 3, 4 and 5 can be repeated as many times as you need. It's easier to understand this argument format in action:

# WARNING: this file is around 300MB
$ wget -O bunny_1080p_60fps.mp4 http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4

# -y                              -> global options
# -c:a libfdk_aac                 -> input options
# -i bunny_1080p_60fps.mp4        -> input url
# -c:v libvpx-vp9 -c:a libvorbis  -> output options
# bunny_1080p_60fps_vp9.webm      -> output url
$ ffmpeg \
-y \
-c:a libfdk_aac \
-i bunny_1080p_60fps.mp4 \
-c:v libvpx-vp9 -c:a libvorbis \
bunny_1080p_60fps_vp9.webm

This command takes an mp4 input file containing two streams (an audio stream encoded with the aac CODEC and a video stream encoded with the h264 CODEC) and converts it to webm, changing its audio and video CODECs too.

We could simplify the command above, but be aware that FFmpeg will then adopt or guess default values for you. For instance, when you just type ffmpeg -i input.avi output.mp4, what audio/video CODEC does it use to produce output.mp4?

Werner Robitza wrote a must read/execute tutorial about encoding and editing with FFmpeg.

Common video operations

While working with audio/video we usually do a set of tasks with the media.

Transcoding

transcoding

What? the act of converting one of the streams (audio or video) from one CODEC to another one.

Why? sometimes some devices (TVs, smartphones, consoles, etc.) don't support codec X but do support codec Y, and newer CODECs provide better compression rates.

How? converting an H264 (AVC) video to an H265 (HEVC).

$ ffmpeg \
-i bunny_1080p_60fps.mp4 \
-c:v libx265 \
bunny_1080p_60fps_h265.mp4

Transmuxing

transmuxing

What? the act of converting from one format (container) to another one.

Why? sometimes some devices (TVs, smartphones, consoles, etc.) don't support container X but do support container Y, and sometimes newer containers provide modern required features.

How? converting a mp4 to a ts.

# -c copy tells ffmpeg to skip decoding/encoding
$ ffmpeg \
-i bunny_1080p_60fps.mp4 \
-c copy \
bunny_1080p_60fps.ts

Transrating

transrating

What? the act of changing the bit rate, or producing other renditions.

Why? people will try to watch your video in a 2G (edge) connection using a less powerful smartphone or in a fiber Internet connection on their 4K TVs therefore you should offer more than one rendition of the same video with different bit rate.

How? producing a rendition with the bit rate constrained between 964K (min) and 3856K (max), with a 2000K buffer.

$ ffmpeg \
-i bunny_1080p_60fps.mp4 \
-minrate 964K -maxrate 3856K -bufsize 2000K \
bunny_1080p_60fps_transrating_964_3856.mp4

Usually we'll be using transrating with transsizing. Werner Robitza wrote another must read/execute series of posts about FFmpeg rate control.

Transsizing

transsizing

What? the act of converting from one resolution to another one. As said before transsizing is often used with transrating.

Why? reasons are about the same as for the transrating.

How? converting a 1080p to a 480p resolution.

$ ffmpeg \
-i bunny_1080p_60fps.mp4 \
-vf scale=480:-1 \
bunny_1080p_60fps_transsizing_480.mp4

Bonus Round: Adaptive Streaming

adaptive streaming

What? the act of producing many renditions (resolutions / bit rates), splitting the media into chunks and serving them via HTTP.

Why? to provide flexible media that can be watched on a low-end smartphone or on a 4K TV; it's also easy to scale and deploy, but it can add latency.

How? creating an adaptive WebM using DASH.

# video streams
$ ffmpeg -i bunny_1080p_60fps.mp4 -c:v libvpx-vp9 -s 160x90 -b:v 250k -keyint_min 150 -g 150 -an -f webm -dash 1 video_160x90_250k.webm

$ ffmpeg -i bunny_1080p_60fps.mp4 -c:v libvpx-vp9 -s 320x180 -b:v 500k -keyint_min 150 -g 150 -an -f webm -dash 1 video_320x180_500k.webm

$ ffmpeg -i bunny_1080p_60fps.mp4 -c:v libvpx-vp9 -s 640x360 -b:v 750k -keyint_min 150 -g 150 -an -f webm -dash 1 video_640x360_750k.webm

$ ffmpeg -i bunny_1080p_60fps.mp4 -c:v libvpx-vp9 -s 640x360 -b:v 1000k -keyint_min 150 -g 150 -an -f webm -dash 1 video_640x360_1000k.webm

$ ffmpeg -i bunny_1080p_60fps.mp4 -c:v libvpx-vp9 -s 1280x720 -b:v 1500k -keyint_min 150 -g 150 -an -f webm -dash 1 video_1280x720_1500k.webm

# audio streams
$ ffmpeg -i bunny_1080p_60fps.mp4 -c:a libvorbis -b:a 128k -vn -f webm -dash 1 audio_128k.webm

# the DASH manifest
$ ffmpeg \
 -f webm_dash_manifest -i video_160x90_250k.webm \
 -f webm_dash_manifest -i video_320x180_500k.webm \
 -f webm_dash_manifest -i video_640x360_750k.webm \
 -f webm_dash_manifest -i video_640x360_1000k.webm \
 -f webm_dash_manifest -i video_1280x720_1500k.webm \
 -f webm_dash_manifest -i audio_128k.webm \
 -c copy -map 0 -map 1 -map 2 -map 3 -map 4 -map 5 \
 -f webm_dash_manifest \
 -adaptation_sets "id=0,streams=0,1,2,3,4 id=1,streams=5" \
 manifest.mpd

PS: I stole this example from the Instructions to playback Adaptive WebM using DASH

Going beyond

There are many, many other uses for FFmpeg. I use it in conjunction with iMovie to produce/edit some videos for YouTube, and you can certainly use it professionally.

Learn FFmpeg libav the Hard Way

Don't you wonder sometimes 'bout sound and vision? David Robert Jones

Since FFmpeg is so useful as a command line tool for essential tasks on media files, how can we use it in our programs?

FFmpeg is composed of several libraries that can be integrated into our own programs. Usually, when you install FFmpeg, it automatically installs all these libraries. I'll be referring to this set of libraries as FFmpeg libav.

This title is a homage to Zed Shaw's series Learn X the Hard Way, particularly his book Learn C the Hard Way.

Chapter 0 - The infamous hello world

This hello world actually won't show the message "hello world" in the terminal 👅 Instead we're going to print out information about the video, things like its format (container), duration, resolution, audio channels and, in the end, we'll decode some frames and save them as image files.

FFmpeg libav architecture

But before we start to code, let's learn how the FFmpeg libav architecture works and how its components communicate with each other.

Here's a diagram of the process of decoding a video:

ffmpeg libav architecture - decoding process

You'll first need to load your media file into a component called AVFormatContext (the video container is also known as format). It actually doesn't fully load the whole file: it often only reads the header.

Once we have loaded the minimal header of our container, we can access its streams (think of them as rudimentary audio and video data). Each stream will be available in a component called AVStream.

Stream is a fancy name for a continuous flow of data.

Suppose our video has two streams: an audio encoded with AAC CODEC and a video encoded with H264 (AVC) CODEC. From each stream we can extract pieces (slices) of data called packets that will be loaded into components named AVPacket.

The data inside the packets are still coded (compressed) and in order to decode the packets, we need to pass them to a specific AVCodec.

The AVCodec will decode them into AVFrame and finally, this component gives us the uncompressed frame. Notice that the same terminology/process is used by both the audio and video streams.

Requirements

Since some people were facing issues while compiling or running the examples, we're going to use Docker as our development/runner environment. We'll also use the Big Buck Bunny video, so if you don't have it locally just run the command make fetch_small_bunny_video.

Chapter 0 - code walkthrough

TLDR; show me the code and execution.

$ make run_hello

We'll skip some details, but don't worry: the source code is available at github.

We're going to allocate memory to the component AVFormatContext that will hold information about the format (container).

AVFormatContext *pFormatContext = avformat_alloc_context();

Now we're going to open the file and read its header and fill the AVFormatContext with minimal information about the format (notice that usually the codecs are not opened). The function used to do this is avformat_open_input. It expects an AVFormatContext, a filename and two optional arguments: the AVInputFormat (if you pass NULL, FFmpeg will guess the format) and the AVDictionary (which are the options to the demuxer).

avformat_open_input(&pFormatContext, filename, NULL, NULL);

We can print the format name and the media duration:

printf("Format %s, duration %lld us", pFormatContext->iformat->long_name, pFormatContext->duration);

To access the streams, we need to read data from the media. The function avformat_find_stream_info does that. Now, pFormatContext->nb_streams will hold the number of streams and pFormatContext->streams[i] will give us the i-th stream (an AVStream).

avformat_find_stream_info(pFormatContext,  NULL);

Now we'll loop through all the streams.

for (int i = 0; i < pFormatContext->nb_streams; i++)
{
  //
}

For each stream, we're going to keep the AVCodecParameters, which describes the properties of a codec used by the stream i.

AVCodecParameters *pLocalCodecParameters = pFormatContext->streams[i]->codecpar;

With the codec properties we can look up the proper CODEC by querying the function avcodec_find_decoder, which finds the registered decoder for the codec id and returns an AVCodec, the component that knows how to enCOde and DECode the stream.

AVCodec *pLocalCodec = avcodec_find_decoder(pLocalCodecParameters->codec_id);

Now we can print information about the codecs.

// specific for video and audio
if (pLocalCodecParameters->codec_type == AVMEDIA_TYPE_VIDEO) {
  printf("Video Codec: resolution %d x %d", pLocalCodecParameters->width, pLocalCodecParameters->height);
} else if (pLocalCodecParameters->codec_type == AVMEDIA_TYPE_AUDIO) {
  printf("Audio Codec: %d channels, sample rate %d", pLocalCodecParameters->channels, pLocalCodecParameters->sample_rate);
}
// general
printf("\tCodec %s ID %d bit_rate %lld", pLocalCodec->long_name, pLocalCodec->id, pLocalCodecParameters->bit_rate);

With the codec, we can allocate memory for the AVCodecContext, which will hold the context for our decode/encode process, but then we need to fill this codec context with CODEC parameters; we do that with avcodec_parameters_to_context.

Once we filled the codec context, we need to open the codec. We call the function avcodec_open2 and then we can use it.

AVCodecContext *pCodecContext = avcodec_alloc_context3(pCodec);
avcodec_parameters_to_context(pCodecContext, pCodecParameters);
avcodec_open2(pCodecContext, pCodec, NULL);

Now we're going to read the packets from the stream and decode them into frames but first, we need to allocate memory for both components, the AVPacket and AVFrame.

AVPacket *pPacket = av_packet_alloc();
AVFrame *pFrame = av_frame_alloc();

Let's read packets from the streams with the function av_read_frame while there are packets left.

while (av_read_frame(pFormatContext, pPacket) >= 0) {
  //...
}

Let's send the raw data packet (compressed frame) to the decoder, through the codec context, using the function avcodec_send_packet.

avcodec_send_packet(pCodecContext, pPacket);

And let's receive the raw data frame (uncompressed frame) from the decoder, through the same codec context, using the function avcodec_receive_frame.

avcodec_receive_frame(pCodecContext, pFrame);

We can print the frame number, the PTS, DTS, frame type, etc.

printf(
    "Frame %c (%d) pts %d dts %d key_frame %d [coded_picture_number %d, display_picture_number %d]",
    av_get_picture_type_char(pFrame->pict_type),
    pCodecContext->frame_number,
    pFrame->pts,
    pFrame->pkt_dts,
    pFrame->key_frame,
    pFrame->coded_picture_number,
    pFrame->display_picture_number
);

Finally, we can save our decoded frame as a simple gray image. The process is very simple: we'll use pFrame->data, where the index is related to the planes Y, Cb and Cr; we just pick 0 (Y) to save our gray image.

save_gray_frame(pFrame->data[0], pFrame->linesize[0], pFrame->width, pFrame->height, frame_filename);

static void save_gray_frame(unsigned char *buf, int wrap, int xsize, int ysize, char *filename)
{
    FILE *f;
    int i;
    f = fopen(filename, "wb");
    // writing the minimal required header for a pgm file format
    // portable graymap format -> https://en.wikipedia.org/wiki/Netpbm_format#PGM_example
    fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);

    // writing line by line
    for (i = 0; i < ysize; i++)
        fwrite(buf + i * wrap, 1, xsize, f);
    fclose(f);
}

And voilà! Now we have a 2MB grayscale image:

saved frame

Chapter 1 - syncing audio and video

Be the player - a young JS developer writing a new MSE video player.

Before we move to code a transcoding example let's talk about timing, or how a video player knows the right time to play a frame.

In the last example, we saved some frames that can be seen here:

frame 0 frame 1 frame 2 frame 3 frame 4 frame 5

When we're designing a video player we need to play each frame at a given pace, otherwise it would be hard to enjoy the video, either because it's playing too fast or too slow.

Therefore we need to introduce some logic to play each frame smoothly. For that matter, each frame has a presentation timestamp (PTS), an increasing number expressed in a timebase: a rational number (whose denominator is known as the timescale) divisible by the frame rate (fps).

It's easier to understand when we look at some examples, let's simulate some scenarios.

For a fps=60/1 and timebase=1/60000, each PTS will increase by timescale / fps = 1000, therefore the PTS real time for each frame could be (supposing it started at 0):

  • frame=0, PTS = 0, PTS_TIME = 0
  • frame=1, PTS = 1000, PTS_TIME = PTS * timebase = 0.016
  • frame=2, PTS = 2000, PTS_TIME = PTS * timebase = 0.033

For almost the same scenario but with a timebase equal to 1/60, each PTS will increase by 1:

  • frame=0, PTS = 0, PTS_TIME = 0
  • frame=1, PTS = 1, PTS_TIME = PTS * timebase = 0.016
  • frame=2, PTS = 2, PTS_TIME = PTS * timebase = 0.033
  • frame=3, PTS = 3, PTS_TIME = PTS * timebase = 0.050

For a fps=25/1 and timebase=1/75, each PTS will increase by timescale / fps = 3 and the PTS time could be:

  • frame=0, PTS = 0, PTS_TIME = 0
  • frame=1, PTS = 3, PTS_TIME = PTS * timebase = 0.04
  • frame=2, PTS = 6, PTS_TIME = PTS * timebase = 0.08
  • frame=3, PTS = 9, PTS_TIME = PTS * timebase = 0.12
  • ...
  • frame=24, PTS = 72, PTS_TIME = PTS * timebase = 0.96
  • ...
  • frame=4064, PTS = 12192, PTS_TIME = PTS * timebase = 162.56
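The PTS arithmetic from these scenarios can be sketched in plain C (illustrative helper names, not the libav API; libav exposes the same idea through AVRational and av_q2d):

```c
/* A rational number, e.g. a frame rate of 60/1 or a timebase of 1/60000. */
typedef struct { int num, den; } rational;

/* How much PTS grows between frames: timescale / fps. */
int pts_step(rational fps, rational timebase)
{
    /* the timescale is the timebase denominator */
    return (timebase.den * fps.den) / (fps.num * timebase.num);
}

/* Presentation time in seconds: PTS * timebase. */
double pts_time(long pts, rational timebase)
{
    return pts * (double)timebase.num / timebase.den;
}
```

With fps=60/1 and timebase=1/60000 this gives a step of 1000, and with fps=25/1 and timebase=1/75 a step of 3, matching the tables above.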

Now, with the pts_time, we can find a way to render this synched with the audio pts_time or with a system clock. The FFmpeg libav provides this info through its API:

Just out of curiosity, the frames we saved were sent in DTS order (frames: 1,6,4,2,3,5) but played back in PTS order (frames: 1,2,3,4,5,6). Also, notice how cheap B-Frames are in comparison to P-Frames or I-Frames.

LOG: AVStream->r_frame_rate 60/1
LOG: AVStream->time_base 1/60000
...
LOG: Frame 1 (type=I, size=153797 bytes) pts 6000 key_frame 1 [DTS 0]
LOG: Frame 2 (type=B, size=8117 bytes) pts 7000 key_frame 0 [DTS 3]
LOG: Frame 3 (type=B, size=8226 bytes) pts 8000 key_frame 0 [DTS 4]
LOG: Frame 4 (type=B, size=17699 bytes) pts 9000 key_frame 0 [DTS 2]
LOG: Frame 5 (type=B, size=6253 bytes) pts 10000 key_frame 0 [DTS 5]
LOG: Frame 6 (type=P, size=34992 bytes) pts 11000 key_frame 0 [DTS 1]

Chapter 2 - remuxing

Remuxing is the act of changing from one format (container) to another; for instance, we can change an MPEG-4 video to an MPEG-TS one without much pain using FFmpeg:

ffmpeg -i input.mp4 -c copy output.ts

It'll demux the mp4, but it won't decode or encode it (-c copy), and in the end it'll mux it into a mpegts file. If you don't provide the format with -f, ffmpeg will try to guess it based on the file's extension.

The general usage of FFmpeg or the libav follows a pattern/architecture or workflow:

  • protocol layer - it accepts an input (a file for instance, but it could be an rtmp or HTTP input as well)
  • format layer - it demuxes its content, revealing mostly metadata and its streams
  • codec layer - it decodes its compressed stream data (optional)
  • pixel layer - it can also apply some filters to the raw frames (like resizing) (optional)
  • and then it does the reverse path
  • codec layer - it encodes (or re-encodes or even transcodes) the raw frames (optional)
  • format layer - it muxes (or remuxes) the raw streams (the compressed data)
  • protocol layer - and finally the muxed data is sent to an output (another file or maybe a network remote server)

ffmpeg libav workflow

This graph is strongly inspired by Leixiaohua's and Slhck's works.

Now let's code an example using libav to provide the same effect as in ffmpeg -i input.mp4 -c copy output.ts.

We're going to read from an input (input_format_context) and change it to another output (output_format_context).

AVFormatContext *input_format_context = NULL;
AVFormatContext *output_format_context = NULL;

We start with the usual steps: allocating memory and opening the input format. For this specific case, we're going to open an input file and allocate memory for an output file.

if ((ret = avformat_open_input(&input_format_context, in_filename, NULL, NULL)) < 0) {
  fprintf(stderr, "Could not open input file '%s'", in_filename);
  goto end;
}
if ((ret = avformat_find_stream_info(input_format_context, NULL)) < 0) {
  fprintf(stderr, "Failed to retrieve input stream information");
  goto end;
}

avformat_alloc_output_context2(&output_format_context, NULL, NULL, out_filename);
if (!output_format_context) {
  fprintf(stderr, "Could not create output context\n");
  ret = AVERROR_UNKNOWN;
  goto end;
}

We're going to remux only the video, audio and subtitle types of streams, so we hold the streams we'll be using in an array of indexes.

number_of_streams = input_format_context->nb_streams;
streams_list = av_mallocz_array(number_of_streams, sizeof(*streams_list));

Just after we allocated the required memory, we're going to loop through all the streams, and for each one we need to create a new output stream in our output format context, using the avformat_new_stream function. Notice that we mark all the streams that aren't video, audio or subtitle so we can skip them later.

for (i = 0; i < input_format_context->nb_streams; i++) {
  AVStream *out_stream;
  AVStream *in_stream = input_format_context->streams[i];
  AVCodecParameters *in_codecpar = in_stream->codecpar;
  if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
      in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
      in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
    streams_list[i] = -1;
    continue;
  }
  streams_list[i] = stream_index++;
  out_stream = avformat_new_stream(output_format_context, NULL);
  if (!out_stream) {
    fprintf(stderr, "Failed allocating output stream\n");
    ret = AVERROR_UNKNOWN;
    goto end;
  }
  ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
  if (ret < 0) {
    fprintf(stderr, "Failed to copy codec parameters\n");
    goto end;
  }
}

Now we can create the output file.

if (!(output_format_context->oformat->flags & AVFMT_NOFILE)) {
  ret = avio_open(&output_format_context->pb, out_filename, AVIO_FLAG_WRITE);
  if (ret < 0) {
    fprintf(stderr, "Could not open output file '%s'", out_filename);
    goto end;
  }
}

ret = avformat_write_header(output_format_context, NULL);
if (ret < 0) {
  fprintf(stderr, "Error occurred when opening output file\n");
  goto end;
}

After that, we can copy the streams, packet by packet, from our input to our output streams. We'll loop while there are packets (av_read_frame); for each packet we need to re-calculate the PTS and DTS, then finally write it (av_interleaved_write_frame) to our output format context.

while (1) {
  AVStream *in_stream, *out_stream;
  ret = av_read_frame(input_format_context, &packet);
  if (ret < 0)
    break;
  in_stream  = input_format_context->streams[packet.stream_index];
  if (packet.stream_index >= number_of_streams || streams_list[packet.stream_index] < 0) {
    av_packet_unref(&packet);
    continue;
  }
  packet.stream_index = streams_list[packet.stream_index];
  out_stream = output_format_context->streams[packet.stream_index];
  /* copy packet */
  packet.pts = av_rescale_q_rnd(packet.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
  packet.dts = av_rescale_q_rnd(packet.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
  packet.duration = av_rescale_q(packet.duration, in_stream->time_base, out_stream->time_base);
  // https://ffmpeg.org/doxygen/trunk/structAVPacket.html#ab5793d8195cf4789dfb3913b7a693903
  packet.pos = -1;

  //https://ffmpeg.org/doxygen/trunk/group__lavf__encoding.html#ga37352ed2c63493c38219d935e71db6c1
  ret = av_interleaved_write_frame(output_format_context, &packet);
  if (ret < 0) {
    fprintf(stderr, "Error muxing packet\n");
    break;
  }
  av_packet_unref(&packet);
}
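The rescaling that av_rescale_q / av_rescale_q_rnd perform is, conceptually, just a change of units from the input stream's timebase to the output stream's. A plain C sketch of the core arithmetic (the real functions add overflow protection and the rounding modes passed above):

```c
#include <stdint.h>

/* A timebase as a rational number, mirroring libav's AVRational. */
typedef struct { int num, den; } rational_t;

/* Change a timestamp from one timebase to another:
 * ts_out = ts_in * (tb_in / tb_out) */
int64_t rescale_ts(int64_t ts, rational_t tb_in, rational_t tb_out)
{
    return ts * tb_in.num * tb_out.den / ((int64_t)tb_in.den * tb_out.num);
}
```

For example, a PTS of 1000 in a 1/60000 timebase becomes 1500 in MPEG-TS's 1/90000 timebase: the presentation instant is the same, only the unit changed.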

To finalize, we need to write the stream trailer to the output media file with the av_write_trailer function.

av_write_trailer(output_format_context);

Now we're ready to test it, and the first test will be a format (video container) conversion from an MP4 to an MPEG-TS video file. We're basically replicating the command line ffmpeg -i input.mp4 -c copy output.ts with libav.

make run_remuxing_ts

It's working!!! Don't you trust me?! You shouldn't; we can check it with ffprobe:

ffprobe -i remuxed_small_bunny_1080p_60fps.ts

Input #0, mpegts, from 'remuxed_small_bunny_1080p_60fps.ts':
  Duration: 00:00:10.03, start: 0.000000, bitrate: 2751 kb/s
  Program 1
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 60 fps, 60 tbr, 90k tbn, 120 tbc
    Stream #0:1[0x101]: Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 320 kb/s

To sum up what we did here in a graph, we can revisit our initial idea about how libav works, but show that we skipped the codec part.

remuxing libav components

Before we end this chapter I'd like to show an important part of the remuxing process: you can pass options to the muxer. Let's say we want to deliver the MPEG-DASH format; for that we need to use fragmented mp4 (sometimes referred to as fmp4) instead of MPEG-TS or plain MPEG-4.

With the command line we can do that easily.

ffmpeg -i non_fragmented.mp4 -movflags frag_keyframe+empty_moov+default_base_moof fragmented.mp4

The libav version is almost as easy as the command line: we just need to pass the options when writing the output header, just before the packet copy.

AVDictionary* opts = NULL;
av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov+default_base_moof", 0);
ret = avformat_write_header(output_format_context, &opts);

We now can generate this fragmented mp4 file:

make run_remuxing_fragmented_mp4

But to make sure that I'm not lying to you, you can use the amazing site/tool gpac/mp4box.js or the site http://mp4parser.com/ to see the differences; first load up the "common" mp4.

mp4 boxes

As you can see, it has a single mdat atom/box; this is the place where the video and audio frames are. Now load the fragmented mp4 to see how it spreads the mdat boxes.

fragmented mp4 boxes

Chapter 3 - transcoding

TLDR; show me the code and execution.

$ make run_transcoding

We'll skip some details, but don't worry: the source code is available at github.

In this chapter, we're going to create a minimalist transcoder, written in C, that can convert videos coded in H264 to H265 using the FFmpeg/libav libraries, specifically libavcodec, libavformat, and libavutil.

media transcoding flow

Just a quick recap: The AVFormatContext is the abstraction for the format of the media file, aka container (ex: MKV, MP4, Webm, TS). The AVStream represents each type of data for a given format (ex: audio, video, subtitle, metadata). The AVPacket is a slice of compressed data obtained from the AVStream that can be decoded by an AVCodec (ex: av1, h264, vp9, hevc), generating raw data called an AVFrame.

Transmuxing

Let's start with the simple transmuxing operation and then we can build upon this code. The first step is to load the input file.

// Allocate an AVFormatContext
avfc = avformat_alloc_context();
// Open an input stream and read the header.
avformat_open_input(&avfc, in_filename, NULL, NULL);
// Read packets of a media file to get stream information.
avformat_find_stream_info(avfc, NULL);

Now we're going to set up the decoder, the AVFormatContext will give us access to all the AVStream components and for each one of them, we can get their AVCodec and create the particular AVCodecContext and finally we can open the given codec so we can proceed to the decoding process.

The AVCodecContext holds data about media configuration such as bit rate, frame rate, sample rate, channels, height, and many others.

for (int i = 0; i < avfc->nb_streams; i++)
{
  AVStream *avs = avfc->streams[i];
  AVCodec *avc = avcodec_find_decoder(avs->codecpar->codec_id);
  AVCodecContext *avcc = avcodec_alloc_context3(avc);
  avcodec_parameters_to_context(avcc, avs->codecpar);
  avcodec_open2(avcc, avc, NULL);
}

We need to prepare the output media file for transmuxing as well: we first allocate memory for the output AVFormatContext, then create each stream in the output format. In order to pack the stream properly, we copy the codec parameters from the decoder.

We set the flag AV_CODEC_FLAG_GLOBAL_HEADER, which tells the encoder that it can use global headers, and finally we open the output file for writing and persist the headers.

avformat_alloc_output_context2(&encoder_avfc, NULL, NULL, out_filename);

AVStream *avs = avformat_new_stream(encoder_avfc, NULL);
avcodec_parameters_copy(avs->codecpar, decoder_avs->codecpar);

if (encoder_avfc->oformat->flags & AVFMT_GLOBALHEADER)
  encoder_avfc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

avio_open(&encoder_avfc->pb, out_filename, AVIO_FLAG_WRITE);
avformat_write_header(encoder_avfc, &muxer_opts);

We get the AVPackets from the decoder, adjust the timestamps, and write the packets properly to the output file. Even though the function av_interleaved_write_frame says "write frame", we are storing the packet. We finish the transmuxing process by writing the stream trailer to the file.

AVPacket *input_packet = av_packet_alloc();

while (av_read_frame(decoder_avfc, input_packet) >= 0)
{
  av_packet_rescale_ts(input_packet, decoder_video_avs->time_base, encoder_video_avs->time_base);
  av_interleaved_write_frame(encoder_avfc, input_packet);
  av_packet_unref(input_packet);
}

av_write_trailer(encoder_avfc);

Transcoding

The previous section showed a simple transmuxer program, now we're going to add the capability to encode files, specifically we're going to enable it to transcode videos from h264 to h265.

After we prepared the decoder but before we arrange the output media file we're going to set up the encoder.

AVRational input_framerate = av_guess_frame_rate(decoder_avfc, decoder_video_avs, NULL);
AVStream *video_avs = avformat_new_stream(encoder_avfc, NULL);

char *codec_name = "libx265";
char *codec_priv_key = "x265-params";
// we're going to use internal options for the x265;
// they disable the scene change detection and fix the
// GOP at 60 frames.
char *codec_priv_value = "keyint=60:min-keyint=60:scenecut=0";

AVCodec *video_avc = avcodec_find_encoder_by_name(codec_name);
AVCodecContext *video_avcc = avcodec_alloc_context3(video_avc);
// encoder codec params
av_opt_set(video_avcc->priv_data, codec_priv_key, codec_priv_value, 0);
video_avcc->height = decoder_video_avcc->height;
video_avcc->width = decoder_video_avcc->width;
video_avcc->pix_fmt = video_avc->pix_fmts[0];
// control rate
video_avcc->bit_rate = 2 * 1000 * 1000;
video_avcc->rc_buffer_size = 4 * 1000 * 1000;
video_avcc->rc_max_rate = 2 * 1000 * 1000;
video_avcc->rc_min_rate = 2 * 1000 * 1000;
// time base
video_avcc->time_base = av_inv_q(input_framerate);
video_avs->time_base = video_avcc->time_base;

avcodec_open2(video_avcc, video_avc, NULL);
avcodec_parameters_from_context(video_avs->codecpar, video_avcc);

We need to expand our decoding loop for the video stream transcoding:

AVFrame *input_frame = av_frame_alloc();
AVPacket *input_packet = av_packet_alloc();

while (av_read_frame(decoder_avfc, input_packet) >= 0)
{
  int response = avcodec_send_packet(decoder_video_avcc, input_packet);
  while (response >= 0) {
    response = avcodec_receive_frame(decoder_video_avcc, input_frame);
    if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
      break;
    } else if (response < 0) {
      return response;
    }
    encode(encoder_avfc, decoder_video_avs, encoder_video_avs, encoder_video_avcc, input_frame, input_packet->stream_index);
    av_frame_unref(input_frame);
  }
  av_packet_unref(input_packet);
}
av_write_trailer(encoder_avfc);

// used function
int encode(AVFormatContext *avfc, AVStream *dec_video_avs, AVStream *enc_video_avs, AVCodecContext *video_avcc, AVFrame *input_frame, int index) {
  AVPacket *output_packet = av_packet_alloc();
  int response = avcodec_send_frame(video_avcc, input_frame);

  while (response >= 0) {
    response = avcodec_receive_packet(video_avcc, output_packet);
    if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
      break;
    } else if (response < 0) {
      return -1;
    }

    output_packet->stream_index = index;
    output_packet->duration = enc_video_avs->time_base.den / enc_video_avs->time_base.num / dec_video_avs->avg_frame_rate.num * dec_video_avs->avg_frame_rate.den;

    av_packet_rescale_ts(output_packet, dec_video_avs->time_base, enc_video_avs->time_base);
    response = av_interleaved_write_frame(avfc, output_packet);
  }
  av_packet_unref(output_packet);
  av_packet_free(&output_packet);
  return 0;
}

We converted the media stream from h264 to h265; as expected, the h265 version of the media file is smaller than the h264 one. The program we created is also capable of the following conversions:

  /*
   * H264 -> H265
   * Audio -> remuxed (untouched)
   * MP4 - MP4
   */
  StreamingParams sp = {0};
  sp.copy_audio = 1;
  sp.copy_video = 0;
  sp.video_codec = "libx265";
  sp.codec_priv_key = "x265-params";
  sp.codec_priv_value = "keyint=60:min-keyint=60:scenecut=0";

  /*
   * H264 -> H264 (fixed gop)
   * Audio -> remuxed (untouched)
   * MP4 - MP4
   */
  StreamingParams sp = {0};
  sp.copy_audio = 1;
  sp.copy_video = 0;
  sp.video_codec = "libx264";
  sp.codec_priv_key = "x264-params";
  sp.codec_priv_value = "keyint=60:min-keyint=60:scenecut=0:force-cfr=1";

  /*
   * H264 -> H264 (fixed gop)
   * Audio -> remuxed (untouched)
   * MP4 - fragmented MP4
   */
  StreamingParams sp = {0};
  sp.copy_audio = 1;
  sp.copy_video = 0;
  sp.video_codec = "libx264";
  sp.codec_priv_key = "x264-params";
  sp.codec_priv_value = "keyint=60:min-keyint=60:scenecut=0:force-cfr=1";
  sp.muxer_opt_key = "movflags";
  sp.muxer_opt_value = "frag_keyframe+empty_moov+delay_moov+default_base_moof";

  /*
   * H264 -> H264 (fixed gop)
   * Audio -> AAC
   * MP4 - MPEG-TS
   */
  StreamingParams sp = {0};
  sp.copy_audio = 0;
  sp.copy_video = 0;
  sp.video_codec = "libx264";
  sp.codec_priv_key = "x264-params";
  sp.codec_priv_value = "keyint=60:min-keyint=60:scenecut=0:force-cfr=1";
  sp.audio_codec = "aac";
  sp.output_extension = ".ts";

  /* WIP :P  -> it's not playing on VLC, the final bit rate is huge
   * H264 -> VP9
   * Audio -> Vorbis
   * MP4 - WebM
   */
  //StreamingParams sp = {0};
  //sp.copy_audio = 0;
  //sp.copy_video = 0;
  //sp.video_codec = "libvpx-vp9";
  //sp.audio_codec = "libvorbis";
  //sp.output_extension = ".webm";

Now, to be honest, this was harder than I thought it'd be, and I had to dig into the FFmpeg command line source code and test it a lot. I think I'm missing something, because I had to enforce force-cfr for the h264 output to work, and I'm still seeing some warning messages such as (forced frame type (5) at 80 was changed to frame type (3)).

ffmpeg-libav-tutorial's People

Contributors

alexlopatin, anacarolinacastro, anebot, anthonytex, danielfm, developer0hye, fransm, jonasdlindner, juandebravo, juniorxsound, jwilk, ldm0, leandromoreira, lightingbro46, lucasrodcosta, madphysicist, maidamai0, oguzhanmeteozturk, oneroyalace, rayworks, sangwonl, simonista, skpg-tech, smilingwolf, sovea, tinti, toton95, uricorn, viettel-solutions, zhiburt


ffmpeg-libav-tutorial's Issues

Correctness nitpick: H264 + AAC not muxed by WEBM

While not relevant to explain concepts, I think correctness should still be thought about.
In the Transmuxing section the image depicts h264 + aac in a WEBM file. In reality this does not work as WEBM only supports VP8/VP9 and Vorbis/Opus. A container that can also mux H264 and AAC that is not MP4 would be MKV, which is also widely adopted.

Bad link in README

In the README there is a line that uses wget to download a large mp4 file.
wget -O bunny_1080p_60fps.mp4 http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4
This command results in a 403 forbidden error.

wget -O bunny_1080p_60fps.mp4 https://mirror.clarkson.edu/blender/demo/movies/BBB/bbb_sunflower_1080p_60fps_normal.mp4
is an alternative mirror to the same file.

hello_world with other files (unknown codec)

I used the hello executable on a recorded transport stream file.
It didn't work since the stream also contained EPG program guide data for which no codec was found.
As only the video stream is relevant for the hello world example, it seems OK just to ignore streams for which no codec is found.
I've submitted #78 for this

Error : Invalid data found when processing input

I tried the code but was getting following error with every input file
: "Invalid data found when processing input".

It seems to work only after I call av_register_all(). How is the code working for you without registering all the codecs?

`[mp4 @ 000002512f668d80] Invalid packet stream index: 2`

hi,

first of all, awesome tutorial. Anyway, I get an error when I run the remuxing sample on my side (I use C++): [mp4 @ 000002512f668d80] Invalid packet stream index: 2. I researched it and saw that the bunny sample from you only has 2 streams [0, 1], so the 3rd and 4th will be NULL, but this causes the error on my side.

Have you ever gotten an error like this? If so, can you tell me how I can prevent it?

thanks in advance

Niek Tuytel

Live streaming to Web Browser or Qt desktop app (Widget or QML)

First thing: thank you for your awesome work. My purpose is to study various streaming techniques, bandwidth usage, recovery over poor connections, CPU utilization, etc.

I have a ROS.org project where I am reading raw frames from a few PoE cameras. What I would like to do is implement a simple TCP server that could accept connections from a web browser (or a desktop Qt app) to view remote video. I have extensive experience with network programming.

What I am missing, and hoping someone can help me understand or point me to the correct source, is how to format the streams so that any modern browser can display them. What are the best and most common codecs used? I do not care about sound with the video, so this might be an option to disable to save some bandwidth.

Thank you all for any input and suggestions

LOG: ERROR could not open the file

Hi,

I have compiled your code and tried to open the example mp4 file (bbb_sunflower_1080p_60fps_normal.mp4), but it shows the error below.

LOG: initializing all the containers, codecs and protocols.
LOG: opening the input file (bbb_sunflower_1080p_60fps_normal.mp4) and loading format (container) header
LOG: ERROR could not open the file.

I tried with a different file but no success. Is it a problem with the new send/receive API?

Regards,
Venkateswaran

Cannot compile, using ffmpeg from sources

Hello, I'm using Ubuntu 18.04, did a clean install in a virtual machine. I downloaded libvpx from sources, compiled and installed without any problems.
Then I downloaded ffmpeg source (from git, from git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg), and compiled it (passing --enable-libvpx to it), make compiled without any issues, and did a make install.

When I try to make the 0_hello_world.c, make gives me the following error:
(you can check the output with a better looking text at https://pastebin.com/wPKY99ue )

mkdir build
gcc -g -Wall -o build/hello_world -lavformat -lavcodec -lswscale -lavutil -lz 0_hello_world.c
&& ./build/hello_world small_bunny_1080p_60fps.mp4
0_hello_world.c: In function ‘main’:
0_hello_world.c:35:3: warning: ‘av_register_all’ is deprecated [-Wdeprecated-declarations]
av_register_all();
^~~~~~~~~~~~~~~
In file included from 0_hello_world.c:15:0:
/usr/local/include/libavformat/avformat.h:2033:6: note: declared here
void av_register_all(void);
^~~~~~~~~~~~~~~
/tmp/ccbsfyh8.o: In function 'av_make_error_string':
/usr/local/include/libavutil/error.h:111: undefined reference to 'av_strerror'
/tmp/ccbsfyh8.o: In function 'main':
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:35: undefined reference to 'av_register_all'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:40: undefined reference to 'avformat_alloc_context'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:54: undefined reference to 'avformat_open_input'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:73: undefined reference to 'avformat_find_stream_info'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:103: undefined reference to 'avcodec_find_decoder'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:125: undefined reference to 'avcodec_alloc_context3'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:134: undefined reference to 'avcodec_parameters_to_context'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:142: undefined reference to 'avcodec_open2'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:149: undefined reference to 'av_frame_alloc'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:156: undefined reference to 'av_packet_alloc'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:180: undefined reference to 'av_packet_unref'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:168: undefined reference to 'av_read_frame'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:185: undefined reference to 'avformat_close_input'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:186: undefined reference to 'avformat_free_context'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:187: undefined reference to 'av_packet_free'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:188: undefined reference to 'av_frame_free'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:189: undefined reference to 'avcodec_free_context'
/tmp/ccbsfyh8.o: In function 'decode_packet':
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:207: undefined reference to 'avcodec_send_packet'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:218: undefined reference to 'avcodec_receive_frame'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:230: undefined reference to 'av_get_picture_type_char'
/home/xultz/libav/ffmpeg-libav-tutorial/0_hello_world.c:242: undefined reference to 'av_frame_unref'
collect2: error: ld returned 1 exit status
Makefile:4: recipe for target 'hello_world' failed
make: *** [hello_world] Error 1

I took a look at pkg-config outputs from --libs and --cflags for some libs (lib libavformat, libavutil, etc) and I changed the gcc line on the Makefile to the following, but I couldn't see any changes on the make output:
gcc -g -Wall -I/usr/local/include -o build/hello_world -L/usr/local/lib -lavformat -lavcodec -lswscale -lavutil -lz -lvpx -lm -lswresample -pthread 0_hello_world.c

What else can I do in order to compile the code?

Thank you in advance for any help!

SDL extension to this tutorial

Hello Leandro,

Thanks so much for this repository! This account and your LinkedIn feed has contributed immensely for most of my knowledge about Video processing.

So as I understand, this tutorial was born out of the fact that the other site, http://dranger.com/ffmpeg/, was deprecated. And since that tutorial ends with an explanation of the video playback side, involving a coarser version of ffplay built on libav and SDL, do you intend to add the same to this tutorial?

It would be really helpful for understanding the video player end of the processing pipeline. It would also help us learn how to leverage the ffmpeg libraries to build a custom player along similar lines as ffplay, mplayer, etc.

Thanks again,
Sanmathi

Library parameters on gcc invocation

Hi,

thanks for the tutorial, I'm enjoying it.

Just wanted to remark that I'm compiling the code on Ubuntu 16.04 and I couldn't get the compilation to work until I modified the gcc line to have all the library parameters after the source file, like below:

gcc -g -Wall -o build/hello_world 0_hello_world.c -lavformat -lavcodec -lswscale -lavutil -lz

Not sure what this means, I'm a total beginner on both C and ffmpeg.

Gianluca

Compile error

It's a great tutorial !

But when I run the command 'make run_hello', I get the error message below:

docker run -w /files --rm -it  -v `pwd`:/files leandromoreira/ffmpeg-devel \
  gcc -L/opt/ffmpeg/lib -I/opt/ffmpeg/include/ /files/0_hello_world.c \
  -lavcodec -lavformat -lavfilter -lavdevice -lswresample -lswscale -lavutil \
  -o /files/build/hello
docker run -w /files --rm -it -v `pwd`:/files leandromoreira/ffmpeg-devel /files/build/hello /files/small_bunny_1080p_60fps.mp4
LOG: initializing all the containers, codecs and protocols.
LOG: opening the input file (/files/small_bunny_1080p_60fps.mp4) and loading format (container) header
LOG: ERROR could not open the file
Makefile:11: recipe for target 'run_hello' failed
make: *** [run_hello] Error 255

Where did I go wrong?

Incorrect shell syntax in README.md

The following syntax is consistently used in README.md:

part 1 \ # comment
part 2

However, it does not work, try it:

echo 1 \ # comment
2

2 is treated as a separate command.
The reason is that the backslash doesn't actually mean "merge with the next line", it means "ignore the meaning of the next character". So if a newline separator is preceded by a backslash, the special meaning of the separator -- that is, to separate commands -- is ignored. But here the backslash precedes a space, not a newline, and so the newline has its old meaning and each line is treated as a separate command.

make fetch_small_bunny_video not working

I tried
make fetch_small_bunny_video
but it fails. This is on ubuntu 20.04.

First issue was that apparently my default command processor is dash, not bash. I fixed this by adding
#!/bin/bash
at the beginning of ./fetch_bbb_video.sh

Next I got a not authorized from distribution.bbb3d.renderfarming.net
Here google helped me by giving me a local mirror from which I could fetch it. Probably the url needs updating.
Or maybe make the small_bunny_1080p_60fps.mp4 available somewhere (it is only 3.2 M)

BTW: nice tutorial; thank you

Change pCodecParameters to pLocalCodecParameters in README.md

Hi, I noticed this line of code in README.md

printf("\tCodec %s ID %d bit_rate %lld", pLocalCodec->long_name, pLocalCodec->id, pCodecParameters->bit_rate);

in which pLocalCodec is not declared in the article before it is used, which might cause confusion in my view. Digging into the C code, I found this line.

pCodecParameters = pLocalCodecParameters;

As a guess, maybe the author first wrote the C code and copy-and-pasted these important parts into the README, which led to this issue. By the way, I think adding \n after the printfs would make the output look better. Hope that's useful.

UPDATE
This line of code has the same problem:

AVCodecContext *pCodecContext = avcodec_alloc_context3(pCodec);

Maybe it will be better to replace pCodec with pLocalCodec.

Series of Images From Video

Hi, it's not really an issue, but I want to know how we can extract images from a video, like the horizontal list of thumbnails that appears in editing apps. I did a little testing on frames, but how can we know how many images a video contains? The number of frames in a 1-minute video is more than 1500, so can you guide me?

set framerate in RTSP

Hi dear @leandromoreira,
Thanks for your great documentation about ffmpeg.
I get stuck a problem with this library and I can't take it over.
I'm developing an application in C++ with the ffmpeg library to capture IP camera frames; the protocol is RTSP.
Everything goes well when the resolution of the cameras is low, but when I change to a higher resolution, the memory consumption suddenly goes up.
So I need your help to solve it.
1. Is there any way to reduce the framerate or fps when I read the camera frames? I need only the last frame at the time of reading; the previous frames don't matter.
2. Is it possible to reduce the frame's width and height before calling av_read_frame? I mean, can I set a parameter in AVFormatContext or AVCodecContext by which ffmpeg reads the frame at a specific size, for example 400x400?

I will be very very thankful if you help me.

Compiling errors

Hello. Great tutorial!
I am not able to compile it on Ubuntu 16.04. I am getting a lot of unknown-type-name errors, probably due to different versions of libavcodec. I have tried downloading it from the Ubuntu repositories and from jonathonf's PPA, neither of which compiled successfully. I'm going to attach the error messages:

0_hello_world.c: In function ‘main’:
0_hello_world.c:84:3: error: unknown type name ‘AVCodecParameters’
   AVCodecParameters *pCodecParameters =  NULL;
   ^
0_hello_world.c:90:5: error: unknown type name ‘AVCodecParameters’
     AVCodecParameters *pLocalCodecParameters =  NULL;
     ^
0_hello_world.c:91:55: error: ‘AVStream {aka struct AVStream}’ has no member named ‘codecpar’
     pLocalCodecParameters = pFormatContext->streams[i]->codecpar;
                                                       ^
0_hello_world.c:103:61: error: request for member ‘codec_id’ in something not a structure or union
     pLocalCodec = avcodec_find_decoder(pLocalCodecParameters->codec_id);
                                                             ^
0_hello_world.c:111:30: error: request for member ‘codec_type’ in something not a structure or union
     if (pLocalCodecParameters->codec_type == AVMEDIA_TYPE_VIDEO) {
                              ^
0_hello_world.c:116:71: error: request for member ‘width’ in something not a structure or union
       logging("Video Codec: resolution %d x %d", pLocalCodecParameters->width, pLocalCodecParameters->height);
                                                                       ^
0_hello_world.c:116:101: error: request for member ‘height’ in something not a structure or union
       logging("Video Codec: resolution %d x %d", pLocalCodecParameters->width, pLocalCodecParameters->height);
                                                                                                     ^
0_hello_world.c:117:37: error: request for member ‘codec_type’ in something not a structure or union
     } else if (pLocalCodecParameters->codec_type == AVMEDIA_TYPE_AUDIO) {
                                     ^
0_hello_world.c:118:80: error: request for member ‘channels’ in something not a structure or union
       logging("Audio Codec: %d channels, sample rate %d", pLocalCodecParameters->channels, pLocalCodecParameters->sample_rate);
                                                                                ^
0_hello_world.c:118:113: error: request for member ‘sample_rate’ in something not a structure or union
       logging("Audio Codec: %d channels, sample rate %d", pLocalCodecParameters->channels, pLocalCodecParameters->sample_rate);
                                                                                                                 ^
0_hello_world.c:122:104: error: request for member ‘bit_rate’ in something not a structure or union
     logging("\tCodec %s ID %d bit_rate %lld", pLocalCodec->long_name, pLocalCodec->id, pCodecParameters->bit_rate);
                                                                                                        ^
0_hello_world.c:134:7: warning: implicit declaration of function ‘avcodec_parameters_to_context’ [-Wimplicit-function-declaration]
   if (avcodec_parameters_to_context(pCodecContext, pCodecParameters) < 0)
       ^
0_hello_world.c:156:23: warning: implicit declaration of function ‘av_packet_alloc’ [-Wimplicit-function-declaration]
   AVPacket *pPacket = av_packet_alloc();
                       ^
0_hello_world.c:156:23: warning: initialization makes pointer from integer without a cast [-Wint-conversion]
0_hello_world.c:187:3: warning: implicit declaration of function ‘av_packet_free’ [-Wimplicit-function-declaration]
   av_packet_free(&pPacket);
   ^
0_hello_world.c: In function ‘decode_packet’:
0_hello_world.c:207:18: warning: implicit declaration of function ‘avcodec_send_packet’ [-Wimplicit-function-declaration]
   int response = avcodec_send_packet(pCodecContext, pPacket);
                  ^
0_hello_world.c:218:16: warning: implicit declaration of function ‘avcodec_receive_frame’ [-Wimplicit-function-declaration]
     response = avcodec_receive_frame(pCodecContext, pFrame);
                ^
Makefile:4: recipe for target 'hello_world' failed
make: *** [hello_world] Error 1

Thank you in advanced
Oier

Cannot build docker image

I clone the repository and do
docker build . -t ffmpeg-tutorial

here is the error it gives me:

Setting up libssl-dev:amd64 (1.0.2g-1ubuntu4.16) ...
Setting up libtool (2.4.6-0.1) ...
Setting up nasm (2.11.08-1ubuntu0.1) ...
Setting up pkg-config (0.29.1-0ubuntu1) ...
Setting up yasm (1.3.0-2) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Removing intermediate container 1903411f7f23
---> fd08780e9d53
Step 20/43 : RUN DIR=/tmp/opencore-amr && mkdir -p ${DIR} && cd ${DIR} && curl -sL https://sourceforge.net/projects/opencore-amr/files/opencore-amr/opencore-amr-${OPENCOREAMR_VERSION}.tar.gz | tar -zx --strip-components=1 && ./configure --prefix="${PREFIX}" --enable-shared && make && make install && rm -rf ${DIR}
---> Running in bd057e87ca3d

gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command '/bin/sh -c DIR=/tmp/opencore-amr && mkdir -p ${DIR} && cd ${DIR} && curl -sL https://sourceforge.net/projects/opencore-amr/files/opencore-amr/opencore-amr-${OPENCOREAMR_VERSION}.tar.gz | tar -zx --strip-components=1 && ./configure --prefix="${PREFIX}" --enable-shared && make && make install && rm -rf ${DIR}' returned a non-zero code: 2

edit: it doesn't look like this is meant to be used this way; now I see that the make command uses it to build the source.

hello_world program

How would I convert a Y,Cr,Cb frame to RGB to store in a ppm file, instead of storing just the Y component?

Code problems

  1. calloc not malloc in your initialise of contexts
  2. encode_frame ONLY called once after the file has been processed.
  3. Crash at line 269 with an invalid index

cannot make

I use centos7. I use yum installed the following pakages:

sudo yum install ffmpeg.x86_64 ffmpeg-debuginfo.x86_64 ffmpeg-devel.x86_64 ffmpeg-libs.x86_64

and I can run ffmpeg.
But when I make your 0_hello_world.c, it shows me the following error:

gcc -g -Wall -o build/hello_world -lavformat -lavcodec -lswscale -lz 0_hello_world.c
&& ./build/hello_world small_bunny_1080p_60fps.mp4
0_hello_world.c:14:32: fatal error: libavcodec/avcodec.h: No such file or directory

So what environment do I need to prepare before I run your make?

segmentation fault when use transcoding code

the detail error is below:

`Starting program: /home/projectDEMO/ffmpeg/ffmpeglib/doc/examples/./transcoding_3
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".

Program received signal SIGSEGV, Segmentation fault.
0x0000000000eb9388 in av_strstart (str=0x0, pfx=pfx@entry=0xf77548 "rtp:", ptr=ptr@entry=0x0) at libavutil/avstring.c:35
35 {
Missing separate debuginfos, use: debuginfo-install bzip2-libs-1.0.6-13.el7.x86_64 glibc-2.17-292.tl2.x86_64 libXau-1.0.8-2.1.el7.x86_64 libxcb-1.13-1.tl2.x86_64 xz-libs-5.2.2-1.tl2.x86_64`

can you help me solve this problem?
thanks!

make error

ffmpeg version 3.0.7-0ubuntu0.16.10.1 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 6.2.0 (Ubuntu 6.2.0-5ubuntu12) 20161005
configuration: --prefix=/usr --extra-version=0ubuntu0.16.10.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librubberband --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-chromaprint --enable-libx264
libavutil      55. 17.103 / 55. 17.103
libavcodec     57. 24.102 / 57. 24.102
libavformat    57. 25.100 / 57. 25.100
libavdevice    57.  0.101 / 57.  0.101
libavfilter     6. 31.100 /  6. 31.100
libavresample   3.  0.  0 /  3.  0.  0
libswscale      4.  0.100 /  4.  0.100
libswresample   2.  0.101 /  2.  0.101
libpostproc    54.  0.100 / 54.  0.100
z 0_hello_world.c
0_hello_world.c: In function ‘main’:
0_hello_world.c:84:3: error: unknown type name ‘AVCodecParameters’
   AVCodecParameters *pCodecParameters =  NULL;
   ^~~~~~~~~~~~~~~~~
0_hello_world.c:90:5: error: unknown type name ‘AVCodecParameters’
     AVCodecParameters *pLocalCodecParameters =  NULL;
     ^~~~~~~~~~~~~~~~~
0_hello_world.c:91:55: error: ‘AVStream {aka struct AVStream}’ has no member named ‘codecpar’; did you mean ‘codec’?
     pLocalCodecParameters = pFormatContext->streams[i]->codecpar;
                                                       ^~
0_hello_world.c:103:61: error: request for member ‘codec_id’ in something not a structure or union
     pLocalCodec = avcodec_find_decoder(pLocalCodecParameters->codec_id);
                                                             ^~
0_hello_world.c:111:30: error: request for member ‘codec_type’ in something not a structure or union
     if (pLocalCodecParameters->codec_type == AVMEDIA_TYPE_VIDEO) {
                              ^~
0_hello_world.c:116:71: error: request for member ‘width’ in something not a structure or union
       logging("Video Codec: resolution %d x %d", pLocalCodecParameters->width, pLocalCodecParameters->height);
                                                                       ^~
0_hello_world.c:116:101: error: request for member ‘height’ in something not a structure or union
       logging("Video Codec: resolution %d x %d", pLocalCodecParameters->width, pLocalCodecParameters->height);
                                                                                                     ^~
0_hello_world.c:117:37: error: request for member ‘codec_type’ in something not a structure or union
     } else if (pLocalCodecParameters->codec_type == AVMEDIA_TYPE_AUDIO) {
                                     ^~
0_hello_world.c:118:80: error: request for member ‘channels’ in something not a structure or union
       logging("Audio Codec: %d channels, sample rate %d", pLocalCodecParameters->channels, pLocalCodecParameters->sample_rate);
                                                                                ^~
0_hello_world.c:118:113: error: request for member ‘sample_rate’ in something not a structure or union
       logging("Audio Codec: %d channels, sample rate %d", pLocalCodecParameters->channels, pLocalCodecParameters->sample_rate);
                                                                                                                 ^~
0_hello_world.c:122:104: error: request for member ‘bit_rate’ in something not a structure or union
     logging("\tCodec %s ID %d bit_rate %lld", pLocalCodec->long_name, pLocalCodec->id, pCodecParameters->bit_rate);
                                                                                                        ^~
0_hello_world.c:134:7: warning: implicit declaration of function ‘avcodec_parameters_to_context’ [-Wimplicit-function-declaration]
   if (avcodec_parameters_to_context(pCodecContext, pCodecParameters) < 0)
       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
0_hello_world.c: In function ‘decode_packet’:
0_hello_world.c:207:18: warning: implicit declaration of function ‘avcodec_send_packet’ [-Wimplicit-function-declaration]
   int response = avcodec_send_packet(pCodecContext, pPacket);
                  ^~~~~~~~~~~~~~~~~~~
0_hello_world.c:218:16: warning: implicit declaration of function ‘avcodec_receive_frame’ [-Wimplicit-function-declaration]
     response = avcodec_receive_frame(pCodecContext, pFrame);
                ^~~~~~~~~~~~~~~~~~~~~

avformat_open_input always returns "Protocol not found" rv=(-1330794744)

Hi, I am new to FFmpeg and I have read your tutorials; they have helped me a lot. I got your code working in Visual Studio 2015 and wanted to open a movie file named "mother1.mp4", which is in the same directory as my main.c, but I get the error described in the title.

int rv = avformat_open_input(&pFormatContext, "mother1.mp4", NULL, NULL);
if (rv != 0) {
  av_strerror(rv, errbuf, sizeof(errbuf));
  logging("ERROR could not open the file: %s", errbuf);
  return -1;
}
Any ideas? Thanks.

copy in ffmpeg c++

dear,

I have been stuck for almost a month now and still haven't found a solution. On the command line, the following command does what I want:

ffmpeg -i "video.mp4" -i "audio.mp4" -codec copy -shortest output.mp4

I want to add a newly generated video with no sound in it (created with OpenCV) to the original video. What do I need to do in C++? I have really been stuck on this for 4 weeks (and learned a lot of new things along the way).

I hope you can help me with this. It seems like it should be easy, but my headache is the -codec copy -shortest part: how can I get this to succeed with FFmpeg?

Thanks in advance if you want to help me with this.

Kind regards, Niek Tuytel

Question about demuxing DASH

Hi there @leandromoreira,
That tutorial is really impressive, but I've got some questions. I'm fairly new to C/C++ and to using FFmpeg as a library. I want to try to open a DASH stream with the following code:

struct representation {
    char *url_template;
    AVIOContext pb;
    AVIOContext *input;
    AVFormatContext *parent;
    AVFormatContext *ctx;
    AVPacket pkt;
    int rep_idx;
    int rep_count;
    int stream_index;

    enum AVMediaType type;
    char id[20];
    int bandwidth;
    AVRational framerate;
    AVStream *assoc_stream; /* demuxer stream associated with this representation */

    int n_fragments;
    struct fragment **fragments; /* VOD list of fragment for profile */

    int n_timelines;
    struct timeline **timelines;

    int64_t first_seq_no;
    int64_t last_seq_no;
    int64_t start_number; /* used in case when we have dynamic list of segment to know which segments are new one*/

    int64_t fragment_duration;
    int64_t fragment_timescale;

    int64_t presentation_timeoffset;

    int64_t cur_seq_no;
    int64_t cur_seg_offset;
    int64_t cur_seg_size;
    struct fragment *cur_seg;

    /* Currently active Media Initialization Section */
    struct fragment *init_section;
    uint8_t *init_sec_buf;
    uint32_t init_sec_buf_size;
    uint32_t init_sec_data_len;
    uint32_t init_sec_buf_read_offset;
    int64_t cur_timestamp;
    int is_restart_needed;
};

static int readFunction(void* opaque, uint8_t* buf, int buf_size) {
    int ret = 0;
    struct DASHContext *v = static_cast<DASHContext *>(opaque);
//    DASHContext *c = static_cast<DASHContext *>(v->parent->priv_data);

//    if (!v->input) {
//        free_fragment(&v->cur_seg);
//        v->cur_seg = get_current_fragment(v);
//        if (!v->cur_seg) {
//            ret = AVERROR_EOF;
//        }
//    }
    return 0;
}

int main(int argc, char *argv[]) {
    char filepath[] = "https://***/stream.mpd";
    representation d = {
    };

    unsigned char *buffer = nullptr;
    const std::shared_ptr<AVIOContext> avioContext(avio_alloc_context(buffer, 8192, 0, &d, &readFunction, nullptr, nullptr), &av_free);
    const auto avFormat = std::shared_ptr<AVFormatContext>(avformat_alloc_context(), &avformat_free_context);

    auto avFormatPtr = avFormat.get();
    avFormatPtr->iformat = av_find_input_format("dash");
    avFormat->pb = avioContext.get();
    auto iFormat = av_find_input_format("dash");
    if (avformat_open_input(&avFormatPtr, filepath, iFormat, NULL) != 0) {
        printf("Couldn't open input stream.\n");
        return -1;
    }
}

The biggest problem with opening it is that the DASH init segment and data segments are encrypted with AES-256. The default DASH demuxer implementation is not able to decrypt segments on the fly, so I'm aware that I probably need to rewrite it, but I have no idea how to properly begin reading that stream, because right now I'm getting a segfault in readFunction. Have you got any idea how I could begin reading from that stream? I've got the key for decrypting it.

container.png wrong order of codecs

In the image where you explain containers, Opus and VP9 are backwards. Opus should be the Audio Codec and VP9 should be the Video Codec.

Cheers!

how to call atadenoise function in ffmpeg

How can I use atadenoise from the source code by porting it into a video engine? Is there any example of calling the function "ff_atadenoise_init_x86" from "FFmpeg/libavfilter/atadenoise.h"?

Disagreement with FFmpeg example

I am looking through 0_hello_world.c, line 238. According to the example in the latest FFmpeg docs, http://www.ffmpeg.org/doxygen/trunk/decode_video_8c-example.html, you don't need to unref the frame here. It's quite possible that I am misunderstanding the import of the comment

/* the picture is allocated by the decoder. no need to
   free it */

in the FFmpeg version of decode().

Overall, awesome tutorial. I found it extremely helpful despite any impression my nitpicking may give to the contrary. I can't wait to see it grow in both content and popularity.

Empty audio packets

Hi there!

Thanks for putting this tutorial together. This help is extremely valuable!

I've adapted transcode.c to take video and audio input using avfoundation (specifically from a webcam and the built-in microphone). Instead of writing an output file, I also want to stream to an RTMP server with the audio converted from PCM to AAC.

The video packets are successfully received and transcoded. However, the audio packets appear to be empty: AVPacket->side_data is NULL and AVPacket->buf->data is an empty pointer. I think this is what causes avcodec_send_frame to crash with an empty AVFrame in encode_audio.

Any tips/help will be much appreciated!

ffmpeg -i <filename> -filter:v fps=fps=30 <filename>

dear,

The command in the title creates a new file with the proper frame rate. My question is: is this also possible with C++, and what do I have to use for it, re-muxing? Or should I rather be thinking about muxing?

Thanks in advance for helping. (It would be awesome if you could give some sample code :))

kind regards,

Niek

Warn of long download in example

I've got a week to go until my download limit is renewed, so I have about a gig left at the moment. I thought I would follow your example, but after starting the wget command I found the file was over 300 MB, so I had to stop it or I would not get through the week.

I suggest you put the size next to the files for download so we know what to expect.

tutorial1: av_read_frame from a concrete stream

As far as I can see, there's no mention that you need to check which stream a packet belongs to, so that you only decode packets from the video stream:

if (pPacket->stream_index == video_stream_index) {
  logging("AVPacket->pts %" PRId64, pPacket->pts);
  response = decode_packet(pPacket, pCodecContext, pFrame);
  if (response < 0)
    break;
  // stop it, otherwise we'll be saving hundreds of frames
  if (--how_many_packets_to_process <= 0) break;
}

Without that check, I was getting errors like these:

[h264 @ 0x56240387ff80] Invalid NAL unit size (554341485 > 384).
[h264 @ 0x56240387ff80] Error splitting the input into NAL units.

I am not sure whether mentioning it in the text would swamp it with specific complications, so I just wanted to point this out :)

And Yep I see this 😄

We'll skip some details, but don't worry: the source code is available at github.

I guess I just didn't work out a good approach here and I am relying only on the README.md, but I might have to consider the examples as part of the tutorial 😄

decode MPEG2-TS buffer from socket

Hello,

Thanks for the tutorial.
I would like to decode a frame buffer encoded as MPEG2-TS (H.264/AAC data). This buffer is received from a socket, and I would like to transform it into an RGB buffer. How can I achieve this with libavformat?

void FrameDecoder::onFrameReceived(const QByteArray &frame) {
    qDebug() << "Ready to decode the frame " << frame;
   
}

Thanks for the help.

Hey, can I get some help? I have no clue how this works.

static void save_gray_frame(unsigned char *buf, int wrap, int xsize, int ysize, char *filename)
{
    FILE *f;
    int i;
    f = fopen(filename,"w");
    // writing the minimal required header for a pgm file format
    // portable graymap format -> https://en.wikipedia.org/wiki/Netpbm_format#PGM_example
    fprintf(f, "P5\n%d %d\n%d\n", xsize, ysize, 255);

    // writing line by line
    for (i = 0; i < ysize; i++)
        fwrite(buf + i * xsize, 1, xsize, f);
    fclose(f);
}

fwrite(buf + i * xsize, 1, xsize, f);
buf + i * xsize
How do you get the data stream to do that? What is that expression doing, and where does it come from?

Improvement for the tutorial

Hi,

First I would like to thank you for the initiative. I have a few suggestions for your project:

  1. Add a section on FFmpeg compilation and/or links for Windows/Linux. For instance, add links to the Zeranoe builds. Explain the limitations of codecs/muxers and licensing issues, and the purpose of the non-free flag for compilation.
  2. A small guide/links explaining how to compile FFmpeg with specific options, and how to add/remove a codec from a build.
  3. Differentiate between versions of FFmpeg. There are many API changes; maintaining multiple versions of the sample code would be great.
  4. Add a small tutorial on syncing audio & video when the input has no constant FPS (I already have some material for this one).

I can contribute to some of these points. Let me know if you are interested in a contribution!

BR

Simple question

Hi! I am actually new to GitHub and decided to follow along with your tutorial. I apologize if this is not the correct place to ask a simple question.

I just wanted to know whether the tutorial is complete? I see Chapter 3 was added relatively recently. I was just curious about the current status of this FFmpeg tutorial.

Thanks so much! This is really great.

read input from /dev/dvb/adapter0/dvr0

I made some changes to 3_transcoding.c to read from /dev/dvb/adapter0/dvr0 instead of a file. However, running it through Docker gives me this error:
LOG: input filename /dev/dvb/adapter0/dvr0
LOG: failed to open input file /dev/dvb/adapter0/dvr0
Makefile:48: recipe for target 'run_transcodingr' failed
make: *** [run_transcodingr] Error 255

I also changed the permissions of /dev/dvb to my user, but I still get the same error:
chown -R abc /dev/dvb

Please advise.
