
webm-muxer's People

Contributors

happylinks, vanilagy


webm-muxer's Issues

Chrome throwing error and failing to export video

I had code that was working; I don't think anything changed, but I'm now getting this error:

A VideoFrame was garbage collected without being closed. Applications should call close() on frames when done with them to prevent stalls.

I'm not explicitly calling close() anywhere, but neither are you in your example code.

I'm using the newest published version of this library and passing canvas frames into code like this:


import { Muxer, ArrayBufferTarget } from 'webm-muxer';

export default class WebM {
  constructor(width, height, transparent = true, fps) {
    this.muxer = new Muxer({
      target: new ArrayBufferTarget(),
      video: {
        codec: 'V_VP9',
        width: width,
        height: height,
        frameRate: fps,
        alpha: transparent,
      },
      audio: undefined,
      firstTimestampBehavior: 'offset',
    });

    this.videoEncoder = new VideoEncoder({
      output: (chunk, meta) => this.muxer.addVideoChunk(chunk, meta),
      error: (error) => console.error(error), // `reject` was not in scope here
    });
    this.videoEncoder.configure({
      codec: 'vp09.00.10.08',
      width: width,
      height: height,
      bitrate: 1e6,
    });
  }

  addFrame(frame, time, frameIndex) {
    return new Promise((resolve) => {
      // The VideoFrame constructed here is never closed, which triggers the warning
      this.videoEncoder.encode(new VideoFrame(frame, { timestamp: time * 1000 }), {
        keyFrame: !(frameIndex % 50),
      });
      resolve();
    });
  }

  generate() {
    return new Promise((resolve, reject) => {
      this.videoEncoder
        .flush()
        .then(() => {
          this.muxer.finalize();
          resolve(new Blob([this.muxer.target.buffer], { type: 'video/webm' }));
        })
        .catch(reject);
    });
  }
}
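
For reference, a minimal sketch of the likely fix, assuming the warning comes from the frames created in addFrame never being closed: encode() takes its own copy of the frame data, so the frame can be closed immediately afterwards.

addFrame(frame, time, frameIndex) {
  const videoFrame = new VideoFrame(frame, { timestamp: time * 1000 });
  this.videoEncoder.encode(videoFrame, { keyFrame: !(frameIndex % 50) });
  videoFrame.close(); // releases the frame's memory and silences the GC warning
}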

Read WebM

Can I read frame data back out of a WebM file and feed it to WebCodecs (a VideoDecoder) so I can draw and play it on a canvas? Roughly:

// (........ parse the WebM into track, frame, chunk? ........)
videoDecoder.decode(chunk);
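
For what it's worth, a rough sketch of the decode side, under two assumptions: webm-muxer only writes WebM, so reading frames back requires a separate demuxer/parser, and the chunk fields below (type, timestamp, data) are whatever that demuxer gives you.

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

const videoDecoder = new VideoDecoder({
  output: (frame) => {
    ctx.drawImage(frame, 0, 0); // paint the decoded frame onto the canvas
    frame.close();
  },
  error: (e) => console.error(e),
});
videoDecoder.configure({ codec: 'vp09.00.10.08' }); // must match the encoded track

// For each frame payload extracted from the WebM file by the demuxer:
videoDecoder.decode(new EncodedVideoChunk({
  type: 'key',      // or 'delta', from the demuxer
  timestamp: 0,     // microseconds, from the demuxer
  data: frameData,  // Uint8Array payload, from the demuxer
}));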

Dynamic browser support

Using this library, I generate videos on the fly.

Then, I try to play the video in the browser.

In desktop Chrome it works, but on Safari (desktop/mobile) or mobile Chrome it doesn't.
See example file:
test.webm

I could not yet figure out why this is the case. I would like this library to support an isPlayable method that, given a Muxer, determines whether the video is playable. It should return true, false, or null.

Quick mock implementation:

async function isPlayable() {
  if (!('mediaCapabilities' in navigator)) {
    // or maybe use `canPlayType` as a fallback, or `MediaSource.isTypeSupported(mimeType)`
    return null;
  }

  const videoConfig = {
    contentType: 'video/webm; codecs="vp09.00.10.08"', // replace with codec
    width: 1280,           // replace with actual width
    height: 720,           // replace with actual height
    bitrate: 1000000,      // replace with actual bitrate
    framerate: 25,         // replace with actual frame rate
    hasAlphaChannel: true, // replace with alpha
  };

  // Example result: {"powerEfficient":true,"smooth":true,"supported":true,
  //   "supportedConfiguration":{"video":{"bitrate":1000000,"contentType":"video/webm; codecs=\"vp09.00.10.08\"","framerate":25,"height":720,"width":1280},"type":"file"}}
  const result = await navigator.mediaCapabilities.decodingInfo({ type: 'file', video: videoConfig });
  return result.supported;
}

Or maybe, if for example we specify hasAlphaChannel: true but the supportedConfiguration says alpha isn't supported, we might be able to offer a makePlayable that falls back to the supported configuration.

Help with multimedia file (muxing audio and video)

Hi,

I'm having a problem with the last part of transforming an MP4 file into a WebM file.

I could not understand your explanation in this part of your docs: https://github.com/Vanilagy/webm-muxer#media-chunk-buffering.

I'm trying to pass the encoded audio and video chunks to the muxer in an interleaved way, but it is not working.

The image below shows the order in which I am sending the encoded chunks to the muxer. The number in front of each label is the chunk timestamp. I tried several orders and the resulting file doesn't play.

[image: order of encoded chunks sent to the muxer]

Do you have any suggestions about this?

Thank you.
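
For comparison, a minimal sketch of how I read the linked README section: each track's chunks must be added in increasing timestamp order, and feeding the two tracks in roughly global timestamp order lets the muxer's internal buffering produce the interleaved output. Here, chunks is a hypothetical array of demuxed chunks.

chunks.sort((a, b) => a.timestamp - b.timestamp); // global timestamp order
for (const c of chunks) {
  if (c.kind === 'video') muxer.addVideoChunk(c.chunk, c.meta);
  else muxer.addAudioChunk(c.chunk, c.meta);
}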

Help getting audio from audio context working

I am wondering if anyone can help me mux video and audio (not from the microphone) together. Below is a snippet of some of the code I am using inside a cables.gl op file. I have managed to feed canvas frames one by one to the muxer to get perfectly formed videos with no missing frames. However, when I add the audio, the video is not viewable, and when I convert it to MP4 with ffmpeg there is no audio.

const audioCtx = CABLES.WEBAUDIO.createAudioContext(op);
const streamAudio = audioCtx.createMediaStreamDestination();

inAudio.get().connect(streamAudio); // this gets fed from an audio source in cables

audioTrack = streamAudio.stream;
recorder = new MediaRecorder(audioTrack);

muxer = new WebMMuxer({
    "target": "buffer",
    "video": {
        "codec": "V_VP9",
        "width": inWidth.get() / CABLES.patch.cgl.pixelDensity,
        "height": inHeight.get() / CABLES.patch.cgl.pixelDensity,
        "frameRate": fps
    },
    "audio": {
        "codec": "A_OPUS",
        "sampleRate": 48000,
        "numberOfChannels": 2
    },
    "firstTimestampBehavior": "offset" // Because we're directly pumping a MediaStreamTrack's data into it
});

videoEncoder = new VideoEncoder({
    "output": (chunk, meta) => { return muxer.addVideoChunk(chunk, meta); },
    "error": (e) => { return op.error(e); }
});
videoEncoder.configure({
    "codec": "vp09.00.10.08",
    "width": inWidth.get() / CABLES.patch.cgl.pixelDensity,
    "height": inHeight.get() / CABLES.patch.cgl.pixelDensity,
    "framerate": 29.7,
    "bitrate": 5e6
});

if (audioTrack) {
    op.log('we HAVE AUDIO !!!!!!!!!!!!!!!!!!');

    /* I REMOVED ALL THE CODE FROM THE DEMO FROM HERE

    const audioEncoder = new AudioEncoder({
        output: (chunk) => muxer.addRawAudioChunk(chunk),
        error: (e) => console.error(e)
    });
    audioEncoder.configure({
        codec: 'opus',
        numberOfChannels: 2,
        sampleRate: 48000, // TODO: should be a variable
        bitrate: 128000,
    });

    // Create a MediaStreamTrackProcessor to get AudioData chunks from the audio track
    let trackProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
    let consumer = new WritableStream({
        write(audioData) {
            if (!recording) return;
            audioEncoder.encode(audioData);
            audioData.close();
        }
    });
    trackProcessor.readable.pipeTo(consumer);

    TO HERE */

    recorder.ondataavailable = function (e) {
        op.log('test', e.data); // this logs a blob { size: 188409, type: 'audio/webm;codecs=opus' }
        // audioEncoder.encode(e.data);
        muxer.addAudioChunkRaw(e.data); // this throws no errors
    };
    recorder.start();
}
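
One likely culprit, stated as an assumption: MediaRecorder's ondataavailable delivers a complete WebM container blob, not raw encoded Opus packets, so addAudioChunkRaw can't make sense of it. A sketch of the AudioEncoder route from the demo (the part commented out above), which produces real encoded chunks:

const audioEncoder = new AudioEncoder({
    "output": (chunk, meta) => muxer.addAudioChunk(chunk, meta),
    "error": (e) => op.error(e)
});
audioEncoder.configure({
    "codec": "opus",
    "numberOfChannels": 2,
    "sampleRate": 48000,
    "bitrate": 128000
});

// Pull AudioData from the track and feed it to the encoder
const trackProcessor = new MediaStreamTrackProcessor({ "track": streamAudio.stream.getAudioTracks()[0] });
trackProcessor.readable.pipeTo(new WritableStream({
    write(audioData) {
        audioEncoder.encode(audioData);
        audioData.close();
    }
}));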

Recording a <canvas> and audio stream

Love this! I'm still new to video processing so I'm not sure if this is possible.

My goal is to apply filters, trim, and draw on top of a video.

I have a <video> element as a source (one that has an audio track).

By updating currentTime and listening for "seeked", I've successfully managed to record video frames for a section of a given video (for example, timestamps 2000 to 3500). This works perfectly and is a lot faster than using a MediaRecorder.

Now I also want to add the correct section of the audio track, and that's where I'm kind of lost.

I've tried the method in this issue and in the canvas drawing demo, but it doesn't seem to work. The WritableStream write function gets called, but the chunks in the AudioEncoder output have a byteLength of only 3, which seems incorrect.

If you could give me a pointer in the right direction that would be amazing.

Also, happy to support this project, so if you have a donation link, please let me know. 🙏
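
In case it helps, a sketch of one way to get a correct audio slice, assuming the source file can be fetched and decoded: decode the file's audio to PCM with decodeAudioData, slice out the 2000–3500 ms window, wrap it in an AudioData, and feed that to the encoder. Both audioEncoder and videoUrl are assumed to exist, and the encoder should be configured with the same sample rate and channel count.

const audioCtx = new AudioContext();
const pcm = await audioCtx.decodeAudioData(await (await fetch(videoUrl)).arrayBuffer());

const from = Math.floor(2.0 * pcm.sampleRate); // 2000 ms
const to = Math.floor(3.5 * pcm.sampleRate);   // 3500 ms
const frames = to - from;

// Interleave the channels into one float32 buffer (the 'f32' layout)
const data = new Float32Array(frames * pcm.numberOfChannels);
for (let ch = 0; ch < pcm.numberOfChannels; ch++) {
  const src = pcm.getChannelData(ch).subarray(from, to);
  for (let i = 0; i < frames; i++) data[i * pcm.numberOfChannels + ch] = src[i];
}

audioEncoder.encode(new AudioData({
  format: 'f32',                 // interleaved float32
  sampleRate: pcm.sampleRate,
  numberOfFrames: frames,
  numberOfChannels: pcm.numberOfChannels,
  timestamp: 0,                  // microseconds, relative to the clip start
  data,
}));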

[FR] Support more input types

I want to use this library to re-mux a raw H.264 stream into a WebM file (because WebM has better support among media players than raw H.264 stream).

Because I already have an encoded stream, I don't need (or want) the WebCodecs API to be involved (browser compatibility is another concern).

But currently, this library does an instanceof test against EncodedVideoChunk here:

trackNumber: externalChunk instanceof EncodedVideoChunk ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER

I know I can construct EncodedVideoChunks with my encoded data, but ideally, I want to supply the buffer directly to this library, saving the extra memory allocation and copying.

I tried to modify this library like this:

diff --git a/src/main.ts b/src/main.ts
index 3109e82..840d756 100644
--- a/src/main.ts
+++ b/src/main.ts
@@ -226,7 +226,7 @@ class WebMMuxer {

 		this.writeVideoDecoderConfig(meta);

-		let internalChunk = this.createInternalChunk(chunk, timestamp);
+    let internalChunk = this.createInternalChunk(chunk, 'video', timestamp);
 		if (this.options.video.codec === 'V_VP9') this.fixVP9ColorSpace(internalChunk);

 		/**
@@ -328,12 +328,12 @@ class WebMMuxer {
 		}[this.colorSpace.matrix];
 		writeBits(chunk.data, i+0, i+3, colorSpaceID);
 	}

 	public addAudioChunk(chunk: EncodedAudioChunk, meta: EncodedAudioChunkMetadata, timestamp?: number) {
 		this.ensureNotFinalized();
 		if (!this.options.audio) throw new Error("No audio track declared.");

-		let internalChunk = this.createInternalChunk(chunk, timestamp);
+    let internalChunk = this.createInternalChunk(chunk, 'audio', timestamp);

 		// Algorithm explained in `addVideoChunk`
 		this.lastAudioTimestamp = internalChunk.timestamp;
@@ -356,7 +356,7 @@ class WebMMuxer {
 	}

 	/** Converts a read-only external chunk into an internal one for easier use. */
-	private createInternalChunk(externalChunk: EncodedVideoChunk | EncodedAudioChunk, timestamp?: number) {
+  private createInternalChunk(externalChunk: EncodedVideoChunk | EncodedAudioChunk, trackType: 'video' | 'audio', timestamp?: number) {
 		let data = new Uint8Array(externalChunk.byteLength);
 		externalChunk.copyTo(data);

@@ -364,7 +364,7 @@ class WebMMuxer {
 			data,
 			timestamp: timestamp ?? externalChunk.timestamp,
 			type: externalChunk.type,
-			trackNumber: externalChunk instanceof EncodedVideoChunk ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER
+      trackNumber: trackType === 'video' ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER
 		};

 		return internalChunk;

So I can give it plain objects. I haven't modified it to take buffers directly.

Here is my consuming code:

https://github.com/yume-chan/ya-webadb/blob/eaf3a7a3c829ebdbd4e1608c4cc0f3caf623f180/apps/demo/src/components/scrcpy/recorder.ts#L77-L100

        const sample = h264StreamToAvcSample(frame.data);
        this.muxer!.addVideoChunk(
            {
                byteLength: sample.byteLength,
                timestamp,
                type: frame.keyframe ? "key" : "delta",
                // Not used
                duration: null,
                copyTo: (destination) => {
                    // destination is a Uint8Array
                    (destination as Uint8Array).set(sample);
                },
            },
            {
                decoderConfig: this.configurationWritten
                    ? undefined
                    : {
                          // Not used
                          codec: "",
                          description: this.avcConfiguration,
                      },
            }
        );
        this.configurationWritten = true;

Is one chunk equal to one segment?

I have a question about the streaming interface: is the onData callback given 'complete' clusters, i.e. one full cluster at a time? Is one media cluster encapsulated inside one chunk? This is needed for live streaming.

Video is flickering, seems to be related to slow Android device

Hi,

I tried using the muxer on a slower Android device and the resulting video flickers; it almost seems like, every second, a frame from a couple of milliseconds earlier is displayed.

The video has a 1080 × 1920 resolution and is MPEG-4 AAC H.264 encoded.

I tried whether videoBitrate had an effect, but it didn't seem to make any difference. Encoding on macOS works correctly. This is with V_VP9 and vp09.00.10.08.

I'm sorry I don't have any more specific details. Is there any reason this might happen? Anything I can configure to improve the output?

WebM plays way too fast when encoded on iOS 16.4

Hello there!
Not sure if it has something to do with webm-muxer, which is awesome by the way.

I'm using your library with VideoFrames from a live WebRTC feed, encoding with the VP8 and VP9 codecs via the WebCodecs API. In the next version of Safari (16.4), the WebCodecs API becomes available there too.

It works great on Android, but when I create WebM videos from Safari, the resulting videos play very fast. I also cannot open them in VLC, though they play in the native Windows player.

I tried reducing the rate at which VideoFrames are pushed to the encoder so that the result is about 15 fps; again, this works as expected on Android.
Do you have any hints regarding this? As I understand it, the fps param in webm-muxer is only informative (metadata), right? Because I send each frame manually, I don't think the framerate param of VideoEncoder has any influence.
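
Not an authoritative answer, but if the fast playback comes from the frames' timestamps (a common cause of wrong playback speed), one workaround is to stamp each frame yourself from a frame counter instead of trusting the source timestamps. A sketch, assuming frames arrive from the WebRTC feed at a known target rate:

const fps = 15;
let frameIndex = 0;

function onIncomingFrame(frame) {
  const timed = new VideoFrame(frame, {
    timestamp: Math.round((frameIndex / fps) * 1e6), // microseconds
  });
  videoEncoder.encode(timed, { keyFrame: frameIndex % 150 === 0 });
  timed.close();
  frame.close();
  frameIndex++;
}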

Live streaming support

Would it be possible to get chunks of muxed data for live streaming? I use WebM streaming created by ffmpeg (to an Icecast2 server) and it works fine (I can play it in an HTML5 video or audio tag without problems), even though the WebM standard was not really conceived for live streaming...
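
A sketch of what this could look like with the library's StreamTarget, which hands each piece of muxed output to a callback as it's produced; the exact option names vary between versions, so treat this as an approximation. sendToServer is a placeholder for the Icecast/socket upload.

import { Muxer, StreamTarget } from 'webm-muxer';

const muxer = new Muxer({
  target: new StreamTarget({
    onData: (data, position) => sendToServer(data), // Uint8Array of muxed bytes
  }),
  video: { codec: 'V_VP9', width: 1280, height: 720 },
  streaming: true, // write monotonically, with no backwards seeks, for live use
});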

Variable frame rate

Hello,

Before starting, I would like to thank you for webm-muxer.

I wanted to know: is it normal that the framerate of your demo's output changes from file to file, while the source code clearly specifies frameRate: 30?

In VLC, both videos report "30.000300", but in After Effects I get 29.042 fps for the first file and 30.512 fps for the second.

Is there a way to produce a video file with a fixed framerate?

Strange duration when inside web worker

Hi,

The muxer works great outside a web worker, but when I put it inside a web worker the video duration is really weird.

Once the page is loaded, if I wait 10 s before starting to record, the video duration will be 10 s + the recorded video time. The second strange thing is that the video in the player will start at 10 s (not 0 s), and it's impossible to seek before 10 s.

And if I record a new video after the other video, say after 60 s, the finished recording will be 64 s, etc.

When I reload the page the "bug" starts from zero, but it grows according to the time I stay on the page.

After trying for days and days, reading all the documents on the subject and trying all possible examples, believing I was doing something wrong, I tried the webm-writer library modified by the WebCodecs team ([example](https://github.com/w3c/webcodecs/tree/704c167b81876f48d448a38fe47a3de4bad8bae1/samples/capture-to-file)) and everything works normally.

Do you have any idea what the problem is, or am I doing something wrong?

Some example code:

  function start() {
    const [ track ] = stream.value.getTracks()
    const trackSettings = track.getSettings()
const processor = new MediaStreamTrackProcessor({ track })
    inputStream = processor.readable

    worker.postMessage({
      type: 'start',
      config: {
        trackSettings,
        codec,
        framerate,
        bitrate,
      },
      stream: inputStream
    }, [ inputStream ])

    isRecording.value = true

    stopped = new Promise((resolve, reject) => {
      worker.onmessage = ({data: buffer}) => {
        const blob = new Blob([buffer], { type: mimeType })
        worker.terminate()
        resolve(blob)
      }
    })
  }

Worker.js

import '@workers/webm-writer'

let muxer
let frameReader

self.onmessage = ({data}) => {
  switch (data.type) {
    case 'start': start(data); break;
    case 'stop': stop(); break;
  }
}

async function start({ stream, config }) {
  let encoder
  let frameCounter = 0

  muxer = new WebMWriter({
    codec: 'VP9',
    width: config.trackSettings.width,
    height: config.trackSettings.height
  })

  frameReader = stream.getReader()

  encoder = new VideoEncoder({
    output: chunk => muxer.addFrame(chunk),
    error: ({message}) => stop()
  })

  const encoderConfig = {
    codec: config.codec.encoder,
    width: config.trackSettings.width,
    height: config.trackSettings.height,
    bitrate: config.bitrate,
    avc: { format: "annexb" },
    framerate: config.framerate,
    latencyMode: 'quality',
    bitrateMode: 'constant',
  }

  const encoderSupport = await VideoEncoder.isConfigSupported(encoderConfig)
  if (encoderSupport.supported) {
    console.log('Encoder successfully configured:', encoderSupport.config)
    encoder.configure(encoderSupport.config)
  } else {
    console.log('Config not supported:', encoderSupport.config)
  }

  frameReader.read().then(async function processFrame({ done, value }) {
    let frame = value

    if (done) {
      await encoder.flush()
      const buffer = muxer.complete()
      postMessage(buffer)
      encoder.close()
      return
    }

    if (encoder.encodeQueueSize <= config.framerate) {
      if (++frameCounter % 20 == 0) {
        console.log(frameCounter + ' frames processed');
      }
      const insert_keyframe = (frameCounter % 150) == 0
      encoder.encode(frame, { keyFrame: insert_keyframe })
    }

    frame.close()
    frameReader.read().then(processFrame)
  })
}

async function stop() {
  await frameReader.cancel()
  const buffer = await muxer.complete()
  postMessage(buffer)
  frameReader = null
}
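
For what it's worth, the symptoms match MediaStream frame timestamps that count from page load rather than from zero; webm-muxer (as opposed to the webm-writer library used above) has a firstTimestampBehavior option for exactly this. A sketch, with the option name as used elsewhere on this page:

import { Muxer, ArrayBufferTarget } from 'webm-muxer';

muxer = new Muxer({
  target: new ArrayBufferTarget(),
  video: {
    codec: 'V_VP9',
    width: config.trackSettings.width,
    height: config.trackSettings.height
  },
  firstTimestampBehavior: 'offset' // treat the first frame's timestamp as zero
})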

[Attachments: a screenshot and two video samples]

webm-muxer throws a "Matroska cluster too big" error even with a 10-second keyframe interval

I have a media pipeline in which the encoder stage feeds the recording stage; the recording stage uses mp4-muxer/webm-muxer to write a local media file. I was using mp4-muxer and everything worked fine. But when I switch to webm-muxer, I get the error below and the recorder refuses to write the file. I am inserting keyframes every 10 seconds. I wonder whether something is wrong with my usage. Are there any extra options we need to pass compared to mp4-muxer?

Current Matroska cluster exceeded its maximum allowed length of 32768 milliseconds. In order to produce a correct WebM file, you must pass in a video key frame at least every 32768 milliseconds.

Please advise.
Thanks
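
For reference, a minimal sketch of keeping clusters under the 32768 ms limit by requesting a keyframe from the encode options, assuming frames are produced at a known fps:

const fps = 30;
const keyframeEveryNFrames = fps * 10; // one keyframe every 10 seconds

videoEncoder.encode(frame, {
  keyFrame: frameIndex % keyframeEveryNFrames === 0,
});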

Audio as track 1?

When encoding audio-only content, is the audio set as track 1? How can I set it to track 1?

Support mobile Chrome

The demo fails in mobile Chrome (110 & 111) with these errors:

DOMException: Input audio buffer is incompatible with codec parameters
Uncaught (in promise) DOMException: Failed to execute 'encode' on 'AudioEncoder': Cannot call 'encode' on a closed codec

Looks like it should work
https://caniuse.com/webcodecs

StreamTarget onDone not available anymore since v4.0.0

Hi, we noticed that you removed the onDone callback of StreamTarget in v4.0.0. Is there an alternative way to know reliably when all data has passed through the muxer once muxer.finalize() has been called?
We forward the data sent via StreamTarget to a file, but we can't use FileSystemWritableFileStream directly for various reasons, and we used the onDone callback to trigger closing the file handle.
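
Not authoritative, but since the muxer's API is synchronous, one assumption worth verifying is that every onData callback has already fired by the time finalize() returns, which would make this safe:

muxer.finalize();       // assumption: all StreamTarget onData calls happen synchronously
await fileSink.close(); // fileSink is a placeholder for the custom file handle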

How to use with PHP?

Hello maintainers and community members,

I'm currently working on a project that uses PHP for the backend and would like to take advantage of the webm-muxer TypeScript package for WebM/Matroska multiplexing. Given the advantages of this package, including its speed, size, and support for both video and audio as well as live-streaming, I believe it could greatly benefit our workflow.

Here are my main questions and areas of concern:

  1. Node.js Bridge: Considering webm-muxer is written in TypeScript, is there a recommended approach to call the functions from PHP, possibly via a Node.js bridge? Has anyone successfully integrated it using solutions like phpexecjs or others?
  2. Real-time Performance: When using it with PHP, especially in a real-time environment like live-streaming, are there any performance bottlenecks or challenges we should anticipate?
  3. Temporary Storage: For large video/audio files, temporary storage might be a concern. Does webm-muxer have any built-in utilities for managing temporary files, or would this have to be managed entirely on the PHP side?
  4. Concurrency: PHP can spawn multiple processes or threads (using solutions like pthreads). How thread-safe is webm-muxer in concurrent scenarios?
  5. API Wrapper: Is there an existing PHP wrapper for the webm-muxer API, or would it be recommended to build a custom wrapper tailored to our application's needs?
  6. Error Handling: How does webm-muxer report errors, and what would be the best way to catch and handle these errors on the PHP side?
  7. Updates & Maintenance: With potential updates to webm-muxer, what's the best approach to ensure that the PHP integration remains stable and up-to-date?

I appreciate any feedback, examples, or pointers from those who have attempted or succeeded in such an integration. Thank you in advance for your help and insights!

Support Safari

Safari is adding WebCodecs support (video-only for now) in the latest dev releases (https://caniuse.com/webcodecs).

Currently the demo fails with this error:

[Error] Unhandled Promise Rejection: ReferenceError: Can't find variable: AudioEncoder
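
A sketch of how the demo could guard the audio path on browsers whose WebCodecs is video-only, simply by feature-detecting AudioEncoder (option shapes taken from this library's usage elsewhere on this page):

const audioSupported = typeof AudioEncoder !== 'undefined';

const muxer = new Muxer({
  target: new ArrayBufferTarget(),
  video: { codec: 'V_VP9', width: 1280, height: 720 },
  audio: audioSupported
    ? { codec: 'A_OPUS', sampleRate: 48000, numberOfChannels: 1 }
    : undefined, // skip the audio track entirely on video-only WebCodecs
});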

Timeline for Firefox VideoEncoder support

This is a question somewhat unrelated to this library, but:

I know most browsers have a public ticket system that lets devs track the progress of features being added or fixed. I looked everywhere yesterday and couldn't find any mention of a timeline for VideoEncoder support in Firefox, or whether it's even on their radar.

Do you know where to look for this? Are you in any secret Discords where they talk about it?

Love your library, by the way; it was a breeze to implement and use, with no headaches yet.

Is it possible to get file buffer before .finalize() is called?

First of all - thank you for creating this amazing lib! I'm going to use it in the https://screen.studio rendering & encoding pipeline.

In my pipeline, I need to transcode the .webm file into .mp4 (I hoped the VP9 codec could be used directly in .mp4 without transcoding, but then it won't play in QuickTime on macOS).

What I can do is wait for the .webm file to be ready and then start transcoding. This works, but as export speed is critical for me, I'd like to start transcoding even before the .webm video file is finished (i.e. before all video chunks have been added).

Thus my question is: is it possible to get the file data buffer while I'm still adding video chunks, so I can already pass it to ffmpeg? This would allow me to parallelize encoding the .webm and transcoding it to .mp4.

Thank you!
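
A sketch of one way to parallelize this in Node.js, assuming StreamTarget's callback form and the streaming option behave as described in the README for your version: pipe each muxed chunk into ffmpeg's stdin as it is produced instead of waiting for finalize().

import { spawn } from 'node:child_process';
import { Muxer, StreamTarget } from 'webm-muxer';

const ffmpeg = spawn('ffmpeg', ['-i', 'pipe:0', '-c:v', 'libx264', 'out.mp4']);

const muxer = new Muxer({
  target: new StreamTarget({
    onData: (data) => ffmpeg.stdin.write(data), // forward bytes as they arrive
  }),
  video: { codec: 'V_VP9', width: 1920, height: 1080 },
  streaming: true, // no backwards seeks, so the piped byte stream stays valid
});

// ...addVideoChunk(...) while rendering, then:
// muxer.finalize();
// ffmpeg.stdin.end();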

Stream to web storage

Hi @Vanilagy

Just wanted to check if there is any way to use the streaming option to stream data to web storage like IndexedDB.
I would like to avoid ArrayBufferTarget, as in-memory usage may grow large for longer videos, but I don't have the flexibility to prompt the user to save to a file, so I was wondering if an in-memory streaming option is available somehow.

Thanks,
Neeraj
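
A sketch of what streaming into IndexedDB could look like, assuming a StreamTarget and a plain IndexedDB object store (db and the 'chunks' store are placeholders); copying each buffer before storing is prudent in case the muxer reuses it.

const target = new StreamTarget({
  onData: (data, position) => {
    const tx = db.transaction('chunks', 'readwrite');
    tx.objectStore('chunks').put({ position, bytes: data.slice() }); // copy the bytes
  },
});

// Later, reassemble the file by reading all records and writing each `bytes`
// at its `position` into one ArrayBuffer, then wrap the result in a Blob.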

Writing to disk via node

Hi @Vanilagy!
First of all, thank you for creating this amazing library and for the active maintenance. This is super helpful for a use case that I have been working on.

I was following this comment and had a couple of doubts:

  1. For writing to disk in a Node.js environment, would you suggest using StreamTarget with the chunked approach or without it, and why? (See the sketch below.)
  2. For StreamTarget, is backpressure handled by default by the library? Writing to the WriteStream will require this capability at some point so that the writes happen efficiently.

Another quick question: is it possible to increase the width, height, or bitrate configuration of the Muxer midway through?

Thanks,
Neeraj
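
A sketch of the Node.js disk path under stated assumptions (StreamTarget option names vary by version): writing synchronously with fs.writeSync at the provided position handles backwards seeks, sidesteps backpressure entirely since the muxer cannot outrun a synchronous write, and chunked: true batches many small writes into fewer, larger ones.

import fs from 'node:fs';
import { Muxer, StreamTarget } from 'webm-muxer';

const fd = fs.openSync('video.webm', 'w');

const muxer = new Muxer({
  target: new StreamTarget({
    // `position` matters: the muxer may jump back to patch earlier bytes
    onData: (data, position) => fs.writeSync(fd, data, 0, data.length, position),
    chunked: true, // accumulate into larger chunks before calling onData
  }),
  video: { codec: 'V_VP9', width: 1920, height: 1080 },
});

// ...add chunks..., then:
// muxer.finalize();
// fs.closeSync(fd);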

Need help with YouTube Live Ingest using HLS

Hi there,

I'm currently working on a project that involves live streaming to YouTube using the HLS protocol. I came across your webm-muxer library and was impressed with its performance and simplicity.

However, I'm having trouble figuring out how to use it with YouTube's Live Ingest feature. I was wondering if you could provide some guidance or examples on how to do this.

Any help would be greatly appreciated!

Thank you.

Best regards.

Using with nodejs fs

Hi Vanilagy,

For an Electron app, I have to stream the creation of a video without being able to use the web File System API.

So I use "fs", and I wanted to know if there is a way to stream like with the web File System API. Currently I'm using the buffer target, but it's not ideal because I have long 4K videos.

Is this something you could support?

Thank you!

Encode alpha video to WebM?

Chrome 31 added support for alpha transparency in WebM video.
How do I encode alpha videos with webm-muxer?

VideoEncoderConfig:
alpha: 'keep', // keep the alpha channel

It doesn't work.
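
For reference, a sketch of the two places alpha has to be enabled, assuming an encoder that actually supports alpha encoding: alpha: 'keep' on the VideoEncoder config, and alpha: true in the muxer's video track options (the latter appears in this library's options earlier on this page).

const muxer = new Muxer({
  target: new ArrayBufferTarget(),
  video: { codec: 'V_VP9', width: 1280, height: 720, alpha: true }, // muxer side
});

videoEncoder.configure({
  codec: 'vp09.00.10.08',
  width: 1280,
  height: 720,
  alpha: 'keep', // encoder side: ask it to retain the alpha channel
});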

Can microphone audio be mixed in when recording the screen?

Hi there! I'm wondering how to use this library for screen recording, since I'm not using a canvas. Also, I'll be speaking into a microphone while recording, and I'd like to merge the microphone audio with the video. Can you guide me on how to do that? Thanks!

	import { Muxer, ArrayBufferTarget } from 'webm-muxer';

	let audioTrack: MediaStreamTrack;
	let audioTrack1: MediaStreamTrack;
	let audioEncoder: AudioEncoder | null;
	let videoEncoder: VideoEncoder | null;
	let muxer: Muxer<ArrayBufferTarget> | null;

	async function start() {
		let userMedia = await navigator.mediaDevices.getUserMedia({ video: false, audio: true });
		let _audioTrack = userMedia.getAudioTracks()[0];
		let audioSampleRate = _audioTrack?.getCapabilities().sampleRate?.max || 22050;

		let displayMedia = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
		let _audioTrack1 = displayMedia.getAudioTracks()[0];
		let audioSampleRate1 = _audioTrack1?.getCapabilities().sampleRate?.max || audioSampleRate;

		let _muxer = new Muxer({
			target: new ArrayBufferTarget(),
			video: {
				codec: 'V_VP9',
				width: 1280,
				height: 720
			},
			audio: {
				codec: 'A_OPUS',
				sampleRate: audioSampleRate1,
				numberOfChannels: 1
			},
			firstTimestampBehavior: 'offset' // Because we're directly piping a MediaStreamTrack's data into it
		});

		let _videoEncoder = new VideoEncoder({
			output: (chunk, meta) => _muxer.addVideoChunk(chunk, meta),
			error: (e) => console.error(e)
		});
		_videoEncoder.configure({
			codec: 'vp09.00.10.08',
			width: 1280,
			height: 720,
			bitrate: 1e6
		});

		let _audioEncoder = new AudioEncoder({
			output: (chunk, meta) => _muxer.addAudioChunk(chunk, meta),
			error: (e) => console.error(e)
		});
		_audioEncoder.configure({
			codec: 'opus',
			numberOfChannels: 1,
			sampleRate: audioSampleRate1,
			bitrate: 64000
		});

		writeAudioToEncoder(_audioEncoder, _audioTrack);
		writeAudioToEncoder(_audioEncoder, _audioTrack1);

		muxer = _muxer;
		audioEncoder = _audioEncoder;
		audioTrack = _audioTrack;
		audioTrack1 = _audioTrack1;
	}

	function writeAudioToEncoder(audioEncoder: AudioEncoder, audioTrack: MediaStreamTrack) {
		// Create a MediaStreamTrackProcessor to get AudioData chunks from the audio track
		let trackProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
		let consumer = new WritableStream({
			write(audioData) {
				audioEncoder.encode(audioData);
				audioData.close();
			}
		});
		trackProcessor.readable.pipeTo(consumer);
	}

	let frameCounter = 0;
	function encodeVideoFrame(videoEncoder: VideoEncoder) {
		let frame = new VideoFrame(canvas, {
			timestamp: ((frameCounter * 1000) / 30) * 1000
		});

		frameCounter++;

		videoEncoder.encode(frame, { keyFrame: frameCounter % 30 === 0 });
		frame.close();
	}

	const endRecording = async () => {
		audioTrack?.stop();
		audioTrack1?.stop();

		await audioEncoder?.flush();
		await videoEncoder?.flush();
		muxer?.finalize();

		if (muxer) {
			let { buffer } = muxer.target;
			downloadBlob(new Blob([buffer]));
		}

		audioEncoder = null;
		videoEncoder = null;
		muxer = null;
	};

	const downloadBlob = (blob: Blob) => {
		let url = window.URL.createObjectURL(blob);
		let a = document.createElement('a');
		a.style.display = 'none';
		a.href = url;
		a.download = 'picasso.webm';
		document.body.appendChild(a);
		a.click();
		window.URL.revokeObjectURL(url);
	};

I have a couple of questions. Can this library merge two audio tracks into one media file? And is it possible to process video without using a canvas?
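
A sketch of one way to merge the two sources, assuming the usual Web Audio route: mix both streams into a single MediaStreamDestination and encode only that mixed track, since the muxer writes a single audio track and does not mix audio itself. Here, userMedia, displayMedia and writeAudioToEncoder come from the snippet above.

const mixCtx = new AudioContext({ sampleRate: 48000 });
const mixDestination = mixCtx.createMediaStreamDestination();

mixCtx.createMediaStreamSource(userMedia).connect(mixDestination);    // microphone
mixCtx.createMediaStreamSource(displayMedia).connect(mixDestination); // system audio

const mixedTrack = mixDestination.stream.getAudioTracks()[0];
writeAudioToEncoder(_audioEncoder, mixedTrack); // feed only the mixed track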
