vanilagy / webm-muxer
WebM multiplexer in pure TypeScript with support for the WebCodecs API, video & audio.
Home Page: https://vanilagy.github.io/webm-muxer/demo
License: MIT License
This package causes warnings in the Angular compiler because it uses CommonJS modules instead of ESM modules; it is also not tree-shakeable.
See: https://gist.github.com/aelbore/65a4d2e86c3326f36607db111a7b6887
I had code that was working; I don't think anything changed, but I'm now getting this error:
A VideoFrame was garbage collected without being closed. Applications should call close() on frames when done with them to prevent stalls.
I'm not explicitly calling close() anywhere, but neither are you in your example code.
I'm using the newest published version of this library and passing canvas frames into code like this:
import { Muxer, ArrayBufferTarget } from 'webm-muxer';

export default class WebM {
    constructor(width, height, transparent = true, fps) {
        this.muxer = new Muxer({
            target: new ArrayBufferTarget(),
            video: {
                codec: 'V_VP9',
                width: width,
                height: height,
                frameRate: fps,
                alpha: transparent,
            },
            audio: undefined,
            firstTimestampBehavior: 'offset',
        });
        this.videoEncoder = new VideoEncoder({
            output: (chunk, meta) => this.muxer.addVideoChunk(chunk, meta),
            // `reject` doesn't exist in this scope; report errors directly instead
            error: (error) => console.error(error),
        });
        this.videoEncoder.configure({
            codec: 'vp09.00.10.08',
            width: width,
            height: height,
            bitrate: 1e6,
        });
    }

    addFrame(frame, time, frameIndex) {
        return new Promise((resolve) => {
            // Note: the VideoFrame created here is never closed, which is what
            // triggers the garbage-collection warning above.
            this.videoEncoder.encode(new VideoFrame(frame, { timestamp: time * 1000 }), {
                keyFrame: !(frameIndex % 50),
            });
            resolve();
        });
    }

    generate() {
        return new Promise((resolve, reject) => {
            this.videoEncoder
                .flush()
                .then(() => {
                    this.muxer.finalize();
                    resolve(new Blob([this.muxer.target.buffer], { type: 'video/webm' }));
                })
                .catch(reject);
        });
    }
}
Can I read the frame data out of a WebM file and feed it to WebCodecs, so I can decode, draw, and play it on a canvas? Something like:
VideoDecoder;
(........ webm track, frame, chunk? ........ )
videoDecoder.decode(chunk);
Using this library, I generate videos on the fly.
Then, I try to play the video in the browser.
In Chrome desktop it works, but on Safari (desktop/mobile) or Chrome mobile it doesn't.
See example file:
test.webm
I could not yet figure out why this is the case. I would like this library to support an isPlayable method that, given a Muxer, determines whether the video is playable. It should return either true, false, or null.
Quick mock implementation:
async function isPlayable() {
    if (!('mediaCapabilities' in navigator)) {
        // or maybe use `canPlayType` as a fallback, or `MediaSource.isTypeSupported(mimeType)`
        return null;
    }
    const videoConfig = {
        contentType: 'video/webm; codecs="vp09.00.10.08"', // replace with codec
        width: 1280, // replace with actual width
        height: 720, // replace with actual height
        bitrate: 1000000, // replace with actual bitrate
        framerate: 25, // replace with actual frame rate
        hasAlphaChannel: true, // replace with alpha
    };
    // Example result: {"powerEfficient":true,"smooth":true,"supported":true,"supportedConfiguration":{"video":{"bitrate":1000000,"contentType":"video/webm; codecs=\"vp09.00.10.08\"","framerate":25,"height":720,"width":1280},"type":"file"}}
    const result = await navigator.mediaCapabilities.decodingInfo({ type: 'file', video: videoConfig });
    return result.supported;
}
Or maybe, if for example we specify hasAlphaChannel: true but the supportedConfiguration says alpha is not supported, we might be able to add a makePlayable method that falls back to the supported configuration.
Hi,
I'm having a problem with the last part of transforming an MP4 file into a WebM file.
I could not understand your explanation in this part of your docs: https://github.com/Vanilagy/webm-muxer#media-chunk-buffering.
I'm trying to pass the encoded parts of audio and video to the muxer in an interleaved way, but it is not working.
The image below shows the order in which I am sending the encoded chunks to the muxer. The number in front of the label is the chunk timestamp. I tried several orders and the resulting file doesn't play.
Do you have any suggestions about this?
Thank you.
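For what it's worth, one way to approach the interleaving (a sketch, not this library's own API; `addVideo` and `addAudio` are hypothetical stand-ins for `muxer.addVideoChunk` and `muxer.addAudioChunk`) is to merge the two per-track queues so the muxer always receives the pending chunk with the smallest timestamp:

```javascript
// Hypothetical sketch: feed encoded chunks to the muxer in globally
// non-decreasing timestamp order by merging the two per-track queues.
function interleaveByTimestamp(videoChunks, audioChunks, addVideo, addAudio) {
  let v = 0, a = 0;
  while (v < videoChunks.length || a < audioChunks.length) {
    const vc = videoChunks[v], ac = audioChunks[a];
    // Pick whichever pending chunk has the smaller timestamp.
    if (ac === undefined || (vc !== undefined && vc.timestamp <= ac.timestamp)) {
      addVideo(vc); v++;
    } else {
      addAudio(ac); a++;
    }
  }
}
```

With chunks at video timestamps 0, 40 and audio timestamps 20, 60, this emits them in the order v0, a20, v40, a60.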
I am wondering if anyone can help me mux video and audio (not from the microphone) together? Below is a snippet of some of the code I am using inside a cables.gl op file. I have managed to feed canvas frames one by one to the video encoder to get perfectly formed videos with no missing frames. However, when I add the audio, the video is not viewable, and when I convert it to MP4 with ffmpeg there is no audio.
const audioCtx = CABLES.WEBAUDIO.createAudioContext(op);
const streamAudio = audioCtx.createMediaStreamDestination();
inAudio.get().connect(streamAudio); // this gets fed from an audio source in cables
audioTrack = streamAudio.stream;
recorder = new MediaRecorder(audioTrack);
muxer = new WebMMuxer({
"target": "buffer",
"video": {
"codec": "V_VP9",
"width": inWidth.get() / CABLES.patch.cgl.pixelDensity,
"height": inHeight.get() / CABLES.patch.cgl.pixelDensity,
"frameRate": fps
},
"audio": {
"codec": "A_OPUS",
"sampleRate": 48000,
"numberOfChannels": 2
},
"firstTimestampBehavior": "offset" // Because we're directly pumping a MediaStreamTrack's data into it
});
videoEncoder = new VideoEncoder({
"output": (chunk, meta) => { return muxer.addVideoChunk(chunk, meta); },
"error": (e) => { return op.error(e); }
});
videoEncoder.configure({
"codec": "vp09.00.10.08",
"width": inWidth.get() / CABLES.patch.cgl.pixelDensity,
"height": inHeight.get() / CABLES.patch.cgl.pixelDensity,
"framerate": 29.7,
"bitrate": 5e6
});
if (audioTrack) {
op.log('we HAVE AUDIO !!!!!!!!!!!!!!!!!!')
/* I REMOVED ALLL THE CODE FROM THE DEMO FROM HERE
// const audioEncoder = new AudioEncoder({
// output: (chunk) => muxer.addRawAudioChunk(chunk),
// error: e => console.error(e)
// });
// audioEncoder.configure({
// codec: 'opus',
// numberOfChannels: 2,
// sampleRate: 48000, //todo should have a variable
// bitrate: 128000,
// });
// Create a MediaStreamTrackProcessor to get AudioData chunks from the audio track
// let trackProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
// let consumer = new WritableStream({
// write(audioData) {
// if (!recording) return;
// audioEncoder.encode(audioData);
// audioData.close();
// }
// });
// trackProcessor.readable.pipeTo(consumer);
TO HERE */
recorder.ondataavailable = function (e) {
    op.log('test', e.data); // this logs a Blob { size: 188409, type: 'audio/webm;codecs=opus' }
    // audioEncoder.encode(e.data);
    muxer.addAudioChunkRaw(e.data); // this throws no errors
};
recorder.start()
}
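A possible explanation (my assumption, not confirmed by the library docs): `addAudioChunkRaw` expects raw encoded Opus frames, while `MediaRecorder`'s `ondataavailable` blobs are full WebM container data (note the `audio/webm;codecs=opus` blob type). Matroska/WebM data starts with the 4-byte EBML magic, which you can check:

```javascript
// Sketch: MediaRecorder's ondataavailable blobs are WebM *container* data,
// not raw Opus frames, which is presumably why feeding them to
// addAudioChunkRaw() produces a broken file. A WebM/Matroska stream starts
// with the EBML magic bytes 0x1A 0x45 0xDF 0xA3.
function looksLikeWebMContainer(bytes) {
  return bytes.length >= 4 &&
    bytes[0] === 0x1a && bytes[1] === 0x45 &&
    bytes[2] === 0xdf && bytes[3] === 0xa3;
}
```

Raw frames come from an `AudioEncoder` fed by a `MediaStreamTrackProcessor` (the approach in the commented-out demo code), not from `MediaRecorder`.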
Love this! I'm still new to video processing so I'm not sure if this is possible.
My goal is to apply filters, trim, and draw on top of a video.
I have a <video> element as source (that has an audio track). By updating the currentTime and listening to "seeked", I've successfully managed to record video frames for a section of a given video (for example timestamp 2000 to 3500). This works perfectly and is a lot faster than using the MediaRecorder.
Now I also want to add the correct section of the AudioTrack, and that's where I'm kind of lost.
I've tried to use the method in this issue and in the canvas drawing demo, but it doesn't seem to work. The WritableStream write function gets called, but the chunks in the AudioEncoder output have a byteLength of only 3, which seems incorrect.
If you could give me a pointer in the right direction that would be amazing.
Also, happy to support this project, so if you have a donation link, please let me know. 🙏
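One possible route (an assumption on my part, not an API of this library): decode the file's audio with `decodeAudioData`, slice out the samples for the chosen time window, and feed those to the `AudioEncoder`. The index math for the slice:

```javascript
// Hypothetical helper: convert a [startMs, endMs) window into sample indices
// for slicing a decoded AudioBuffer before re-encoding it.
function sampleRange(startMs, endMs, sampleRate) {
  const first = Math.floor((startMs / 1000) * sampleRate); // first sample inside the window
  const last = Math.ceil((endMs / 1000) * sampleRate);     // first sample past the window
  return [first, last];
}
```

For the 2000 to 3500 ms example at 48 kHz, this gives samples 96000 through 168000.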
I want to use this library to re-mux a raw H.264 stream into a WebM file (because WebM has better support among media players than raw H.264 stream).
Because I already have an encoded stream, I don't need (or want) WebCodecs API to be involved (browser compatibility is another concern).
But currently, this library does an instanceof test against EncodedVideoChunk here:
Line 367 in 1e0d320
I know I can construct EncodedVideoChunks with my encoded data, but ideally I want to supply the buffer directly to this library, saving the extra memory allocation and copying.
I tried to modify this library like this:
diff --git a/src/main.ts b/src/main.ts
index 3109e82..840d756 100644
--- a/src/main.ts
+++ b/src/main.ts
@@ -226,7 +226,7 @@ class WebMMuxer {
this.writeVideoDecoderConfig(meta);
- let internalChunk = this.createInternalChunk(chunk, timestamp);
+ let internalChunk = this.createInternalChunk(chunk, 'video', timestamp);
if (this.options.video.codec === 'V_VP9') this.fixVP9ColorSpace(internalChunk);
/**
@@ -328,12 +328,12 @@ class WebMMuxer {
}[this.colorSpace.matrix];
writeBits(chunk.data, i+0, i+3, colorSpaceID);
}
public addAudioChunk(chunk: EncodedAudioChunk, meta: EncodedAudioChunkMetadata, timestamp?: number) {
this.ensureNotFinalized();
if (!this.options.audio) throw new Error("No audio track declared.");
- let internalChunk = this.createInternalChunk(chunk, timestamp);
+ let internalChunk = this.createInternalChunk(chunk, 'audio', timestamp);
// Algorithm explained in `addVideoChunk`
this.lastAudioTimestamp = internalChunk.timestamp;
@@ -356,7 +356,7 @@ class WebMMuxer {
}
/** Converts a read-only external chunk into an internal one for easier use. */
- private createInternalChunk(externalChunk: EncodedVideoChunk | EncodedAudioChunk, timestamp?: number) {
+ private createInternalChunk(externalChunk: EncodedVideoChunk | EncodedAudioChunk, trackType: 'video' | 'audio', timestamp?: number) {
let data = new Uint8Array(externalChunk.byteLength);
externalChunk.copyTo(data);
@@ -364,7 +364,7 @@ class WebMMuxer {
data,
timestamp: timestamp ?? externalChunk.timestamp,
type: externalChunk.type,
- trackNumber: externalChunk instanceof EncodedVideoChunk ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER
+ trackNumber: trackType === 'video' ? VIDEO_TRACK_NUMBER : AUDIO_TRACK_NUMBER
};
return internalChunk;
So I can give it plain objects. I haven't modified it to take buffers directly.
Here is my consuming code:
const sample = h264StreamToAvcSample(frame.data);
this.muxer!.addVideoChunk(
    {
        byteLength: sample.byteLength,
        timestamp,
        type: frame.keyframe ? "key" : "delta",
        // Not used
        duration: null,
        copyTo: (destination) => {
            // destination is a Uint8Array
            (destination as Uint8Array).set(sample);
        },
    },
    {
        decoderConfig: this.configurationWritten
            ? undefined
            : {
                // Not used
                codec: "",
                description: this.avcConfiguration,
            },
    }
);
this.configurationWritten = true;
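Under that patch, a plain object only needs the `EncodedVideoChunk`-shaped members the modified `createInternalChunk()` actually reads. A hypothetical factory for such duck-typed chunks:

```javascript
// Sketch: a plain object mimicking the EncodedVideoChunk surface that the
// patched muxer touches. `data` is the raw encoded sample (e.g. an
// AVCC-formatted H.264 access unit).
function makeChunk(data, timestampUs, isKeyframe) {
  return {
    byteLength: data.byteLength,
    timestamp: timestampUs,
    type: isKeyframe ? 'key' : 'delta',
    duration: null, // not read by the muxer
    copyTo(destination) {
      destination.set(data); // destination is a Uint8Array sized via byteLength
    },
  };
}
```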
I have a question about the streamer interface: is the onData callback given 'complete' clusters, i.e. one full cluster at a time? Is one media cluster encapsulated inside one chunk? This is needed for live streaming.
Hi,
I tried to use the muxer on a slower Android device and the resulting video flickers; it almost seems like, every second, a frame from a couple of milliseconds ago is displayed.
The video has a 1080 × 1920 resolution and is MPEG-4 AAC H.264 encoded.
I tried whether videoBitrate had an effect, but it didn't seem to make any difference. Encoding on macOS works correctly. This is with V_VP9 and vp09.00.10.08.
I'm sorry I don't have any more specific details. Is there any reason this might happen? Anything I can configure to improve the output?
Hello there
Not sure if it has something to do with webm-muxer, which is awesome by the way.
I'm using your library with VideoFrames from a live WebRTC feed. I'm encoding with the VP8 and VP9 codecs using the WebCodecs API. In the next version of Safari (16.4), the WebCodecs API is also available.
It works great on Android, but when I create WebM videos from Safari, the resulting videos play very fast. Also, I cannot open them in VLC, though they play in the native Windows player.
I tried reducing the rate at which the VideoFrames are pushed to the encoder so that the result is about 15 fps; again, this works as expected on Android.
Do you have any hints regarding this? As I understand it, setting the fps param in webm-muxer is only informative (metadata), right?
Because I send each frame manually, I don't think the fps param of VideoEncoder has any influence.
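As far as I understand, playback speed of the muxed WebM is driven entirely by the per-chunk timestamps, so a too-fast video usually means the VideoFrame timestamps are spaced more tightly than real time. A sketch (an assumed approach, not from this library) that derives timestamps from wall-clock capture time:

```javascript
// Sketch: derive microsecond timestamps from wall-clock capture time so the
// muxed file plays back at real-time speed regardless of frame pacing.
// WebCodecs VideoFrame timestamps are expressed in microseconds.
function liveTimestampUs(captureStartMs, frameCaptureMs) {
  return Math.round((frameCaptureMs - captureStartMs) * 1000);
}
```

In the browser this would be used roughly as `new VideoFrame(source, { timestamp: liveTimestampUs(t0, performance.now()) })`, where `t0` was taken when recording started.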
Would it be possible to get chunks of muxed data for live streaming? I use WebM streaming created with ffmpeg (to an Icecast2 server) and it works fine (I can play it in an HTML5 video or audio tag without problems), even though the WebM standard was not really conceived for live streaming...
Hello,
Before starting I would like to thank you for webm-muxer.
I wanted to know: is it normal that your demo produces files with a changing framerate, while in the source code it is clearly set to frameRate: 30?
In VLC both videos report "30.000300", but in After Effects I get 29.042 fps for the first file and 30.512 fps for the second.
Is there a possibility to get a video file with a fixed framerate?
Hi,
The muxer works great outside a web worker, but when I put it inside a web worker the video duration is really weird.
Once the page is loaded, if I wait 10 s before starting to record, the video duration will be 10 s + the recorded video time. The second strange thing is that the video in the player will start at 10 s (not 0 s), and it is impossible to seek before 10 s.
And if I record a new video after the other, say after 60 s, the finished video will be 64 s long, and so on.
When I reload the page the "bug" resets to zero, but it grows according to the time I stay on the page.
After trying for days and days, reading all the documents on the subject and trying all possible examples, believing I was doing something wrong, I tried the webm-writer library modified by the WebCodecs team ([example](https://github.com/w3c/webcodecs/tree/704c167b81876f48d448a38fe47a3de4bad8bae1/samples/capture-to-file)) and everything works normally.
Do you have any idea what the problem is, or am I doing something wrong?
function start() {
    const [ track ] = stream.value.getTracks()
    const trackSettings = track.getSettings()
    const processor = new MediaStreamTrackProcessor({ track }) // the constructor takes an init dictionary
    inputStream = processor.readable
    worker.postMessage({
        type: 'start',
        config: {
            trackSettings,
            codec,
            framerate,
            bitrate,
        },
        stream: inputStream
    }, [ inputStream ])
    isRecording.value = true
    stopped = new Promise((resolve, reject) => {
        worker.onmessage = ({ data: buffer }) => {
            const blob = new Blob([buffer], { type: mimeType })
            worker.terminate()
            resolve(blob)
        }
    })
}
import '@workers/webm-writer'

let muxer
let frameReader

self.onmessage = ({ data }) => {
    switch (data.type) {
        case 'start': start(data); break;
        case 'stop': stop(); break;
    }
}

async function start({ stream, config }) {
    let encoder
    let frameCounter = 0
    muxer = new WebMWriter({
        codec: 'VP9',
        width: config.trackSettings.width,
        height: config.trackSettings.height
    })
    frameReader = stream.getReader()
    encoder = new VideoEncoder({
        output: chunk => muxer.addFrame(chunk),
        error: ({ message }) => stop()
    })
    const encoderConfig = {
        codec: config.codec.encoder,
        width: config.trackSettings.width,
        height: config.trackSettings.height,
        bitrate: config.bitrate,
        avc: { format: 'annexb' },
        framerate: config.framerate,
        latencyMode: 'quality',
        bitrateMode: 'constant',
    }
    const encoderSupport = await VideoEncoder.isConfigSupported(encoderConfig)
    if (encoderSupport.supported) {
        console.log('Encoder successfully configured:', encoderSupport.config)
        encoder.configure(encoderSupport.config)
    } else {
        console.log('Config not supported:', encoderSupport.config)
    }
    frameReader.read().then(async function processFrame({ done, value }) {
        let frame = value
        if (done) {
            await encoder.flush()
            const buffer = await muxer.complete() // complete() returns a promise
            postMessage(buffer)
            encoder.close()
            return
        }
        if (encoder.encodeQueueSize <= config.framerate) {
            if (++frameCounter % 20 == 0) {
                console.log(frameCounter + ' frames processed');
            }
            const insert_keyframe = (frameCounter % 150) == 0
            encoder.encode(frame, { keyFrame: insert_keyframe })
        }
        frame.close()
        frameReader.read().then(processFrame)
    })
}

async function stop() {
    await frameReader.cancel()
    const buffer = await muxer.complete()
    postMessage(buffer)
    frameReader = null
}
I have a media pipeline in which the encoder stage feeds the recording stage, and the recording stage uses the mp4/webm muxer to write a local media file. I was using mp4-muxer and everything works fine. But when I switch to webm-muxer I get the error below and the recorder refuses to write the file. I am inserting keyframes every 10 milliseconds. I wonder whether something is wrong with my usage. Are there any extra options we need to pass compared to the MP4 muxer?
Current Matroska cluster exceeded its maximum allowed length of 32768 milliseconds. In order to produce a correct WebM file, you must pass in a video key frame at least every 32768 milliseconds.
Please advise.
Thanks
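Since WebCodecs encoders only honor `keyFrame: true` as a per-`encode()` request, one way to guarantee the muxer's cluster limit is met is a timestamp-based cadence check. A sketch (the 5-second interval below is an arbitrary choice, well under the 32768 ms limit from the error message):

```javascript
// Sketch: request a key frame whenever at least `intervalMs` of media time
// has elapsed since the last one, keeping every Matroska cluster well under
// the 32768 ms maximum the error message mentions.
function keyFramePlanner(intervalMs = 5000) {
  let lastKeyMs = -Infinity; // force a key frame on the very first call
  return function shouldRequestKeyFrame(timestampMs) {
    if (timestampMs - lastKeyMs >= intervalMs) {
      lastKeyMs = timestampMs;
      return true;
    }
    return false;
  };
}
```

Usage would be along the lines of `encoder.encode(frame, { keyFrame: planner(frame.timestamp / 1000) })`, converting microsecond timestamps to milliseconds first.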
When encoding audio-only content, is the audio set to track 1? How can I set it to track 1?
The demo fails in Chrome mobile (110 & 111) with these errors:
DOMException: Input audio buffer is incompatible with codec parameters
Uncaught (in promise) DOMException: Failed to execute 'encode' on 'AudioEncoder': Cannot call 'encode' on a closed codec
It looks like it should work:
https://caniuse.com/webcodecs
Hi, we noticed that you removed the onDone method of StreamTarget in v4.0.0. Is there an alternative way to reliably know when all data has passed through the muxer once muxer.finalize() has been called?
We forward the data sent via StreamTarget to a file, but we can't use FileSystemWritableFileStream directly for various reasons, and we used the onDone method as the trigger to close the file handle.
Hello maintainers and community members,
I'm currently working on a project that uses PHP for the backend and would like to take advantage of the webm-muxer TypeScript package for WebM/Matroska multiplexing. Given the advantages of this package, including its speed, size, and support for both video and audio as well as live-streaming, I believe it could greatly benefit our workflow.
Here are my main questions and areas of concern:
- Since webm-muxer is written in TypeScript, is there a recommended approach to call its functions from PHP, possibly via a Node.js bridge? Has anyone successfully integrated it using solutions like phpexecjs or others?
- Does webm-muxer have any built-in utilities for managing temporary files, or would this have to be managed entirely on the PHP side?
- Is it safe to use webm-muxer in concurrent scenarios?
- Should we call the webm-muxer API directly, or would it be recommended to build a custom wrapper tailored to our application's needs?
- How does webm-muxer report errors, and what would be the best way to catch and handle these errors on the PHP side?
- As new versions of webm-muxer are released, what's the best approach to ensure that the PHP integration remains stable and up-to-date?
I appreciate any feedback, examples, or pointers from those who have attempted or succeeded in such an integration. Thank you in advance for your help and insights!
How do you think we could add support for subtitles? Maybe support for WebVTT (S_TEXT/WEBVTT) tracks as specified in:
https://www.matroska.org/technical/subtitles.html
I know this is a long shot; just starting the discussion here...
Hi there - I'm trying to use this library to encode a VideoFrame sequence with alpha into the container (ref #9). I see that it should be possible to write an alpha channel to VP8, because WebMWriter does it using BlockAdditions / BlockAdditional. Would it be feasible to add this to this library?
Thanks for the great work!
Safari is adding WebCodecs support (video only for now) in the latest dev releases (https://caniuse.com/webcodecs).
Currently the demo fails with this error:
[Error] Unhandled Promise Rejection: ReferenceError: Can't find variable: AudioEncoder
This is a question somewhat unrelated to this library, but:
I know most browsers have a public ticket system that allows devs to track the progress of features being added/fixed. I looked everywhere yesterday and couldn't find any mention of a timeline for VideoEncoder support in Firefox, like if it was even on their radar or not.
Do you know where to look for this? Are you in any secret discords where they talk about it?
Love your library by the way, was a breeze to implement & use, with no headaches yet.
Hi there, can webm files be demuxed into chunked data, which then can be fed into VideoDecoder?
First of all - thank you for creating this amazing lib! I'm going to use it in the https://screen.studio rendering & encoding pipeline.
In my pipeline, I need to transcode the .webm file into .mp4 (I hoped the vp9 codec could be used directly in .mp4 without transcoding, but it will not play on QuickTime on macOS).
What I can do is wait for the .webm file to be ready and then start transcoding. This will work, but as export speed is critical for me, I'd like to already start transcoding even before the .webm video file is ready (aka all video chunks being added).
Thus my question is - is it possible to get a file data buffer while I'm adding video chunks so I can already pass it to ffmpeg? This would allow me to parallelize encoding .webm and transcoding it to mp4.
Thank you!
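I believe the streaming targets hand the consumer `(data, position)` callbacks, and the position can jump backwards because the muxer patches sizes and cues at finalize time; any consumer that forwards data early has to tolerate such overwrites, so bytes passed to ffmpeg before `finalize()` may be stale. A self-contained sketch (my assumption about the callback shape; check the current API) of assembling those writes while muxing is still in progress:

```javascript
// Sketch: apply (data, position) writes, possibly overlapping or
// backwards-seeking, into one growing buffer, as a stream consumer must
// if it wants access to the file while chunks are still being added.
function makeAssembler() {
  let buffer = new Uint8Array(0);
  return {
    write(data, position) {
      const end = position + data.length;
      if (end > buffer.length) {               // grow to fit the new write
        const grown = new Uint8Array(end);
        grown.set(buffer);
        buffer = grown;
      }
      buffer.set(data, position);              // overwrites are expected
    },
    get bytes() { return buffer; },
  };
}
```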
Hi @Vanilagy
Just wanted to check if there is any way we can use the streaming option to stream data to web storage like IndexedDB.
I would like to avoid the array buffer target, as in-memory usage may grow for larger videos, but I don't have the flexibility to prompt for saving to a file, so I was wondering if an in-memory streaming option is available somehow.
Thanks,
Neeraj
Hi @Vanilagy!
First of all, thank you for creating this amazing library and for the active maintenance. This is super helpful for a use case that I have been working on.
I was following this comment and had a couple of doubts:
- Should I use the chunked approach or go without it, and why?
- WriteStream will require this capability to be present at some point so that the writes happen efficiently.
Another quick question: is it possible to increase the width, height or bitrate configurations of the Muxer midway?
Thanks,
Neeraj
Hi there,
I'm currently working on a project that involves live streaming to YouTube using the HLS protocol. I came across your webm-muxer library and was impressed with its performance and simplicity.
However, I'm having trouble figuring out how to use it with YouTube's Live Ingest feature. I was wondering if you could provide some guidance or examples on how to do this.
Any help would be greatly appreciated!
Thank you.
Best regards.
I wonder: if I get a Chrome build with AAC support already in place, can I mux it into an MKV using codec ID "A_AAC/MPEG4/LC"?
Hi Vanilagy,
For an Electron app, I have to stream the creation of a video without being able to use the web File System API.
So I use "fs", and I wanted to know if there is a way to stream like with the web File System API. Currently I'm using the buffer target, but it's not ideal because I have long 4K videos.
Is there a possibility to do something?
Thank you!
Are there plans to add VideoDecoder support?
Chrome 31 added support for alpha transparency in WebM videos.
How do I encode alpha videos with webm-muxer? I tried the VideoEncoderConfig option:
alpha: 'keep', // keep alpha channel
but it doesn't work.
Hi there! I'm wondering how to use this library for screen recording since I'm not using Canvas. Also, I'll be speaking into a microphone while recording and I'd like to merge the audio from the microphone with the video. Can you guide me on how to do that? Thanks!
import { Muxer, ArrayBufferTarget } from 'webm-muxer';

let audioTrack: MediaStreamTrack;
let audioTrack1: MediaStreamTrack;
let audioEncoder: AudioEncoder | null;
let videoEncoder: VideoEncoder | null;
let muxer: Muxer<ArrayBufferTarget> | null;

async function start() {
    let userMedia = await navigator.mediaDevices.getUserMedia({ video: false, audio: true });
    let _audioTrack = userMedia.getAudioTracks()[0];
    let audioSampleRate = _audioTrack?.getCapabilities().sampleRate?.max || 22050;
    let displayMedia = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
    let _audioTrack1 = displayMedia.getAudioTracks()[0];
    let audioSampleRate1 = _audioTrack1?.getCapabilities().sampleRate?.max || audioSampleRate;
    let _muxer = new Muxer({
        target: new ArrayBufferTarget(),
        video: {
            codec: 'V_VP9',
            width: 1280,
            height: 720
        },
        audio: {
            codec: 'A_OPUS',
            sampleRate: audioSampleRate1,
            numberOfChannels: 1
        },
        firstTimestampBehavior: 'offset' // Because we're directly piping a MediaStreamTrack's data into it
    });
    let _videoEncoder = new VideoEncoder({
        output: (chunk, meta) => _muxer.addVideoChunk(chunk, meta),
        error: (e) => console.error(e)
    });
    _videoEncoder.configure({
        codec: 'vp09.00.10.08',
        width: 1280,
        height: 720,
        bitrate: 1e6
    });
    let _audioEncoder = new AudioEncoder({
        output: (chunk, meta) => _muxer.addAudioChunk(chunk, meta),
        error: (e) => console.error(e)
    });
    _audioEncoder.configure({
        codec: 'opus',
        numberOfChannels: 1,
        sampleRate: audioSampleRate1,
        bitrate: 64000
    });
    writeAudioToEncoder(_audioEncoder, _audioTrack);
    writeAudioToEncoder(_audioEncoder, _audioTrack1);
    muxer = _muxer;
    audioEncoder = _audioEncoder;
    videoEncoder = _videoEncoder; // without this, endRecording() can never flush the video encoder
    audioTrack = _audioTrack;
    audioTrack1 = _audioTrack1;
}

function writeAudioToEncoder(audioEncoder: AudioEncoder, audioTrack: MediaStreamTrack) {
    // Create a MediaStreamTrackProcessor to get AudioData chunks from the audio track
    let trackProcessor = new MediaStreamTrackProcessor({ track: audioTrack });
    let consumer = new WritableStream({
        write(audioData) {
            audioEncoder.encode(audioData);
            audioData.close();
        }
    });
    trackProcessor.readable.pipeTo(consumer);
}

let frameCounter = 0;

function encodeVideoFrame(videoEncoder: VideoEncoder) {
    let frame = new VideoFrame(canvas, {
        timestamp: ((frameCounter * 1000) / 30) * 1000
    });
    frameCounter++;
    videoEncoder.encode(frame, { keyFrame: frameCounter % 30 === 0 });
    frame.close();
}

const endRecording = async () => {
    audioTrack?.stop();
    audioTrack1?.stop();
    await audioEncoder?.flush();
    await videoEncoder?.flush();
    muxer?.finalize();
    if (muxer) {
        let { buffer } = muxer.target;
        downloadBlob(new Blob([buffer]));
    }
    audioEncoder = null;
    videoEncoder = null;
    muxer = null;
};

const downloadBlob = (blob: Blob) => {
    let url = window.URL.createObjectURL(blob);
    let a = document.createElement('a');
    a.style.display = 'none';
    a.href = url;
    a.download = 'picasso.webm';
    document.body.appendChild(a);
    a.click();
    window.URL.revokeObjectURL(url);
};
I have a couple of questions. Can this library merge two audio segments into one media file? And is it possible to process videos without using Canvas?