
audioworklet-polyfill

Strictly unofficial polyfill for the Web Audio API AudioWorklet. The processor runs in a Web Worker, which is connected to a main-thread ScriptProcessorNode via a SharedArrayBuffer.

edit: SharedArrayBuffers (SABs) are currently disabled in Firefox and Safari due to security concerns. As a workaround, the polyfill falls back to transferable ArrayBuffers that are bounced back and forth between the main thread and the web worker. This requires double buffering, which increases latency. Even so, the polyfill still works reasonably well in all tested user agents. SABs will be re-enabled when they become available again.
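
To illustrate the fallback, here is a minimal sketch of a transferable-buffer round trip (the identifiers are hypothetical and do not match the polyfill's real internals). Because a transferred buffer is unusable by the sender while in transit, playback has to run from a second buffer while the first one is being filled, hence the double buffering and the extra block of latency.

// hypothetical sketch of the transferable-ArrayBuffer fallback
const worker = new Worker("audioworker.js");
let spare = new ArrayBuffer(512 * Float32Array.BYTES_PER_ELEMENT);

function requestBlock() {
  // the transfer list moves the buffer to the worker zero-copy;
  // the main thread loses access to it until it is posted back
  worker.postMessage({ type: "render", buffer: spare }, [spare]);
}

worker.onmessage = (e) => {
  // the worker has filled the buffer with processed samples
  const filled = new Float32Array(e.data.buffer);
  // ... copy `filled` into the ScriptProcessorNode output buffer ...
  spare = e.data.buffer;  // reclaim ownership and reuse the buffer
  requestBlock();         // kick off the next round trip
};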

demos

https://webaudiomodules.org/wamsynths

Tested in stable Firefox 75.0 and Safari 12.1.2.

More info at webaudiomodules.org

usage

<script src="audioworklet.js"></script>
<script>
// audioworker.js should also reside at root
const context = new AudioContext();

// -- buflenSPN defines ScriptProcessorNode buffer length in samples
// -- default is 512. use larger values if there are audible glitches
AWPF.polyfill(context, { buflenSPN:512 }).then(() => {
  let script = document.createElement("script");
  script.src = "my-worklet.js";
  script.onload = () => {
    // that's it, then just proceed 'normally'
    // const awn = new MyAudioWorkletNode(context);
    // ...
  }
  document.head.appendChild(script);    
});
</script>

AWPF.polyfill() resolves immediately if the polyfill is not required. Note that polyfilled AudioWorklet inputs (if any) need to be connected as in sourceNode.connect(awn.input).
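
For example (a sketch, assuming my-worklet.js has registered MyAudioWorkletNode as in the snippet above; AWPF.isAudioWorkletPolyfilled is the flag set by the polyfill):

const awn = new MyAudioWorkletNode(context);
const source = context.createBufferSource();

// native AudioWorkletNode: connect to the node itself;
// polyfilled: connect to awn.input instead
source.connect(AWPF.isAudioWorkletPolyfilled ? awn.input : awn);
awn.connect(context.destination);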

description

audioworklet.js polyfills AudioWorkletNode and creates a web worker. The worker is initialized with the audioworker.js script, which in turn polyfills AudioWorkletGlobalScope and AudioWorkletProcessor. audioWorklet.addModule() is thereby routed to the web worker's importScripts(), and raw audio processing takes place off the main thread. Processed audio is written into a SAB (or a transferable ArrayBuffer when SABs are unavailable), which is read in the main thread's ScriptProcessorNode (SPN) onaudioprocess() for audio output.
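
Conceptually the routing looks roughly like this (a simplified sketch, not the polyfill's actual code):

// main thread (audioworklet.js), simplified sketch
context.audioWorklet = {
  addModule(url) {
    // forward the module URL to the worker instead of loading it here;
    // a real implementation resolves only after the worker confirms
    AWPF.worker.postMessage({ type: "addModule", url: url });
    return Promise.resolve();
  }
};

// worker (audioworker.js), simplified sketch
onmessage = (e) => {
  if (e.data.type === "addModule") {
    // importScripts() runs the processor code in the worker's global
    // scope, where AudioWorkletProcessor etc. have been polyfilled
    importScripts(e.data.url);
  }
};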

caveats

Due to SPN restrictions, the number of input and output ports is limited to 1, and the minimum buffer length is 256. I have also cut corners here and there, so the polyfill does not follow the spec accurately in all details. Please raise an issue if you find an offending conflict.

AudioParams are still unsupported.
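
Until then, one workaround is to send parameter values over the node's MessagePort, which is part of the standard AudioWorklet API (a sketch, assuming the polyfill wires up port on both ends):

// main thread: push a parameter change to the processor
awn.port.postMessage({ param: "gain", value: 0.5 });

// processor side (runs in the web worker under this polyfill)
class GainProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.gain = 1;
    this.port.onmessage = (e) => { this.gain = e.data.value; };
  }
  process(inputs, outputs) {
    const input = inputs[0], output = outputs[0];
    for (let ch = 0; ch < output.length; ch++) {
      for (let i = 0; i < output[ch].length; i++) {
        output[ch][i] = (input[ch] ? input[ch][i] : 0) * this.gain;
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor("gain-processor", GainProcessor);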

similar libraries

@developit has implemented a similar polyfill that uses an isolated main thread scope for audio processing.

contributors

amilajack, jariseon


issues

Issue migrating to this worklet polyfill

Hi,
I'm facing an issue using this polyfill. Let me first explain why I want to use it. I have implemented an AudioWorklet processor that works fine in Chrome, and I also want it to work in iOS and Android browsers. The problem is that the polyfill I'm currently using - https://github.com/GoogleChromeLabs/audioworklet-polyfill - uses the main thread for audio rendering, which is why I am getting glitchy audio.

Therefore I wanted to use this implementation instead. To explain my approach, I will first show the default code that works without a polyfill when the AudioWorklet API is present.

Default Code

Folder structure

public/
     |___ index.html
     |___ audioprocessor.js

Code

index.html

<!DOCTYPE html>
<head></head>
<body>
  <script>
    (async function () {
      // audioworker.js should also reside at root
      const audioContext = new AudioContext();

      // Initialize module and create a node for processing.
      await audioContext.audioWorklet.addModule("audioprocessor.js");
      const pn = new AudioWorkletNode(audioContext, "audioprocessor");

      // Get stream and create audio graph source
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const mic = audioContext.createMediaStreamSource(stream);
      // Connect the graph.
      mic.connect(pn);
      pn.connect(audioContext.destination);

      // Wait for 10 seconds and stop the stream as well as
      // the audio processor.
      setTimeout(() => {
        stream.getTracks().forEach(track => {
          track.stop();
        });
      }, 10000);
    })();
  </script>
</body>
</html>

audioprocessor.js

class Processor extends AudioWorkletProcessor {
    process(inputs, outputs, parameters) {
        // necessary code
    }
}

registerProcessor('audioprocessor', Processor);

Attempt at migration.

From the documentation I came across and from my understanding of the usage, I attempted to migrate this, but it's not working. Here are the details.

Folder structure

public/
     |___ index.html
     |___ audioprocessor.js
     |___ audioworker.js
     |___ audioworklet.js

Code

index.html

<!DOCTYPE html>
<head>

<script src="audioworklet.js"></script>

</head>
<body>

    <script>
      // audioworker.js should also reside at root
      const audioContext = new AudioContext();

      // -- buflenSPN defines ScriptProcessorNode buffer length in samples
      // -- default is 512. use larger values if there are audible glitches
      AWPF.polyfill(audioContext, { buflenSPN:512 }).then(async () => {

        // Initialize module and create a node for processing.
        await audioContext.audioWorklet.addModule("audioprocessor.js");
        const pn = new AudioWorkletNode(audioContext, "audioprocessor");

        // Get stream and create audio graph source
        const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
        const mic = audioContext.createMediaStreamSource(stream);
        // Connect the graph.
        mic.connect(pn);
        pn.connect(audioContext.destination);

        // Wait for 10 seconds and stop the stream as well as
        // the audio processor.
        setTimeout(() => {
            stream.getTracks().forEach(track => {
                track.stop();
            });
        }, 10000);
      });
    </script>
</body>
</html>

audioprocessor.js

class Processor extends AudioWorkletProcessor {
    process(inputs, outputs, parameters) {
        // necessary code
    }
}

registerProcessor('audioprocessor', Processor);

I'd be grateful if someone could help me fill in the gaps. I think this is going to have much better performance on mobile browsers.

Safari: crash

On Safari 11.1.2 (OSX/Sierra), I am getting the following error:

Unhandled Promise Rejection: TypeError: undefined is not an object (evaluating 'AWPF.worker.onmessage')

I think this happens because in the following code:

fetch(AWPF.origin + "/audioworker.js").then(function (resp) {
  resp.text().then(function (s) {
    var u = window.URL.createObjectURL(new Blob([s]));
    AWPF.worker = new Worker(u);
    AWPF.worker.postMessage({ type:"init", sampleRate:scope.sampleRate });
  })
})

console.warn('Using Worker polyfill of AudioWorklet, audio will not be performance isolated.');
AWPF.isAudioWorkletPolyfilled = true;
resolve();

The promise may be resolved before the fetch is done. In that case, AWPF.worker may be accessed before it is set.
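
A sketch of a fix would be to move the warning, the flag and resolve() into the inner callback, so the promise resolves only once AWPF.worker exists:

fetch(AWPF.origin + "/audioworker.js").then(function (resp) {
  resp.text().then(function (s) {
    var u = window.URL.createObjectURL(new Blob([s]));
    AWPF.worker = new Worker(u);
    AWPF.worker.postMessage({ type:"init", sampleRate:scope.sampleRate });
    console.warn('Using Worker polyfill of AudioWorklet, audio will not be performance isolated.');
    AWPF.isAudioWorkletPolyfilled = true;
    resolve(); // AWPF.worker is now guaranteed to be set
  });
});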

For reference, this is the code I am using:

        this.context = new (window.AudioContext || window.webkitAudioContext)();

        AWPF.polyfill(this.context).then(() => {
            this.context.audioWorklet.addModule('js/empty-processor.js').then(() => {
             // ...
            });
        });

Is OfflineAudioContext supported?

I've been reading up on this project's description (as well as the code itself), and first I'd like to say I appreciate the effort behind it.

However, I saw that it uses dedicated workers for audio processing and passes audio data back and forth, which is asynchronous.

So my question is: how does this work with OfflineAudioContext, in which the onaudioprocess handler can be called faster than time progresses? Does it work at all?
