
microphone-stream

Node-style stream for getUserMedia


If you just want to get some audio data from your microphone, this is what you're looking for!

Converts a MediaStream (from getUserMedia) into a standard Node.js-style stream for easy pipe()ing.

Note: This only works in a limited set of browsers (typically with webpack or browserify), and in Chrome only over https or on localhost. It does not work in Node.js.

Example

const getUserMedia = require('get-user-media-promise');
const MicrophoneStream = require('microphone-stream').default;

document.getElementById('my-start-button').onclick = function() {

  // Note: in most browsers, this constructor must be called in response to a click/tap, 
  // or else the AudioContext will remain suspended and will not provide any audio data.
  const micStream = new MicrophoneStream();

  getUserMedia({ video: false, audio: true })
    .then(function(stream) {
      micStream.setStream(stream);
    }).catch(function(error) {
      console.log(error);
    });

  // get Buffers (Essentially a Uint8Array DataView of the same Float32 values)
  micStream.on('data', function(chunk) {
    // Optionally convert the Buffer back into a Float32Array
    // (This actually just creates a new DataView - the underlying audio data is not copied or modified.)
    const raw = MicrophoneStream.toRaw(chunk);
    //...

    // note: if you set options.objectMode=true, the `data` event will output AudioBuffers instead of Buffers
  });

  // or pipe it to another stream
  micStream.pipe(/*...*/);

  // Access the internal audioInput node for connecting to other nodes
  micStream.audioInput.connect(/*...*/);

  // It also emits a format event with various details (frequency, channels, etc)
  micStream.on('format', function(format) {
    console.log(format);
  });

  // Stop when ready
  document.getElementById('my-stop-button').onclick = function() {
    micStream.stop();
  };
};

API

new MicrophoneStream(opts) -> Readable Stream

Where opts is an options object with the following defaults (a short usage sketch follows the option descriptions):

{
  stream: null,
  objectMode: false,
  bufferSize: null,
  context: null
}
  • stream: the MediaStream instance. For iOS compatibility, it is recommended that you create the MicrophoneStream instance in response to a user tap, before you have a MediaStream, and then call setStream() once the MediaStream is available.

  • bufferSize: Possible values: null, 256, 512, 1024, 2048, 4096, 8192, 16384. From Mozilla's Docs:

    It is recommended for authors to not specify this buffer size and allow the implementation to pick a good buffer size to balance between latency and audio quality.

  • objectMode: if true, stream enters objectMode and emits AudioBuffers instead of Buffers. This has implications for pipe()'ing to other streams.

  • context: the AudioContext instance. If omitted, one will be created automatically.
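For example, a minimal sketch of passing explicit options (the shared AudioContext and the 4096-sample buffer size are illustrative choices, not recommendations from this library):

const MicrophoneStream = require('microphone-stream').default;

// Reuse one AudioContext across the app (illustrative).
const audioContext = new AudioContext();

const micStream = new MicrophoneStream({
  objectMode: true,   // emit AudioBuffers instead of Buffers
  bufferSize: 4096,   // or leave null to let the browser choose
  context: audioContext
});

// The MediaStream itself can still be supplied later via setStream(), as in the example above.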

.setStream(mediaStream)

Sets the mediaStream. Necessary for iOS 11 support, where the underlying AudioContext must be resumed in response to a user tap, but the mediaStream may not be available until later.

.stop()

Stops the recording. Note: Some versions of Firefox leave the recording icon in place after recording has stopped.

.pauseRecording()

Temporarily stop emitting new data. Audio data received from the microphone while paused will be dropped.

Note: the underlying Stream interface has a .pause() API that causes new data to be buffered rather than dropped.

.playRecording()

Resume emitting new audio data after pauseRecording() was called.
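
For example, a minimal sketch that toggles between the two calls (the my-pause-button element is hypothetical, and micStream is the instance from the example above):

let paused = false;

document.getElementById('my-pause-button').onclick = function() {
  if (paused) {
    micStream.playRecording();   // resume emitting data events
  } else {
    micStream.pauseRecording();  // audio received while paused is dropped, not buffered
  }
  paused = !paused;
};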

Event: data

Emits either a Buffer with raw 32-bit floating-point audio data or, if objectMode is set, an AudioBuffer containing the data plus some metadata.
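
For example, a sketch of a data handler with objectMode enabled (AudioBuffer and its getChannelData() method are standard Web Audio APIs):

// With objectMode: true, each chunk is an AudioBuffer.
micStream.on('data', function(audioBuffer) {
  // Float32Array of samples for the first (often only) channel.
  const samples = audioBuffer.getChannelData(0);
  console.log('received', samples.length, 'samples at', audioBuffer.sampleRate, 'Hz');
});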

Event: format

One-time event with details of the audio format. Example:

{
  channels: 1,
  bitDepth: 32,
  sampleRate: 48000,
  signed: true,
  float: true
}

Note: WebAudio's Float32 data can be converted to L16 format, the format most commonly used for the data portion of .wav files (see RFC 1890 section 4.4.8).
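
The core idea is to clamp each 32-bit float sample to the range [-1, 1] and scale it to a signed 16-bit integer. The function below is an illustrative sketch, not code from this library:

// Convert 32-bit float samples (range -1..1) to 16-bit linear PCM (L16).
function floatTo16BitPCM(float32Array) {
  const int16 = new Int16Array(float32Array.length);
  for (let i = 0; i < float32Array.length; i++) {
    const s = Math.max(-1, Math.min(1, float32Array[i]));
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return int16;
}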

MicrophoneStream.toRaw(Buffer) -> Float32Array

Converts a Buffer (from a data event or from calling .read()) back to the original Float32Array DataView format. (The underlying audio data is not copied or modified.)
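
For example, a sketch that converts each chunk back to a Float32Array and computes a simple RMS level (the level meter is purely illustrative):

micStream.on('data', function(chunk) {
  // Reinterpret the Buffer as the original Float32 samples (no copy).
  const raw = MicrophoneStream.toRaw(chunk);

  let sumOfSquares = 0;
  for (let i = 0; i < raw.length; i++) {
    sumOfSquares += raw[i] * raw[i];
  }
  console.log('RMS level:', Math.sqrt(sumOfSquares / raw.length));
});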
