
waveform-data.js's Introduction


waveform-data.js

waveform-data.js is a JavaScript library for creating zoomable representations of audio waveforms to enable visualisation of audio content.

waveform-data.js is part of a BBC R&D Browser-based audio waveform visualisation software family:

  • audiowaveform: C++ program that generates waveform data files from MP3 or WAV format audio.
  • audio_waveform-ruby: A Ruby gem that can read and write waveform data files.
  • waveform-data.js: JavaScript library that provides access to precomputed waveform data files, or can generate waveform data using the Web Audio API.
  • peaks.js: JavaScript UI component for interacting with waveforms.

We use these projects within the BBC in applications such as the BBC World Service Radio Archive and browser-based editing and sharing tools for BBC content editors.

Example of what it helps to build

Install

Use npm to install waveform-data.js, for both Node.js and browser-based applications:

npm install --save waveform-data

Usage and examples

waveform-data.js is available as a UMD module so it can be used from a <script> tag, or as a RequireJS or CommonJS module. See dist/waveform-data.js and dist/waveform-data.min.js.

Importing waveform-data.js

Using a script tag

Simply add waveform-data.js in a script tag in your HTML page:

<!DOCTYPE html>
<html>
  <body>
    <script src="/path/to/waveform-data.js"></script>
    <script>
      var waveform = new WaveformData(...);
    </script>
  </body>
</html>

Using ES6

An ES6 module build is provided for use with bundlers such as Webpack and Rollup. See dist/waveform-data.esm.js.

import WaveformData from 'waveform-data';

Using RequireJS

The UMD bundle can be used with RequireJS:

define(['WaveformData'], function(WaveformData) {
  // ...
});

Using CommonJS (Node.js)

A CommonJS build is provided for use with Node.js. See dist/waveform-data.cjs.js.

const WaveformData = require('waveform-data');

Receive binary waveform data

You can create and initialise a WaveformData object from waveform data in either binary or JSON format, using the Fetch API, as follows.

Binary format

Use audiowaveform to generate binary format waveform data, using a command such as:

audiowaveform -i track.mp3 -o track.dat -b 8 -z 256

Copy the waveform data file track.dat to your web server, then use the following code in your web application to request the waveform data:

fetch('https://example.com/waveforms/track.dat')
  .then(response => response.arrayBuffer())
  .then(buffer => WaveformData.create(buffer))
  .then(waveform => {
    console.log(`Waveform has ${waveform.channels} channels`);
    console.log(`Waveform has length ${waveform.length} points`);
  });

JSON format

Alternatively, audiowaveform can generate waveform data in JSON format:

audiowaveform -i track.mp3 -o track.json -b 8 -z 256

Use the following code to request the waveform data:

fetch('https://example.com/waveforms/track.json')
  .then(response => response.json())
  .then(json => WaveformData.create(json))
  .then(waveform => {
    console.log(`Waveform has ${waveform.channels} channels`);
    console.log(`Waveform has length ${waveform.length} points`);
  });

Using the Web Audio API

You can also create waveform data from audio in the browser, using the Web Audio API.

As input, you can either use an ArrayBuffer containing the original encoded audio (e.g., in MP3, Ogg Vorbis, or WAV format), or an AudioBuffer containing the decoded audio samples.

Note that this approach is generally less efficient than pre-processing the audio server-side, using audiowaveform.

Waveform data is created in two steps:

  • If you pass an ArrayBuffer containing encoded audio, the audio is decoded using the Web Audio API's decodeAudioData method. This must be done on the browser's UI thread, so it is a blocking operation.

  • The decoded audio is processed to produce the waveform data. To avoid further blocking the browser's UI thread, by default this step is done using a Web Worker, if supported by the browser. You can disable the worker and run the processing in the main thread by setting disable_worker to true in the options.

const audioContext = new AudioContext();

fetch('https://example.com/audio/track.ogg')
  .then(response => response.arrayBuffer())
  .then(buffer => {
    const options = {
      audio_context: audioContext,
      array_buffer: buffer,
      scale: 128
    };

    return new Promise((resolve, reject) => {
      WaveformData.createFromAudio(options, (err, waveform) => {
        if (err) {
          reject(err);
        }
        else {
          resolve(waveform);
        }
      });
    });
  })
  .then(waveform => {
    console.log(`Waveform has ${waveform.channels} channels`);
    console.log(`Waveform has length ${waveform.length} points`);
  });

If you have an AudioBuffer containing decoded audio samples, e.g., from AudioContext.decodeAudioData, then you can pass this directly to WaveformData.createFromAudio:

const audioContext = new AudioContext();

audioContext.decodeAudioData(arrayBuffer)
  .then((audioBuffer) => {
    const options = {
      audio_context: audioContext,
      audio_buffer: audioBuffer,
      scale: 128
    };

    return new Promise((resolve, reject) => {
      WaveformData.createFromAudio(options, (err, waveform) => {
        if (err) {
          reject(err);
        }
        else {
          resolve(waveform);
        }
      });
    });
  })
  .then(waveform => {
    console.log(`Waveform has ${waveform.channels} channels`);
    console.log(`Waveform has length ${waveform.length} points`);
  });

Drawing a waveform image

Once you've created a WaveformData object, you can use it to draw a waveform image, using the Canvas API or a visualization library such as D3.js.

Canvas example

const waveform = WaveformData.create(raw_data);

const scaleY = (amplitude, height) => {
  const range = 256;
  const offset = 128;

  return height - ((amplitude + offset) * height) / range;
};

const ctx = canvas.getContext('2d');
ctx.beginPath();

const channel = waveform.channel(0);

// Loop forwards, drawing the upper half of the waveform
for (let x = 0; x < waveform.length; x++) {
  const val = channel.max_sample(x);

  ctx.lineTo(x + 0.5, scaleY(val, canvas.height) + 0.5);
}

// Loop backwards, drawing the lower half of the waveform
for (let x = waveform.length - 1; x >= 0; x--) {
  const val = channel.min_sample(x);

  ctx.lineTo(x + 0.5, scaleY(val, canvas.height) + 0.5);
}

ctx.closePath();
ctx.stroke();
ctx.fill();

D3.js example

See demo/d3.html.

HTML

<div id="waveform-container"></div>

JavaScript

const waveform = WaveformData.create(raw_data);
const channel = waveform.channel(0);
const container = d3.select('#waveform-container');
const x = d3.scaleLinear();
const y = d3.scaleLinear();
const offsetX = 100;

const min = channel.min_array();
const max = channel.max_array();

x.domain([0, waveform.length]).rangeRound([0, 1000]);
y.domain([d3.min(min), d3.max(max)]).rangeRound([offsetX, -offsetX]);

const area = d3.area()
  .x((d, i) => x(i))
  .y0((d, i) => y(min[i]))
  .y1((d, i) => y(d));

const graph = container.append('svg')
  .style('width', '1000px')
  .style('height', '200px')
  .datum(max)
  .append('path')
  .attr('transform', () => `translate(0, ${offsetX})`)
  .attr('d', area)
  .attr('stroke', 'black');

In Node.js

You can use waveform-data.js to consume or generate waveform data from a Node.js application, e.g., a web server.

const WaveformData = require('waveform-data');
const express = require('express');
const fs = require('fs');
const app = express();

app.get('/waveforms/:id.json', (req, res) => {
  res.set('Content-Type', 'application/json');

  fs.createReadStream(`path/to/${req.params.id}.json`)
    .pipe(res);
});

The following example shows a Node.js command-line application that requests waveform data from a web API and resamples it to a width of 2000 pixels.

#!/usr/bin/env node

// Save as: app/bin/cli-resampler.js

const WaveformData = require('waveform-data');
const request = require('superagent');
const args = require('yargs').argv;

request.get(`https://api.example.com/waveforms/${args.waveformid}.json`)
  .then(response => {
    const waveform = WaveformData.create(response.body);
    const resampledWaveform = waveform.resample({ width: 2000 });
    const channel = resampledWaveform.channel(0);

    process.stdout.write(JSON.stringify({
      min: channel.min_array(),
      max: channel.max_array()
    }));
  });

Usage: ./app/bin/cli-resampler.js --waveformid=1337

Data format

The file format used and consumed by WaveformData is documented here as part of the audiowaveform project.
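
For orientation, here is a sketch of reading the binary header with a DataView. The field names and offsets are my reading of the audiowaveform data format documentation and should be treated as assumptions; refer to that project for the authoritative layout.

```javascript
// Sketch: reading the header of an audiowaveform binary (.dat) file.
// Assumed layout (little-endian), per the audiowaveform data format docs:
//   int32  version            at offset 0
//   uint32 flags              at offset 4 (bit 0 set = 8-bit, clear = 16-bit)
//   int32  sample_rate        at offset 8
//   int32  samples_per_pixel  at offset 12
//   uint32 length             at offset 16 (number of waveform data points)
function readWaveformHeader(arrayBuffer) {
  const view = new DataView(arrayBuffer);

  return {
    version: view.getInt32(0, true),
    is8Bit: (view.getUint32(4, true) & 1) === 1,
    sampleRate: view.getInt32(8, true),
    samplesPerPixel: view.getInt32(12, true),
    length: view.getUint32(16, true)
  };
}
```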

JavaScript API

Please refer here for full API documentation.

Browser support

Any browser that supports ECMAScript 5 (for example, Array.prototype.forEach) is enough to use the library:

  • IE9+, Firefox Stable, Chrome Stable, Safari 6+ are fully supported;
  • IE10+ is required for the TypedArray Adapter;
  • Firefox 23+ and Webkit/Blink browsers are required for Web Audio API support.

Development

To develop the code, install Node.js and npm. After obtaining the waveform-data.js source code, run npm install to install Node.js package dependencies.

Credits

This library was written by:

Thank you to all our contributors.

This program contains code adapted from Audacity, used with permission.

License

See LICENSE for details.

Contributing

Every contribution is welcome, whether it's code, an idea, or a thank you!

Guidelines are provided, and every commit is tested against unit tests using the Karma test runner and the Chai assertion library.

Copyright

Copyright 2021 British Broadcasting Corporation

waveform-data.js's People

Contributors

a1k0n, afellman, artemkosenko, bitwit, chainlink, chrisn, dantist, davidturissini, dependabot[bot], dodds-cc, gr2m, jdelstrother, jonkoops, jonsadka, kangaroux, mdesenfants, semiaddict, sroucheray, thom4parisot, wong2



waveform-data.js's Issues

Supported way of accessing WaveformDataArrayBufferAdapter's DataView?

In v2 we used to be able to read the underlying ArrayBuffer from a WaveformData instance with waveform.adapter.data.buffer. In v3 we can still technically do it (waveform._adapter._data.buffer) but that doesn't seem ideal.

Would you be opposed to adding a public accessor for the DataView?

Can I use waveform-data with bytearray audio data?

Can I use waveform-data to create a waveform from raw PCM audio data in byte-array format? I have to deal with 16-bit signed PCM to create a waveform on a web page, and I am looking for an API that can support this audio data format. After reading the waveform-data.js docs on GitHub I cannot find the answer for using this API with byte-array audio data, so I decided to ask you directly.
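
One possible approach, sketched below under the assumption that you can build an AudioBuffer yourself: convert the 16-bit signed PCM samples to floats, wrap them in an AudioBuffer, and pass that via the audio_buffer option of WaveformData.createFromAudio described in the README. int16ToFloat32 and createAudioBufferFromPcm are hypothetical helper names, not part of the library.

```javascript
// Sketch: turning raw 16-bit signed PCM into an AudioBuffer suitable for
// WaveformData.createFromAudio's audio_buffer option. Hypothetical helpers.
function int16ToFloat32(int16Array) {
  const float32 = new Float32Array(int16Array.length);

  for (let i = 0; i < int16Array.length; i++) {
    // Scale from [-32768, 32767] to [-1.0, 1.0)
    float32[i] = int16Array[i] / 32768;
  }

  return float32;
}

function createAudioBufferFromPcm(audioContext, int16Array, sampleRate) {
  const samples = int16ToFloat32(int16Array);
  const audioBuffer = audioContext.createBuffer(1, samples.length, sampleRate);

  audioBuffer.copyToChannel(samples, 0);

  return audioBuffer;
}
```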

Can not do a function waveform

Is it possible to do a function like this:

if waveform is low then hide image, else show image.

repeating the check while the audio is playing.

Example:
audio-waveform

Deprecate point, segment, and offset methods

I'm currently working on #56, to add support for multi-channel waveforms, and as part of this I am reviewing all API methods provided by waveform-data.js.

There are a number of methods that are unused in Peaks.js and elsewhere:

  • .offset(), .offset_start, .offset_end, .offset_length, and .offset_duration
  • .segments and .set_segment()
  • .points, .set_point(), and .remove_point()

As multi-channel support will be a breaking API change, I am considering also removing the above methods at the same time. But, I don't want to break existing applications, so if you are using any of these methods, please leave a comment to let me know.

Using Angular: Uncaught ReferenceError: global is not defined

Hi.

I'm having an issue using this library:

import WaveformData from 'waveform-data';

@Injectable()
export class JSArtSoundService {
  ...
  getInfo(blob: Blob): Observable<WaveformData> {
    return from(blob.arrayBuffer()).pipe(
      map((ab: ArrayBuffer) => WaveformData.create(ab))
    );
  }
}

In the bowser I see this:

index.js:1 Uncaught ReferenceError: global is not defined
    at Object../node_modules/inline-worker/index.js (index.js:1)
    at __webpack_require__ (bootstrap:79)
    at Object../node_modules/waveform-data/lib/builders/audiodecoder.js (audiodecoder.js:4)
    at __webpack_require__ (bootstrap:79)
    at Object../node_modules/waveform-data/lib/builders/webaudio.js (webaudio.js:3)
    at __webpack_require__ (bootstrap:79)
    at Object../node_modules/waveform-data/waveform-data.js (waveform-data.js:5)
    at __webpack_require__ (bootstrap:79)
    at Module../src/app/services/sound.service.ts (sound.service.ts:1)
    at __webpack_require__ (bootstrap:79)

This is coming from the inline-worker package's index.js.

Any idea how to fix this ?

Thanks !

Waveform creation using WebAudio decodeAudioData() differs between browsers

Related to this audiowaveform issue, there is a difference between Chrome and Firefox when using decodeAudioData(). It appears that Chrome 75 skips information frames in MP3, while Firefox 67 does not (both browsers running on Ubuntu Linux).

I have created a demo page here that shows the difference between audiowaveform 1.0.12 and 1.1.0, where info frame skipping was introduced, and an example that uses the Web Audio API to compute the waveform data.

Also note WebAudio/web-audio-api#1305.

Running out of hardware contexts when using in iframe(s) in Chrome

When the waveform-data.js library is loaded in multiple iframes on the same page the current tab will run out of hardware contexts to spawn new AudioContext instances. This seems to be an issue with at least Chrome (latest as of today).

embedded-page.html

<script src="https://cdn.rawgit.com/bbcrd/waveform-data.js/master/dist/waveform-data.js"></script>

embedding-page.html

<iframe src="embedded-page.html"></iframe>
<iframe src="embedded-page.html"></iframe>
<iframe src="embedded-page.html"></iframe>
<iframe src="embedded-page.html"></iframe>
<iframe src="embedded-page.html"></iframe>
<iframe src="embedded-page.html"></iframe>
<iframe src="embedded-page.html"></iframe>

Demo (open dev console): https://jsfiddle.net/r9sv6q9f/

You should see the following error in the console:

Uncaught NotSupportedError: Failed to construct 'AudioContext': The number of hardware contexts provided (6) is greater than or equal to the maximum bound (6).

The error is produced by this line: https://github.com/bbcrd/waveform-data.js/blob/master/dist/waveform-data.js#L1380

If you need further information please let me know.

Render WaveformData object using peaks.js

After generating waveform json data server side and using new WaveformData(xhr.responseText, WaveformData.adapters.object); to create my WaveformData object, is there any way I can render the waveform by passing the WaveformData object to peaks.js? I can't use the json data to generate the waveform as it isn't stored on the server.

D3.js example not working

Hi,
I'm trying to visualize the D3 example from the readme; however, I'm not able to make it work.
I'm new to D3 and do not understand how I can render a waveform using that example. I think it's outdated and there are variables/methods that don't exist.
I'd like to draw a waveform using D3, so I modified the example as follows, but it renders at an absolute position on screen, without scaling.

    const channel = waveform.channel(0);
    const offsetX = 100;
    const min = channel.min_array();
    const max = channel.max_array();

    const svg = d3.select('#waveform-container')
        .append('svg')
        .attr("width", 100)
        .attr("height", 100)
        .attr('viewBox', '-5 -20 35 40');

    const x = d3.scaleLinear()
        .domain([0, waveform.length])
        .rangeRound([0, 1024]);

    const y = d3.scaleLinear()
        .domain([d3.min(min), d3.max(max)])
        .rangeRound([offsetX, -offsetX]);
            
    const area = d3.area()
        .x((d, i) => x(i))
        .y0((d, i) => y(min[i]))
        .y1((d, i) => y(d));

    svg.select('path')
        .datum(max)
        .attr('transform', () => `translate(0, ${offsetX})`)
        .attr('d', area);

Please help: how could I draw a waveform that adapts to the container's width?
Thanks!

Disabling web worker

Is there a way to disable the web worker and just run the process in the current thread?
I'm running a batch analysis in a thread pool with Threads.js, and this already creates a web worker for each task. Running a web worker inside a web worker causes problems when compiling Threads.js, and isn't very efficient either.
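
For reference, the disable_worker option mentioned earlier in this README covers this. A minimal sketch follows; buildOptions is a hypothetical helper, and the other fields mirror the README's createFromAudio example.

```javascript
// Sketch: build a createFromAudio options object that disables the Web
// Worker, using the disable_worker option documented in the README.
function buildOptions(audioContext, arrayBuffer) {
  return {
    audio_context: audioContext,
    array_buffer: arrayBuffer,
    scale: 128,
    disable_worker: true // run the processing on the current thread
  };
}

// Usage (browser):
// WaveformData.createFromAudio(buildOptions(ctx, buf), (err, waveform) => { ... });
```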

Get frequency data for a specific time window in a WebAudio context?

In my app I allow the user to upload a file; I use peaks.js to show a waveform using webAudio and allow the user to play the file.

For my application, I also want to be able to allow the user to visualize beats for specific frequencies. E.g. I'll have a light pulsing with the beat of the lowest third frequencies, one pulsing with the mid range frequencies, and one for high.

I see WebAudio lets me create an analyser, but I don't think it lets me specify a start time and duration to return frequency data for. I could use Wavesurfer to export PCM data, but I haven't found a library to process it.

Could waveform-data help me? And/or any suggestions? There is so much code floating around doing beat analysis, PCM export, and analyzing using WebAudio, I think it's just a matter of connecting the dots.....

Unable to visualise HLS chunks

I'm trying to visualise a period of N (time in seconds), using a HLS stream handled by hls.js.

I've created an AudioContext and connected it up with my media element correctly. I then constructed a new processor from the context using createScriptProcessor.
Binding a function to onaudioprocess, I grab each audioProcessingEvent.inputBuffer (AudioBuffer) over N seconds and append them to each other, ultimately creating an AudioBuffer representing a period of N seconds.

I then pass the constructed AudioBuffer to WaveformData.createFromAudio with a scale of 128. The output waveform seems OK at a glance, although I'm not too sure how to verify this...

I'm unable to represent the waveform data using the canvas example in the README.
Are there any tools I can use to verify that the data I've produced is correct? Or at least any points to look for.
Should I normalise the data in the ArrayBuffer produced between 0 and 1 before trying to render it? I've noticed there are lots of peaks and troughs.

Furthermore, I've tried to pass the waveform data produced to peaks.js for rendering. The duration of the output is correct; however, there are no data points displayed.
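
For reference, the kind of normalisation asked about above can be sketched as follows, assuming 8-bit waveform data whose sample values lie in [-128, 127]; normalizeSample is a hypothetical helper, not part of the library.

```javascript
// Sketch: normalise an 8-bit waveform sample value from [-128, 127] to [0, 1].
function normalizeSample(value) {
  return (value + 128) / 255;
}
```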

"can generate waveform data using the Web Audio API."

I'm only seeing examples to generate waves through raw_data which is .json or .dat.

But there are no examples of how to generate a wavedata.json through a direct link to either mp3 or wav. This should be possible, right?

Angular issues upgrading from 3.1.0 to 3.3.1

I am using the latest version of Angular 9 at the time of this writing. It is an Angular-CLI project. When I upgrade and try to serve the project, I am greeted with the following errors after upgrade:

ERROR in node_modules/waveform-data/waveform-data.d.ts:46:4 - error TS1036: Statements are not allowed in ambient contexts.

46 };
~
node_modules/waveform-data/waveform-data.d.ts:194:3 - error TS2309: An export assignment cannot be used in a module with other exported elements.

194 export = WaveformData;

Using waveform.min causes the call stack to exceed its limit

When using the waveform.min property I get the following error: Uncaught RangeError: Maximum call stack size exceeded. Seems my .dat file might be too big and causes the stack size to overflow.

This is the code I'm currently using:

var request = new XMLHttpRequest();
var waveform;

request.responseType = 'arraybuffer';
request.open('GET', '/test.dat');

request.addEventListener('load', function onLoad( evt ) {
  waveform = WaveformData.create(evt.target);
  var min = waveform.min;
});

request.send();

The .dat file I'm using can be found here.

Process audio file in PHP to generate waveform data

Just wondering if there is any way to process an audio into the data (JSON) file that is used by waveform-data?

It would make it easier for users who are using any CRM/web interface that is based on PHP to process a file to get the waveform data without needing to run a command line tool.

Right channel is never used

On https://github.com/bbc/waveform-data.js/blob/master/lib/builders/audiodecoder.js#L38-L39 it should read:

left_channel = audio_buffer.getChannelData(0);
right_channel = audio_buffer.getChannelData(1);

Also, the assumption that there are always 2 channels is flawed. Something like this would be more robust:

var channels = [];

for (var i = 0; i < audio_buffer.numberOfChannels; ++i) {
	channels[i] = audio_buffer.getChannelData(i);
}

for (var i = 0; i < audio_buffer.length; ++i) {
	var sample = 0;

	for (var channel = 0; channel < channels.length; ++channel) {
		sample += channels[channel][i];
	}

	sample = sample / channels.length * scale_adjuster;
}

Or even:

var channels = [];

for (var i = 0; i < Math.min(2, audio_buffer.numberOfChannels); ++i) { 
	channels[i] = audio_buffer.getChannelData(i);
}

To just consider stereo channels.

Combining multiple waveforms

Hi,

I'm currently working on the British Library's Save our Sounds project. We have a use case where we need to combine multiple separate waveforms into a single zoomable waveform on the client-side. Do you know of any examples of the best way to do this? Here's my current attempt:

https://github.com/edsilv/peaksjs-test/blob/gh-pages/multi-waveform.html

However this is crashing Chrome when certain durations are selected.

I'm pretty sure I'm approaching this naively, and need to be using something like resample?

Any pointers greatly appreciated....

ES6 examples in README.md

The README.md doesn't provide a basic example with import statements, when it really should, as it's what most developers will use.

resample `from` and `to` options

I have recently started playing around with this library. I have generated a .dat file using the CLI tool with the following command

audiowaveform -i audio.mp3 -o audio.dat -z 256 -b 8

My audio file length is approximately 10 minutes. And I used the resample method to be able to draw a scaled visualisation in an HTML5 Canvas.

Reading through the documentation I noticed two options from and to that can be passed to that method in an options object. However I could not get this to work.

Check the documentation for WaveformData#resample

fetch('audio.mp3')
  .then(response => response.arrayBuffer())
  .then(buffer => {
    const waveform = WaveformData.create(buffer);
    const resampled = waveform.resample({
      width: 500,
      from: 10,
      to: 20
    });
  });

My understanding of those two attributes is that it would return a WaveFormData object from/to a specific time within the audio file (i.e. I want to get my audio sample from second 10 to second 20 and scale them so that they can be drawn on a 500px wide canvas)

After digging in the source code, I could see no reference to from or to in lib/core.js. I'm not sure whether this is still relevant?

I am wondering what would be the right approach using waveform-data.js to achieve what I'd like to do?

I'm happy to PR the documentation updates as needed.

Thank you again for open sourcing this project 💯

Include builders/webaudio.js as browserify dependency

Great Library!
I'm currently including the builders/webaudio.js file as a totally separate script as it's not in picked up by browserify currently. Is there an easy way to include it in the WaveformData.adapters.arraybuffer namespace?

v2 dat and json toJSON() - adapters give different peaks

Hello!

I'm using audiowaveform v1.5.0 to generate both a json and dat file for testing atm.

  • same mp3
  • same zoom and bits
audiowaveform -i Vocals30.mp3 -o test.json -z 1000 -b 16

audiowaveform -i Vocals30.mp3 -o test.dat -z 1000 -b 16

I've noticed, after using WaveformData.create() on both of these loaded sources and then calling toJSON(), that the peaks array from the dat file doesn't look correct. The files have matching metadata at the top; it's just the data array that is quite different.

Are these outputs intended to be identical? I could give this a look.

Create adaptor for MediaStreams

Problem

Currently, waveform-data only supports rendering waveforms for audio files that have a known length. I would like to use this library to render waveforms from WebRTC MediaStreams.

Proposal

I propose that a new Adapter type be created that is backed by a ScriptProcessor node (https://developer.mozilla.org/en-US/docs/Web/API/ScriptProcessorNode). This would unlock rendering MediaStream waveforms, and, as a side effect, allow arbitrary combinations of AudioNodes to be rendered as well.

Caveats

Because of the nature of Media Streams vs Audio files, it is necessary to update or introduce API changes to allow for async activity. This represents a major change in existing waveform api, where it is currently possible to get min and max values via waveform.min and waveform.max calls. This change will require some sort of callback pattern, and I think that the Observable pattern is a good fit for this.

This change would also require that onAudioDecoded (https://github.com/bbc/waveform-data.js/blob/master/lib/builders/audiodecoder.js#L39) be broken down to return a DataView instead of a Waveform object.

This kind of adapter is also different from current adapters because there needs to be a way to tell the adapter to start and stop collecting data for its internal AudioBuffer.

Code

Here is my first pass implementation of a ScriptProcessorNode. Note that some of this code uses libraries that are available in my application at large, and I certainly wouldn't advocate using them in a real implementation. This is just to get the conversation started.

class ScriptProcessorAdapter {
    constructor(audioContext, processor) {
        this.audioContext = audioContext;
        this.processor = processor;
        this.buffer = audioContext.createBuffer(2, 1, audioContext.sampleRate);
        this.data = normalize(this.buffer);
    }

    start() {
        return Observable.create((o) => { //RXJS flavor observables
            const onAudioProcess = (evt) => {
                // get the processor node's input
                const { inputBuffer } = evt;
                const { length:start, audioContext, buffer } = this; // cache length
                 // We need to concat the new inputBuffer with what we already have
                this.buffer = concatenateAudioBuffers(audioContext, buffer, inputBuffer);
                // normalize this to DataView. `normalize` is a broken out version of `onAudioDecoded`
                this.data = normalize(this.buffer);
                o.next();
            }
            this.processor.addEventListener('audioprocess', onAudioProcess);

            return () => {
                this.processor.removeEventListener('audioprocess', onAudioProcess);
            }
        })
    }

    at(index) {
        return Math.round(this.data.getInt8(20 + index));
    }

    get length() {
        return this.data.getUint32(16, true);
    }

    fromResponseData(responseData) {
        return this;
    }
}

Usage:

const adapter = new ScriptProcessorAdapter(audioContext, scriptProcessor);
const waveform = new WaveformData(scriptProcessor, adapter);

// Elsewhere
const subscription = adapter.start()
  .subscribe(() => {
        // This line is a bit awkward because we have to tell the waveform that new data
        //  is present, and it needs to rebuild its min/max arrays
        this.waveform.offset(0, this.adapter.length);
        // render to canvas
        this.draw(this.waveform);
  });


// Elsewhere again
subscription.unsubscribe();

async resampling

Provide an additional callback argument, to compute the resampling on the next tick instead of in the current loop.

.min and .max perform expensive calculations

I just debugged a performance issue after moving from .forEach to a regular for loop to draw my waveform. Now, I would agree that the iterator methods are better most of the time, but still, I was very baffled to find that this change made my rendering ~50x slower. The reason is that data.min is a computed property, and it computes on every access. I would not expect waveform-data to make any calculations outside calls to resample, so I would suggest the min and max properties either be calculated when resampling, or at least be made lazy, so that they only ever compute once.
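
The lazy, compute-once behaviour suggested here can be sketched with a generic getter pattern. This is illustrative only, not the library's actual implementation.

```javascript
// Sketch: a lazy, compute-once accessor. The first access scans the samples
// once; later accesses return the cached value.
class LazyWaveform {
  constructor(samples) {
    this._samples = samples;
    this._min = null; // cached after first access
  }

  get min() {
    if (this._min === null) {
      let m = Infinity;

      for (const s of this._samples) {
        if (s < m) {
          m = s;
        }
      }

      this._min = m;
    }

    return this._min;
  }
}
```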

Export to JSON

Hey,

As I can see this library "can generate waveform data using the Web Audio API."

I'm wondering if it can export the generated wave into a JSON structure or data file just like the C++ library?

Thanks.

Outdated documentation?

The README shows this example:

xhr.addEventListener('load', (progressEvent) => {
  WaveformData.builders.webaudio(audioContext, progressEvent.target.response, (err, waveform) => {
    if (err) {
      console.error(err);
      return;
    }

    console.log(waveform.duration);
  });
});

But WaveformData.builders is undefined. I'm importing it like so:

import WaveformData from 'waveform-data';

Chrome crashes when generating a waveform of a large file

When I try to generate the waveform of a large file (55 minutes, MP3 320 kbps) in the browser (with Peaks.js), without passing a JSON waveform file, Chrome's memory usage rises rapidly. First it downloads the whole file (obviously), then the CPU reaches 50% on my dual-core VM, and quickly after that memory reaches 1 GB, and within a second 2 GB. After another few seconds the tab just crashes and I have to reload the page.

This isn't happening when I pass in a JSON-waveform. It is happening with version 1.4.3 and 1.5.1 (in combination with Peaks.js)

Looks like it's a memory issue, but I don't really know where to look to fix this issue. Would be nice if there's someone who does :-)

Is it possible to generate WaveformData from Node.js?

I believe that the short answer to this question is "no", given that the webaudio builder uses the AudioContext object from the browser.
However, maybe a 'nodebuilder' plugin could be created/PR'd that uses a different method to perform the decodeAudioData() function and complete the process, or that's what I'm thinking anyway.

For example, I found a WIP node web audio project here: https://github.com/sebpiq/node-web-audio-api , though I'm a bit iffy on its semi-complete status.

In any case I wanted to inquire here first before going on a fork adventure since I'm sure BBCRD understands waveform data better than I do.
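As a stopgap, once you have decoded PCM samples from any Node decoder (for example as a Float32Array), the per-pixel min/max reduction itself needs nothing from the Web Audio API. A minimal sketch, with the decoding step deliberately left out:

```javascript
// Sketch: reduce decoded PCM samples (one channel) to per-pixel
// min/max pairs, with `scale` input samples per output pixel.
// Decoding the MP3/WAV into `samples` is left to whatever Node
// decoder you choose; this is only the core reduction.
function pcmToMinMax(samples, scale) {
  const length = Math.ceil(samples.length / scale);
  const min = new Float32Array(length);
  const max = new Float32Array(length);

  for (let p = 0; p < length; p++) {
    let lo = Infinity, hi = -Infinity;
    const end = Math.min((p + 1) * scale, samples.length);
    for (let i = p * scale; i < end; i++) {
      if (samples[i] < lo) lo = samples[i];
      if (samples[i] > hi) hi = samples[i];
    }
    min[p] = lo;
    max[p] = hi;
  }
  return { min, max };
}
```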

Create Waveform Data Web Audio Node

Similar to #46, but I think a better approach is to implement this as a Web Audio node. Here is what I propose:

const waveformDataNode = createWaveformDataNode(audioContext, { scale: 512 }, (fullWaveform, sampleWaveform) => {
    // This callback gets two arguments:
    // fullWaveform: the full waveform for every sample that has been processed so far
    // sampleWaveform: the waveform for the current sample only
});

anyWebAudioNode.connect(waveformDataNode);
waveformDataNode.connect(anyOtherWebAudioNode);

// API call to start gathering waveform data as it happens, passthrough otherwise
waveformDataNode.beginRender();

Couple thoughts:

  1. By implementing this as a Web Audio node (a ScriptProcessorNode or possibly an AudioWorklet), we can generate a waveform from any audio data that can be processed via the Web Audio API. That means that we can get waveform data from attached gain nodes and microphone streams.

  2. This approach works with either a regular "AudioContext" or an "OfflineAudioContext".

  3. My initial implementation was just a "container" Waveform that held an array of other Waveforms. This may or may not be the best way, but I don't have a lot of experience (or use cases) for zooming in and out. Some help on this front would be much appreciated @chrisn

  4. There are some tasks that need to be done before this can be production ready, most notably this: #61. We simply cannot spin up a new Worker instance for every sample. It's very inefficient and has caused my browser to spin many times.

Using API to generate .dat or .json from .mp3

Hi,

I am trying to make use of peaks.js and waveform-data.js to show waveforms for my MP3 audio files. I went through the GitHub repos and found that, to generate audio waveforms, we need .dat or .json files. How can I convert my MP3 to a suitable format using your libraries and generate waveforms?
I am using Python's Django framework and tried the pydub library to convert the MP3 data and store it as JSON, but the array data seems to be in a different form from the test_data sample files in your repository. Can you please guide me on how to implement this in my Django website? There must be something obvious that I am missing.

Thanks

Resampling a time segment with "from" and "to" options doesn't exist in code, but is described in the documentation.

The API documentation describes the ability to resample a particular segment of the waveform; however, invoking the function as described only responds to the width property of the options object. Neither the from nor the to property has any effect on the output. Please see the fourth example referenced in the preceding link.

I searched the source and it seems this feature isn't implemented (and doesn't appear in the tests). Is this a holdover from a previous version? I can see how it might have been moved to Peaks at some point.
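Until from/to exist, one workaround is to slice the underlying min/max arrays by time before resampling or drawing. A sketch of the index arithmetic, using plain arrays rather than the library's actual accessors (which vary by version):

```javascript
// Sketch: map a [from, to] time range (in seconds) to indices into
// per-pixel min/max arrays, given the audio sample rate and the
// number of audio samples represented by each pixel (scale).
function segmentIndices(from, to, sampleRate, scale) {
  const secondsPerPixel = scale / sampleRate;
  return {
    start: Math.floor(from / secondsPerPixel),
    end: Math.ceil(to / secondsPerPixel)
  };
}

function sliceSegment(min, max, from, to, sampleRate, scale) {
  const { start, end } = segmentIndices(from, to, sampleRate, scale);
  return { min: min.slice(start, end), max: max.slice(start, end) };
}
```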

Remove InlineWorker from closure in getAudioDecoder

getAudioDecoder creates an InlineWorker on every invocation. This is fine for processing a single waveform, but it quickly becomes a problem when multiple waveforms are processed. It would be better if the worker were created only once.

would it be possible to stream audio data?

All of the examples I've seen operate on a complete buffer of data. What I'd like to do is generate a visualization of a waveform as it's being recorded in the browser.

Is this possible with waveform-data.js?

thanks for a great project by the way!
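The underlying min/max reduction can in principle be done incrementally as PCM chunks arrive (for example from a ScriptProcessorNode during recording). A sketch of a chunk-by-chunk accumulator, which is a hypothetical shape rather than an existing waveform-data API:

```javascript
// Sketch: accumulate per-pixel min/max pairs incrementally as PCM
// chunks arrive (e.g. during live recording), with `scale` input
// samples per output pixel. Not an existing waveform-data API.
class StreamingWaveform {
  constructor(scale) {
    this.scale = scale;
    this.min = [];
    this.max = [];
    this._count = 0; // samples seen in the current pixel
  }

  append(chunk) {
    for (let i = 0; i < chunk.length; i++) {
      if (this._count === 0) {
        this.min.push(Infinity);
        this.max.push(-Infinity);
      }
      const p = this.min.length - 1;
      if (chunk[i] < this.min[p]) this.min[p] = chunk[i];
      if (chunk[i] > this.max[p]) this.max[p] = chunk[i];
      this._count = (this._count + 1) % this.scale;
    }
  }
}
```

Redrawing only the most recent pixels after each append would give a live-updating visualisation without reprocessing the whole buffer.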
