web-audio-api's Issues

Proposed: recorderNode

Originally reported on W3C Bugzilla ISSUE-21533 Tue, 02 Apr 2013 15:13:13 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

Proposal for a real-time recorderNode. This is already possible with a ScriptProcessorNode, but a dedicated node would be more convenient.
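A minimal sketch of that workaround, assuming `context` is an AudioContext and `source` is the node being recorded (variable names are illustrative):

    // Record by copying each processing block out of a ScriptProcessorNode.
    var recorder = context.createScriptProcessor(4096, 2, 2);
    var chunks = [];
    recorder.onaudioprocess = function (e) {
      // Copy channel 0 here; a real recorder would copy every channel.
      chunks.push(new Float32Array(e.inputBuffer.getChannelData(0)));
    };
    source.connect(recorder);
    recorder.connect(context.destination); // in practice the node must be connected to run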

Audio Workers: number of input/output channels and inputs/outputs

Originally reported on W3C Bugzilla ISSUE-17534 Mon, 18 Jun 2012 11:27:42 GMT
Reported by Marcus Geelnard (Opera)
Assigned to

The JavaScriptAudioNode does not have the ability to dynamically change its number of input/output channels after creation.

This makes it impossible to re-implement nodes such as AudioGainNode (depends on number of input channels) and ConvolverNode (depends on number of AudioContext output channels - according to [1]).

[1] https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#Convolution-reverb-effect
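A sketch of the limitation, using the later createScriptProcessor factory name (the channel counts passed at creation are fixed for the node's lifetime):

    // 2 input channels and 2 output channels, forever; there is no API to
    // change these after the node exists.
    var node = context.createScriptProcessor(1024, 2, 2);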

Specify what should happen when the input channel count changes for delay nodes

Originally reported on W3C Bugzilla ISSUE-21426 Thu, 28 Mar 2013 17:05:04 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to

For example, when the input channel count increases, it's not clear whether we need to play back silence or an upmixed version of the delayed buffer. The latter might not be efficient to implement, because it might require a large amount of work when processing the first buffer after the input channel count change.

AudioParam Automation Example graph missing setTargetAtTime

Originally reported on W3C Bugzilla ISSUE-17701 Thu, 05 Jul 2012 15:03:59 GMT
Reported by Olivier Thereaux
Assigned to

The example at the end of the section on AudioParam Automation gives examples for setValueAtTime, linearRampToValueAtTime, exponentialRampToValueAtTime and setValueCurveAtTime, but not setTargetValueAtTime - which is unfortunate, since that one seems to be the hardest to comprehend.
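A one-line sketch of the missing call, assuming `gainNode` is a GainNode (the method was later renamed setTargetAtTime; arguments are target value, start time, time constant):

    // Exponentially approach 0.5, starting one second from now, with a
    // 0.3 s time constant.
    gainNode.gain.setTargetAtTime(0.5, context.currentTime + 1, 0.3);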

Add normative reference to XHR spec

Originally reported on W3C Bugzilla ISSUE-21527 Tue, 02 Apr 2013 14:58:44 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

The XHR spec should have an entry in the web audio API references table, and we should point all references in the prose to that entry.

Conformance section: need to note use of MUST that is "RFC-legal" as opposed to common English usage

Originally reported on W3C Bugzilla ISSUE-21515 Tue, 02 Apr 2013 12:30:25 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

The conformance section should note that the keywords MUST, MAY, and SHOULD are used per RFC 2119 and constitute normative statements.

The group did not express a strong preference for either of the following two options. The choice will be left to the discretion of the editor:

  1. make sure all normative statements use upper-case MUST, MAY, etc.

or

  2. state in the conformance section that all uses of the words must, may, and should are per RFC 2119, and remove or paraphrase all other uses of the keywords, so that only conformance assertions use them.

Clarify "dezippering" for AudioParam

Originally reported on W3C Bugzilla ISSUE-21546 Tue, 02 Apr 2013 16:16:51 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

Document the initial time constant and algorithm for dezippering, and allow it to be disabled. (This explains how to make it "sound good".)

AudioNode.disconnect() needs to be able to disconnect only one connection

Originally reported on W3C Bugzilla ISSUE-17793 Tue, 17 Jul 2012 18:30:24 GMT
Reported by Chris Wilson
Assigned to

(Summary of email conversation in list)

There is currently no way to disconnect node A's connection to node B without disconnecting all connections from node A to other nodes. This makes it impossible to disconnect node B from the graph without potential side effects, as you have to:

  • call disconnect() on node A (which disconnects all its outputs)
  • reconnect every connection that node A used to have, EXCEPT the connection to node B.

Not only is this cumbersome, it will be problematic in the future when we solve the related issue of unconnected streams - which currently exhibits incorrect behavior in Chrome (it pauses the audio stream) but is under-specified in the spec today (filing a separate bug). Disconnecting and then reconnecting would have to have no side effects. (It works okay today, but not ideally - it can click.)

Recommended solution:

  • there should be a way to remove a single connection (by supplying the destination node to be disconnected, since there can only be one connection to a given destination node [tested]).

E.g.: the IDL for disconnect should read:

    void disconnect(in [Optional] AudioNode destination, in [Optional] unsigned long output = 0)
        raises(DOMException);

this lets us keep most compatibility - node.disconnect() will still remove all connections.
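A sketch contrasting today's workaround with the proposed overload (nodes `a`, `b`, `c` are illustrative):

    // Today: removing the A->B connection drops everything...
    a.disconnect();   // also removes A->C and any other connections
    a.connect(c);     // ...so every connection to keep must be re-made

    // Proposed: remove only the A->B connection.
    a.disconnect(b);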

Behavior of unconnected nodes needs to be specified

Originally reported on W3C Bugzilla ISSUE-17794 Tue, 17 Jul 2012 18:34:19 GMT
Reported by Chris Wilson
Assigned to

This can be described as "What happens when a playing node is temporarily disconnected?" - or, conversely, "If you play a node, and no one is listening (aka connected), does it really play?".

This first came to my attention when I was working with Z Goddard on the Fieldrunners article for HTML5Rocks (http://www.html5rocks.com/en/tutorials/webaudio/fieldrunners/) - particularly, read the section entitled Pausing Sounds. In short - they'd noticed that if you disconnected an audio connection, it paused the audio "stream". I thought this seemed pretty wrong - knowing what I knew about how automation on AudioParams works - in discussions with Chris Rogers, he confirmed this wasn't his expected behavior.

My mental model of connections as an API user still really wants to be "they're just like plugging 1/4" audio cables between hardware units," despite knowing that is not the case here; I would expect if a node was playing and I disconnected its graph, then replugged it 0.5 sec later, it would be 0.5 sec further along - i.e., I would expect the behavior to be the same as if I had connected the node to a zero-gain gain node connected to the audiocontext.destination.
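A sketch of that equivalent-behavior baseline, with `context` and `source` as illustrative names:

    // "Disconnecting" by routing through a muted gain node: the source keeps
    // advancing, just inaudibly, so reconnecting later resumes mid-stream.
    var mute = context.createGain();
    mute.gain.value = 0;
    source.connect(mute);
    mute.connect(context.destination);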

decodeAudioData should accept a mime-type

Originally reported on W3C Bugzilla ISSUE-18510 Thu, 09 Aug 2012 16:16:41 GMT
Reported by Tony Ross [MSFT]
Assigned to

The decodeAudioData method on AudioContext is stated to support any of the formats supported by the audio element, but unlike the audio element it doesn't allow the author to state the format of the audio data (since the ArrayBuffer is already a step removed from the XMLHttpRequest likely used to fetch the data).

We should fix this by adding an (ideally required) contentType argument to decodeAudioData to communicate the format of the audio in the provided ArrayBuffer.
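A hypothetical call shape for the proposal; the contentType argument does not exist in the spec as written, and its position here is purely illustrative:

    // Tell the decoder what the bytes are, instead of forcing it to sniff.
    context.decodeAudioData(arrayBuffer, "audio/ogg", onSuccess, onError);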

Need a way to determine AudioContext time of currently audible signal

Originally reported on W3C Bugzilla ISSUE-20698 Thu, 17 Jan 2013 14:15:09 GMT
Reported by Joe Berkovitz / NF
Assigned to

Use case:

If one needs to display a visual cursor in relationship to some onscreen representation of an audio timeline (e.g. a cursor on top of music notation or DAW clips) then knowing the real time coordinates for what is coming out of the speakers is essential.

However on any given implementation an AudioContext's currentTime may report a time that is somewhat ahead of the time of the actual audio signal emerging from the device, by a fixed amount. If a sound is scheduled (even very far in advance) to be played at time T, the sound will actually be played when AudioContext.currentTime = T + L where L is a fixed number.

On Jan 16, 2013, at 2:05 PM [email protected] wrote:

It's problematic to incorporate scheduling other real-time events (even knowing precisely "what time it is" from the drawing function) without a better understanding of the latency.

The idea we reached (I think Chris proposed it, but I can't honestly remember) was to have a performance.now()-reference clock time on AudioContext that would tell you when the AudioContext.currentTime was taken (or when that time will occur, if it's in the future); that would allow you to synchronize the two clocks. The more I've thought about it, the more I quite like this approach - having something like AudioContext.currentSystemTime in window.performance.now()-reference.

On Jan 16, 2013, at 3:18 PM, Chris Rogers [email protected] wrote:

the general idea is that the underlying different platforms/OSs can have very different latency characteristics, so I think you're looking for a way to query the system to know what it is. I think that something like AudioContext.presentationLatency is what we're looking for. Presentation latency is the time difference between when you tell an event to happen and the actual time when you hear it. So, for example, with source.start(0), you would hope to hear the sound right now, but in reality will hear it with some (hopefully) small delay. One example where this could be useful is if you're trying to synchronize a visual "playhead" to the actual audio being scheduled...

I believe the goal for any implementation should be to achieve as low a latency as possible, one which is on-par with desktop/native audio software on the same OS/hardware that the browser is run on. That said, as with other aspects of the web platform (page rendering speed, cache behavior, etc.) performance is something which is tuned (and hopefully improved) over time for each browser implementation and OS.
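A hypothetical sketch of how the proposed attribute might drive a playhead; neither the name presentationLatency nor its semantics are final, and `cursor`, `pieceStartTime` and `pixelsPerSecond` are illustrative:

    // The context time of the sample currently leaving the speakers.
    var audibleTime = context.currentTime - context.presentationLatency;
    // Position a visual cursor against the audio the user actually hears.
    cursor.style.left = (audibleTime - pieceStartTime) * pixelsPerSecond + "px";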

The .stop function of AudioBufferSourceNode is poorly defined.

Originally reported on W3C Bugzilla ISSUE-20229 Tue, 04 Dec 2012 08:46:43 GMT
Reported by Li Yin
Assigned to

From the spec, it says "stop must only be called one time and only after a call to start or stop, or an exception will be thrown."

It's confusing to me: if stop can be called only one time, it should be impossible for stop to be called after stop. In offline mode, stop can be called multiple times from a web developer's point of view.

So maybe it would be more reasonable to describe it like this:
start can be called only when playbackState is UNSCHEDULED_STATE, or InvalidStateError exception will be thrown.
stop can be called only when playbackState is SCHEDULED_STATE or PLAYING_STATE, if not, InvalidStateError exception will be thrown.
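A sketch of the proposed rules (each commented outcome is what the rewording would require, not necessarily what implementations do today):

    var src = context.createBufferSource();
    src.buffer = someBuffer;  // someBuffer: an AudioBuffer, illustrative
    src.stop(0);              // would throw InvalidStateError: UNSCHEDULED_STATE
    src.start(0);             // ok: UNSCHEDULED_STATE -> SCHEDULED_STATE
    src.start(0);             // would throw: start was already called
    src.stop(0);              // ok: SCHEDULED_STATE or PLAYING_STATE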

(BiquadFilterNode): BiquadFilterNode is underdefined

Originally reported on W3C Bugzilla ISSUE-17363 Tue, 05 Jun 2012 11:51:00 GMT
Reported by Philip Jägenstedt
Assigned to

Audio-ISSUE-76 (BiquadFilterNode): BiquadFilterNode is underdefined [Web Audio API]

http://www.w3.org/2011/audio/track/issues/76

Raised by: Philip Jägenstedt
On product: Web Audio API

https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#BiquadFilterNode

The filter operation is undefined, with wording such as "standard second-order resonant lowpass filter with 12dB/octave rolloff." A lot more specificity is required.

Wikipedia is the reference for several of the filter modes. We could not find any mode that is implementable given the information provided.
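For illustration, one fully-specified definition the spec could cite is the widely used "Audio EQ Cookbook" (RBJ) lowpass; this sketch shows the level of detail being asked for, not what the spec currently says:

    // RBJ cookbook lowpass: coefficients normalized by a0.
    function lowpassCoefficients(f0, Q, sampleRate) {
      var w0 = 2 * Math.PI * f0 / sampleRate;
      var alpha = Math.sin(w0) / (2 * Q);
      var cosw0 = Math.cos(w0);
      var a0 = 1 + alpha;
      return {
        b0: ((1 - cosw0) / 2) / a0,
        b1: (1 - cosw0) / a0,
        b2: ((1 - cosw0) / 2) / a0,
        a1: (-2 * cosw0) / a0,
        a2: (1 - alpha) / a0
      };
    }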

How do multiple offline/online contexts interact

Originally reported on W3C Bugzilla ISSUE-21530 Tue, 02 Apr 2013 15:07:03 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

Need to add details to the spec around startRendering().

How do multiple offline/online contexts interact?

ScriptProcessorNode: number of inputs/outputs

Originally reported on W3C Bugzilla ISSUE-17533 Mon, 18 Jun 2012 11:22:09 GMT
Reported by Marcus Geelnard (Opera)
Assigned to

The number of inputs and outputs of a JavaScriptAudioNode cannot be specified.

Without this ability, it is impossible to re-implement nodes such as AudioChannelSplitter.
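A sketch of the gap: the factory only takes channel counts for a single input and a single output, so a splitter's one-input/many-output shape cannot be expressed:

    // One input, one output - the only topology a script node can have.
    var node = context.createScriptProcessor(1024, 6, 6);
    // A channel splitter's 1-input / 6-output topology has no equivalent here.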

Specify what AnalyserNode should do

Originally reported on W3C Bugzilla ISSUE-21446 Sat, 30 Mar 2013 22:06:05 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to

The current AnalyserNode is under-specified. We need to provide more information on what an implementation needs to do.

There should be notification when the destination output changed

Originally reported on W3C Bugzilla ISSUE-21345 Wed, 20 Mar 2013 06:04:34 GMT
Reported by Wei James
Assigned to

When changing accessories, the maximum number of channels can change, which has an impact on virtualization and 3D positioning; you wouldn't use the same settings and algorithms when switching from a headset to speakers.

If you are switching from local speakers to headphones, you are really sending the same stream to the same low-level driver, and the switch is typically handled in the audio codec hardware. You will have continuity of playback by construction, and the only time you'd need to reconfigure the graph is if you have any sort of 3D positioning.
But if the new output is HDMI, Bluetooth A2DP, or USB, there will be a delay and volume ramps when switching, and it would be perfectly acceptable to stop and reconfigure without any impact on user experience. It would be interesting to capture this difference in the notification.

Deprecate AudioContext.createBuffer

Originally reported on W3C Bugzilla ISSUE-21518 Tue, 02 Apr 2013 12:42:10 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

AudioContext.createBuffer (synchronous) will be deprecated in favor of decodeAudioData (asynchronous).
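A sketch of the two forms, assuming the synchronous createBuffer overload that takes an ArrayBuffer and a mixToMono flag, as implemented at the time:

    // Deprecated: synchronous decode, blocks the main thread.
    var buffer = context.createBuffer(arrayBuffer, false);

    // Replacement: asynchronous decode.
    context.decodeAudioData(arrayBuffer, function (decoded) {
      // use decoded (an AudioBuffer)
    });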

OfflineAudioContext renders as quickly as possible (not real time)

Originally reported on W3C Bugzilla ISSUE-21532 Tue, 02 Apr 2013 15:11:49 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

The section on OfflineAudioContext states that "rendering/mixing-down is faster than real-time". Need to specify that the rendering should be "as fast as possible", with no relation to real time.
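A sketch of the rendering flow, assuming `offline` is an OfflineAudioContext with a graph already built on it:

    offline.oncomplete = function (e) {
      var mix = e.renderedBuffer; // the entire mixdown, delivered at once
    };
    offline.startRendering();     // proceeds as fast as the machine allows,
                                  // with no relation to real time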

Enable AudioContext to be created in a Worker

Originally reported on W3C Bugzilla ISSUE-19991 Sat, 17 Nov 2012 17:12:35 GMT
Reported by Jussi Kalliokoski
Assigned to

Currently there are several annoyances to scheduling in the Web Audio API. For example, if you want to play back a dynamic sequence, you would power it with a setTimeout/setInterval, both of which are throttled to once per second in a background tab. Now, if you set up a timer that fires once per second, what happens if a reflow or another event delays it? To protect against this, you can schedule events more than a second ahead, but that trades away the responsiveness of the application. Responsiveness versus robustness is not a nice tradeoff to make.
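For context, the lookahead pattern described above looks roughly like this (`playNoteAt`, `nextNoteTime` and `secondsPerBeat` are illustrative names):

    var lookahead = 0.1; // seconds scheduled ahead: the robustness/responsiveness knob
    function scheduler() {
      // Schedule everything that falls inside the lookahead window.
      while (nextNoteTime < context.currentTime + lookahead) {
        playNoteAt(nextNoteTime);       // schedules on the audio clock
        nextNoteTime += secondsPerBeat;
      }
      setTimeout(scheduler, 25);        // throttled to ~1 per second in background tabs
    }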

A suggested approach to this problem has been to add a callbackAtTime() method to the AudioContext, but I fear that introducing yet another timer mechanism on the main thread won't help much. Say you set up a callback to trigger one second from the current time. Should it

a) Fire before the clock actually hits the specified time to be a bit more sure to make it in time?
b) Fire exactly when the clock actually hits the specified time? In this case, the desired target is most likely missed.
c) Fire after the time? This is a ridiculous idea. :D

Anyway, even that would be susceptible to being delayed by other main-thread events like reflows etc., making it not much more reliable than a setTimeout(). I think we're going to hear a lot of "no" from other working groups and browser vendors if callbackAtTime() had no throttling rules, when browsers have finally, painfully put those restrictions in place for existing main-thread timer callbacks, so I don't think we'd get even that advantage.

Hence I'd suggest specifying access to the AudioContext interface from Web Workers, where one doesn't need to worry about main thread events delaying anything, nor about timer throttling.

For the time being, the Workers would obviously support fewer features (supporting MediaStreamSourceNode and MediaElementSourceNode in Workers would require transferring those entities to the Worker as well). One option would of course be to define AudioContexts as Transferables, as well as AudioNodes, letting graphs be shared across threads. This would probably be the best way to achieve this, provided we can eliminate race conditions by making value setters and getters exclusive to the thread that currently has ownership of each node. But there aren't many critical features like this in the Web Audio API, which makes it a prime candidate for being a Transferable.

Should we leave block size mandated at 128?

Originally reported on W3C Bugzilla ISSUE-21535 Tue, 02 Apr 2013 15:17:16 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

This is a placeholder for discussion on where block size limits should be defined in the spec, and whether or not the 128 sample value is appropriate.

AudioParam - min/maxValue, intrinsic value, computedValue

Originally reported on W3C Bugzilla ISSUE-21545 Tue, 02 Apr 2013 16:14:51 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26:

  • min/maxValue do not need to be exposed as attributes
  • "intrinsic value" is unclear. Move current text (4.5.1) higher in the spec and reword.
  • remove computedValue attribute

A NoiseGate/Expander node would be a good addition to the API.

Originally reported on W3C Bugzilla ISSUE-19977 Fri, 16 Nov 2012 00:20:11 GMT
Reported by Chris Wilson
Assigned to

One of the few node types I've been sorely missing, which could be implemented in JS but only with needless latency, is a noise gate/expander node.

Would need standard noise gate controls: threshold, attack, release, hold, and possibly an attenuation setting, maybe even hysteresis control. Additionally, an AudioNode output of the attenuation would be very helpful for doing sidechain gating.
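A crude sketch of the JS fallback mentioned above - a hard per-sample gate with none of the attack/release/hold controls a real node would need, and with the ScriptProcessorNode latency the reporter is complaining about:

    function createNoiseGate(context, threshold) {
      var node = context.createScriptProcessor(1024, 1, 1);
      node.onaudioprocess = function (e) {
        var input = e.inputBuffer.getChannelData(0);
        var output = e.outputBuffer.getChannelData(0);
        for (var i = 0; i < input.length; i++) {
          // Pass samples above the threshold, silence the rest.
          output[i] = Math.abs(input[i]) < threshold ? 0 : input[i];
        }
      };
      return node;
    }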

Record all documentation that is considered developer documentation

Originally reported on W3C Bugzilla ISSUE-21548 Tue, 02 Apr 2013 16:20:21 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

This is a placeholder to keep track of all the sections in the spec which are considered "developer documentation", to be split out from the spec and into a primer/developer doc type document.

Clarify the exception codes thrown by AudioParam.exponentialRampToValueAtTime

Originally reported on W3C Bugzilla ISSUE-20822 Tue, 29 Jan 2013 22:46:33 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to

The spec currently says:

"The value parameter is the value the parameter will exponentially ramp to at the given time. An exception will be thrown if this value is less than or equal to 0, or if the value at the time of the previous event is less than or equal to 0."

We need to clarify what exception gets raised in these cases.
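A sketch of the two failing cases, assuming `gain` is a GainNode (which exception each line raises is exactly what needs clarifying):

    gain.gain.setValueAtTime(0, 0);               // previous event's value is 0...
    gain.gain.exponentialRampToValueAtTime(1, 1); // ...so this throws
    gain.gain.exponentialRampToValueAtTime(0, 2); // target <= 0: also throws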

Fix the wording in the AudioDestinationNode spec

Originally reported on W3C Bugzilla ISSUE-20841 Thu, 31 Jan 2013 19:57:26 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to

The spec currently says:

"maxNumberOfChannels is the maximum number of channels that this hardware is capable of supporting. If this value is 0, then this indicates that maxNumberOfChannels may not be changed."

I believe the second maxNumberOfChannels should be numberOfChannels.

Specify what should happen when passing invalid offset/duration values to AudioBufferSourceNode.start

Originally reported on W3C Bugzilla ISSUE-21240 Sun, 10 Mar 2013 17:10:27 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to

Created attachment 1331 [details]
Test case

We need to specify what happens when these values are invalid, for example, negative, or greater than the length of the buffer.

Currently, WebKit ignores offset if it's negative but smaller than the length of the buffer, and in that case respects duration. If a value larger than the length of the buffer is passed as offset, then WebKit ignores both offset and duration. It would probably have made much more sense for these kinds of invalid values to throw DOM_SYNTAX_ERR, as they are probably not what the author intended to pass in.
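A sketch of the under-specified calls, assuming a fresh AudioBufferSourceNode per line and a buffer one second long:

    source.start(0, -0.5);      // negative offset: WebKit ignores offset
    source.start(0, 2.0, 0.5);  // offset past the end: WebKit ignores both
                                // offset and duration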

ConvolverNode: Make no. of output channels user controllable

Originally reported on W3C Bugzilla ISSUE-17542 Tue, 19 Jun 2012 08:30:43 GMT
Reported by Marcus Geelnard (Opera)
Assigned to

https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#ConvolverNode

Currently, the number of output channels of the ConvolverNode is controlled by the number of output channels of the AudioContext (although it's not very clear in the spec).

I think it would be a better idea to be able to control the number of output channels of the ConvolverNode upon construction rather than relying on the AudioDestinationNode.

It would give you more freedom to do custom processing, and it makes the actual usage of the impulse response channels much more user-controllable.

The last point is important, since the number of output channels controls which matrixing operation will be used.

decodeAudioData Prose: avoid video containers that have an audio track

Originally reported on W3C Bugzilla ISSUE-21520 Tue, 02 Apr 2013 14:41:45 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

Change:

"Audio file data can be in any of the formats supported by the audio element"

For:

"...can be accepted in formats containing only audio data (w/o video)"

This is to avoid the overhead of dealing with video containers that have an audio track.

Add detail of connecting audio node to non audio node

Originally reported on W3C Bugzilla ISSUE-21538 Tue, 02 Apr 2013 16:00:50 GMT
Reported by Olivier Thereaux
Assigned to

Per discussion at Audio WG f2f 2013-03-26

In section "The connect to AudioParam method", add detail of connecting audio node to non audio node. An explanation on why would you do it (LFO example) could be added to the graph routing introduction.

OfflineAudioContext needs a way to handle audio of arbitrary duration

Originally reported on W3C Bugzilla ISSUE-21311 Sat, 16 Mar 2013 17:58:03 GMT
Reported by Joe Berkovitz / NF
Assigned to

Reference from mailing list:
post: http://lists.w3.org/Archives/Public/public-audio/2013JanMar/0395.html
author: Russell McClellan [email protected]

"[OfflineAudioContext] really should provide some way to receive data block-by-block rather than in a single "oncomplete" callback. Otherwise, the memory footprint grows quite quickly with the rendering time. I don't think this would a major burden to implementors, and it would make the API tremendously more useful. Currently it's just not feasible to mix down even a minute or so. If this is ever going to be used for musical applications, this has to change."

Chris Rogers stated in teleconference 14 Mar 2013 that it is in fact feasible to mix down typical track lengths of several minutes with the single oncomplete call. A discussion of block size suggested that any breaking of audio rendering into chunks should be fairly large to avoid overhead of switching threads and passing data.

OfflineAudioContext constructor method is not documented

Originally reported on W3C Bugzilla ISSUE-20842 Thu, 31 Jan 2013 23:26:08 GMT
Reported by Ehsan Akhgari [:ehsan]
Assigned to

The spec doesn't describe at all how the constructor arguments of OfflineAudioContext are supposed to change the behavior of the context. This is highly under-specified...
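For reference, the constructor shape implemented in practice takes channel count, length in sample-frames, and sample rate (hedged: this reflects shipping engines, not documented spec behavior):

    // A 40-second stereo offline context at 44.1 kHz.
    var offline = new OfflineAudioContext(2, 44100 * 40, 44100);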
