Comments (18)
Some expected changes:

```webidl
partial interface AudioContext : BaseAudioContext {
  attribute EventHandler onerror;
};
```
- Removal of the note: "It is unfortunately not possible to programmatically notify authors..."
- Handling a system resource failure while handling a control message from the constructor:
  > Attempt to acquire system resources to use a following audio output device based on [[sink ID]] for rendering:
  > ...
  > In case of failure, abort the following steps.
  Here we need to queue a task to dispatch an error event at the AudioContext.

WDYT? @padenot @chrisguttandin @mjwilson-google @gsinafirooz
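For illustration, a minimal sketch of how a page could hook into this, assuming the `onerror` attribute lands as drafted above (this is still a proposal, so the sketch feature-detects it):

```js
// Minimal sketch, assuming the proposed `onerror` attribute lands on
// AudioContext as in the IDL above. Feature-detects the handler since
// this is still a proposal.
const context = new AudioContext();
if ('onerror' in context) {
  context.onerror = (event) => {
    console.error('Audio device error:', event);
  };
}
```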
---
Teleconf:

- The general pattern across web APIs is to use `onerror`.
- The context state will be `closed`.

More thoughts:

- Look into what MediaElement does for error handling.
- For the privacy aspect, look into how WebRTC APIs handle device failure.
---
More details on the teleconf discussion 02/08:

- `onerror` is the common pattern across web APIs.
- For backward compat, if the context constructor is invoked with no sinkId option and the device activation fails, the implementation should keep the current behavior. (e.g. Chrome will render the graph with a fake device)
- If the context constructor is invoked with a sinkId option and the device activation fails, the implementation will fire `onerror` via the task scheduler. (async)
- When this happens, the context will be closed. (i.e. `context.state === 'closed'`)
The expected spec change:

- We can remove the following section:
  > Note: It is unfortunately not possible to programmatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.
- Add `onerror` to AudioContext's IDL.
- Add the description of `onerror`: `onerror` is of type `EventHandler`. An ErrorEvent will be dispatched if the underlying audio device fails to activate.
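A sketch of what this proposal would look like to authors (note that later comments in this thread revise the resulting state from closed to suspended; the sink ID below is a hypothetical placeholder):

```js
// Sketch of the behavior proposed in this comment: device activation
// failure with an explicit sinkId fires onerror asynchronously, and the
// context ends up closed. 'nonexistent-device-id' is a placeholder.
const context = new AudioContext({ sinkId: 'nonexistent-device-id' });

context.onerror = (errorEvent) => {
  console.log(errorEvent instanceof ErrorEvent); // true, per the proposal
  console.log(context.state);                    // 'closed', per the proposal
};
```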
---
I propose suspending the `AudioContext` instead of closing it. Developers will still be able to close it if they want.
---
There are two major scenarios in which an error may occur:

- The sink ID provided to the constructor of `AudioContext` doesn't exist.
- The active sink ID goes non-existent (gets physically unplugged) while rendering.

Falling back to the default sink ID is also a possibility. So here is the list of proposals so far (a sketch of how they could combine follows this list):

- Suspend the `AudioContext`; and/or
- Fall back to the default sink ID
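For illustration, a hypothetical recovery pattern combining both proposals, assuming `onerror` fires and the context ends up suspended when the active device disappears:

```js
// Hypothetical recovery pattern combining the two proposals above:
// suspend on failure, then fall back to the default device. Assumes
// onerror and the suspended state behave as discussed in this thread;
// someDeviceId is a placeholder obtained e.g. via enumerateDevices().
const context = new AudioContext({ sinkId: someDeviceId });

context.onerror = async () => {
  if (context.state === 'suspended') {
    await context.setSinkId(''); // '' selects the default output device
    await context.resume();
  }
};
```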
---
Teleconference 3/7:

This change is only relevant to device selection via the AudioContext constructor:

- When no sinkId is given to the AC constructor, there should be no change in behavior. As of today Chrome and Firefox both fall back to a silent (fake) output device in this case. No exception or error will be thrown.
- When the given sinkId is not valid (i.e. it fails the ID validation), the created context will be in a suspended state and its `onerror` will be dispatched.

AudioContext.onerror can also be dispatched:

- When an error occurs in the system audio stream while running the rendering loop.

The following spec text needs to be removed (under https://webaudio.github.io/web-audio-api/#dom-audiocontext-audiocontext):

> Note: It is unfortunately not possible to programmatically notify authors that the creation of the AudioContext failed. User-Agents are encouraged to log an informative message if they have access to a logging mechanism, such as a developer tools console.

This means that the creation of an AudioContext can't fail. It will succeed no matter what, but invalid config options (e.g. an invalid sinkId) will put the context in a suspended state with an error reported via the onerror event handler. See the sketch below for the two constructor paths.
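To make the two constructor paths concrete, a small sketch (the device ID is a placeholder):

```js
// No sinkId: current behavior is kept; on device activation failure the
// graph is rendered to a silent (fake) device and nothing is thrown.
const a = new AudioContext();

// Invalid sinkId: construction still succeeds, but the context comes up
// suspended and onerror is dispatched. 'bogus-id' is a placeholder.
const b = new AudioContext({ sinkId: 'bogus-id' });
b.onerror = () => {
  console.log(b.state); // 'suspended'
};
```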
---
I think the acquire system resources section is part of the setSinkId() method, correct?

So we would also need additional text in:

- the AudioContext constructor
- somewhere, to note that `onerror` should fire if the existing sink suddenly becomes unavailable (scenario 2 in #2567 (comment)).
---
> I think the acquire system resources section is part of the setSinkId() method, correct?

No, the location I meant was the second half of the AudioContext constructor.

> Somewhere, to note that onerror should fire if the existing sink suddenly becomes unavailable

I think we have two options:

- Add a check within the rendering loop to monitor device status and dispatch an appropriate event.
- Add a clarifying note to the onerror description, indicating that the system might trigger it in case of sink failure while the renderer is running.

The second one is my preference, but I would love to hear what others think.
---
> No, the location I meant was the second half of the AudioContext constructor.

I see it now. Thank you.

We probably should be explicit that onerror is for device/sink problems, and not other types of errors.

> Add a check within the rendering loop to monitor device status and dispatch an appropriate event.

This would be in https://webaudio.github.io/web-audio-api/#rendering-loop, correct? It would be a more explicit specification, which should increase consistency across implementers but will give less flexibility. The rendering loop is one of the more performance-critical sections, so I would prefer not to put anything in it that's not strictly necessary.

> Add a clarifying note to the onerror description, indicating that the system might trigger it in case of sink failure while the renderer is running.

This would give implementers more flexibility. However, I'm worried that if we underspecify it we could have interoperability problems. Could we write it as a normative SHOULD?
---
As long as there's a defined ordering between `onerror` being called and e.g. `AudioWorkletProcessor` having its `process` method called. We can spec that no `process` call can happen after `onerror` has been triggered from the backend (not fired on the main thread).
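A hypothetical sketch of what that guarantee would mean for authors; `PingProcessor` and `node` are illustrative names, not anything from the spec:

```js
// processor.js: posts the current frame once per render quantum.
class PingProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    this.port.postMessage(currentFrame); // currentFrame is a global in
    return true;                         // AudioWorkletGlobalScope
  }
}
registerProcessor('ping', PingProcessor);

// main.js: node is an AudioWorkletNode using 'ping', created elsewhere.
// Under the proposed guarantee, lastFrame stops advancing once onerror
// has been triggered from the backend, because no process() call happens
// after that point (modulo messages already in flight).
let lastFrame = 0;
node.port.onmessage = (e) => { lastFrame = e.data; };
context.onerror = () => {
  console.log('last rendered frame before the error:', lastFrame);
};
```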
---
Based on the discussion so far, `onerror` should trigger suspension. It will perform this algorithm:
https://webaudio.github.io/web-audio-api/#dom-audiocontext-suspend

The rendering loop algorithm (step 4.1) already checks the rendering state flag:
https://webaudio.github.io/web-audio-api/#rendering-loop

Re padenot@'s thought (see the pseudocode sketch below):

1. An interruption from the underlying audio device happens (on the WebAudio render thread?)
2. Upon this interruption, change the rendering state flag and queue a task to fire `onerror` and `statechange` (from the WebAudio render thread)
3. AWP's `process` won't get called anymore because the loop will stop due to the rendering state flag

One thing I am not so sure about is that 1) might not be called from WebAudio's render thread. It could be the platform's own I/O thread. I'll have to dig into Chromium code to confirm this.
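A rough pseudocode sketch of that sequencing (illustration only, not spec text; `renderingStateFlag`, `queueControlThreadTask`, and `renderQuantum` are hypothetical stand-ins for the spec's internal machinery):

```js
function onDeviceInterruption() {     // step 1: invoked by the backend
  renderingStateFlag = 'suspended';   // step 2a: stop the loop first
  queueControlThreadTask(() => {      // step 2b: queued, hence async
    context.dispatchEvent(new ErrorEvent('error'));
    context.dispatchEvent(new Event('statechange'));
  });
}

function renderLoop() {               // step 3 falls out of this check:
  while (renderingStateFlag === 'running') {
    renderQuantum();                  // AudioWorkletProcessor.process()
  }                                   // only ever runs in here
}
```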
---
> It could be a platform's own I/O thread. I'll have to dig into Chromium code to confirm this.

Indeed, it can be either the thread on which the real-time callbacks are called or some other non-realtime thread, on the same platform (sometimes it differs depending on the version or settings). Generally we see the following breakdown:

- Windows: render thread (if performing e.g. a read/write on a device that just went away) or some other thread (device change notification thread, e.g. in between two callbacks)
- macOS: generally some system thread, but it can be the render thread in some cases. I assume but haven't checked that iOS is the same
- Android: depending on the version, it can be the render thread or a device event callback
- Linux: the PA event loop (not sure about others)

In all cases Gecko doesn't assume anything about the thread and immediately dispatches an event to a known thread it has, for uniform handling.
---
> One thing I am not so sure about is that 1) might not be called from WebAudio's render thread. It could be the platform's own I/O thread. I'll have to dig into Chromium code to confirm this.

Chromium will dispatch the event to the WebAudio rendering thread or the main thread; WebAudio never interacts with platform audio threads directly.

We can ensure that the event always arrives on the main thread if that's preferred.
---
@padenot Thanks for looking into that. That confirms my suspicions.

@o1ka Thanks for jumping in! The discussion above is not about AudioContext.onerror dispatching; it is about the system interruption to the WebAudio render thread. (e.g. RWADI::OnRenderError)

In Chromium, I believe this is called from AOD::NotifyRenderCallbackOfError(), and this is the browser's I/O thread - please correct me if I am mistaken - not the WebAudio render thread. Hence the question: how do we describe this with the current spec text (without introducing new concepts/terms) when we don't define the browser's I/O thread anywhere?
---
@hoch thanks for pointing to that one.

So from the Chromium perspective, a render error can be raised on the real-time rendering thread, on the main thread, or on the IO thread (in case of IPC failures). As I said earlier, the implementation can ensure all these events are eventually delivered on the main thread - if that's convenient.

Why do we need to describe the behavior in terms of threads? Isn't it an implementation detail?

As I see it: the error can occur during construction, during setSinkId() processing, or during rendering. The sources of error may be: an invalid sink id (by mistake, or a sink is removed, or no permissions) or a device/system malfunction (ex: a CoreAudio crash).

In any case (modulo some exceptions for backward compatibility), if the rendering has already started, it will be stopped (is this what you refer to as the system interruption to the WebAudio render thread?) and the error event will be dispatched. The system can guarantee that processing/rendering is stopped by the time the error event is fired. That's all the observable behavior. Isn't it enough? Am I missing something?
---
> So from the Chromium perspective, a render error can be raised on the real-time rendering thread, on the main thread, or on the IO thread (in case of IPC failures).

- The main thread: constructor
- The audio render thread: ??? - I am not sure how this can interrupt itself?
- The IO thread: some system/IPC failure

> Why do we need to describe the behavior in terms of threads? Isn't it an implementation detail?

That's because unlike other audio-related APIs on the web, the Web Audio API spec exposes the notion of the control thread and the render thread. The problem is that we want to stop the render loop by setting the render thread flag before anything else, so we don't continue with the subsequent iteration of the render call. We have to add spec text for this behavior. (This is what @padenot meant by "We can spec that no process can happen after onerror has been triggered from the backend", which I agree with.)

> The system can guarantee that processing/rendering is stopped by the time the error event is fired. That's all the observable behavior. Isn't it enough? Am I missing something?

I agree that it is something obvious, but we still need to explain how that's guaranteed.

One thing we have not discussed so far is the interaction between `onerror` and `suspend()`. When `onerror` gets fired in the middle of the rendering loop, the algorithm of `suspend()` will also be executed. The `suspend()` algorithm has tasks for both threads (control, render) and also changes the render thread flag. I have not looked into this in detail yet...
---
> So from the Chromium perspective, a render error can be raised on the real-time rendering thread, on the main thread, or on the IO thread (in case of IPC failures).
>
> - The main thread: constructor

And also setSinkId() processing.

> - The audio render thread: ??? - I am not sure how this can interrupt itself?

I don't quite understand the question - what does interrupt mean here?

> (This is what @padenot meant by "We can spec that no process can happen after onerror has been triggered from the backend", which I agree with.)

Yes, this makes sense. And this also seems to be enough? How error events are shuffled between internal threads inside, let's say, Chromium, and which additional threads (IO or other) are involved are implementation details. For example, the IO thread is involved when an AudioContext starts/stops rendering - this is not surfaced anywhere.
---
> I don't quite understand the question - what does interrupt mean here?

For example, can the audio render thread (AudioOutputDevice in Chromium) invoke an error notification (e.g. onRenderError() in Chromium) directly? That sounds... a bit weird.

> Yes, this makes sense. And this also seems to be enough? How error events are shuffled between internal threads inside, let's say, Chromium, and which additional threads (IO or other) are involved are implementation details. For example, the IO thread is involved when an AudioContext starts/stops rendering - this is not surfaced anywhere.

If you look into the Web Audio API specification in detail, you'll see we acknowledge the existence of the audio render thread, and some algorithms are executed on that thread. So I kind of agree with you that we can get away with the "implementation detail" argument for some parts, but there are other parts we need to specify clearly. I won't know for sure until I start writing an actual PR. (which is my next step)