Comments (10)
I am making a video player extension. The audio source is an arbitrary <video> or <audio> element; the user provides the media to play. The source is created with a call to createMediaElementSource. There is no attribute on the resulting source to obtain the number of channels in the media (channelCount is always 2).
from web-audio-api.
Thank you, I think I understand the problem now.
The spec for MediaElementAudioSourceNode says: "The number of channels of the output corresponds to the number of channels of the media referenced by the HTMLMediaElement. Thus, changes to the media element’s src attribute can change the number of channels output by this node."
and later: "The number of channels of the single output equals the number of channels of the audio referenced by the HTMLMediaElement passed in as the argument to createMediaElementSource(), or is 1 if the HTMLMediaElement has no audio."
However, I don't see a straightforward way to get the number of channels from the HTMLMediaElement either.
channelCount is used for inputs to a node, so not relevant here.
I think this needs further discussion by the working group:
- One proposal is to add a numberOfChannels attribute to the MediaElementAudioSourceNode.
- The spec seems to imply that the number of channels might change while the MediaElementAudioSourceNode is active; if that is possible then we need a mechanism to notify the program when this happens.
Are there any other nodes or situations you have found where the channel count was unknown? Or would adding information to MediaElementAudioSourceNode resolve everything?
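To make the two proposals above concrete, here is a hypothetical sketch. Neither a numberOfChannels attribute on MediaElementAudioSourceNode nor a channel-count change event exists in any specification or browser today; both the attribute name and the "channelcountchange" event name are invented purely for illustration.

```javascript
// HYPOTHETICAL SKETCH: numberOfChannels and "channelcountchange" are
// invented names illustrating the two proposals above; neither is
// specified or implemented anywhere today.
function watchChannelCount(source, onChange) {
  onChange(source.numberOfChannels); // proposal 1: a readable attribute
  source.addEventListener("channelcountchange", () => {
    onChange(source.numberOfChannels); // proposal 2: notification on change
  });
}
```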
Thank you for looking into this issue. I believe that adding a numberOfChannels attribute to the MediaElementAudioSourceNode would resolve the issues I am having for my use case.
Teleconference 2024-03-07 notes:
For potential solution 1: the specification currently has channel mapping as a property applied to node inputs. Adding channel mappings for outputs would be a significant change to the specification, probably too big.
For potential solution 2: this is one of the use cases of the ChannelSplitter and ChannelMerger nodes. AudioWorklet should also be able to do this.
This behavior is due to the ChannelSplitter always having ChannelInterpretation "discrete", and the ChannelMerger always having ChannelInterpretation "speakers" -- this was done intentionally and they were designed to be used together.
Regarding "Adjusting this behavior programmatically is difficult": is there a specific use case that we're missing where it's not possible to set up the ChannelSplitter and ChannelMerger in advance? Or a situation where the channel mixing rules are not enough?
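One way to read "set up in advance": wire the splitter and merger for a fixed maximum channel count. Because the splitter's interpretation is always 'discrete', outputs beyond the input's actual channel count simply carry silence, so the same graph accepts mono, stereo, and so on (though a mono source will occupy only channel 0 after the split). A minimal sketch; the helper name and the makeFilter callback are illustrative assumptions, not spec API:

```javascript
// Sketch: build a per-channel filter chain for a fixed maximum channel
// count. Splitter outputs beyond the input's real channel count carry
// silence ('discrete'), so mono and stereo inputs both work.
// buildPerChannelChain and makeFilter are illustrative names only.
function buildPerChannelChain(ctx, source, maxChannels, makeFilter) {
  const splitter = new ChannelSplitterNode(ctx, { numberOfOutputs: maxChannels });
  const merger = new ChannelMergerNode(ctx, { numberOfInputs: maxChannels });
  source.connect(splitter);
  for (let ch = 0; ch < maxChannels; ch++) {
    const filter = makeFilter(ch);  // caller supplies a node per channel
    splitter.connect(filter, ch);   // 'discrete' split: one channel each
    filter.connect(merger, 0, ch);  // merger reassembles as 'speakers'
  }
  return merger; // connect this onward in the graph
}
```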
I must be able to apply different filter nodes to specific audio channels after mixing occurs. This is not currently possible as there is no way to know how many channels the audio has.
I think I'm missing something necessary to understand the problem.
If your program is doing the mixing, then it should know how many channels are in the result. You should be able to use ChannelSplitterNode, apply the filters, and then merge back to the mixed format with ChannelMergerNode.
What is the audio source in your program? AudioBuffer, for instance, has a numberOfChannels attribute; is there another audio source in Web Audio that doesn't have any channel information?
Or, is the problem only on the output side? I think the audio playing out of the left speaker only is due to the 'discrete' channelInterpretation from ChannelSplitterNode, so as long as you use a ChannelMergerNode after it, or any other node that can be set to 'speakers', the upmixing rules should apply.
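For reference, the 'speakers' up-mixing and down-mixing rules mentioned above, restricted to the mono/stereo case, can be written out in plain JavaScript for a single sample frame (the coefficients come from the spec's channel mixing rules):

```javascript
// The Web Audio 'speakers' mixing rules for mono <-> stereo, applied to
// one sample frame. Up-mix copies the mono sample to both channels;
// down-mix averages left and right with equal 0.5 gains.
function upmixMonoToStereo(m) {
  return [m, m];          // left = right = mono input
}
function downmixStereoToMono(l, r) {
  return [0.5 * (l + r)]; // mono = 0.5 * (left + right)
}
```

This is why a mono source routed through a node whose input uses the 'speakers' interpretation plays out of both speakers, and why a ChannelMergerNode after a 'discrete' split restores sensible output.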
Also, thank you for the diagrams and detailed explanation. I can see you've spent some time on this, so I want to make sure I understand the issue fully.
This is possible today, but it requires some code. It handles changes in channel count nicely. Here's a stand-alone program that does it (more or less implementing in code what @mjwilson-google described in English above):
<button>
Start
</button>
<input type="file" accept="audio/*" id="audioFilePicker" />
<audio controls id=a></audio>
<span id=result></span>
<script type="worklet">
registerProcessor('channel-counter', class extends AudioWorkletProcessor {
  constructor() {
    super();
    this.inputChannelCount = 0;
  }
  process(inputs, outputs, parameters) {
    // inputs[0] is the first input; its length is the current channel count.
    if (inputs[0] && inputs[0].length != this.inputChannelCount) {
      this.inputChannelCount = inputs[0].length;
      this.port.postMessage(this.inputChannelCount);
    }
    return true;
  }
});
</script>
<script>
const ac = new AudioContext();
// Load the worklet source from the inline script element above.
const workletText = document.querySelector("script[type=worklet]").innerText;
const blob = new Blob([workletText], {type: "application/javascript"});
const url = URL.createObjectURL(blob);
let counter;
let source;
ac.audioWorklet.addModule(url).then(() => {
  counter = new AudioWorkletNode(ac, 'channel-counter');
  counter.port.onmessage = function(e) {
    result.innerHTML = `Audio has ${e.data} audio channels`;
  };
});
const fileInput = document.getElementById("audioFilePicker");
fileInput.onchange = function() {
  if (fileInput.files && fileInput.files[0]) {
    const reader = new FileReader();
    reader.onload = function(event) {
      a.src = event.target.result;
      a.controls = true;
      a.play();
      // createMediaElementSource may only be called once per element;
      // reuse the node when a new file is picked.
      if (!source) {
        source = ac.createMediaElementSource(a);
        source.connect(counter);
        source.connect(ac.destination);
      }
    };
    reader.readAsDataURL(fileInput.files[0]);
  }
};
document.querySelector("button").onclick = function() {
  ac.resume();
};
</script>
Hi,
I have exactly the same problem.
I have to create a channel selector that works for both mono and stereo files.
If I use a StereoPannerNode, I can't separate the two channels; I just hear each channel on one speaker at a time.
So I decided to use a ChannelSplitterNode connected to two GainNodes, to control the volume of the two channels separately.
For stereo files it works, but not for mono files: only one of the two speakers can be heard.
So I decided to switch to a different approach based on the number of channels, but it always reports 2, so I can't.
Some example code.
Current approach (works for stereo):
const audio = new Audio(url)
const audioContext = new AudioContext()
const source = audioContext.createMediaElementSource(audio)
const splitter = audioContext.createChannelSplitter()
const merger = audioContext.createChannelMerger()
const leftGainNode = audioContext.createGain()
const rightGainNode = audioContext.createGain()
source.connect(splitter)
splitter.connect(leftGainNode, 0)
splitter.connect(rightGainNode, 1)
leftGainNode.connect(merger, 0, 0)
rightGainNode.connect(merger, 0, 1)
merger.connect(audioContext.destination)
Channel-count-based approach (not working):
const audio = new Audio(url)
const audioContext = new AudioContext()
const source = audioContext.createMediaElementSource(audio)
if (source.channelCount > 1) { // it's always 2
const splitter = audioContext.createChannelSplitter()
const merger = audioContext.createChannelMerger()
const leftGainNode = audioContext.createGain()
const rightGainNode = audioContext.createGain()
source.connect(splitter)
splitter.connect(leftGainNode, 0)
splitter.connect(rightGainNode, 1)
leftGainNode.connect(merger, 0, 0)
rightGainNode.connect(merger, 0, 1)
merger.connect(audioContext.destination)
} else {
const stereoPanner = audioContext.createStereoPanner()
source.connect(stereoPanner)
stereoPanner.connect(audioContext.destination)
}
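A possible workaround for the mono case above, sketched under the assumption of the standard mixing rules (the helper name is invented): force the source up to stereo before the splitter by inserting a GainNode with an explicit channel count of 2. The 'speakers' interpretation then copies a mono signal to both channels, so the splitter/gain/merger path behaves the same for mono and stereo without ever knowing the real channel count.

```javascript
// Sketch of a workaround: force every source up to stereo before the
// splitter, so the per-channel gain path works for mono and stereo alike.
// Assumes the standard 'speakers' up-mix (mono is copied to both channels).
function connectWithForcedStereo(ctx, source) {
  const upmix = new GainNode(ctx, {
    channelCount: 2,
    channelCountMode: "explicit",      // mix inputs to exactly 2 channels
    channelInterpretation: "speakers", // mono -> copied to left and right
  });
  const splitter = new ChannelSplitterNode(ctx, { numberOfOutputs: 2 });
  const merger = new ChannelMergerNode(ctx, { numberOfInputs: 2 });
  const leftGain = new GainNode(ctx);
  const rightGain = new GainNode(ctx);
  source.connect(upmix);
  upmix.connect(splitter);
  splitter.connect(leftGain, 0);
  splitter.connect(rightGain, 1);
  leftGain.connect(merger, 0, 0);
  rightGain.connect(merger, 0, 1);
  merger.connect(ctx.destination);
  return { leftGain, rightGain }; // per-channel volume controls
}
```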
We can consider exposing a 'numberOfChannels' property on MediaStreamTrackAudioSourceNode.
This comes with a caveat: the number of channels of the MediaStreamTrack can change dynamically, and inspecting this value from the main thread will always be (slightly) stale. We can consider adding an event for the change; not ideal, but acceptable for the majority of use cases.
@padenot's approach above will be the only sample-accurate way to figure it out. It's rather heavy-weight, but it's correct and doesn't need a new API.