Comments (11)
Really cool, thanks for sharing that solution! I honestly wasn't aware that cFFmpegSource could be used this way. 👍
from opensmile.
SOLUTION:
I am able to pass the RTMP stream directly to the openSMILE API by using cFFmpegSource in the config and passing -I rtmp_url on the command line.
config:

```
[componentInstances:cComponentManager]
instance[waveIn].type = cFFmpegSource

[waveIn:cFFmpegSource]
writer.dmLevel = wave
blocksize_sec = 1.0
filename = \cm[inputfile(I){test.wav}:name of input file]
monoMixdown = 1
outFieldName = pcm
```
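With that config, the RTMP URL simply replaces the file name when invoking SMILExtract (the config path and URL below are placeholders):

```sh
SMILExtract -C myconfig.conf -I rtmp://server.example.com/live/stream
```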
---
There is no inbuilt support for RTMP in openSMILE but you could receive the RTMP stream outside of openSMILE and pass the received data to openSMILE via SMILEapi. For more information on SMILEapi, see https://audeering.github.io/opensmile/reference.html#smileapi-c-api-and-wrappers. If you are working with Python, I would recommend https://audeering.github.io/opensmile-python/.
---
I am able to stream the RTMP link using GStreamer, and if I use cPortaudioSource, I can get the data using SMILEapi. However, I see that the config has device = -1, which means I have to set the audio device to whichever device is available. How do I make the device an argument on the command line?
Also, I used PulseAudio to create virtual sound cards, passed them to GStreamer, and set device = -1 in the config file. But how do I change the device value if I have to test on multiple virtual sound cards at once? That is, how can I execute openSMILE if I had to stream from two or more RTMP sources on multiple virtual sound cards?
---
> How do I make the device an argument on the command line?

You can make the option configurable via the \cm directive:

```
device = \cm[device{-1}:audio device]
```

This defines a default value of -1, which can be overridden on the command line via -device <value>.
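Put together, the source section and invocation could look like this (the section name, config file name, and device number are placeholders):

```
[audioIn:cPortaudioSource]
device = \cm[device{-1}:audio device]
```

Running SMILExtract -C myconfig.conf -device 2 would then record from device #2 instead of the default.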
> That is, how can I execute openSMILE if I had to stream from two or more RTMP sources on multiple virtual sound cards?

If you want to run openSMILE independently on multiple streams, you can simply start multiple instances of openSMILE and specify a different audio device for each on the command line. If you need multiple audio streams in a single openSMILE instance, you can declare multiple cPortaudioSource instances in your config.
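A sketch of the second variant, with two cPortaudioSource instances that each expose their own command-line option (instance names, level names, and option names are made up):

```
[componentInstances:cComponentManager]
instance[audioIn1].type = cPortaudioSource
instance[audioIn2].type = cPortaudioSource

[audioIn1:cPortaudioSource]
writer.dmLevel = wave1
device = \cm[device1{-1}:first audio device]

[audioIn2:cPortaudioSource]
writer.dmLevel = wave2
device = \cm[device2{-1}:second audio device]
```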
---
Thank you for the quick response.
So I am able to take the device from the command line and extract the features.
I have created multiple virtual sound cards, but when I set listDevices = 1, it always shows only 2 devices:
```
== cPortAudioSource == There are 2 audio devices:
-> Device #0: 'pulse'
   #inputChan=32 #outputChan=32
-> Device #1: 'default'
   #inputChan=32 #outputChan=32
The default device is #1
```
However, I can see all the other audio devices via the pacmd list-sinks command, which shows 5 audio devices.
I am using

```
pacmd load-module module-null-sink sink_name=VirtSoundCard sink_properties=device.description=Virtual-Sound-Card
```

to create a virtual sound card.
---
Not sure, we are using the functions in PortAudio to enumerate devices: http://www.portaudio.com/docs/v19-doxydocs/querying_devices.html. PortAudio might not be compatible with PulseAudio virtual sound cards.
---
Can you help me with any alternative way to create a virtual audio device and pass it to the openSMILE API?
Currently, I am using the following commands to create a virtual audio device and send its data to the openSMILE API:

```
$ pulseaudio --start
$ pacmd load-module module-null-sink sink_name=VirtSoundCard sink_properties=device.description=Virtual-Sound-Card
$ gst-launch-1.0 rtmpsrc location="rtmp://xxx.xxx.xxx.xxx/live/test live=1" ! decodebin ! pulsesink stream-properties="props,media.title=VirtSoundCard"
$ /home/ubuntu/opensmile/build/progsrc/smilextract/SMILExtract -C /home/ubuntu/opensmile/config/prosody/liveProsodyAcf.conf -O tmp/t_prosody.csv -device 1
```
---
> Can you help me with any alternative way to create a virtual audio device and pass it to the openSMILE API?

I'm afraid I haven't got any more hints or clues on how to make virtual audio devices work with PortAudio. What should be possible is to implement the receiving of the stream yourself and pass the data to openSMILE via SMILEapi, but I haven't worked with RTMP streams before, so I cannot give you concrete suggestions on how to approach it.
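For the "receive it yourself" route, one hedged sketch: use an external decoder (ffmpeg here, purely as an assumption; any tool that writes raw PCM to a pipe would do) to turn the RTMP stream into raw PCM, then read blocks from that pipe and push them into openSMILE via SMILEapi. The helper below only builds the decoder command line; the actual process wiring is indicated in comments and not executed:

```python
import shlex
from typing import List


def decoder_command(rtmp_url: str, sample_rate: int = 16000) -> List[str]:
    """Build a command that decodes an RTMP stream to raw little-endian
    16-bit mono PCM on stdout. ffmpeg is an assumption here; any decoder
    that can write raw PCM to a pipe would work as well."""
    return shlex.split(
        "ffmpeg -loglevel quiet -i {url} "
        "-f s16le -acodec pcm_s16le -ac 1 -ar {rate} -".format(
            url=rtmp_url, rate=sample_rate
        )
    )


# Sketch of the wiring (not executed here):
#   proc = subprocess.Popen(decoder_command("rtmp://host/live/test"),
#                           stdout=subprocess.PIPE)
#   block = proc.stdout.read(3200)   # 0.1 s at 16 kHz, 16-bit mono
#   ... pass `block` to openSMILE through SMILEapi ...
```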
---
Is there a way to make some changes in the API to read the RTMP packets directly? I have tried different approaches to stream the audio to a virtual device, but although I am able to save the audio using a GStreamer filesink, I am unable to generate any features when I pass the same device as input to the openSMILE API. However, if I set the device to default, then I am able to generate data in some cases. Making the device default is not suitable for my use case, though.
---
> Is there a way to make some changes in the API to read the RTMP packets directly?

We haven't got any experience with RTMP so far, so we can't give you any concrete hints on how to approach RTMP integration, I'm afraid.
---