
pya


What is pya?

pya is a package to support creation and manipulation of audio signals with Python. It uses numpy arrays to store and compute audio signals.

It provides:

  • Asig - a versatile audio signal class
    • Ugen - a subclass of Asig, which offers unit generators such as sine, square, sawtooth, noise
  • Aserver - an audio server class for queuing and playing Asigs
  • Arecorder - an audio recorder class
  • Aspec - an audio spectrum class, using rfft, as real-valued signals are assumed
  • Astft - an audio STFT (short-time Fourier transform) class
  • A number of helper functions, e.g. device_info()

pya can be used for

  • multi-channel audio processing
  • auditory display and sonification
  • sound synthesis experiments
  • audio applications in general such as games or GUI-enhancements
  • signal analysis and plotting

At this time pya is more suitable for offline rendering than for real-time use.

Authors and Contributors

  • Thomas (author, maintainer)
  • Jiajun (co-author, maintainer)
  • Alexander (maintainer)
  • Contributors will be acknowledged here, contributions are welcome.

Installation

Install using

pip install pya

However, to play and record audio, you need a backend.

  • pip install pya[remote] for a web based Jupyter backend
  • pip install pya[pyaudio] for portaudio and its Python wrapper PyAudio

Using Conda

Pyaudio can be installed via conda:

conda install pyaudio

Disclaimer: Python 3.10+ requires PyAudio 0.2.12 which is not available on Conda as of December 2022. Conda-forge provides a version only for Linux at the moment. Users of Python 3.10 should for now use other installation options.

Using Homebrew and PIP (MacOS only)

brew install portaudio

Then

pip install pya

For Apple ARM chips, if installing the PyAudio dependency fails, you can follow this guide: Installation on ARM chip

  • Option 1: Create .pydistutils.cfg in your home directory, ~/.pydistutils.cfg, add:

    echo "[build_ext]
    include_dirs=$(brew --prefix portaudio)/include/
    library_dirs=$(brew --prefix portaudio)/lib/" > ~/.pydistutils.cfg
    

    Use pip:

    pip install pya
    

    You can remove the .pydistutils.cfg file after installation.

  • Option 2: Use CFLAGS:

    CFLAGS="-I/opt/homebrew/include -L/opt/homebrew/lib" pip install pya
    

Using PIP (Linux)

Try sudo apt-get install portaudio19-dev or the equivalent for your distro, then

pip install pya

Using PIP (Windows)

PyPI provides PyAudio wheels for Windows that include portaudio:

pip install pyaudio

should be sufficient.

A simple example

Startup:

import pya
s = pya.Aserver(bs=1024)
pya.Aserver.default = s  # to set as default server
s.boot()

Create an Asig signal:

A 1 s / 440 Hz sine tone at sampling rate 44100, with channel name 'left':

import numpy as np
signal_array = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100))
atone = pya.Asig(signal_array, sr=44100, label='1s sine tone', cn=['left'])

Other ways of creating an Asig object:

asig_int = pya.Asig(44100, sr=44100)  # zero array with 44100 samples
asig_float = pya.Asig(2., sr=44100)  # float argument, 2 seconds of zero array
asig_str = pya.Asig('./song.wav')  # load audio file
asig_ugen = pya.Ugen().square(freq=440, sr=44100, dur=2., amp=0.5)  # using Ugen class to create common waveforms

Audio files can also be loaded by passing the file path. WAV should work without issues. MP3 is supported but may raise an error if FFmpeg is not installed.

If you use Anaconda, installing FFmpeg is easy:

conda install -c conda-forge ffmpeg

Otherwise, install FFmpeg through your platform's package manager (e.g. Homebrew on macOS or apt on Linux).

Key attributes

  • atone.sig --> the numpy array containing the signal
  • atone.sr --> the sampling rate
  • atone.cn --> the list of custom defined channel names
  • atone.label --> a custom set identifier string

Play signals

atone.play(server=s)  

play() uses Aserver.default if server is not specified.

Instead of keeping a long-standing server, you can also use Aserver as a context manager:

with pya.Aserver(sr=48000, bs=256, channels=2) as aserver:
    atone.play(server=aserver)  # Or do: aserver.play(atone)

The benefit of this is that it handles server boot-up and shutdown for you. But note that booting up and shutting down the server introduces extra latency.

Play signal on a specific device

from pya import find_device
from pya import Aserver
devices = find_device() # This will return a dictionary of all devices, with their index, name, channels.
s = Aserver(sr=48000, bs=256, device=devices['name_of_your_device']['index'])

Plotting signals

to plot the first 1000 samples:

atone[:1000].plot()

to plot the magnitude and phase spectrum:

atone.plot_spectrum()

to plot the spectrum via the Aspec class:

atone.to_spec().plot()

to plot the spectrogram via the Astft class:

atone.to_stft().plot(ampdb)

Selection of subsets

  • Asigs support multi-channel audio (as columns of the signal array)
    • a1[:100, :3] would select the first 100 samples and the first 3 channels,
    • a1[{1.2:2}, ['left']] would select the channel named 'left' using a time slice from 1.2 to 2 seconds; see the example below
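
Below is a hedged, self-contained example of both selections; the stereo test signal and channel names are made up for illustration:

import numpy as np
import pya

# Hypothetical 2-channel, 2-second test signal for demonstrating subset selection.
t = np.linspace(0, 2, 2 * 44100, endpoint=False)
stereo = np.stack([np.sin(2 * np.pi * 440 * t), np.cos(2 * np.pi * 440 * t)], axis=1)
a1 = pya.Asig(stereo, sr=44100, cn=['left', 'right'])

first = a1[:100, :3]           # first 100 samples, first (up to) 3 channels
left = a1[{1.2: 2}, ['left']]  # seconds 1.2 to 2 of the channel named 'left'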

Recording from Device

Arecorder allows recording from an input device.

import time

from pya import find_device
from pya import Arecorder
devices = find_device()  # Find the index of the input device
arecorder = Arecorder(device=some_index, sr=48000, bs=512)  # Or leave device unset to let pya find the default device
arecorder.boot()
arecorder.record()
time.sleep(2)  # Recording is non-blocking
arecorder.stop()
last_recording = arecorder.recordings[-1]  # Each time the recorder stops, a new recording is appended to recordings

Method chaining

Asig methods usually return an Asig, so methods can be chained, e.g.

atone[{0:1.5}].fade_in(0.1).fade_out(0.8).gain(db=-6).plot(lw=0.1).play(rate=0.4, onset=1)

Learning more

  • Please check the examples/pya-examples.ipynb for more examples and details.

Contributing

  • Please get in touch with us if you wish to contribute. We are happy to be involved in the discussion of new features and to receive pull requests.


pya's Issues

Asig metadata

For Arecorder.record(), it would be useful to store the time stamps when recordings were made. One way would be to change the append to recordings.append((timestamp, asig)), but that would make access and interpretation less intuitive. That led me to the idea of Asig.metadata:

I propose to add an attribute Asig.metadata, which could be a dict with fields such as

  • timestamp,
  • author,
  • recording_device,
  • history (which could be a list itself and store those things currently appended to label)
  • etc. (ideas?)

By the way, Asig.label would be a perfect item in the metadata dict!

Before such a far-reaching change, I'd like to hear your feedback and ideas on this issue.
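
To make the proposal concrete, here is a minimal sketch of what such a dict could hold, assuming the field names from the list above (all values are purely illustrative):

import datetime

# Illustrative metadata dict; field names follow the proposal above.
metadata = {
    'timestamp': datetime.datetime.now(),
    'author': 'thomas',
    'recording_device': 'default input',
    'history': ['fade_in(0.1)', 'gain(db=-6)'],
    'label': '1s sine tone',  # Asig.label could move here, as suggested above
}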

Note that we already have a context variable named '_' (yes, a funny name, but short and nice), e.g. to access results of a1.find_events().plot() via a1._['events'] and self._['plot'].
This is convenient for returning auxiliary data without breaking the logic of always returning self.

Hand in hand with that, I feel that the currently often-used strategy of returning a new Asig with new data

return Asig(newsig, self.sr, self.label, self.cn,...)

is suboptimal, as it often fails to copy '_' and would, if metadata were implemented, become even more complex. A static method Asig._copy() for devs, which duplicates self without copying sig, would probably help; then we could simply write

return Asig._copy(self, newsig, attr_to_overwrite=new_value,...)
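
A minimal sketch of what such a helper could look like (names and semantics are assumptions based on the proposal, not an actual implementation):

import copy

# Hypothetical Asig._copy: shallow-copy all attributes of a template Asig
# (so '_' and future metadata come along), swap in the new signal, then
# overwrite selected attributes. Would live on Asig as a @staticmethod.
def _copy(template, newsig, **attrs_to_overwrite):
    new = copy.copy(template)  # shallow copy shares '_' and other attributes
    new.sig = newsig           # only the signal data is replaced
    for name, value in attrs_to_overwrite.items():
        setattr(new, name, value)
    return new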

Looking forward to hearing your thoughts on both issues.

Bug: Arecorder is using the wrong channels attribute to wrap channels into columns

Currently, there is a bug in the Arecorder callback:

data_float = np.reshape(sigar, (len(sigar) // self.channels, self.channels))

while it should be

data_float = np.reshape(sigar, (len(sigar) // self.input_channels, self.input_channels))

In the 0.3.0 Arecorder I originally introduced input_channels and input_devices to separate them from Aserver's channels and device. But since Arecorder has no output and Aserver has no input, this separation is not really necessary, and dropping it will not create confusion.

I made a fix that now Arecorder also uses device and channels as attributes.

Patching 0.3.0 to 0.3.1 is required ASAP, as with the current bug users will get wrong recordings if their audio device has > 2 input channels.

This can also be a release for the new backend.

Can't have Aserver() and Arecorder() booted separately when using the same device.

Normally we can instantiate Arecorder and Aserver and boot them separately. But when the input and output share the same device index (e.g. an external audio interface that has both input and output), Arecorder returns this error: ||PaMacCore (AUHAL)|| Error on line 2490: err='-50', msg=Unknown Error and stops running, while Aserver is fine.

import time

from pya import Aserver, Arecorder

recorder = Arecorder(device=3)
player = Aserver(device=3)
player.boot()
recorder.boot()
time.sleep(1)
recorder.quit()
player.quit()

In the above case, recorder will not run.

Aserver-nodes

The current Aserver doesn't allow freeing a scheduled or playing signal.
For the Aserver-nodes branch, I propose:

  • to add automatic node_id allocation for any played asig
    • this will require the introduction of a src_nodes list
  • Aserver.play() to return that node_id (Asig.play() stores it in its context dict _['node_id'])
  • service functions to interact with existing nodes, e.g.
    • free_all(): to remove all scheduled and playing entries
    • free(node_id): to remove a specific node_id
    • ... (any other useful suggestion?)
      I have added a threading.Lock(), hope I have secured all critical parts...

Example code could be:

alongsig1.play()
node = alongsig1._['node_id']  # later with Metadata:  node = alongsig1.info.node_id
alongsig2.play()
s.free(node)  # at a later time
# alongsig2 will keep playing...
  • So far only free_all() works. Please try it and comment.

Asig.cn and Asig.col_names are confusing and unnecessary to have both

Asig.cn is a list of channel names, e.g. ['Left', 'Right', 'Center', ...], and then there is another member called Asig.col_name that is only used in get_item to help create a sublist of self.cn based on a channel subset.

I think self.col_name is not necessary, or at least it shouldn't be exposed.

Streamlining data access in Arecorder

Arecorder introduces multiple rather simple methods:

    def reset_recordings(self):
        self.recordings = []

    def get_latest_recording(self):
        return self.recordings[-1]

I wonder if these methods improve the user experience enough to justify their maintenance. If they make it into a release they cannot (or should not) be removed without a deprecation phase. I'd suggest advocating direct manipulation of self.recordings instead:

ar.reset_recordings()      => ar.recordings.clear()
ar.get_latest_recording()  => ar.recordings[-1]
while ar.recordings:
    process(ar.recordings.pop())  # LIFO; use ar.recordings.pop(0) for FIFO

Improve Asig indexing

I found the following problems with the current implementation of Asig channel indexing:

  1. Currently, Asig indexing ignores anything that is not an instance of list, int, slice, or str as the channel index, and it assumes slice(None, None, None) because of Asig.py line 260

    pya/pya/Asig.py

    Lines 260 to 262 in cb19208

    else:  # if nothing is given, e.g. index = (ridx,) on calling a[:]
        cidx = slice(None, None, None)
        cn_new = self.cn

    • Example of using something that will be ignored

      Code Example:

     a = pya.Asig(np.arange(10, 0, -1).reshape(2, 5), 2)
     a.sig
     # array([[10.,  9.,  8.,  7.,  6.],
     #        [ 5.,  4.,  3.,  2.,  1.]], dtype=float32)
    
 a[:, bytes('This makes no sense', 'utf-8')].sig  # channel selection with an unsupported type is silently ignored
     # array([[10.,  9.,  8.,  7.,  6.],
     #        [ 5.,  4.,  3.,  2.,  1.]], dtype=float32)
    • However it would be useful to allow types like numpy.ndarray

      Code Example:

       a = pya.Asig(np.arange(10, 0, -1).reshape(2, 5), 2)
       a.sig
       # array([[10.,  9.,  8.,  7.,  6.],
       #        [ 5.,  4.,  3.,  2.,  1.]], dtype=float32)
      
       order = np.argsort(a.rms())
       # array([4, 3, 2, 1, 0], dtype=int64)
      
       a[:, order].sig
       # array([[10.,  9.,  8.,  7.,  6.],
       #        [ 5.,  4.,  3.,  2.,  1.]], dtype=float32)
       # instead of correctly reversed
      
       order = np.array([4, 3, 2, 1, 0])
       a.sig[:, order]
       # array([[ 6.,  7.,  8.,  9., 10.],
       #        [ 1.,  2.,  3.,  4.,  5.]], dtype=float32)
  2. When using lists with numpy data types like numpy.int32, indexing fails. The current implementation only checks for str, bool and int; cidx will remain unset, resulting in UnboundLocalError: local variable 'cidx' referenced before assignment at Asig.py line 264

    pya/pya/Asig.py

    Lines 263 to 264 in cb19208

    # apply ridx and cidx and return result
    sig = self.sig[ridx, cidx] if self.channels > 1 else self.sig[ridx]

    Code Example:

    a = pya.Asig(np.arange(10, 0, -1).reshape(2, 5), 2)
    a.sig
    # array([[10.,  9.,  8.,  7.,  6.],
    #        [ 5.,  4.,  3.,  2.,  1.]], dtype=float32)
    
    order_list = list(np.array([4, 3, 2, 1, 0]))
    a[:, order_list].sig
    # UnboundLocalError: local variable 'cidx' referenced before assignment at Asig.py line 264

Row indexing works because of Asig.py line 227

pya/pya/Asig.py

Lines 227 to 229 in cb19208

else:  # Don't think there is a use case.
    ridx = rindex
    sr = self.sr
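
One possible direction, sketched under the assumption that a normalization step runs before the channel-index branch at line 260 (the helper name and placement are hypothetical):

import numbers
import numpy as np

# Hypothetical helper: map numpy index types onto the ones get_item already
# handles, so np.ndarray and np.int32 & friends stop falling through to the
# slice(None, None, None) branch. Bools are left alone, since converting a
# boolean mask to ints would change its meaning.
def normalize_cidx(cindex):
    if isinstance(cindex, np.ndarray):
        return cindex.tolist()  # fancy index as a plain list of ints
    if isinstance(cindex, numbers.Integral) and not isinstance(cindex, bool):
        return int(cindex)      # covers int, np.int32, np.int64, ...
    if isinstance(cindex, list):
        return [int(c) if isinstance(c, numbers.Integral) and not isinstance(c, bool) else c
                for c in cindex]
    return cindex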

Arecorder track selection and gain.

In feature-recorder-improvement, I added a new method to Arecorder named set_tracks(self, tracks, gains). This allows individual tracks to be selected for recording before calling record(), e.g.:

ar.set_tracks(tracks=[0, 2], gains=[-3, -10])  # gain in dB

or just with one channel: `ar.set_tracks(tracks=2, gains=-10)`

I then added data_float = data_float[:, self.tracks] * self.gains to the callback to actually subset the channels.

I also added a reset() method to set the recording mode back to recording all tracks with no volume adjustment:

def reset(self):
    self.tracks = slice(None)
    self.gains = np.ones(self.channels)

Originally, I was thinking of introducing input channel selection in record(), but I think it is best for the user to define the desired channels first, just like it would be done in a DAW.

Improve type hints

Most methods currently have no type hints; we should start adding them. We may discover some bugs along the way. :)
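
For instance, a hinted signature could look like this (the method and parameters here are illustrative, not a prescription for the actual API):

from __future__ import annotations

class Asig:
    # Illustrative only: the hints document that gain takes an amplitude
    # factor or a dB value and returns a new Asig for chaining, enabling
    # static checks via mypy.
    def gain(self, amp: float | None = None, db: float | None = None) -> Asig:
        ...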

Ugen, numpy sine array result in different precision on different environment

This only breaks on Python 3.7 on Ubuntu latest, with the generated sine wave having a different precision than in other environments. It can also be reproduced on Fedora 36. Why this difference happens needs to be investigated.

Test log:

    def test_sine(self):
        sine = Ugen().sine(freq=200, amp=0.5, dur=1.0, sr=44100 // 2, channels=2)
        self.assertEqual(44100 // 2, sine.sr)
>       self.assertAlmostEqual(0.5, np.max(sine.sig), places=6)
E       AssertionError: 0.5 != 0.49999684 within 6 places (3.159046173095703e-06 difference)
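
For context, the peak of a sampled sine depends on where the sample grid falls relative to the true maximum, and on the dtype of the buffer, so small cross-environment differences are plausible; a hedged reproduction sketch (this is not the Ugen implementation itself):

import numpy as np

# 200 Hz at sr 22050 gives 110.25 samples per period, so no sample lands
# exactly on the peak; casting to float32 can shift the last digits further.
sr = 44100 // 2
t = np.linspace(0, 1.0, sr, endpoint=False)
sig64 = 0.5 * np.sin(2 * np.pi * 200 * t)  # float64 reference
sig32 = sig64.astype(np.float32)           # float32, as Asig stores signals
print(np.max(sig64), np.max(sig32))        # both slightly below 0.5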

Arecorder uses default device when 0 is used as input

def __init__(self, sr=44100, bs=256, device=None, channels=None, backend=None, **kwargs):

Arecorder has an optional param device that is replaced with the default device if it's None:
self._device = device or self.backend.get_default_input_device_info()['index']

But if device == 0, because I need the device with index 0, the default device will be selected anyway, since 0 is falsy in Python.
I think a check whether device is actually None is needed to correct that; see the sketch below.
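
A minimal sketch of the suggested fix (attribute and backend names copied from the snippet above):

# Hypothetical fix inside __init__: treat only None as "unset",
# so a caller-supplied device index of 0 remains valid.
self._device = device if device is not None else \
    self.backend.get_default_input_device_info()['index']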

Usage of `warning.warn` in pya

I see that @wiccy46 uses warnings.warn quite frequently and I wonder if that is the right choice. warnings.warn is commonly used to signal usage issues (using deprecated methods, passing incomplete/wrong parameters to a function, etc.), whereas logging.warning is used to document/log processes. warnings.warn is by default only emitted once and thus not useful for tracking/debugging issues. warnings can also be configured to treat warnings as exceptions.

tl;dr: If users should change their code, use warnings.warn; for everything else use logging.warning (or better, logger_instance.warning)
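
A minimal sketch of the suggested convention (the deprecated method name is made up):

import logging
import warnings

logger = logging.getLogger(__name__)

def old_method():
    # usage issue: the caller should change their code
    warnings.warn("old_method is deprecated, use new_method instead",
                  DeprecationWarning, stacklevel=2)

def process_block():
    # process documentation: log it, emitted every time it happens
    logger.warning("buffer underrun, dropping block")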

Publish 0.3.0

Will release when #6 has been resolved.

Release Checklist

  • update version
  • review Changelog
  • update build tools: pip3 install -U setuptools twine wheel
  • run tox: tox
  • clear dist: rm -r build/ dist/ pya.egg-info/
  • build release: python3 setup.py sdist bdist_wheel
  • test publication: twine upload --repository-url https://test.pypi.org/legacy/ dist/*
  • review test publication: https://test.pypi.org/project/pya
  • install test release: pip3 install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple pya
  • publication: twine upload dist/*
  • install release: pip install pya
  • commit and draft release: https://github.com/interactive-sonification/pya/releases

Make class transformation one-way only

The ability to transform classes back and forth, e.g. Asig.to_aspec() and Aspec.to_asig(), seems convenient, but in practice I don't see it offering much value. Rather:

  • It creates the problem of circular imports, which is an anti-pattern and difficult to work with.
  • Conversion is not necessarily lossless.
  • It does not improve code readability in a meaningful way over a simple:
asig = Asig(astft.istft(), sr=astft.sr)  # given astft is already a known object

Currently our imports have to be carefully ordered; a simple reordering in __init__.py (e.g. using the reorder-python-imports hook with pre-commit) will create a lot of circular import errors.

I propose we avoid circular imports by making transformation one-way, i.e. only Asig can convert to the others, since Asig should always be the starting point of a piece of audio signal:

  • remove the to_asig() methods from Aspec and Astft, or raise an exception saying they are deprecated, with instructions for the usage above if back-conversion is desired. I don't see many use cases where to_asig is even needed

Otherwise, I am open to a cleaner suggestion for addressing the circular import issue.

(Binder) notebooks not compatible with recent sanic and jupyter versions

This is related to (at least) two separate issues. First, it appears that Jupyter notebooks are no longer executed in the project's context, and second, the way sanic servers are instantiated has changed. The first issue seems rather easy to solve because sys.path contains the folder location as the first entry (at least on my machine and on Binder). The second issue requires a more detailed investigation.

Asig.window_op()

window_op() doesn't work with multi-channel signals.
The problem is in the use of scipy.signal.get_window, which leads to a ValueError: "operands could not be broadcast together with shapes (1024,2) (1024,)"; see the sketch below.
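
A hedged sketch of a possible workaround: get_window returns a 1-D array, so adding a channel axis before multiplying lets it broadcast over multi-channel signals.

import numpy as np
from scipy.signal import get_window

sig = np.zeros((1024, 2), dtype=np.float32)  # stereo block, shape (1024, 2)
win = get_window('hann', sig.shape[0])       # 1-D window, shape (1024,)
windowed = sig * win[:, np.newaxis]          # (1024, 2) * (1024, 1) broadcasts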

Release 0.3.1

  • update version
  • review Changelog
  • update build tools: pip3 install -U setuptools twine wheel
  • run tox: tox
  • clear dist: rm -r build/ dist/ pya.egg-info/
  • build release: python3 setup.py sdist bdist_wheel
  • test publication: twine upload --repository-url https://test.pypi.org/legacy/ dist/*
  • review test publication: https://test.pypi.org/project/pya
  • install test release: pip3 install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple pya
  • publication: twine upload dist/*
  • install release: pip3 install pya
  • commit and draft release: https://github.com/interactive-sonification/pya/releases

Aserver, Arecorder channels argument of the constructor is buggy

  • Aserver has an arbitrary default of 2 channels. This will fail, for example, if you choose a device with only 1 output.
  • Arecorder's channels, if not set, uses the default input device's channel count; if that doesn't match the actual device, it also fails.

Expected behavior: channels lets the user define a specific number of channels to use, but if not set, it should default to the maximum that Aserver.device or Arecorder.device provides.

Arecorder states

  • In my opinion, the Arecorder states should be changed from the current words (stop/record/pause) to 'stopped', 'recording', 'paused'.

  • The current_state attribute should be refactored to state, simply dropping the attribute state which currently holds a dict:
    self.current_state = self.state['pause'] --> self.state = 'paused'

  • As a by-product, users can more easily read code, e.g. they can write
    if ar.state == "paused":
        pass
    instead of
    if ar.current_state == 2:
        pass

Any objections?

Add more data type support.

Supporting only float32 is quite limiting for now; we should support more data types. But this might have a lot of implications.

Maybe we can keep float32 as the default but allow playback, recording, and conversion to other data types such as PCM24. We lose a little precision but gain much wider usability.

Idea:

  • Make Asig.dtype a property that only returns 'float32'; it is not editable
  • Aserver.play to convert data to the target type beforehand. This will impact latency, so it may need improvement
  • Add Asig.sig_pcm24 and others as type-converted numpy arrays (see the sketch below)
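
As a starting point, a hedged sketch of such a conversion for 16-bit PCM (PCM24 would follow the same idea with a 2**23 - 1 scale plus byte packing); the helper name is made up:

import numpy as np

def to_pcm16(sig: np.ndarray) -> np.ndarray:
    # Clip to the valid float range, then scale to the int16 range.
    clipped = np.clip(sig, -1.0, 1.0)
    return (clipped * 32767).astype(np.int16)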

test_mock_arecorder fails in develop

test_mock_arecorder fails in develop since @thomas-hermann's commit:

ERROR: test_mock_arecorder (tests.test_arecorder.TestArecorder)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\projects\pya-b7gkx\tests\test_arecorder.py", line 98, in test_mock_arecorder
    ar.get_latest_recording()
AttributeError: 'Arecorder' object has no attribute 'get_latest_recording'
-------------------- >> begin captured logging << --------------------

Automated Testing and Conda Forge Integration

I forked Conda Forge staged-recipes and added pya. Before I create a PR we should check on which platforms pya can actually be built and tested (with conda). I created a feature-ci branch to work on this. I will use travis-ci (for MacOS and Linux) and appveyor for Windows.

  • fork pya
  • add travis support for osx/linux (alex)
  • add appveyor support for windows (alex)
  • configure appveyor for pya (thomas)
  • configure travis for pya (thomas)
  • create pull request for feature-ci to develop (alex)
  • merge pr into develop
  • check integrity with conda pkgs/main and conda-forge (alex)
  • submit pya recipe (alex)

Release 0.3.2

  • update version
  • review Changelog
  • update build tools: pip3 install -U setuptools twine wheel
  • run tox: tox
  • clear dist: rm -r build/ dist/ pya.egg-info/
  • build release: python3 setup.py sdist bdist_wheel
  • test publication: twine upload --repository-url https://test.pypi.org/legacy/ dist/*
  • review test publication: https://test.pypi.org/project/pya
  • install test release: pip3 install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple pya
  • publication: twine upload dist/*
  • install release: pip3 install pya
  • commit and draft release: https://github.com/interactive-sonification/pya/releases
  • generate documentation
  • push to gh-pages

determine_backend error

Hello
I have the sample program pya-examples.ipynb open in a Jupyter Lab window. I execute the following block:

# If you are using Binder, WebAudio will be chosen.
auto_backend = determine_backend()

And it gives the following error:

<ipython-input-18-011de536e0e6> in <module>
      1 # This will chose the most appropriate backend. If you run this notebook locally, this will be PyAudio.
      2 # If you are using Binder, WebAudio will be chosen.
----> 3 auto_backend = determine_backend()

~/.virtualenvs/son/lib/python3.8/site-packages/pya/helper/backend.py in determine_backend(force_webaudio, port)
     23 def determine_backend(force_webaudio=False, port=8765):
     24     import os
---> 25     hostname = get_server_info()['hostname']
     26     if hostname in ['localhost', '127.0.0.1'] and not force_webaudio:
     27         return None  # use default local backend

TypeError: 'NoneType' object is not subscriptable

Anything obviously wrong at my end?

Convolution

Add convolution functionality to pya

  • Asig method: asig.conv(ir, mode, method)
  • Based on scipy.signal.convolve
  • Or a simpler version: let the user input any sr-matched impulse response (see the sketch below)
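
A hedged sketch of what the proposed method could wrap, using the scipy function named above (the standalone function here stands in for the eventual Asig method):

import numpy as np
from scipy.signal import convolve

def conv(sig: np.ndarray, ir: np.ndarray, mode: str = 'full', method: str = 'auto') -> np.ndarray:
    # 1-D case only; a real Asig.conv would need to handle channels and
    # check that sig and ir share the same sampling rate.
    return convolve(sig, ir, mode=mode, method=method)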

decouple Aserver/Arecorder and pyaudio and introduce backends

I have a suggestion concerning Aserver/Arecorder. I would like to add pyaudio-agnostic support to Aserver at least to use pya in a remote Jupyter Notebook via Binder. Right now Aserver and Arecorder are tightly coupled to pyaudio. This also makes testing more complicated than it needs to be.

If both classes had a backend parameter, one could provide a collection of different input and output backends (e.g. PyAudioPlayer, JupyterPlayer, FFmpegWriter, NullWriter, NumpyWriter for Aserver and PyAudioRecorder, JupyterRecorder, FFmpegReader, NumpyReader for Arecorder) and make both classes far more versatile. The default value must be pyaudio of course.

s = Aserver(bs=1024)
f = Aserver(bs=1024, backend=FileWriter(directory='./out', format='wav'))
j = Aserver(bs=1024, backend=JupyterWidget(autoplay=True))
# ...
atone.play(server=s)
# atone sounds AMAZING, save it
atone.play(server=f)
# this creates a playable html audio element right below the executing Jupyter Cell
atone.play(server=j)
# for remote notebooks just do
Aserver.default = j
# now every play will create an interactive Jupyter widget
atone.play()
# testing models with previously recorded files
ar_test = Arecorder(channels=1, backend=FFmpegReader(directory="in/*.wav", realtime=True, split_signals=False)).boot()
# reads samples from globbed files (in realtime or afap), as single Asig (`split_signals=False`; same behaviour as `pyaudio`) or a new Asig for each new file (`split_signals=True`).
ar_test.record()
ar_test.stop()
# process all the recordings
ar = ar_test
while len(ar.recordings):
    magic_model.process(ar.recordings.pop(0))  # process data FIFO 
# magic_model works well, now let's get real
ar_live = Arecorder(channels=1).boot()
ar = ar_live

Release 0.3.3

  • update version
  • review Changelog
  • update build tools: pip3 install -U setuptools twine wheel
  • run tox: tox
  • clear dist: rm -r build/ dist/ pya.egg-info/
  • build release: python3 setup.py sdist bdist_wheel
  • test publication: twine upload --repository-url https://test.pypi.org/legacy/ dist/*
  • review test publication: https://test.pypi.org/project/pya
  • install test release: pip3 install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple pya
  • publication: twine upload dist/*
  • install release: pip3 install pya
  • commit and draft release: https://github.com/interactive-sonification/pya/releases
  • generate documentation
  • push to gh-pages

Handling Stream-based and event-based audio in Aserver / Arecorder

Aserver currently starts a stream at boot() and stops it at quit(). Asig.play(onset=..., ...) simply dispatches an audio object for output at the given onset time.
However, there are occasions where a more 'sound object'-oriented output would be desirable:

  • atone.play(server=f) using a FilewriterBackend to create a file of a given format
  • atone.play(server=j) using a JupyterBackend to create an interactive widget that allows playing, replaying, and navigating within this asig sound object
  • etc.

Even for the PyaudioBackend, object-based output could be helpful, as it avoids any problems with a chopping stream, which could still happen using the stream-based output.

Propositions to solve this issue:

  1. Create a method Asig.playobj(), which plays the object using the specified backend outside the stream-based callback logic (alternative names: play1() or play_event()).
     This method would not have an onset, and the implementation would basically create & start a stream on each invocation, then close it right after and delete it.

  2. Create a new class altogether, e.g. AEventPlayer(), which would not have boot and quit methods, but would always frame play() invocations as outlined above with stream management ops.

I tend to favor 1., but wonder whether there is a third, hidden, better solution for this problem. I can start to implement it once we are on the same page about it...
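
A minimal sketch of option 1 with the PyAudio backend, opening a short-lived stream per invocation (the function name and signature follow the proposal and are not an actual API):

import numpy as np
import pyaudio

def playobj(sig: np.ndarray, sr: int, channels: int = 1):
    # Create & start a stream, write the whole signal blockingly,
    # then close and delete it, as outlined in proposition 1.
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paFloat32, channels=channels,
                     rate=sr, output=True)
    stream.write(sig.astype(np.float32).tobytes())
    stream.stop_stream()
    stream.close()
    pa.terminate()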
