interactiveaudiolab / caqe
Crowdsourced Audio Quality Evaluation Toolkit
License: MIT License
Currently all of the pairwise configuration templates use a reference stimulus. Create and test a configuration that does not use a reference stimulus.
The examples on the Test Configurations page show that you should set `CAQE_CONFIG='../test_configurations/general_mushra.cfg'` as a path, but CAQE builds this path itself in `__init__.py`, so the examples should read `CAQE_CONFIG='general_mushra.cfg'` instead.
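For context, the relevant line in `__init__.py` (it also appears verbatim in the traceback quoted further down this list) already prepends the directory:

```python
# src/caqe/__init__.py -- CAQE joins the directory onto CAQE_CONFIG itself,
# so the environment variable should contain only the filename.
app.config.from_pyfile('../test_configurations/' + os.getenv('CAQE_CONFIG', 'general_mushra.cfg'))
```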
Right now we have MUSHRA and pairwise tests, but we may want more test types. We need to explain how to add a new test type in the documentation.
Currently the publicly accessible repository does not have any code for estimating the in-situ hearing response from the listening test responses. This code should be added to the analysis module.
When merging the pairwise and MUSHRA code, MUSHRA "conditions" became "condition groups": a condition group shares a set of stimuli, and we wanted pairwise conditions to be individual pairs so that they can be split across participants if necessary. However, this broke the randomization of condition assignment, since in MUSHRA there is now only one condition per group.
To fix this, we also need to randomize over groups, so we need to add a configuration option that allows this; a sketch of the idea follows.
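A minimal sketch of group-level randomization. The names here (`condition_groups` as a list of lists of condition IDs) are hypothetical; CAQE's actual assignment code will differ:

```python
import random

def assign_conditions(condition_groups, n_conditions):
    """Draw up to n_conditions for one participant, randomizing over
    groups first and then over the conditions within each group."""
    groups = [list(g) for g in condition_groups]
    random.shuffle(groups)                 # randomize the group order
    for g in groups:
        random.shuffle(g)                  # randomize within each group
    flat = [c for g in groups for c in g]
    return flat[:n_conditions]
```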
It would be nice to have an analysis view on the server to monitor the progress of the results.
However, this would likely require the use of a Buildpack on Heroku in order to access the SciPy stack.
This is due to both caching and support for partial content requests in CAQE.
When I set the CAQE_CONFIG variable to any config file OTHER than `general_pairwise.cfg` and then call `heroku run python src/create_db.py`, there are no errors and the tests run through normally. However, when I set `CAQE_CONFIG='general_pairwise.cfg'` and then call `heroku run python src/create_db.py`, I get the following error:
File "src/create_db.py", line 13, in <module>
from caqe import db
File "/app/src/caqe/__init__.py", line 24, in <module>
app.config.from_pyfile('../test_configurations/' + os.getenv('CAQE_CONFIG', 'general_mushra.cfg'))
File "/app/.heroku/python/lib/python2.7/site-packages/flask/config.py", line 129, in from_pyfile
exec(compile(config_file.read(), filename, 'exec'), d.__dict__)
File "/app/src/caqe/../test_configurations/general_pairwise.cfg", line 118, in <module>
test_cfg_vars['introduction_html'] = test_cfg_vars['introduction_html'].replace('<quality>', quality)
AttributeError: 'tuple' object has no attribute 'replace'
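The traceback shows that `introduction_html` is a tuple rather than a string by line 118 of the config. One plausible cause (an assumption, not verified against the shipped file) is a stray trailing comma, which turns a string assignment into a 1-tuple:

```python
# Hypothetical illustration of the suspected bug, not the actual cfg contents:
introduction_html = '<p>Welcome</p>',   # trailing comma -> tuple; .replace() fails
introduction_html = '<p>Welcome</p>'    # plain string; .replace() works
```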
When I try to access `caqe.local:5000/mturk_debug` on my local machine, Chrome says that `caqe.local` sent an invalid response. I get this in my logs:
```
127.0.0.1 - - [18/May/2018 15:43:17] "GET /mturk_debug HTTP/1.1" 200 -
127.0.0.1 - - [18/May/2018 15:43:17] "GET /mturk?assignmentId=123RVWYBAZW00EXAMPLE456RVWYBAZW00EXAMPLE&hitId=123RVWYBAZW00EXAMPLE&turkSubmitTo=https://workersandbox.mturk.com&workerId=debugNQFUCL HTTP/1.1" 302 -
127.0.0.1 - - [18/May/2018 15:43:17] code 400, message Bad request syntax ("\x16\x03\x01\x00\xc1\x01\x00\x00\xbd\x03\x03\xa5'\xad\xf8\xc5\xc4m\x91\x9c\x98\x8d\xca\xca\xb3\xfa\xfba\xea}C\x12\x93f$q\xdb\xf3\xc2\xbf\xc4\x95'\x00\x00\x1c\xaa\xaa\xc0+\xc0/\xc0,\xc00\xcc\xa9\xcc\xa8\xc0\x13\xc0\x14\x00\x9c\x00\x9d\x00/\x005\x00")
127.0.0.1 - - [18/May/2018 15:43:17] "���������'����m�����ʳ��a�}C��f$q��¿ĕ'����+�/�,�0̨̩������/5" 400 -
```
The same does not happen if I upload the code to Heroku, but there it is a lot harder to debug properly. The raw bytes in the 400 line look like they could be a TLS handshake, so maybe something is trying to speak HTTPS to the plain-HTTP development server? Any ideas on what might be causing this issue?
Currently we have documented instructions for deploying on Heroku, but we should also provide instructions for alternatives to Heroku.
Create a tool to perform RMS normalization and to create source separation anchors.
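A minimal sketch of the RMS-normalization half, assuming mono float signals as NumPy arrays (the real tool would also need file I/O, clipping checks, and multichannel handling):

```python
import numpy as np

def rms_normalize(signal, target_rms=0.1):
    """Scale `signal` so its root-mean-square level equals `target_rms`."""
    rms = np.sqrt(np.mean(signal ** 2))
    if rms == 0.0:
        return signal                    # silence: nothing to scale
    return signal * (target_rms / rms)
```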
Currently the publicly accessible repository does not have the code for fitting the pairwise models. We should provide both code for estimating posterior distributions of the latent scores using the Thurstone model and its variations as well as point estimates using the Bradley-Terry method, which is faster.
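For illustration, here is a minimal sketch of Bradley-Terry point estimation via minorization-maximization (Hunter, 2004). The input format and function name are assumptions, not CAQE's eventual API:

```python
import numpy as np

def bradley_terry(wins, n_iters=500, tol=1e-9):
    """Fit Bradley-Terry scores from a matrix of pairwise win counts.

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Returns positive skill parameters normalized to sum to 1. Assumes every
    stimulus wins and loses at least once (the usual identifiability condition).
    """
    n_trials = wins + wins.T                       # total i-vs-j comparisons
    total_wins = wins.sum(axis=1)
    p = np.ones(wins.shape[0])
    for _ in range(n_iters):
        denom = n_trials / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)               # no self-comparisons
        p_new = total_wins / denom.sum(axis=1)     # MM update
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p_new
```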
According to Gunicorn's documentation, it is best to run it behind an HTTP proxy server such as Nginx.
To use this with Heroku as a host, we will likely need to use a Buildpack: https://github.com/ryandotsmith/nginx-buildpack
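For reference, a minimal Procfile along these lines might look like the following; `caqe:app` matches the `from caqe import app` import used in `run.py`, but the exact entry point is an assumption, and the nginx buildpack additionally wraps the web process with its own launcher script per its README:

```
web: gunicorn caqe:app
```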
Currently pairwise comparisons are submitted and saved to the database as a set of pairs (the equivalent of a MUSHRA trial), so each participant has to rate all pairs. However, if we want to increase the number of stimuli in a comparison, requiring a single participant to rate all pairs will quickly become infeasible. We should instead save each individual comparison to the database and allow participants to contribute an arbitrarily-sized subset of the total set of pairs.
This will require many changes throughout the code base.
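As a starting point, a hypothetical schema sketch for storing one judgment per row (the table and column names are illustrative, not CAQE's actual models):

```python
from caqe import db  # the existing Flask-SQLAlchemy instance

class PairwiseComparison(db.Model):
    """One A-vs-B judgment, stored individually rather than per full trial."""
    id = db.Column(db.Integer, primary_key=True)
    participant_id = db.Column(db.Integer, db.ForeignKey('participant.id'))
    stimulus_a = db.Column(db.String(128))   # identifier of stimulus A
    stimulus_b = db.Column(db.String(128))   # identifier of stimulus B
    preferred_a = db.Column(db.Boolean)      # True if A was preferred over B
```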
Currently the default segmentation configuration references audio files that were hosted externally on Dropbox. This should be changed to use testing audio files that live in the repository.
Paying bonuses does not work because the participant list is in the remote DB (on Heroku) and not available locally. Since one is not supposed to upload their AWS secrets to Heroku, this makes paying bonuses very awkward (you need to get a database dump from Heroku just to figure out which participants to pay, and then update the database on Heroku).
I think this is a platform-specific (OSX) problem. In order to run create_db.py on my local machine, I had to add these lines to utilities.py:

```python
import crypto
import sys
# make the lowercase 'crypto' package importable as 'Crypto'
# (an OSX-specific workaround)
sys.modules['Crypto'] = crypto
```

But I don't think I had to do that when deploying to Heroku.
As recommended in Brecht De Man's work, add comment fields that allow participants to comment on the motivation for their ratings, etc., and save these in the database; a sketch follows.
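On an existing rating model this could be as small as one nullable column (a sketch; the model and column names are assumptions):

```python
comment = db.Column(db.Text, nullable=True)  # free-text motivation for the rating
```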
We should be able to support pair-specific instructions to allow for a greater variety of tests. These instructions should be saved in the database as part of the configuration.
In DEVELOPMENT mode, I can access the site when I run the app and do the evaluation using port 80, e.g. http://mysite.com/mturk_debug. However, in DEPLOYMENT mode http://mysite.com/mturk_debug doesn't work. I am assuming mturk_debug is for DEVELOPMENT only, and that's why it's not working in DEPLOYMENT mode. How can I access my site and do the evaluation when I am in DEPLOYMENT mode? Is there anything I am missing? Since the site isn't accessible in DEPLOYMENT mode, when I go to MTurk I can see the HITs are posted, but it says it refused to connect. Thank you in advance!
This file was accidentally overlooked when the rest of the code was documented. Sphinx-readable docstrings should be added and the documentation regenerated.
In our research, we found that the "lack of distortions to the target" scale (inclusive of both additive and subtractive distortions to the target) was less confusing for novices than the "absence of additional noises" scale and the "preservation of the target" scale.
Create source separation test configurations that include this quality scale and eliminate the other two.
This may have to do with the fact that 'Reference' is used in the .cfg instead of 'Target'.
On this page: http://interactiveaudiolab.github.io/CAQE/recruiting_on_amt.html it says:

```
$ python turk_admin_cli.py approve-all-hits
```

But this yields the error below:

```
$ python turk_admin_cli.py approve-all-hits
usage: turk_admin_cli.py [-h] [--debug]
                         {create-hits,expire-all-hits,dispose-all-hits,approve-all-assignments,give-first-trial-bonus,give-pairwise-consistency-bonus}
                         ...
turk_admin_cli.py: error: argument command: invalid choice: 'approve-all-hits' (choose from 'create-hits', 'expire-all-hits', 'dispose-all-hits', 'approve-all-assignments', 'give-first-trial-bonus', 'give-pairwise-consistency-bonus')
```

So the documentation should say `python turk_admin_cli.py approve-all-assignments` instead.
"The issue was that because the files were big (1.1MB) the client socket was being closed before waiting for the whole .wav file. .... for a longer term fix you'll need to route those files through send_file_partial
again."
After following the procedure described in the deployment section of the documentation, the test page on Heroku returns a 500 Internal Server Error after clicking "Click here to begin" on the welcome page.
I'm trying to use the CAQE toolkit to see how it implements MUSHRA tests (e.g., the instructions provided to participants). However, I'm running into installation issues; there are multiple, so I'm not looking to address each one.
Is there a lower-lift way to get a sense of how CAQE implements MUSHRA tests? Is there a live demo somewhere, or maybe a paper that has screenshots of the UI?
This repo doesn't seem to get updates, but for awareness: run.py sets `debug=True` by default. This means the Werkzeug debugger's `/console` is available wherever the application is deployed. To exploit this, run commands via the Python interpreter, e.g. `print(__import__('os').popen('ls -al').read())`.

Suggested fix:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from caqe import app

# never ship with the interactive debugger enabled
app.run(debug=False, threaded=True)
```