2e2a / l-rex
Linguistic Rating Experiments
License: GNU General Public License v3.0
Sometimes the same item is accidentally included more than once in an experiment. It would be helpful if L-Rex could give a warning about that when validating the items.
When there is more than one question, it is often useful to balance the order in which the questions are presented. E.g., with two questions, a setting would be useful that presents question 1 first for 50% of the stimuli and question 2 first for the other 50% of the stimuli (per questionnaire).
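For the two-question case, the balancing could be sketched as follows; `balanced_question_orders` is a hypothetical helper, not part of L-Rex:

```python
from itertools import cycle

def balanced_question_orders(num_stimuli, questions):
    """Alternate between the two presentation orders so that each
    order is used for half of the stimuli in a questionnaire."""
    q1, q2 = questions
    orders = cycle([[q1, q2], [q2, q1]])
    return [next(orders) for _ in range(num_stimuli)]
```

With 4 stimuli and questions `["Q1", "Q2"]`, stimuli 1 and 3 would show Q1 first, and stimuli 2 and 4 would show Q2 first.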
Can be easily done using the archive functionality.
Add a URL parameter to try out participation. These trials are marked accordingly and are not considered in the results.
The models use enum.auto, which was added in Python 3.6. AFAIK this is the only language feature used that requires Python 3.6. Debian stable currently only contains Python 3.5, so requiring Python 3.6 may cause issues for some contributors. Is it possible to not use enum.auto?
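One possible workaround: assign the enum values explicitly. A minimal sketch (the member names below are made up for illustration, not taken from the l-rex models):

```python
import enum

class ItemType(enum.Enum):
    # enum.auto() (Python 3.6+) would assign these values automatically;
    # explicit values keep the code compatible with Python 3.5.
    TEXT = 1
    AUDIO = 2
```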
Do not include the subject ids in results.
The ids could be included in the subject specific file containing the participation codes.
... from the study detail page
Two suggestions for audio items:
Within a training phase at the beginning of the experiment, it can be desirable to give the participants feedback to let them know whether they understood the task as intended.
Suggested feature:
In addition to allowing audio files (via URLs), it could be useful to also support image files in the same way.
In the settings of an experiment with blocks, it is possible to change the name but not the block; since "block" is a required field, this makes any settings changes impossible.
Currently, after changing the items of an experiment, lists have to be generated manually, including choosing the type of distribution. This is an error-prone process, as items are frequently changed/updated while setting up an experiment.
Proposal:
When participants start a trial but then abandon the questionnaire, this can create a problem for the data distribution: there will not be a complete data set for this questionnaire version.
Potential solution: provide a study setting for deleting abandoned questionnaires automatically after a certain duration of inactivity.
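The inactivity check could be sketched like this; the two-hour timeout and the helper name are assumptions, not existing L-Rex behavior:

```python
from datetime import datetime, timedelta

ABANDON_AFTER = timedelta(hours=2)  # assumed default; would be a study setting

def is_abandoned(last_activity, now, timeout=ABANDON_AFTER):
    """True if a started questionnaire has been inactive longer than the
    timeout and can be deleted so that its version gets reassigned."""
    return now - last_activity > timeout
```

A periodic job could then delete every started-but-unfinished questionnaire for which this returns true.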
The current texts are too short and should be extended with more explanatory introduction texts.
The UUID slug is not readable in the exported CSV.
Maintaining the correct state of the state machine when editing a study after it has been published is not possible in general.
The CSV support of Excel is terrible and some Windows users don't use an alternative.
So, as bad as it is, we might need to add support for Excel files.
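Short of full .xlsx support, one common workaround is writing CSV in a form Excel handles better: a UTF-8 byte-order mark so Excel detects the encoding, plus a semicolon delimiter, which many Excel locales expect. A sketch, not the current export code:

```python
import csv

def write_excel_friendly_csv(path, rows):
    # utf-8-sig prepends a BOM so Excel detects the encoding;
    # newline='' lets the csv module control the line endings.
    with open(path, "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f, delimiter=";")
        writer.writerows(rows)
```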
Currently only the links of inactive cards are disabled. Disabling the whole card should improve usability.
As I never liked the readability of format() ...
Optionally show collapsed rating instructions on each rating form.
After items are uploaded:
In addition to marking an experiment as "fillers", it would be helpful to be able to mark it as "examples": these items would always appear at the very beginning of each questionnaire. Currently this can done by manually creating a separate block for them, but since this is often needed, it would be great if there was an easier way.
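The ordering rule could look like this; the `is_example` flag is hypothetical:

```python
def order_for_questionnaire(experiments):
    """Put experiments marked as examples at the very beginning;
    sorted() is stable, so the relative order within each group
    is preserved."""
    return sorted(experiments, key=lambda e: not e["is_example"])
```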
Currently the mapping is done by the rating order, which is too implicit.
It would be great to have the option to pseudo-randomize the questionnaires in addition to full/no randomization.
Options that might be available for each experiment:
When there are experiments that are empty or have draft status, it is not possible to change the block instructions, but the text fields are still active.
Now that there are more features, separate pages for basic study settings / advanced settings ("Use blocks", "Pseudo-randomize question order", "rating comment") might be helpful.
When I change the name of my study, the old study name still appears in some places (e.g., when I look at the experiments or download the results).
For certain question types (e.g., forced choice between two categories), it would be useful to be able to randomize the possible responses, i.e., the "scale values".
Ideally, this would be done in a similar way as the randomization of question order: such that all possible orders occur roughly the same number of times within a questionnaire.
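One way to balance the response orders, sketched with a hypothetical helper (mirroring how question order randomization is described above):

```python
from itertools import cycle, permutations

def balanced_response_orders(scale_values, num_items):
    """Cycle through all orders of the scale values so that, within a
    questionnaire, each order occurs roughly equally often."""
    orders = cycle(permutations(scale_values))
    return [list(next(orders)) for _ in range(num_items)]
```

For a two-category forced choice, this alternates the two orders across the items of a questionnaire.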
I would like to have an additional column in the CSV result file, indicating the item's position within the questionnaire. This can be important to check whether an item was rated differently when it occurred early vs. late within the questionnaire (trial effect).
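Adding such a column at export time would only require enumerating the result rows in presentation order; a sketch with made-up row dicts, not the actual export code:

```python
def add_position_column(rows):
    """Annotate each result row with the item's 1-based position
    within the questionnaire, in presentation order."""
    for position, row in enumerate(rows, start=1):
        row["position"] = position
    return rows
```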
Currently, these two things happen at the same time by choosing a randomization type from the drop-down menu. This means that the randomization type has to be chosen every time questionnaires are generated (which happens frequently during questionnaire setup). Suggestion: have a drop-down menu for the randomization type, which saves the previous choice, and a separate button for generating.
Suggestion: set EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' in the development settings; currently, some things crash if no mail server is available.
Would increase the usability of the questions form.
When doing query optimization, I explicitly queried object lists to avoid a query on each foreign key access in a loop. This can be done much more cleanly using the select_related query feature.
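For reference, the pattern looks roughly like this (model and field names are placeholders; this is a Django ORM sketch, not runnable on its own):

```python
# Before: one query for the item list, plus one extra query
# per foreign key access inside the loop.
for item in Item.objects.all():
    print(item.experiment.title)  # each access hits the database again

# After: select_related() joins the foreign key in the initial query.
for item in Item.objects.select_related('experiment'):
    print(item.experiment.title)  # no extra queries
```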
When creating a question, it could be useful to have the following options:
For some experiments, it would be useful if the responses (scale values) could be an audio or image file.
Possible implementation:
When the user leaves a page with a text field without submitting (e.g., when editing the instructions or an item), there should be a warning that the input will be lost.