l-rex's Issues

Check for repeated items

Sometimes the same item is accidentally included more than once in an experiment. It would be helpful if L-Rex could give a warning about that when validating the items.
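
A minimal sketch of what such a check could look like, assuming each item exposes a `text` attribute (the attribute name and the warning mechanism are assumptions, not L-Rex's actual API):

```python
from collections import Counter

def find_repeated_items(items):
    """Return item texts that occur more than once (illustrative helper)."""
    counts = Counter(item.text.strip() for item in items)
    return [text for text, n in counts.items() if n > 1]

# During validation, a warning could then be raised per duplicate, e.g.:
# warnings.extend(
#     f'Item "{text}" appears more than once.'
#     for text in find_repeated_items(experiment_items)
# )
```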

Randomize/balance question order

When there is more than one question, it is often useful to balance the order in which the questions are presented. E.g., when there are two questions, a setting would be useful that would allow presenting question 1 first for 50% of the stimuli and question 2 first for the other 50% of the stimuli (per questionnaire).
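
A possible balancing scheme, sketched here for an arbitrary number of questions (function and parameter names are illustrative, not part of L-Rex):

```python
import itertools
import random

def balanced_question_orders(questions, n_stimuli):
    """Return one question order per stimulus so that every possible order
    is used roughly equally often within a questionnaire (sketch only)."""
    orders = list(itertools.permutations(questions))
    assignment = [list(orders[i % len(orders)]) for i in range(n_stimuli)]
    random.shuffle(assignment)  # avoid a predictable alternation across trials
    return assignment

# With two questions and 40 stimuli, each order covers 20 stimuli:
# balanced_question_orders(['question 1', 'question 2'], n_stimuli=40)
```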

Copy a study

Can be easily done using the archive functionality.

Try out participation

Add a URL parameter for trying out participation. Such trials are marked accordingly and are not considered in the results.

python 3.6 required

The models use enum.auto, which was added in Python 3.6. As far as I know, this is the only language feature used that requires Python 3.6.

Debian stable currently only contains Python 3.5, so requiring Python 3.6 may cause issues for some contributors.

Is it possible to not use enum.auto?
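
One option would be to give the enum members explicit values, which behaves the same and also works on older Python versions. A sketch (the enum name and members are placeholders, not the actual model code):

```python
from enum import Enum

# enum.auto() requires Python 3.6:
#
#     class ItemType(Enum):
#         TXT = enum.auto()
#         AUDIO_URL = enum.auto()
#
# Explicit values work on Python 3.5 as well:

class ItemType(Enum):
    TXT = 1
    AUDIO_URL = 2
```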

Anonymize results

Do not include the subject IDs in the results.

The IDs could instead be included in the subject-specific file containing the participation codes.
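
A sketch of how the export could be split; all column and field names here are assumptions about the result format, not the actual one:

```python
import csv

def write_exports(rows, results_path, subjects_path):
    # Anonymized results: no subject IDs, only the ratings themselves.
    with open(results_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['questionnaire', 'item', 'condition', 'rating'])
        for row in rows:
            writer.writerow([row['questionnaire'], row['item'],
                             row['condition'], row['rating']])

    # Subject-specific file: ID and participation code, kept separate.
    subjects = {row['subject_id']: row['participation_code'] for row in rows}
    with open(subjects_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['subject_id', 'participation_code'])
        for subject_id, code in sorted(subjects.items()):
            writer.writerow([subject_id, code])
```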

Extend item type "audio URL"

Two suggestions for audio items:

  • allow several URLs (to present more than one audio file within the same item)
  • add an optional (Markdown) description

Give feedback based on response

Within a training phase at the beginning of the experiment, it can be desirable to give the participants feedback to let them know whether they understood the task as intended.

Suggested feature:

  • For each item: optionally specify a feedback string and a set of responses; this feedback will then be shown as a note/warning when the participant chooses one of the specified responses for this item (a minimal configuration is sketched after this list).
  • Example: For item 1a, show the feedback "Are you sure that you want to give this item a high rating? Note that it contains an error." if the rating is 4 or 5.
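
A minimal sketch of the per-item configuration this would need (the data layout and names are hypothetical):

```python
# Hypothetical per-item feedback configuration.
ITEM_FEEDBACK = {
    '1a': {
        'responses': {4, 5},
        'message': 'Are you sure that you want to give this item a high '
                   'rating? Note that it contains an error.',
    },
}

def feedback_for(item_number, rating):
    """Return the feedback text if the rating is one of the flagged responses."""
    entry = ITEM_FEEDBACK.get(item_number)
    if entry and rating in entry['responses']:
        return entry['message']
    return None
```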

Support image URLs

In addition to allowing audio files (via URLs), it could be useful to also support image files in the same way.

Simplify list generation

Currently, after changing the items of an experiment, lists have to be generated manually, including choosing the type of distribution. This is an error-prone process, as items are frequently changed/updated while setting up an experiment.

Proposal:

  • Set distribution type (Latin Square vs. show-all) once per experiment. Create lists automatically upon upload, or via a button similar to "validation". (A sketch of the Latin-square distribution follows this list.)
  • Potentially remove the "Items / Lists" view (would reduce the complexity of the interface by one level) and integrate lists in a unified "Experiment" view.
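
For reference, a minimal sketch of the Latin-square distribution, assuming items are uploaded ordered as item 1/condition 1, item 1/condition 2, and so on (this is not the project's actual code):

```python
def latin_square_lists(items, n_conditions):
    """Distribute items over n_conditions lists so that every list contains
    each item exactly once and each condition equally often (sketch)."""
    lists = [[] for _ in range(n_conditions)]
    for index, item in enumerate(items):
        item_number, condition = divmod(index, n_conditions)
        list_number = (condition - item_number) % n_conditions
        lists[list_number].append(item)
    return lists

# The "show-all" distribution would simply be a single list with all items.
```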

Deleting abandoned questionnaires

When participants start a trial but then abandon the questionnaire, this can create a problem for the data distribution: there will not be a complete data set for this questionnaire version.

Potential solution: provide a study setting for deleting abandoned questionnaires automatically after a certain duration of inactivity.
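
A sketch of such a cleanup, assuming a Trial model with `is_finished` and `last_modified` fields (the model, related name, and field names are guesses):

```python
from datetime import timedelta

from django.utils import timezone

def delete_abandoned_trials(study, max_inactive_hours=48):
    """Delete trials that were started but not finished within the allowed
    inactivity window (sketch)."""
    cutoff = timezone.now() - timedelta(hours=max_inactive_hours)
    study.trial_set.filter(is_finished=False, last_modified__lt=cutoff).delete()
```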

Support for odt/excel files

Excel's CSV support is terrible, and some Windows users don't use an alternative.

So, as bad as it is, we might need to add support for Excel files.
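
For the .xlsx case, openpyxl could be one option. A sketch of reading item rows, where the expected columns are an assumption about the upload format:

```python
from openpyxl import load_workbook

def read_items_from_xlsx(path):
    """Read (number, condition, text) rows from the first worksheet."""
    workbook = load_workbook(path, read_only=True)
    sheet = workbook.active
    rows = sheet.iter_rows(min_row=2, values_only=True)  # skip the header row
    return [
        {'number': number, 'condition': condition, 'text': text}
        for number, condition, text in rows
    ]
```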

Improve next steps texts

  • introduce the system at the beginning, e.g. "welcome..."
  • use full sentences
  • use the experiment title
  • do not use "here" as link text

Deactivate inactive cards

Currently only the links of inactive cards are disabled. Disabling the whole card should improve usability.

Automate more study creation steps

After items are uploaded:

  • automatically validate
  • generate lists (setting for distribution)
  • generate questionnaires (will need to consider block instructions)

Special type of experimental items: examples

In addition to marking an experiment as "fillers", it would be helpful to be able to mark it as "examples": these items would always appear at the very beginning of each questionnaire. Currently this can be done by manually creating a separate block for them, but since this is often needed, it would be great if there was an easier way.

Pseudo-randomization

It would be great to have the option to pseudo-randomize the questionnaires in addition to full/no randomization.

Options that might be available for each experiment (a possible check is sketched after this list):

  • Between two items of this experiment, there must be at least X items from other experiments.
  • Two subsequent items of this experiment do not stem from the same condition.
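
A simple rejection-sampling sketch of these constraints; the `experiment` and `condition` attributes and all names are assumptions:

```python
import random

def satisfies_constraints(order, min_gap=1, no_repeated_condition=True):
    """Check the two proposed constraints on an ordered questionnaire (sketch)."""
    for i, item in enumerate(order):
        # At least `min_gap` items from other experiments must follow an item
        # before another item of the same experiment may appear.
        window = order[i + 1:i + 1 + min_gap]
        if any(other.experiment == item.experiment for other in window):
            return False
        # Two subsequent items of the same experiment must differ in condition.
        if no_repeated_condition and i + 1 < len(order):
            nxt = order[i + 1]
            if nxt.experiment == item.experiment and nxt.condition == item.condition:
                return False
    return True

def pseudo_randomize(items, max_tries=1000, **constraints):
    """Reshuffle until the constraints are met (simple rejection sampling)."""
    for _ in range(max_tries):
        order = random.sample(items, len(items))
        if satisfies_constraints(order, **constraints):
            return order
    raise RuntimeError('No valid order found; the constraints may be too strict.')
```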

Block instructions can be edited but not submitted

When there are experiments that are empty or have draft status, it is not possible to change the block instructions, but the text fields are still active.

  • deactivate text fields
  • provide a note explaining why editing is currently disabled

Advanced features on separate page

Now that there are more features, separate pages for basic and advanced study settings ("Use blocks", "Pseudo-randomize question order", "rating comment") might be helpful.

Changing the study name

When I change the name of my study, the old study name still appears in some places (e.g., when I look at the experiments or download the results).

Randomize scale values

For certain question types (e.g., forced choice between two categories), it would be useful to be able to randomize the possible responses, i.e., the "scale values".

Ideally, this would be done in a similar way as the randomization of question order: such that all possible orders occur roughly the same number of times within a questionnaire.
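
This could mirror the balancing sketched for question order above; for a forced choice between two categories it reduces to showing each of the two orders for half of the items. A small self-contained sketch (names are again illustrative):

```python
import itertools
import random

def balanced_scale_orders(scale_values, n_items):
    """One order of scale values per item, with every possible order used
    roughly equally often within a questionnaire (sketch)."""
    orders = list(itertools.permutations(scale_values))
    assignment = [list(orders[i % len(orders)]) for i in range(n_items)]
    random.shuffle(assignment)
    return assignment
```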

Include 'position in questionnaire' in results

I would like to have an additional column in the CSV result file, indicating the item's position within the questionnaire. This can be important to check whether an item was rated differently when it occurred early vs. late within the questionnaire (trial effect).

Separate choosing randomization from questionnaire generation

Currently, these two things happen at the same time by choosing a randomization type from the drop-down menu. This means that the randomization type has to be chosen every time questionnaires are generated (which happens frequently during questionnaire setup). Suggestion: have a drop-down menu for the randomization type, which saves the previous choice, and a separate button for generating.

Implement study finish/cleanup

  • create an archive with csv files required to recreate the study (settings, items, lists, questionnaires)
  • create results csv
  • delete most database objects
  • delete uploaded audio files

default setup is a mix between dev and prod

  • settings.py uses SQLite as the database, but scripts/reset_db.sh assumes Postgres
  • settings.py does not set EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend', so some things crash if no mail server is available
  • django-debug-toolbar is listed in requirements.txt
  • email confirmation assumes HTTPS, which is often not the case in development (a possible development settings override is sketched below)
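
One way to keep development defaults separate, sketched as a local settings override; the file name and layout are suggestions, not the project's current structure:

```python
# local_settings.py (imported at the end of settings.py during development;
# everything here is a suggestion, not the project's current setup)

DEBUG = True

# SQLite needs no running database server for development.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'db.sqlite3',
    }
}

# Print outgoing mail to the console instead of requiring a mail server.
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
```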

Use Django's select_related feature

When doing query optimization, I explicitly queried the object list to avoid a query on each foreign key in a loop.

This could be done much more cleanly using the select_related query feature.
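
For illustration, with hypothetical model and field names (not necessarily L-Rex's actual ones):

```python
# One query per loop iteration for each foreign-key access:
for rating in Rating.objects.filter(trial=trial):
    print(rating.trial_item.item.experiment.title)

# select_related() joins the foreign keys into a single query:
ratings = Rating.objects.filter(trial=trial).select_related(
    'trial_item__item__experiment'
)
for rating in ratings:
    print(rating.trial_item.item.experiment.title)
```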

Free text responses in addition to ratings

When creating a question, it could be useful to have the following options:

  • rating: none/optional/required
  • free text comment: none/optional/required

This would allow new question types: ratings + (optional) comment, or also questions with free text input only (without a rating).

New type of scale value: URL

For some experiments, it would be useful if the responses (scale values) could be an audio or image file.

Possible implementation:

  • present numbered audio/image files
  • present rating scale containing these numbers
  • the participant chooses one of the files by selecting the corresponding number on the rating scale
