
lookit-docs's Introduction

lookit-docs

Documentation for the Lookit project, including use of the platform and the lookit-api and ember-lookit-frameplayer repos.

Lookit platform: https://lookit.readthedocs.io/en/develop/

Ember Lookit Frameplayer (experiment runner): https://lookit.readthedocs.io/projects/frameplayer/en/latest/

Installation

Set up your environment so that you can build the HTML files and review them.

First, create a local copy of this repository.¹

/ $ git clone https://github.com/lookit/lookit-docs.git
/ $ cd lookit-docs

You will need Python 3.8 installed. (Note: Python 3.9+ has a dependency issue that prevents building these HTML files; you can install 3.8 alongside any other Python versions you might have.)

If necessary, install Python 3.8 now (for example, with Homebrew on macOS):

brew install python@3.8

Create a virtual environment using Python 3.8, and then install the project dependencies:

/lookit-docs $ virtualenv denv --python=/opt/homebrew/bin/python3.8
/lookit-docs $ source denv/bin/activate
(denv) /lookit-docs $ pip install -r docs/requirements.txt

If you see an error about needing to install the Sphinx RTD theme, run the following:

pip install sphinx_rtd_theme

You should now be able to build the documentation files using the make html command from inside the docs subdirectory:

(denv) /lookit-docs $ cd docs/
(denv) /docs $ make html

The above command should create (or overwrite) HTML files in docs/build/html.

Editing the documentation

To make changes to the documentation pages, start from the develop branch.

(denv) /lookit-docs $ git checkout develop
(denv) /lookit-docs $ git pull origin develop

In most cases you will make your changes on a separate branch, rather than on develop:

(denv) /lookit-docs $ git checkout -b my-branch-name

Edit the documentation files and review the rendered HTML files by running the make html command from inside the docs subdirectory. When finished editing, commit your changes and push to your branch on the remote repository:

(denv) /lookit-docs $ git add .
(denv) /lookit-docs $ git commit -m 'my commit message'
(denv) /lookit-docs $ git push origin my-branch-name

Finally, create a GitHub pull request from your branch into develop for others to review.

Contributing

Have you spotted an error in the documentation, or have other suggestions for improvement? We'd love your contributions! If you're not sure whether we'll want to incorporate your changes, please open an Issue in this repository so that we can discuss it first.

If you are not part of the Lookit team and would like to contribute changes to the Lookit documentation, you will follow the same steps listed above except that you will make changes on your own fork of this repository. Here is a summary of the process:

  1. Fork this repository.
  2. Set up your local environment, following the Installation steps above.
  3. Make your changes, commit them, and push to your fork/branch.
  4. Submit a pull request from your fork/branch targeted at the develop branch for this repository.

Footnotes

  1. If you are not part of the core Lookit team then you should fork this repository first (fork button in upper-right corner of this page), then clone your fork.

    / $ git clone https://github.com/<your-username>/lookit-docs.git
    / $ cd lookit-docs
    


lookit-docs's Issues

"ready to go" audio instructions

Under the "A study from ground up" --> 5. instructions in the second paragraph underneath the code, there are instructions that read "As in the preview, here are some stimuli you need to add! Browse the audio files here to find an mp3 and ogg version of a “ready to go!”"

The closest audio file is "readygo_HO.mp3" in the given link. I think it would help to either make a new recording that matches the instructions (saying "ready to go!"), or change the instructions to fit the exact wording of the audio file being used (i.e. rather than "an mp3 and ogg version of a “ready to go!”", it could read "an mp3 and ogg version of a “ready...go”".

Standardizing the 100-word preview summaries displayed in study grid

Researchers have many different approaches to writing the "under 100 words" preview summary that appears on the listing page of available studies. That is a field that was added more recently than the "short description" and "purpose" fields, and we have not updated instructions, self-review checklists, or admin review to reflect it.

For example, researchers use different mixes of "what" and "purpose", and some just say "This study will take place on a video call, live with a researcher!" and nothing more. There are many more variations!

For some useful standardization, one idea is that we require the following template (exceptions allowable for good reasons):
[optional detail about study delivery] + [major detail about what happens] + [optional major detail about purpose]

Below is what it would look like for a few examples. In each example, both optional bits are included, but some studies might not have anything to include in the first bit (especially if it was just a "standard" internal study):

  • "In this scheduled video chat with a researcher, your baby will look at pictures of orange and blue shapes, and we will learn how babies understand numbers."
  • "In this study that requires your child to have paper and crayons, your child will draw pictures of imaginary objects, and we will learn about children's artistic preferences."
  • "In this study that involves at least one scheduled session and might include up to four more, your child will chat with a researcher about science demonstrations, and we will learn about children's memory abilities."

This should be updated in multiple places:

  • Admin review document (Mark can do this, once we decide on content, such as the above suggestion)
  • The Read the Docs entry for "Preview summary" here
  • The self-review checklist should mention this

Developer environment setup

In step 4 of this page, the given command threw a permissions error:

git clone git@github.com:lookit/lookit-api.git

However, this command worked just fine:

git clone https://github.com/lookit/lookit-api.git

Is there any problem with having regular users do the latter?

Updating required phrasing of all scheduled studies

Right now, we ask (in admin review) that all scheduled studies say

  • This study will take place on a video call, live with a researcher! Clicking on the “Participate Now!” button will send you to an online calendar where you can select a date and time that works for you.

But this is out of date! The button says "Schedule a time to participate".

There are three things that should likely happen at the same time:

  1. Update the admin review template (Mark can do this). The text about the button just needs to be updated.
  2. Update all of the currently-posted live studies. (We could tell researchers we intend to do this to their studies, with a downtime of just seconds per study, and then someone, likely Mark, could do a pass through all studies to make the change.)
  3. The "setting study details" page on Read the Docs should be updated to indicate the required wording under "Short description".

Fold in content from CHS researcher resources

This is content from a page on the old CHS website to keep and add to the docs! From https://childrenhelpingscience.com/researcher-resources

Resources for Researchers Developing Online Studies
Building on previous advances is a core part of every scientific discipline. There are additional advantages to using standard approaches in online child research: some families will choose to participate in studies from multiple researchers, and using standard approaches can lead to a more consistent experience. This can be especially important when trying to minimize “cognitive load” for young children. Below are links to a variety of groups developing approaches and discussing options. Some of these links are starting points that link to other resources (e.g., example PowerPoint or Keynote slides for consent and warmup).

Getting Started
The two labs below have posted materials about how to get started, including working with IRBs, how to do online consent, and ideas for different study designs.

TheChildLab.com at Yale https://www.thechildlab.com/for-researchers

The Social Learning Lab at Stanford https://github.com/sociallearninglab/online_testing_materials

OSF Library of Assessment Designs https://osf.io/2yz6r/

Testing babies online over Zoom (part 1, part 2)

Hosting Studies
Although many online child studies are done over video chat as a scheduled session, others are “asynchronous” and “unmoderated.” Here are two platforms you can investigate to potentially create and host unmoderated studies.

Lookit https://github.com/lookit/research-resources/wiki

PANDA http://discoveriesonline.org/

More Places to Read and Discuss
Publications Using Online Child Data https://childrenhelpingscience.com/publications

Discussion Forum (online adult and child) https://groups.google.com/forum/#!forum/online-experiments

Discussion Forum (SRCD) https://commons.srcd.org/communities/community-home?CommunityKey=bd3d326e-b7db-49bf-abbb-73642ac0576c

ICIS Webinar about Online Research https://infantstudies.org/2020/04/28/webinar-online-data-collection-stories-of-success-and-challenge-in-transitioning-from-lab-based-to-online-research/

randomization code mix-up in tutorial

In the randomization section at the end of "Setting up your first study", under the "Counterbalance the test question" header, the code format shown in the instructions differed from the code I had originally copied for this section (from Lookit Tutorial part 1).

In the template code, the bunnya01 audio and image appear in the same section (the audio precedes the image), but the code shown in the instructions displays the reverse. The two different code excerpts are shown below (labeled with headers).

I think it would be helpful for the instructions to match the format of the template code used during the tutorial; a working version is provided at the bottom.

Template code (Copy of Lookit Tutorial part 1)
{

                "audio": "bunnya01",
                "images": [
                    {
                        "id": "storybookIllustration",
                        "src": "bunnya01.png",
                        "top": 0,
                        "left": 10,
                        "width": 80
                    }
                ],
                "doRecording": true

}

Instructions code excerpt
{

         "images": [
             {
                 "id": "storybookIllustration",
                 "src": "bunnya01.png", <-- but change this to "QUESTION_IMAGE"
                 "top": "0",
                 "left": "10",
                 "width": "80"
             }
         ],
         "audio": "bunnya01" <-- and change this to "QUESTION_AUDIO"
     },
      ...
       ],
 "commonFrameProperties": { <-- everything in here can stay the same
     ...
 },
     "parameterSets": [ <-- add this section!
    {
        "QUESTION_IMAGE": "bunnya01.png",
        "QUESTION_AUDIO": "bunnya01"
    },
    {
        "QUESTION_IMAGE": "bunnyb01.png",
        "QUESTION_AUDIO": "bunnyb01"
    }
]

Working code (matching excerpt), annotated in the same way as the instructions excerpt:
{

                "audio": "QUESTION_AUDIO", <-- but change this to "QUESTION_AUDIO"
                "images": [
                    {
                        "id": "storybookIllustration",
                        "src": "QUESTION_IMAGE", <-- and change this to "QUESTION_IMAGE"
                        "top": 0,
                        "left": 10,
                        "width": 80
                    }
                ],
                "doRecording": true
            },
             ...
             ],
             "commonFrameProperties": { <-- everything in here can stay the same
             ...
 },
            "parameterSets": [ <-- add this section!
            {
                "QUESTION_AUDIO": "bunnya01",
                "QUESTION_IMAGE": "bunnya01.png"
            },
            {
                "QUESTION_AUDIO": "bunnyb01",
                "QUESTION_IMAGE": "bunnyb01.png"
            }
        ]

Update study form documentation to match study-eligibility changes

We've added the ability to limit study eligibility by previous studies (participated-yes and participated-no), and this occasioned reorganizing the form a little bit to make it conceptually clearer. We should update the corresponding documentation to match the new order and explain the new fields!

Relevant docs page: https://lookit.readthedocs.io/en/develop/researchers-set-study-fields.html#

See PR and issue:
lookit/lookit-api#1047
lookit/lookit-api#625

Add new Experiment Runner config docs

We plan to move the experiment runner config to its own view. This should be documented, and the documentation around study editing/creation should be updated as well.

"participants" spelling

This is very, very minor, but I noticed that in the instructions in "A study from the ground up" --> "5. instructions", the word "participants" is spelled incorrectly (second sentence).

Update screenshots and instructions for accessing consent manager in tutorial

As I was going through the tutorial, I noticed that in part 6, under the heading 'checking for informed consent and giving feedback', the webpage appears slightly different from what the tutorial describes. In my web browser, the consent manager and responses are two separate menus rather than one combined menu. That is, you cannot access the consent manager from the view responses tab of Apples & Oranges. Although it is easy enough to figure out, this may need to be updated for accuracy in the tutorial.

improve readme instructions for docs set up

I got a few errors when setting up my local environment for building the docs. We should add some notes about this and more detail to the instructions in the lookit-docs readme.

Update docs on study prior participation criteria

We need to add/clarify a few things in the docs to go along with the changes to the 'must have/not have participated' study eligibility criteria:

Specifically, the docs should clarify the following:

  • How prior study participation is checked: we count any non-empty response to an internal study as having participated, even if the participant does not finish. We also count any attempt to do an external study (since we have no way of knowing whether the child completed an external study).
  • If the researcher wants to prevent a child from doing their study more than once, they can add the study itself to the 'must not have participated' list, keeping in mind that the child will not be considered eligible if they start the study but don't finish. We can point researchers to alternatives if they want to implement the study participation criteria in a different way (e.g. get past sessions from the protocol generator to check that they haven't completed the current study).
  • If there are multiple studies in the 'must have participated' list for study eligibility, the child must have participated in ALL of the studies in the list. That is, in order to be eligible, they must have participated in study1 AND study2 AND ... studyN.
  • If there are multiple studies in the 'must not have participated' list for study eligibility, the child is only eligible if they have not participated in ANY of the studies in the list. (See the sketch after this list for how the two lists combine.)
  • If the child is not eligible due to the study's prior participation requirements, they can still choose to participate in the study, but they will see a 'red text' warning below the 'Participate Now' button when the ineligible child is selected.
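To make the combined must-have / must-not-have logic above concrete, here is a minimal illustrative Python sketch. This is not the lookit-api implementation: the function name and data shapes are made up for illustration, and "participated" follows the working definition used elsewhere in these issues (a response object exists for the child/study pair, even an unfinished one).

    # Illustrative sketch only -- not the actual lookit-api eligibility code.
    # "participated" means: any response object exists for the child/study pair,
    # even an abandoned or otherwise incomplete one.
    def is_eligible_by_participation(participated_study_ids, must_have, must_not_have):
        # The child must have participated in study1 AND study2 AND ... studyN.
        has_all_required = all(sid in participated_study_ids for sid in must_have)
        # The child is eligible only if they participated in NONE of these studies.
        has_none_excluded = not any(sid in participated_study_ids for sid in must_not_have)
        return has_all_required and has_none_excluded

    # Example: the child has responses (even partial ones) for studies "A" and "B".
    history = {"A", "B"}
    print(is_eligible_by_participation(history, must_have=["A"], must_not_have=["C"]))    # True
    print(is_eligible_by_participation(history, must_have=["A", "C"], must_not_have=[]))  # False
    print(is_eligible_by_participation(history, must_have=[], must_not_have=["B"]))       # False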

Add jsPsych docs

We need to add information about the jsPsych study type to the Lookit docs. This should correspond to the initial release of jsPsych study types on production, and will require new pages/sections. It should include all the basic information needed to set up a jsPsych experiment: what to put in the study details box (JS only, no HTML), and limitations on CHS-jsPsych studies (available plugins, loading custom plugins/css etc.). The docs should focus on things related to running jsPsych experiments on CHS specifically, and should direct people to the jsPsych docs/examples as much as possible (rather than duplicating existing documentation on how to use jsPsych).

See this issue for more details on what should go in the docs for the first MVP: lookit/lookit-api#1324

After the jsPsych docs are live, we may need to replace any placeholder links in the lookit-api with the real link (e.g. in the JSPsychForm help text).

While we're editing the docs on the study create/edit forms, if it's not too much extra work, we might also consider updating the other outdated parts of the docs/screenshots in these sections (for instance, study create/edit used to be one large form but has since been split into two: #322, #327).

Add "how to make a minimal example"/how to ask for help page

Researchers needing tech help on Slack often reach out with a problem in the context of a larger experiment, and our first step is often to ask them to prepare a minimal example, or otherwise guide the request toward more information that can help us solve the problem. It would be nice to have a page on how to do this!

Improve documentation on ineligibility edge cases

TL;DR

We now provide researchers with a column telling them whether and why a specific child who participated is eligible/ineligible for a study. In some edge cases, this information is confusing or difficult to interpret because of permissions issues. That is, researchers see that a child is marked as e.g. "Ineligible_Participation" but can't verify exactly why this is so, because they don't have permission to view the data.

The interactions with permissions/privacy and difficulty of setting 'participated' criteria mean that we probably need to 'solve' this with better documentation of the current behavior rather than code upgrades.

Narrative

As a researcher, I want to know exactly why each participant is ineligible for my study. But as CHS Administrator, I want to abide by privacy restrictions and provide consistent behavior onsite across all researchers, in some cases even when this departs from a specific researcher's desired workflow.

In some cases this lack of info is straightforwardly the best option even though it's confusing for the researcher. For instance, researchers can blacklist one or more studies from another lab that's too similar to their own, but cannot directly verify that a specific person participated in a specific other study (because that would violate the privacy of that other participant/researcher pair.)

In other cases, there is some wiggle room over what the right course of action should be, but for a well-functioning platform we should NOT allow researchers to tune this at will. For instance, consider a case where a researcher self-blacklists a study to avoid repeat participation. A participant who has their consent statement rejected on try #1 will be marked as ineligible on try #2, because they will have an existing study session object (albeit a short one). This is compounded by the fact that this rejected participant becomes mostly invisible (PENDING: Details from Tiffany D. about 1/3/24 case) to the researcher, since their information appropriately is not included in the resulting dataset!

We could start elaborating the rules for what 'counts' as having participated, to e.g. allow participated-but-consent-rejected children to count as NOT having done a study, but this gets extremely messy extremely quickly (cf. survey consent issues as they interact with emails!!!), and it would be a major effort to define a logic that (a) successfully covers all edge cases, (b) does so in a logically consistent fashion across the entire codebase, and (c) does so in a fashion that is easy for researchers to intuitively understand.

The alternative is to leave functionality as-is but improve documentation. Researchers should be prepared for the fact that the blacklist/whitelist criteria are very strict, get a good intuition about some of the edge cases that can come up, and be made aware of 'softer' alternatives. (For instance, leaving a study off of its own blacklist, but then using the available child object info to check the details of previous participation and warn or re-route participants exactly as desired.)

Acceptance Criteria

  • Verify the edge cases in question - right now we have researcher reports, but should review these for clarity/consistency
  • Documentation has been added, either to an existing page or a new one
  • Documentation has been reviewed by a few researchers to make sure it's comprehensible

Implementation Notes

Background on "has participated"

Since the current team (mid 2021 forward) has been touching relevant parts of the codebase, we have consistently defined "participated" to mean "a response object exists" for eligibility (aka black/white listing) and I think some other cases as well. This is consistent with most other site behavior we know about, but not consistent with email, which seems to have stricter criteria, resulting in participants occasionally getting re-emailed, e.g. in cases where the video consent process is not used. (For external studies, we did define participated = response object exists, and this seems to be intuitively clear to researchers, since all subsequent info past the initial click is recorded offsite.)

A large and annoying piece of technical debt to pay down would be a thorough front-end AND back-end review of all possible workflows onsite that concern participation (i.e. from Admin, Researcher, and Participant perspectives, including emails, study display, study eligibility, study eligibility marking in datasets, my study history, etc.) and resolution to a single standard. Assuming that we don't decide to imminently pay this down, the best course of action seems to be to continue using "participated = response object exists" in a consistent manner, and make sure researchers know this fact as well as their options for softer implementations they may want to make use of.
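As a rough illustration of that working definition, here is a small Python sketch. The record shape is hypothetical (on the platform these are Response objects in the database), but the rule it encodes is the one described above: any existing response counts, regardless of completion or consent outcome.

    # Sketch of the working definition "participated = a response object exists".
    # The record shape below is hypothetical and only for illustration.
    from dataclasses import dataclass

    @dataclass
    class ResponseRecord:
        child_id: str
        study_id: str
        completed: bool         # deliberately ignored by the participation check
        consent_accepted: bool  # also ignored -- a rejected consent still counts

    def has_participated(child_id, study_id, responses):
        return any(r.child_id == child_id and r.study_id == study_id for r in responses)

    # A short, consent-rejected session still makes the child count as having participated:
    records = [ResponseRecord("child-1", "study-A", completed=False, consent_accepted=False)]
    print(has_participated("child-1", "study-A", records))  # True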

Add docs about RecordRTC vs Pipe

Even though researchers shouldn't have to know anything in particular about our switch from Pipe to RecordRTC for the Experiment Runner (lookit/ember-lookit-frameplayer#307), it might be a good idea to write some documentation for researchers about it. In particular we should (a) note the commit SHA for the last Pipe version, in case researchers need to switch back to Pipe, and (b) document any changes to the data logging (either content of the logs or interpretations of values such as timestamps and stream times).

Improve docs on combining Lookit consent with external study

Currently the documentation gives some explanations of internal vs external study types here:
https://lookit.readthedocs.io/en/develop/index.html?highlight=internal#what-is-lookit
and here:
https://lookit.readthedocs.io/en/develop/researchers-set-study-fields.html?highlight=query#external

However, researchers might find it helpful if we explain some more nuanced options for combining internal and (async) external study types (thanks Lillian Xu for suggesting this on Slack!). For example, we could point out that researchers can:

  • embed an externally-hosted study (e.g. Qualtrics) within a Lookit internal study
  • start the session as an internal study (e.g. to get a consent video) and later redirect to an externally-hosted portion

Given these different options, it might also help researchers if we give our thoughts/guidance on making decisions about internal vs external studies, including the important considerations from our point of view such as family experience and data privacy.

WRT implementation of combined internal/external studies, we could provide one or more example studies showing exactly how to do this (thanks @msheskin for the suggestion!).

A related issue is that, while looking into this, I found it a little hard to pull out the parts of the documentation that are relevant to internal vs external studies. Right now I think the beginning of the documentation is mostly related to both study types, but then they diverge at "Protocol specification" (2nd page under "Creating Your Study" section). We might be able to make the documentation easier for researchers to navigate by flagging various sections/pages as only being related to internal vs external study types.

Ideas for addressing the issues above:

  • Add a new page under the "Overview" or "Using Lookit" section that goes into more detail about internal vs external study types, as well as options for combining them and passing session/ID info between platforms. We could include some example code here if it makes sense.
  • Add a new "Tutorial" section for (1) embedding external platforms in internal studies, and/or (2) redirecting from an internal study to an external platform.
  • Add a new combined internal/external example study to the platform that researchers can access and reference.
  • Clarify which parts of the documentation are specific to either internal or external study types. Maybe we could use Internal/External subheadings under "Creating Your Study", or split this section into two or more sections: "Creating Your Study - Internal", "Creating Your Study - External", "Creating Your Study - Combining Internal and External".

If anyone has thoughts on this, please feel free to add them to this issue thread!

Update out-of-date information on Features page

The following updates are needed for https://lookit.readthedocs.io/en/develop/features.html:

(1) Under Common Resources, "A time-tested approach to consenting families online that has been approved by review boards at 10+ institutions." --> is now actually 50+ institutions!

(2) Under Limitations: Please remove these bullet points, as we now support these kinds of studies!!

  • Interactive, synchronous studies where an experimenter talks with the family over audio or video chat, or where participants interact with each other. Lookit does not yet have these capabilities.

  • Studies you have already implemented for adults and want to run with older children and teens as well. (You could run a study like this on Lookit, but if you already have a robust setup for collecting data from adults, it will probably be easier to stick with that!)

updates

some images in the tutorial are out of date

Open Link in New Tab

One issue I had while working through the tutorial was that I kept losing my place because I would click on a link and it would open in the tab I was currently in. It wasn't a huge issue because I could right click and "open link in new tab," but I did lose my spot a couple of times when I wasn't paying attention. It would be helpful for links to automatically open in a new tab.

Revamping documentation

TL;DR: Revamping the documentation

Narrative
As a developer, I'd like to be able to learn about the workings of the API and set it up quickly. The current documentation does not allow for this, thus the revamping. I want to add new sections before taking out the old ones.

Acceptance Criteria
A precise description of how/when this issue will be considered closed, e.g.:

  • Rico sign-off

Add "Requirements" for inclusion/exclusion from old CHS into docs

A page on the old CHS website (https://childrenhelpingscience.com/requirements) listed our requirements for using inclusion/exclusion criteria. This information should be included in our docs. Importantly, a "standard comment" from the admin review template links to it, because it gets used a lot. We might also add information about this to a self-review checklist. Here is the text from the old page:

Information about Inclusion/Exclusion Criteria
Our website strongly encourages inclusive practices for families that want to participate in research studies. Our goals are to support great experiences for all families and wide-reaching science outreach. Importantly, there can be a difference between (1) who you allow to experience your study and (2) who you include in your final dataset.

Thus, we suggest that you have inclusion criteria that are as broad as possible. In general:

  • Most studies on our website should only have age as a criterion (or, rarely, grade in school)
  • Due to logistics (e.g., the geographic scope of some ethics approvals), some studies may list country requirements
  • When a study is about language in particular then it is allowable to include language requirements (e.g., “Because this study is about native language learning, we are only recruiting monolingual English speakers” or “Because this study is about knowing exactly two languages, we are only recruiting bilingual English and French speakers”) — but studies should not list language in any other cases, and it is assumed that visitors to our site will know enough English to have a good experience with English-language studies. (Likewise, we have some studies where the study is listed in another language—such as Spanish—and then it is assumed that families signing up for such a study will speak Spanish well enough to participate!)

A guiding principle you can use is to consider what percent of your potential participants a criterion would exclude. For example, you might want to exclude children with a "history of head injury", but this can almost certainly be done in your dataset AFTER participation has occurred; you would allow children to participate without any mention of this, but then at the end ask a question about head injury (and any other similar questions) and exclude the data from your dataset in whatever way you pre-planned. In contrast, imagine that you were specifically recruiting children WITH a history of head injury. This would be appropriate to list as an inclusion criterion, because to recruit your sample without stating this you would need to recruit a massive sample and then exclude the vast majority of the children who had participated (and who did not have a history of head injury).

In short, you will only be able to list criteria on our website when you make a strong case for it, including that it would be impractical/impossible to allow families to participate and then exclude their data from your dataset at the point of data analysis.

All of this means that you might choose to not list your study on our website, if you are going to turn away many families rather than allowing them to experience your study.

Test tutorial instructions for advanced docs modification

Re: https://lookit.readthedocs.io/en/develop/contribute-edit-docs.html

This page gives instructions for setting up a GitHub fork to make not just changes to individual pages, but more extensive changes like adding or rearranging pages on the website.

This can be approached as two tasks:

(1) Attempt to add a new page to the documentation, preview these changes locally, and make a pull request for this new page (which Melissa will review and then reject!). Confer with Melissa whenever stuck to identify missing tutorial information!

(2) In a separate branch, update these instructions to clarify any areas of difficulty discovered during the process.

Fold in content from CHS social media guidelines

Content from the old CHS site to fold into documentation, https://childrenhelpingscience.com/social-media-guidelines

This page has guidelines for researchers posting on social media about this website, and about studies on this website.
Here are the most important points:
If you have already posted a study on ChildrenHelpingScience.com, you can advertise your study directly with a link to your study's page on the website. This way, parents can not only participate in your study but also help science by participating in other studies on ChildrenHelpingScience.com.

When you advertise your study and the website, consider using our slogan “Fun for families, serious for science”. Both parts of that are good messages: we hope families enjoy these activities, and families should know that they are making a difference to developmental science!

There are already many studies available on the website from many universities: this is an exciting development for the entire field worldwide, rather than something about any individual study or university.

There are a wide variety of study topics, and opportunities for nearly any child!

If you post on Facebook or Twitter, feel free to share/retweet our posts, or to mention our accounts twitter.com/helping_science and facebook.com/ChildrenHelpingScience.

You can also use the hashtag #ChildrenHelpingScience (especially on Twitter).

If you have a study on ChildrenHelpingScience.com, here are examples of how you might post about it:

Sample Twitter Post
[screenshot of a sample tweet]

Sample Facebook Post
[screenshot of a sample Facebook post]

"test-trials" in "sequence"

In the "set up counterbalancing" , as I was doing the tutorial study from ground up, I found it to be a little unclear whether or not I should put the "test-trials" frame in the sequence. I think an added sentence, before the next header, explaining that "test-trials" needs to be added into the sequence would have allowed the tutorial to go more smoothly for me. I had to do some fiddling around to finally get it to come out in the study at the right time, and with the randomization fully functioning.

Add docs about citing and referring to the combined platform

Modify from this Slack announcement:

Because we've recently done a platform merger, people have been asking how to refer to the project when writing about it. I will add this to the documentation as well, but as a summary, if you are writing a grant that will collect data on the platform, please do something like the following:

  • The first time you refer to the platform, please include something like the following: "...In 2023, these platforms (Lookit and Children Helping Science) merged to form a single platform running on the existing Lookit architecture. The platform is now advertised to participating families as Children Helping Science (CHS)."
  • In subsequent text, please refer to the platform that you will run/advertise studies on as Children Helping Science or CHS.
  • You can continue to use Lookit to refer to previous studies that were done on the platform (or CHS to refer to studies done on that website!)
  • When talking about technical details of implementing an internal study, you can also refer to the "Lookit experiment builder."

Please don't stress about it if you've recently submitted something that doesn't do this... academia moves slowly and there will be some ambiguity for a while, but if everyone moves in a more-or-less consistent direction, there will be less chance for people to get confused or lose track of what the platform is.

(This also goes for writing papers. If you completed data collection prior to 2023, we will really appreciate a note in your methods section along these same lines, e.g. "We collected this on Lookit, which is now called CHS"; if you are working now and planning to write something, "We collected this on Children Helping Science, a platform for remote data collection created by the 2023 merger of Lookit (Scott & Schulz, 2017) and Children Helping Science (Sheskin et al., 2020)".)

(Hopefully in not too long we'll have a citation for the merged platform, which has lots of features that aren't referenced in either of those!)

Add Demos page

The production website has a "demos" lab that's designed for sharing example studies and templates, but people generally don't find it on their own. Add a page to the main docs directing people there and listing some examples of what they can find.

add docs related to URL query strings for external studies

I don't think the docs mention the fact that URL query parameters are added to external study links, so that researchers can automatically get/record child IDs on their external study platform (Qualtrics, etc.). Please let me know if I've missed this somewhere! If not, we should add it.
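To make this concrete once it is documented, here is a hedged Python sketch of what an external study link with appended identifiers could look like and how the external platform side might read them back out. The parameter names ("child", "response") and the URL are placeholders for illustration, not confirmed CHS behavior; the docs addition should state the actual query-string keys the platform appends.

    # Illustrative only: parameter names and URL are placeholders, not documented
    # CHS behavior. Verify the real query-string keys before relying on them.
    from urllib.parse import urlencode, urlparse, parse_qs

    external_study_url = "https://survey.example.com/my-study"
    params = {"child": "HASHED_CHILD_ID", "response": "RESPONSE_UUID"}

    # What a participant's launch link might look like with identifiers appended:
    launch_url = external_study_url + "?" + urlencode(params)
    print(launch_url)

    # On the external platform's side, the identifiers can be recovered and stored:
    query = parse_qs(urlparse(launch_url).query)
    print(query["child"][0], query["response"][0])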

Add documentation of all email templates for researcher reference

Researchers who have questions about the emails sent by the Lookit system don't have a good way to see the exact text! It would be good to have a page that documents (1) when/under what conditions each email is sent and (2) the text of the email (with parameters marked).

Project setup using invoke

The basic installation docs using invoke need to include pip install semantic_version, or invoke setup doesn't work:

Traceback (most recent call last):
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/bin/invoke", line 8, in <module>
    sys.exit(program.run())
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/program.py", line 373, in run
    self.parse_collection()
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/program.py", line 465, in parse_collection
    self.load_collection()
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/program.py", line 696, in load_collection
    module, parent = loader.load(coll_name)
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/loader.py", line 76, in load
    module = imp.load_module(name, fd, path, desc)
  File "/Users/cj/.asdf/installs/python/3.8.9/lib/python3.8/imp.py", line 234, in load_module
    return load_source(name, filename, file)
  File "/Users/cj/.asdf/installs/python/3.8.9/lib/python3.8/imp.py", line 171, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 702, in _load
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/Users/cj/Projects/lookit-api/tasks.py", line 30, in <module>
    import semantic_version
ModuleNotFoundError: No module named 'semantic_version'

Add docs about reviewing survey-consent

We will be flagging responses on the consent ruling and response download pages as having a survey-consent frame (see lookit/lookit-api#1317). We are flagging these because (unlike video consent) the researcher won't be able to see whether the consent is valid without approving it first. Once researchers have access to all of the data from these responses, they should be confirming that the participant actually consented by looking at their responses to the survey-consent questions.

One way to easily see whether the participant has consented is to set up the study in such a way that the participant cannot continue with the study unless they make all of the required survey-consent responses, and to redirect any participants who do not consent to a different 'thank you / end study' frame. Then, researchers will be able to see whether or not a participant consented by looking at the frame sequence shown on the response download page.

If there are any responses where the survey-consent frame data does not meet the criteria for consent, the researcher needs to change the consent rulings for these responses from accepted to rejected on the consent ruling page so that the data is not available.

All of this needs to be added to the documentation, so that researchers setting up a study with survey-consent understand CHS expectations and workflows around responses that are consented with survey-consent.
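As an illustration of the kind of check a researcher might run over downloaded response data when this workflow is documented, here is a hedged Python sketch. The frame name, data keys, and required answers below are hypothetical placeholders, not the platform's actual export schema; they would need to be matched to the study's own survey-consent setup.

    # Hypothetical sketch: scan downloaded response data for a survey-consent frame
    # and flag any response whose answers do not meet the study's consent criteria.
    # Frame names, keys, and required answers are placeholders for illustration.
    import json

    REQUIRED_ANSWERS = {"understand": "yes", "participate": "yes"}

    def needs_consent_review(response):
        for frame_id, frame_data in response.get("exp_data", {}).items():
            if "survey-consent" in frame_id:
                answers = frame_data.get("formData", {})
                return any(answers.get(k) != v for k, v in REQUIRED_ANSWERS.items())
        return True  # no survey-consent frame found at all -- flag for review

    with open("responses.json") as f:
        responses = json.load(f)

    for r in responses:
        if needs_consent_review(r):
            print("Check consent ruling for response:", r.get("response_uuid", "?"))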
