lookit / lookit-docs
Documentation for Lookit project, including use of platform and lookit-api, ember-lookit-frameplayer, and exp-addons repos.
License: MIT License
Need to review this page (https://lookit.readthedocs.io/en/develop/other-learning-materials.html) for dead links and out-of-date resources.
For any dead links discovered, do a quick Google search for a replacement link, or add the text "DEAD LINK" in the PR for Melissa to review and identify replacement resources.
This is content from a page at the old CHS webpage to keep and add to docs! From https://childrenhelpingscience.com/researcher-resources
Resources for Researchers Developing Online Studies
Building on previous advances is a core part of every scientific discipline. There are additional advantages to using standard approaches in online child research: some families will choose to participate in studies from multiple researchers, and using standard approaches can lead to a more consistent experience. This can be especially important when trying to minimize “cognitive load” for young children. Below are links to a variety of groups developing approaches and discussing options. Some of these links are starting points that link to other resources (e.g., example PowerPoint or Keynote slides for consent and warmup).
Getting Started
The two labs below have posted materials about how to get started, including working with IRBs, how to do online consent, and ideas for different study designs.
TheChildLab.com at Yale https://www.thechildlab.com/for-researchers
The Social Learning Lab at Stanford https://github.com/sociallearninglab/online_testing_materials
OSF Library of Assessment Designs https://osf.io/2yz6r/
Testing babies online over Zoom (part 1, part 2)
Hosting Studies
Although many online child studies are done over video chat as a scheduled session, others are “asynchronous” and “unmoderated.” Here are two platforms you can investigate to potentially create and host unmoderated studies.
Lookit https://github.com/lookit/research-resources/wiki
PANDA http://discoveriesonline.org/
More Places to Read and Discuss
Publications Using Online Child Data https://childrenhelpingscience.com/publications
Discussion Forum (online adult and child) https://groups.google.com/forum/#!forum/online-experiments
Discussion Forum (SRCD) https://commons.srcd.org/communities/community-home?CommunityKey=bd3d326e-b7db-49bf-abbb-73642ac0576c
ICIS Webinar about Online Research https://infantstudies.org/2020/04/28/webinar-online-data-collection-stories-of-success-and-challenge-in-transitioning-from-lab-based-to-online-research/
TL;DR: Revamping the documentation
Narrative
As a developer, I'd like to be able to learn about the workings of the API and set it up quickly. The current documentation does not support this, hence the revamp. I want to add new sections before taking out the old ones.
Acceptance Criteria
A precise description of how/when this issue will be considered closed, e.g.:
We need to update our lookit-docs README (and any other internal or contributor docs) with installation instructions and troubleshooting tips for Windows.
Make a note on the expand-assets docs page to remind people to use the raw.githubusercontent.com link when using the expand-assets method.
See slack discussion: https://lookit-mit.slack.com/archives/CHG0M1LA1/p1659984109952199
And lookit-stimuli-template readme: https://github.com/lookit/lookit-stimuli-template#accessing-your-stimuli-and-telling-lookit-where-to-find-them-or-what-do-i-put-for-basedir
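For reference, the note could include a protocol snippet along these lines (a sketch only: the user/repo/branch names are placeholders, and the exact frame parameters should be checked against the frameplayer docs):

```json
{
    "kind": "exp-lookit-images-audio",
    "baseDir": "https://raw.githubusercontent.com/your-username/your-stimuli-repo/main/",
    "audioTypes": ["mp3", "ogg"],
    "audio": "sample_audio",
    "images": [
        {
            "id": "example-image",
            "src": "sample_image.png"
        }
    ]
}
```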
Re: https://lookit.readthedocs.io/en/develop/contribute-edit-docs.html
This page gives instructions for setting up a GitHub fork to make not just changes to individual pages, but more extensive changes like adding or rearranging pages on the website.
This can be approached as two tasks:
(1) Attempt to add a new page to the documentation, preview these changes locally, and make a pull request for this new page (which Melissa will review & then reject!). Confer with Melissa whenever stuck to identify missing tutorial information!
(2) In a separate branch, update these instructions to clarify any areas of difficulty discovered during the process.
We now provide researchers with a column telling them whether and why a specific child who participated is eligible/ineligible for a study. In some edge cases, this information is confusing or difficult to interpret because of permissions issues. That is, researchers see that a child is marked as e.g. "Ineligible_Participation" but can't verify exactly why this is so, because they don't have permission to view the data.
The interactions with permissions/privacy and difficulty of setting 'participated' criteria mean that we probably need to 'solve' this with better documentation of the current behavior rather than code upgrades.
As a researcher, I want to know exactly why each participant is ineligible for my study. But as CHS Administrator, I want to abide by privacy restrictions and provide consistent behavior onsite across all researchers, in some cases even when this departs from a specific researcher's desired workflow.
In some cases this lack of info is straightforwardly the best option even though it's confusing for the researcher. For instance, researchers can blacklist one or more studies from another lab that's too similar to their own, but cannot directly verify that a specific person participated in a specific other study (because that would violate the privacy of that other participant/researcher pair.)
In other cases, there is some wiggle room over what the right course of action should be, but for a well-functioning platform we should NOT allow researchers to tune this at will. For instance, consider a case where a researcher self-blacklists a study to avoid repeat participation. A participant who has their consent statement rejected on try #1 will be marked as ineligible on try #2, because they will have an existing study session object (albeit a short one). This is compounded by the fact that this rejected participant becomes mostly invisible (PENDING: Details from Tiffany D. about 1/3/24 case) to the researcher, since their information appropriately is not included in the resulting dataset!
We could start elaborating the rules for what 'counts' as having participated, to e.g. allow participated-but-consent-rejected children to count as NOT having done a study, but this gets extremely messy extremely quickly (cf. survey consent issues as they interact with emails!), and it would be a major effort to define a logic that (a) successfully covers all edge cases, (b) does so in a logically consistent fashion across the entire codebase, and (c) is easy for researchers to intuitively understand.
The alternative is to leave functionality as-is but improve documentation. Researchers should be prepared for the fact that the blacklist/whitelist criteria are very strict, get a good intuition about some of the edge cases that can come up, and be made aware of 'softer' alternatives. (For instance, leaving a study off of its own blacklist, but then using the available child object info to check the details of previous participation and warn or re-route participants exactly as desired.)
Background on "has participated"
Since the current team (mid 2021 forward) has been touching relevant parts of the codebase, we have consistently defined "participated" to mean "a response object exists" for eligibility (aka black/white listing) and, I think, some other cases as well. This is consistent with most other site behavior we know about, but not consistent with email, which seems to have stricter criteria, resulting in participants occasionally getting re-emailed, e.g. in cases where the video consent process is not used. (For external studies, we did define participated = response object exists, and this seems to be intuitively clear to researchers, since all subsequent info past the initial click is recorded offsite.)
A large and annoying piece of technical debt to pay down would be a thorough front-end AND back-end review of all possible workflows onsite that concern participation (i.e., from Admin, Researcher, and Participant perspectives, including emails, study display, study eligibility, study eligibility marking in datasets, my study history, etc.) and resolution to a single standard. Assuming that we don't decide to imminently pay this down, the best course of action seems to be to continue using "participated = response object exists" in a consistent manner, and make sure researchers know this fact as well as their options for softer implementations they may want to make use of.
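As an illustration only, here is the strict rule in plain Python (a sketch, not the actual lookit-api code; the field names are hypothetical):

```python
def has_participated(child_id: str, study_id: str, responses: list[dict]) -> bool:
    """Strict 'participated = a response object exists' rule.

    Any response object counts -- including one from a session whose
    consent ruling was later rejected, which is exactly the confusing
    edge case described above. Field names are hypothetical.
    """
    return any(
        r["child_id"] == child_id and r["study_id"] == study_id
        for r in responses
    )

# Hypothetical usage: a child with one consent-rejected session is
# still treated as having participated.
responses = [{"child_id": "c1", "study_id": "s1", "consent": "rejected"}]
print(has_participated("c1", "s1", responses))  # True
```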
We plan to move the experiment runner config to its own view. This should be documented, along with updating the documentation around study edit/creation.
The production website has a "demos" lab that's designed for sharing example studies and templates, but people generally don't find it on their own. Add a page to the main docs directing people there and listing some examples of what they can find.
Please add me to the tutorial participant list.
We've added the ability to limit study eligibility by previous studies (participated-yes and participated-no), and this occasioned reorganizing the form a little bit to make it conceptually clearer. We should update the corresponding documentation to match the new order and explain the new fields!
Relevant docs page: https://lookit.readthedocs.io/en/develop/researchers-set-study-fields.html#
See PR and issue:
lookit/lookit-api#1047
lookit/lookit-api#625
The current instructions are confusing because they use MIT-hosted stimuli, which don't model the issue with GitHub URLs. Instead, use the sample_video files already contained in the GH stimuli repository!
(In short, links should be formatted as:
https://raw.githubusercontent.com/user/repo/branch
not the regular repository page link, https://github.com/user/repo.)
We are adding a new 'eligibility' field to the response data available to researchers. We should add this to the documentation, and be sure to (a) explain what the different levels mean, and (b) explain when/how these values are determined.
Here are the relevant changes:
lookit/lookit-api#1292
lookit/lookit-api#1300
In the randomization section at the end of "Setting up your first study", under the "Counterbalance the test question" header, the code format in the instructions differed from the code I had originally copied for this section (from Lookit Tutorial part 1).
In the template code, the bunnya01 audio and image appear in the same section (audio precedes image), but the code shown in the instructions displays the reverse. The two versions are shown below (labeled with headers).
I think it would be helpful for the instructions to display code in a format matching the template code used during the tutorial; I have provided a working version at the bottom.
Template code (Copy of Lookit Tutorial part 1)
```json
{
    "audio": "bunnya01",
    "images": [
        {
            "id": "storybookIllustration",
            "src": "bunnya01.png",
            "top": 0,
            "left": 10,
            "width": 80
        }
    ],
    "doRecording": true
}
```
Instructions code excerpt
```
{
    "images": [
        {
            "id": "storybookIllustration",
            "src": "bunnya01.png", <-- but change this to "QUESTION_IMAGE"
            "top": "0",
            "left": "10",
            "width": "80"
        }
    ],
    "audio": "bunnya01" <-- and change this to "QUESTION_AUDIO"
},
...
],
"commonFrameProperties": { <-- everything in here can stay the same
    ...
},
"parameterSets": [ <-- add this section!
    {
        "QUESTION_IMAGE": "bunnya01.png",
        "QUESTION_AUDIO": "bunnya01"
    },
    {
        "QUESTION_IMAGE": "bunnyb01.png",
        "QUESTION_AUDIO": "bunnyb01"
    }
]
```
Working code (matching excerpt), with annotations mirroring the instructions code:
```
{
    "audio": "QUESTION_AUDIO", <-- but change this to "QUESTION_AUDIO"
    "images": [
        {
            "id": "storybookIllustration",
            "src": "QUESTION_IMAGE", <-- and change this to "QUESTION_IMAGE"
            "top": 0,
            "left": 10,
            "width": 80
        }
    ],
    "doRecording": true
},
...
],
"commonFrameProperties": { <-- everything in here can stay the same
    ...
},
"parameterSets": [ <-- add this section!
    {
        "QUESTION_AUDIO": "bunnya01",
        "QUESTION_IMAGE": "bunnya01.png"
    },
    {
        "QUESTION_AUDIO": "bunnyb01",
        "QUESTION_IMAGE": "bunnyb01.png"
    }
]
```
Some images in the tutorial are out of date.
We need a section of the docs that will walk contributors through creating a PR.
I got a few errors when setting up my local environment for building the docs. We should add some notes about this and more detail to the instructions in the lookit-docs readme.
Researchers who have questions about the emails that get sent by the Lookit system don't have a good way to see the exact text! It would be good to have a page that documents (1) When/under what conditions each email is sent and (2) the text of the email (with parameters marked)
We've added a new feature - ordering studies on a lab's custom URL page. We should add documentation about how this works (with screenshots probably) to the documentation!
See issue lookit/lookit-api#1024 and PR lookit/lookit-api#1044
As part of our goal of supporting jsPsych and other experiment runners (see lookit/lookit-api#1236), we may need to make some changes to the lookit-api architecture. After implementing jsPsych support, we should update the Lookit architecture documentation page to reflect any changes.
I was trying to find out how to submit an institutional agreement to add a new PI and lab for an institution that already has an access agreement set up, but I noticed that the instructions cut off mid-sentence.
https://lookit.readthedocs.io/en/develop/community-irb-and-legal-information.html
(Under How to get it signed: "Checking in about an existing agreement also allows you to-")
Thanks!
Right now, we ask (in admin review) that all scheduled studies say
But this is out of date! The button says "Schedule a time to participate".
There are three things that should likely happen at the same time:
- doRecording
- default value to exp-lookit-images-audio
- page: https://lookit.readthedocs.io/projects/frameplayer/en/latest/components/exp-lookit-images-audio/doc.html#parameters

Even though researchers shouldn't have to know anything in particular about our switch from Pipe to RecordRTC for the Experiment Runner (lookit/ember-lookit-frameplayer#307), it might be a good idea to write some documentation for researchers about it. In particular we should (a) note the commit SHA for the last Pipe version, in case researchers need to switch back to Pipe, and (b) document any changes to the data logging (either content of the logs or interpretations of values such as timestamps and stream times).
The documentation still contains some references to Kim as the primary contact person; most of these should be replaced with Melissa Kline Struhl (though the actual contact is often unchanged, e.g. [email protected]).
The following updates are needed for https://lookit.readthedocs.io/en/develop/features.html:
(1) Under Common Resources, "A time-tested approach to consenting families online that has been approved by review boards at 10+ institutions." --> is now actually 50+ institutions!
(2) Under Limitations: Please remove these bullet points, as we now support these kinds of studies!!
Interactive, synchronous studies where an experimenter talks with the family over audio or video chat, or where participants interact with each other. Lookit does not yet have these capabilities.
Studies you have already implemented for adults and want to run with older children and teens as well. (You could run a study like this on Lookit, but if you already have a robust setup for collecting data from adults, it will probably be easier to stick with that!)
Researchers have many different approaches to writing the "under 100 words" preview summary that appears on the listing page of available studies. That is a field that was added more recently than the "short description" and "purpose" fields, and we have not updated instructions, self-review checklists, or admin review to reflect it.
As a result, researchers have chosen a lot of different approaches. For example, they use different mixes of "what" and "purpose". Some just say "This study will take place on a video call, live with a researcher!" and nothing more. And there are many more variations!
For some useful standardization, one idea is that we require the following template (exceptions allowable for good reasons):
[optional detail about study delivery] + [major detail about what happens] + [optional major detail about purpose]
Below is what it would look like for a few examples. In each example, both optional bits are included, but some studies might not have anything to include in the first bit (especially if it was just a "standard" internal study):
This should be updated in multiple places:
The basic installation docs using invoke need to include `pip install semantic_version`, or `invoke setup` doesn't work:
```
Traceback (most recent call last):
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/bin/invoke", line 8, in <module>
    sys.exit(program.run())
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/program.py", line 373, in run
    self.parse_collection()
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/program.py", line 465, in parse_collection
    self.load_collection()
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/program.py", line 696, in load_collection
    module, parent = loader.load(coll_name)
  File "/Users/cj/.local/share/virtualenvs/lookit-api-Zqxh0ooj/lib/python3.8/site-packages/invoke/loader.py", line 76, in load
    module = imp.load_module(name, fd, path, desc)
  File "/Users/cj/.asdf/installs/python/3.8.9/lib/python3.8/imp.py", line 234, in load_module
    return load_source(name, filename, file)
  File "/Users/cj/.asdf/installs/python/3.8.9/lib/python3.8/imp.py", line 171, in load_source
    module = _load(spec)
  File "<frozen importlib._bootstrap>", line 702, in _load
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/Users/cj/Projects/lookit-api/tasks.py", line 30, in <module>
    import semantic_version
ModuleNotFoundError: No module named 'semantic_version'
```
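In other words, the docs should include the missing step before running `invoke setup` (assuming the project's virtual environment is already active):

```
pip install semantic_version
```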
Content from the old CHS site to fold into documentation, https://childrenhelpingscience.com/social-media-guidelines
This page has guidelines for researchers posting on social media about this website, and about studies on this website.
Here are the most important points:
If you have already posted a study on ChildrenHelpingScience.com, you can advertise your study directly with a link to your study’s page on the website. This way, parents can not only participate in your study but also help science by participating in other studies on ChildrenHelpingScience.com.
When you advertise your study and the website, consider using our slogan “Fun for families, serious for science”. Both parts of that are good messages: we hope families enjoy these activities, and families should know that they are making a difference to developmental science!
There are already many studies available on the website from many universities: this is an exciting development for the entire field worldwide, rather than something about any individual study or university.
There are a wide variety of study topics, and opportunities for nearly any child!
If you post on Facebook or Twitter, feel free to share/retweet our posts, or to mention our accounts twitter.com/helping_science and facebook.com/ChildrenHelpingScience.
You can also use the hashtag #ChildrenHelpingScience (especially on Twitter).
If you have a study on ChildrenHelpingScience.com here are examples of how you might post about it:
One issue I had while working through the tutorial was that I kept losing my place, because I would click on a link and it would open in the tab I was currently in. It wasn't a huge issue because I could right-click and "open link in new tab," but I did lose my spot a couple of times when I wasn't paying attention. It would be helpful for links to automatically open in a new tab.
This is very, very minor, but I noticed that in the instructions in "A study from the ground up" --> "5. instructions", the word "participants" is spelled incorrectly (second sentence).
We need to add information about the jsPsych study type to the Lookit docs. This should correspond to the initial release of jsPsych study types on production, and will require new pages/sections. It should include all the basic information needed to set up a jsPsych experiment: what to put in the study details box (JS only, no HTML), and limitations on CHS-jsPsych studies (available plugins, loading custom plugins/css etc.). The docs should focus on things related to running jsPsych experiments on CHS specifically, and should direct people to the jsPsych docs/examples as much as possible (rather than duplicating existing documentation on how to use jsPsych).
See this issue for more details on what should go in the docs for the first MVP: lookit/lookit-api#1324
After the jsPsych docs are live, we may need to replace any placeholder links in the lookit-api with the real link (e.g. in the JSPsychForm help text).
While we're editing the docs on the study create/edit forms, if it's not too much extra work, we might also consider updating the other outdated parts of the docs/screenshots in these sections (for instance, study create/edit used to be one large form but has since been split into two: #322, #327).
As I was going through the tutorial, I noticed that in part 6, under the heading 'checking for informed consent and giving feedback', the webpage appears slightly different from what the tutorial explains. In my web browser, the consent manager and responses are two different menus rather than one combined menu. That is, you cannot access the consent manager from the view responses tab of Apples & Oranges. Although it is easy enough to figure out, this may need to be updated for accuracy in the tutorial.
In step 4 of this page, the given command threw a permissions error:
`git clone git@github.com:lookit/lookit-api.git`
However, this command worked just fine:
`git clone https://github.com/lookit/lookit-api.git`
Is there any problem with having regular users do the latter? (The SSH form requires a GitHub account with a registered SSH key, which is presumably why it failed.)
We need to add/clarify a few things in the docs to go along with the changes to the 'must have/not have participated' study eligibility criteria:
Specifically, the docs should clarify the following:
I don't think the docs mention the fact that URL query parameters are added to external study links, so that researchers can automatically get/record child IDs on their external study platform (Qualtrics, etc.). Please let me know if I've missed this somewhere! If not we should add it.
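For illustration, the documented behavior might look something like this (the parameter names below are an assumption to verify against the actual implementation, not confirmed behavior):

```
https://yourlab.qualtrics.com/jfe/form/SV_example?child=CHILD_ID&response=RESPONSE_UUID
```

Researchers could then capture these values on their external platform (e.g., as Qualtrics embedded data) to link external sessions back to CHS child records.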
We will be flagging responses on the consent ruling and response download pages as having a `survey-consent` frame (see lookit/lookit-api#1317). We are flagging these because (unlike video consent) the researcher won't be able to see whether the consent is valid without approving it first. Once researchers have access to all of the data from these responses, they should confirm that the participant actually consented by looking at their responses to the `survey-consent` questions.
One way to easily see whether the participant has consented is to set up the study in such a way that the participant cannot continue with the study unless they make all of the required `survey-consent` responses, and redirect any participants who do not consent to a different 'thank you / end study' frame. Then, researchers will be able to see whether or not a participant consented by looking at the frame sequence shown on the response download page.
If there are any responses where the `survey-consent` frame data does not meet the criteria for consent, the researcher needs to change the consent rulings for these responses from `accepted` to `rejected` on the consent ruling page so that the data is not available.
All of this needs to be added to the documentation, so that researchers setting up a study with `survey-consent` understand CHS expectations and workflows around responses that are consented with `survey-consent`.
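To make the frame-sequence check concrete, here is a minimal sketch of how a researcher might screen downloaded responses (the file layout and field names are assumptions, not the documented response download format):

```python
import json

# Hypothetical frame ID that participants only reach after consenting.
POST_CONSENT_FRAME = "2-study-trials"

# Hypothetical filename for the downloaded all-responses JSON file.
with open("all-responses.json") as f:
    responses = json.load(f)

for resp in responses:
    frames_shown = resp.get("sequence", [])  # assumed field name
    # If the post-consent frame never appears, the participant likely
    # declined consent; their ruling should be changed to rejected.
    if POST_CONSENT_FRAME not in frames_shown:
        print("Check consent ruling for response:", resp.get("uuid", "?"))
```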
Under "A study from the ground up" --> "5. Instructions", in the second paragraph underneath the code, there are instructions that read "As in the preview, here are some stimuli you need to add! Browse the audio files here to find an mp3 and ogg version of a “ready to go!”"
The closest audio file at the given link is "readygo_HO.mp3". I think it would help to either make a new recording that matches the instructions (saying "ready to go!"), or change the instructions to fit the exact wording of the audio file being used (i.e., rather than "an mp3 and ogg version of a “ready to go!”", it could read "an mp3 and ogg version of a “ready...go”").
A page on the old CHS website (https://childrenhelpingscience.com/requirements) listed our requirements for using inclusion/exclusion criteria. This information should be included in our docs. Importantly, a "standard comment" from the admin review template links to it, because it gets used a lot. We might also add information about this to a self-review checklist. Here is the text from the old page:
Information about Inclusion/Exclusion Criteria
Our website strongly encourages inclusive practices for families that want to participate in research studies. Our goals are to support great experiences for all families and wide-reaching science outreach. Importantly, there can be a difference between (1) who you allow to experience your study and (2) who you include in your final dataset.
Thus, we suggest that you have inclusion criteria that are as broad as possible. In general:
A guiding principle you can use is to consider what percent of your potential participants a criterion would exclude. For example, you might want to exclude children with “History of head injury”, but this can almost certainly be done in your dataset AFTER participation has occurred; you would allow children to participate without any mention of this, but then at the end ask a question about head injury (and any other similar questions) and exclude the data from your dataset in whatever way you pre-planned. In contrast, imagine that you were specifically recruiting children WITH a history of head injury. This would be appropriate to list as an inclusion criterion, because to recruit your sample without stating this you would need to recruit a massive sample and then exclude the vast majority of the children who had participated (and who did not have a history of head injury).
In short, you will only be able to list criteria on our website when you make a strong case for it, including that it would be impractical/impossible to allow families to participate and then exclude their data from your dataset at the point of data analysis.
All of this means that you might choose to not list your study on our website, if you are going to turn away many families rather than allowing them to experience your study.
Explain internal vs. external study types, for clarity when researchers are getting started.
Currently the documentation gives some explanations of internal vs external study types here:
https://lookit.readthedocs.io/en/develop/index.html?highlight=internal#what-is-lookit
and here:
https://lookit.readthedocs.io/en/develop/researchers-set-study-fields.html?highlight=query#external
However, researchers might find it helpful if we explain some more nuanced options for combining internal and (async) external study types (thanks Lillian Xu for suggesting this on slack!). For example, we could point out that researchers can:
Given these different options, it might also help researchers if we give our thoughts/guidance on making decisions about internal vs external studies, including the important considerations from our point of view such as family experience and data privacy.
With regard to implementation of combined internal/external studies, we could provide one or more example studies showing exactly how to do this (thanks @msheskin for the suggestion!).
A related issue is that, while looking into this, I found it a little hard to pull out the parts of the documentation that are relevant to internal vs external studies. Right now I think the beginning of the documentation is mostly related to both study types, but then they diverge at "Protocol specification" (2nd page under "Creating Your Study" section). We might be able to make the documentation easier for researchers to navigate by flagging various sections/pages as only being related to internal vs external study types.
Ideas for addressing the issues above:
If anyone has thoughts on this, please feel free to add them to this issue thread!
The page describing the study approval process is out of date regarding external studies -- it says they are in pilot mode and to contact Melissa. Found at this link, and a screenshot is attached. https://lookit.readthedocs.io/en/develop/community-study-approval-process.html#study-approval
Modify from this Slack announcement:
Because we've recently done a platform merger, people have been asking how to refer to the project when writing about it. I will add this to the documentation as well, but as a summary: if you are writing a grant that will collect data on the platform, please do something like the following:
Please don't stress about it if you've recently submitted something that doesn't do this...academia moves slowly and it'll be a while of ambiguity, but if everyone moves in a more-or-less consistent direction, there will be less chance for people to get confused or lose track of what the platform is.
(This also goes for writing papers: if you completed data collection prior to 2023, we will really appreciate a note in your methods section along these same lines, e.g. "We collected this on Lookit, which is now called CHS"; if you are working now and planning to write something, "We collected this on Children Helping Science, a platform for remote data collection created by the 2023 merger of Lookit (Scott & Schulz, 2017) and Children Helping Science (Sheskin et al., 2020)".)
(Hopefully in not too long we'll have a citation for the merged platform, which has lots of features that aren't referenced in either of those!)
Add parameter & definition for the new spoken consent statement; implement Becky's idea to include the text of the current statements. From #314
In the "set up counterbalancing" , as I was doing the tutorial study from ground up, I found it to be a little unclear whether or not I should put the "test-trials" frame in the sequence. I think an added sentence, before the next header, explaining that "test-trials" needs to be added into the sequence would have allowed the tutorial to go more smoothly for me. I had to do some fiddling around to finally get it to come out in the study at the right time, and with the randomization fully functioning.
Researchers needing tech help on slack often reach out with a problem in the context of a larger experiment, and our first step is often to ask them to prepare a minimal example, or otherwise guide the request toward more info that can help us solve the problem. It would be nice to have a page on how to do this!