bioinformatics-research-network / bioinformed-skill-app
Repository for the Bioinformed Skill Assessment App
Home Page: https://skill.bioinformed.app
License: MIT License
We should set up dependency caching for all workflows in skill assessments.
This would probably make the testing process a bit faster, especially for R-based assessments.
This repo needs a README with detailed developer notes so that others can contribute to it. Depends on #18
Currently, the ghbot and crud services are both deployed with Elastic Beanstalk as full-time web servers. This is probably unnecessary. ghbot, at least, could probably be deployed as an AWS Lambda function instead, reducing costs and overhead.
However, this may not work for other reasons, and it will require research to determine whether this should be done.
To prevent double-posting during development / testing, the ghbot should be modified so that it only responds to messages in repos which it already has in its database. This will prevent the production bot from responding to repos made during local development / testing.
However, it may also be important to prevent the development database from containing a copy of the repos that are open in the production database; otherwise, the local bot may start responding to trainee activity in addition to the production bot.
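A minimal sketch of the guard, assuming the webhook handler can load the set of tracked repos from the bot's database (the function and argument names here are hypothetical):

def should_respond(known_repos: set[str], repo_full_name: str) -> bool:
    # Respond only to repos already tracked in this bot's own database;
    # repos created during local dev/testing won't be in the production DB.
    return repo_full_name in known_repos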
Reviewers cannot be assigned to an assessment unless they are already members of the GitHub org for that assessment.
This means a reviewer request will fail unless the reviewer has already been invited to the org as an owner and has accepted the invite.
This means that the process of onboarding reviewers needs to involve them being invited to the org by the GH bot and then accepting the invite BEFORE we start assigning them to review things.
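A sketch of what the pre-assignment check could look like with PyGithub (token handling and names are placeholders):

from github import Github

def reviewer_in_org(token: str, org_name: str, reviewer_login: str) -> bool:
    # Returns False while an invite is still pending, since pending
    # invitees are not yet org members.
    gh = Github(token)
    org = gh.get_organization(org_name)
    return org.has_in_members(gh.get_user(reviewer_login))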
Platform docs should include a tutorial / quickstart using the Python/R tutorial assessments
For every assessment, the WebUI should check whether the pre-requisite has been completed by the trainee. If not, the assessment should appear "locked" (greyed out with a lock symbol that says which prerequisite is still required).
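A minimal sketch of the lock check, assuming each assessment record carries a prerequisite ID and we can query the trainee's completed assessments (all names hypothetical):

def is_locked(assessment: dict, completed_ids: set[int]) -> bool:
    # Locked when the assessment has a prerequisite the trainee
    # has not yet completed.
    prereq = assessment.get("prerequisite_id")
    return prereq is not None and prereq not in completed_ids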
Badge images load slowly in the webui.
This is probably because all badge images are fetched directly from Badgr via a URL redirect, so they will always be slow. Instead, they should be stored locally or in S3.
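A rough sketch of mirroring a badge image into S3 with boto3; the bucket name and key scheme are placeholders:

import boto3
import requests

def mirror_badge_image(badge_id: str, badgr_url: str, bucket: str) -> str:
    # Fetch the image once from Badgr, store it in S3, and return a stable
    # S3 URL for the webui to use instead of the slow Badgr redirect.
    img = requests.get(badgr_url, timeout=10)
    img.raise_for_status()
    boto3.client("s3").put_object(
        Bucket=bucket,
        Key=f"badges/{badge_id}.png",
        Body=img.content,
        ContentType="image/png",
    )
    return f"https://{bucket}.s3.amazonaws.com/badges/{badge_id}.png"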
Create a form for changing the email in the settings. This will also require warnings for the user, as they will lose their badges and will have to re-verify their account. We could also implement a system to reissue badges to the user under the new email address after it is verified.
The badges for each service are not updating if the tests for that service were not executed in the latest commit.
The issue is in the workflow YAML files: they only execute when a change occurs in that particular service's directory.
We can remove this path filter, but it would be good to fix #19 first so that we don't introduce too much additional computational burden.
In the production version of this, the CRUD and GHbot apps should only listen to requests coming from specific IP addresses of the other production services.
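A sketch of how this could look as middleware, assuming the services are FastAPI apps (the allowlist entries are placeholders; behind a load balancer the real client IP would come from X-Forwarded-For instead):

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

ALLOWED_IPS = {"10.0.1.5", "10.0.1.6"}  # placeholder production service IPs

app = FastAPI()

@app.middleware("http")
async def ip_allowlist(request: Request, call_next):
    # Reject any request that does not originate from a known service IP.
    if request.client.host not in ALLOWED_IPS:
        return JSONResponse(status_code=403, content={"detail": "Forbidden"})
    return await call_next(request)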
CRUD app will send a POST request to the Slack Bot when a reviewer is requested. This will probably be the structure:

{
    "valid_reviewers": ["123", "456", "029"],
    "trainee_name": "john smith",
    "assessment_name": "R Programming I"
}

The valid_reviewers field will be a list of Slack IDs that you can use to locate these potential reviewers.
Users should be able to publicly share their profile on our platform.
Therefore, there should be a view in the webui that doesn't require authentication but can display certain information about the user, such as their badges and some of their profile info.
Ideally, they should also be able to customize this and make it look nice!
The business cases:
This feature would allow users to request a second reviewer (possibly via the webui or ghbot) and appeal the first reviewer's decision.
This would be useful in cases where a reviewer is being unreasonable; the system would allow users to request a second set of eyes on the assessment.
Need to go through each deployment workflow and ensure it is still working correctly. Need to then verify that the production app is working as expected.
Rather than having to enter the latest release tag into Airtable, we should instead make the GitHub bot automatically detect the latest release and use it (see the sketch below). This will simplify the process of creating and maintaining skill assessments.
Likely impacted:
ghbot
sync
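A sketch of the lookup with PyGithub; the token variable and repo name handling are placeholders:

import os
from github import Github

def latest_release_tag(repo_full_name: str) -> str:
    # Ask GitHub for the newest release instead of reading a
    # manually-entered tag from Airtable.
    gh = Github(os.environ["GH_BOT_TOKEN"])
    return gh.get_repo(repo_full_name).get_latest_release().tag_name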
Currently the ghbot test calls the CRUD app in its dev configuration, so it returns a random reviewer. Instead, it should always return only brnbot as the reviewer.
Lambda sync has encountered errors for the last several runs. Examine and fix.
Upon successful completion of an assessment, a badgr "assertion" is created (this issues the badge to the user). However, the metadata is currently not customized for the user, so the public page for the assertion is not very useful / descriptive.
See an example here.
Minimally, we should include the link to the user's assessment repo as part of the "Evidence".
Ideally, we could include other metadata, such as the user's information (their name and possibly a link to their public profile on our platform, if we have public profiles yet).
We could also include statistics on how many people have completed this assessment successfully in the past (if we already have stats).
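A rough sketch of issuing an assertion with evidence through the Badgr v2 API; the payload fields should be double-checked against the Badgr docs, and the token handling is a placeholder:

import requests

def issue_badge(token: str, badge_class_id: str, email: str, repo_url: str) -> dict:
    payload = {
        "recipient": {"identity": email, "type": "email"},
        # Minimally, attach the user's assessment repo as evidence.
        "evidence": [{"url": repo_url, "narrative": "Assessment repo for this badge"}],
    }
    resp = requests.post(
        f"https://api.badgr.io/v2/badgeclasses/{badge_class_id}/assertions",
        headers={"Authorization": f"Bearer {token}"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()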
The badges should each have verbose descriptions which indicate:
The sync service should manage the task of adding these descriptions automatically to the badgr badge entry via the badgr API (main docs here).
Noticed today that we only have response schemas for the /api/init endpoint -- it would be nice to develop them for the other endpoints as well!
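For reference, a response schema is just a Pydantic model attached to the route; the /api/review endpoint and its fields below are illustrative, not the real API surface:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ReviewResponse(BaseModel):
    reviewer_username: str
    assessment_name: str
    status: str

@app.post("/api/review", response_model=ReviewResponse)
async def review() -> ReviewResponse:
    # The response is validated against the schema and documented
    # automatically in the OpenAPI spec.
    return ReviewResponse(
        reviewer_username="brnbot",
        assessment_name="R Programming I",
        status="assigned",
    )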
Given the workflow as currently envisioned here, we need to develop a series of utility functions to facilitate communication with assigned reviewers on Slack (see the sketch after this list). The basic workflow is:

1. app.utils.assign_reviewer()
2. app.utils.create_reviewer_link(), which creates a static HTML page with two links that contain HTTP calls to the /api/confirm-reviewer endpoint. This page is uploaded to AWS S3 for access by the reviewer.
3. app.utils.notify_reviewer_slack(), which calls the Slack API and sends the member a notification on Slack with a link to the HTML page on AWS where they can confirm or deny the review request.
4. app.utils.confirm_reviewer(), which calls bot.reviewer_status() and app.crud.update_reviewer(); these actions lead to the reviewer being assigned on GitHub and in our database.
5. app.utils.deny_reviewer(), which calls app.utils.assign_reviewer() again to repeat the process with a new reviewer.
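A sketch of the notification step using slack_sdk; the message text and token handling are assumptions:

from slack_sdk import WebClient

def notify_reviewer_slack(slack_id: str, confirm_page_url: str, token: str) -> None:
    # Slack opens a DM when a user ID is passed as the channel.
    WebClient(token=token).chat_postMessage(
        channel=slack_id,
        text=f"You have been asked to review an assessment. "
             f"Confirm or deny here: {confirm_page_url}",
    )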
We need to launch a production DB for this platform. It should probably run on AWS RDS via a MySQL server and have minimal resources required for production, plus backup capabilities.
It should include an account that will be used by services to update / read / delete / create entries in the DB. This account must have the minimal permissions required by these services.
All .env files will need to be updated with the new connection details of this database and access credentials for the service account.
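For example, each service could build its connection string from those variables (the variable names are placeholders), assuming SQLAlchemy with the pymysql driver:

import os
from sqlalchemy import create_engine

# Connection details come from the updated .env files; the service
# account should have only the permissions these services need.
engine = create_engine(
    f"mysql+pymysql://{os.environ['DB_SERVICE_USER']}:{os.environ['DB_SERVICE_PASSWORD']}"
    f"@{os.environ['DB_HOST']}:3306/{os.environ['DB_NAME']}"
)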
Codecov fails when checking coverage for the full project -- we don't need this check, so it should be removed.
The workflow will be:
If the user does not fill out the form, they should be prompted again on each visit to the channel (@jmsdao is this feasible?)
Currently we deploy 3 services separately (ghbot, crud, and webui). However, this is expensive and unnecessary for the stage we are at today with this app. Instead, we should develop a push-button deploy workflow for all three services to the same Elastic Beanstalk instance. This could use docker compose to orchestrate them within each EC2 instance.
The structure could be like this:

EB-Docker-Image:
  docker-compose:
    crud:
      image: crud-image on ECR
    ghbot:
      image: ghbot-image on ECR
    webui:
      image: webui-image on ECR
Currently, the production GitHub bot is acting on the same repos during local testing of all services.
It is always acting on repos in this org: https://github.com/brn-test-assessment
We need a way of creating distinct, randomly-named repos whenever the bot is acting on repos in that org (but not any others).
One way to accomplish this would be to introduce a random string into the repo names, but only when the ghbot is acting on that org. This may introduce other complications and will need significant testing.
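A sketch of the random-suffix idea; the org check and suffix length are placeholders:

import secrets

TEST_ORG = "brn-test-assessment"

def make_repo_name(base: str, org: str) -> str:
    # Only randomize names in the test org, so repo names stay
    # stable everywhere else.
    if org == TEST_ORG:
        return f"{base}-{secrets.token_hex(4)}"  # e.g. "r-programming-i-9f2c1ab0"
    return base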
We need endpoints for:
Our testing workflows currently take > 3 minutes because all images in the docker compose are fully rebuilt on every run. Instead, we should look into using an action that can cache the docker images between GHA runs.
Possibly can use this: https://github.com/marketplace/actions/build-docker-images-using-cache
The trust relationship for GHA <-> AWS only covers the main branch, not PRs on the main branch.
Title says it all -- needs to be fixed.
Currently, if a user who is not logged in attempts to access a protected resource (e.g., /profile), they receive an error page. Instead, they should simply be redirected to the home/login page (/).
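If the webui is a FastAPI app, one option (a sketch, not the final design) is a global handler that turns 401s into redirects:

from fastapi import FastAPI, Request
from fastapi.exception_handlers import http_exception_handler
from fastapi.responses import RedirectResponse
from starlette.exceptions import HTTPException as StarletteHTTPException

app = FastAPI()

@app.exception_handler(StarletteHTTPException)
async def redirect_unauthenticated(request: Request, exc: StarletteHTTPException):
    # Send anonymous users to the home/login page instead of an error page.
    if exc.status_code == 401:
        return RedirectResponse(url="/")
    return await http_exception_handler(request, exc)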
This issue was raised by @jmsdao -- apparently on 1920px screens (such as his laptop) we are still seeing that the media size is "medium" instead of large or xl.
Add environment variables for smee for /ghbot and /crud. Make the necessary changes to run docker with them.
Currently, we are using a single GitHub bot (https://github.com/apps/brn-bot) for the testing, development, and production ENVs. This is a problem because testing and development require using smee.io as the webhook URL, which is unsuitable for production.
We need some way to specify a production bot which uses only the production webhook URL rather than smee.io's URL.
Might require being able to specify the app ID, app key, etc. in the .prod.env file and having logic in the ghbot code to correctly switch when in production.
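A sketch of the switching logic; all variable names here are hypothetical:

import os

IS_PROD = os.environ.get("APP_ENV") == "production"

# Dev/test installs keep smee.io as the webhook URL; the production app
# uses its own credentials, with a webhook pointing at the deployed service.
GH_APP_ID = os.environ["GH_APP_ID_PROD" if IS_PROD else "GH_APP_ID_DEV"]
GH_APP_KEY = os.environ["GH_APP_KEY_PROD" if IS_PROD else "GH_APP_KEY_DEV"]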