hackillinois / api
The Official API supporting HackIllinois
Home Page: https://api.hackillinois.org
License: Other
Right now adding the first `Admin` user to the API requires manually modifying the database. We should have a config variable that allows an `Admin` or `Superuser` role to be granted to a single user. It would also be useful to automatically grant the `Staff` role to all users with an email at a specified domain.
This needs to avoid a potential security issue: if an OAuth provider allows unverified emails, anyone could obtain admin rights. We need to verify that the user actually owns the email address we receive from the OAuth provider. One solution is to send a verification email to the user with a link that triggers granting of the role.
The current documentation for the API is barely adequate, and it is important that documentation is available in an easy-to-read format for API consumers. This documentation should be accessible outside this repository at an external site. Setting up a solution that builds static documentation files, which can then be uploaded to an s3 bucket and served as a website, would work well.
Controller function at:
Location of proposed route:
Line 34 in 5ef2301
This should be restricted to `Admin`s.
Each service that generates stats should have an endpoint for `GET`ing the stats. These services should be registered with the stat service.
This endpoint should return a list of all mailing lists stored in the database. The endpoint should be `GET /mail/list/`.
A rough response struct would be:

```json
{
    "mailList": [
        "mailList1",
        "mailList2",
        "mailList3"
    ]
}
```
`GET /mail/list/` endpoint

Right now we use presigned urls for downloading files. We should also use them for uploading files. This would eliminate the need to move large files through our gateway and upload services, greatly reducing load on the API.
Super cool this project has gotten to the point where a build system is required! You guys have put in a ton of work and the contributions to arbor are appreciated. I'd like to caution against using `make`, purely from an accessibility and maintainability perspective. Using something that is a bit easier to add to and modify as the project changes will make it easier for others to contribute.
A few recommendations:
Since the api is a monorepo, a tool like `bazel` or `please` would make sense. I'd probably recommend please (https://please.build/index.html) for this project since it doesn't add a JDK dependency and has the same nice monorepo behavior as `bazel`, but if you are considering `bazel`, take a look at dazel (https://github.com/nadirizr/dazel).
`Mage` is also nice if you want something closer to makefiles (https://github.com/magefile/mage).
Currently the API does not have any method of sending out notifications to users except via email. Prior versions of the API had an endpoint which clients could `GET` periodically for notifications. A better solution would be to push notifications to mobile and web clients. We don't want to maintain 3 separate code paths for iOS, Android, and Web, so we should look into an external service that will let us send it our notification and have it relayed to all 3 front ends. AWS provides a service that does this, but we should look into all our options.
Currently the Staff role is not actually used in the gateway for authenticating requests. There are a number of endpoints that are currently restricted to Admin users that can be opened up to Staff as well.
Refactor InitMongoDatabase to Connect in database.go, so it is a method of the database struct.
The auth middleware should come before the identification middleware. The identification middleware attempts to decode the user's id from the user's JWT. It makes more sense to check if the token is valid before we try to decode it.
The stat service currently makes `GET` requests to each registered service in serial. This can be greatly sped up by making these requests in parallel.
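A rough sketch of the parallel fan-out with goroutines and a `sync.WaitGroup`. The `fetch` parameter stands in for whatever HTTP helper the stat service actually uses, and the map shape is illustrative:

```go
package main

import "sync"

// GetAllStats queries every registered stat endpoint concurrently instead
// of in serial. fetch performs the HTTP GET for one service; results come
// back indexed by service name.
func GetAllStats(endpoints map[string]string, fetch func(url string) (string, error)) map[string]string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = make(map[string]string, len(endpoints))
	)
	for name, url := range endpoints {
		wg.Add(1)
		go func(name, url string) {
			defer wg.Done()
			body, err := fetch(url)
			if err != nil {
				body = "unavailable"
			}
			mu.Lock()
			results[name] = body
			mu.Unlock()
		}(name, url)
	}
	wg.Wait()
	return results
}
```

The total latency becomes roughly the slowest single service instead of the sum of all of them.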
Currently the registration service ignores the response from the mail service. It should check whether the response code is 200 to ensure that the mail service was able to initiate a confirmation email to the attendee.
This is the line where the response should be captured:
The ideal implementation of this would be to add the caching to the database methods defined in the `common` package. This would allow all services to continue using the database methods they currently use and get the benefits of caching. Different caching models will need to be discussed to decide how we write to and invalidate the cache.
Caching should be done with `redis`. It is fast, simple to run locally, and can be run in a managed deployment on AWS.
All `ACCEPTED` decisions should expire after a defined amount of time. Once an `ACCEPTED` decision has expired, users should not be allowed to RSVP for the event.
Create an enum for roles and use that instead of string representations.
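A sketch of what the role enum could look like in go; the set of role names is taken from the roles mentioned across these issues and may be incomplete:

```go
package main

// Role replaces the raw string representations used for authorization.
type Role string

const (
	RoleAdmin     Role = "Admin"
	RoleSuperuser Role = "Superuser"
	RoleStaff     Role = "Staff"
	RoleAttendee  Role = "Attendee"
	RoleUser      Role = "User"
)

// Valid guards against arbitrary strings being converted into roles.
func (r Role) Valid() bool {
	switch r {
	case RoleAdmin, RoleSuperuser, RoleStaff, RoleAttendee, RoleUser:
		return true
	}
	return false
}
```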
Currently we do not check the event start and end times when checking attendees into events. Attendees should only be able to check in to an event if it is currently occurring.
We lack utilities for managing the user base of the API, such as role assignment. Basic querying of statistics and event management shouldn't need to be done at the bare-bones level.
We want to give new contributors clear instructions on how they can set up their development environment and start working on issues.
Currently when setting up a development environment, all mongo collections start empty, making it difficult to quickly test endpoints. Developers must first add the appropriate data to the collections by making other requests. We should have a data generator that creates some fake data and inserts it directly into the mongo collections, bypassing the API. This will allow new developers to get started working immediately. This data generator should also be designed to be easy to update with changing schemas.
utilities/datagen/
Currently we only test the service-level functionality within each microservice. As a result, all the controller logic in the API has only been manually tested. We should have integration tests for the controller logic of each microservice. These integration tests will either have to spin up the entire API or mock the services that are required by the service being tested.
Experience wants to be able to view registrations in Excel, so we should have a script that pulls all the registrations from the API and converts the json response into a csv file.
utilities/registrationcsv/
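A sketch of the json-to-csv conversion using go's standard library. The registration fields here are illustrative; the real registration schema has many more:

```go
package main

import (
	"bytes"
	"encoding/csv"
	"encoding/json"
)

// Registration holds a few illustrative fields from the real schema.
type Registration struct {
	FirstName string `json:"firstName"`
	LastName  string `json:"lastName"`
	Email     string `json:"email"`
}

// RegistrationsToCSV decodes the API's json response body and renders
// one csv row per registration, with a header row for Excel.
func RegistrationsToCSV(jsonBody []byte) (string, error) {
	var regs []Registration
	if err := json.Unmarshal(jsonBody, &regs); err != nil {
		return "", err
	}
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	w.Write([]string{"firstName", "lastName", "email"})
	for _, r := range regs {
		w.Write([]string{r.FirstName, r.LastName, r.Email})
	}
	w.Flush()
	return buf.String(), w.Error()
}
```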
- `IS_PRODUCTION` to determine if running in production
- In production, `PUT` requests should upload files to the specified s3 bucket and `GET` requests should return a presigned url authorizing the download of the file from the s3 bucket
- Locally, `PUT` requests should write the file to disk in a location under /tmp and `GET` requests should return the local path of the file

The logic for determining if running in production should be done at the service level. I recommend having separate functions for the local file handling, and having that code path get executed instead of the current s3 code path when not running in production.
Since OAuth is the only login mechanism supported in the API we need a way to login to the API when developing locally. The token generator provides a method for generating tokens that can be included for authenticated requests to the API. However we also need to insert the user info associated with the ID generated in the token into the database.
utilities/accountgen/
`make setup` to insert an admin account into the database

We don't want to let users upload arbitrarily large files to our s3 buckets. The gateway does cap all incoming requests at 16mb, but that is still larger than we need to allow. All uploads are currently resumes, and there isn't any reason a single resume should be over 4mb. We could probably lower the limit to 2mb and be okay as well.
Currently Rsvps only hold a yes/no value signifying whether an applicant has chosen to attend the event. It looks like we will want to gather more information from the attendee if they indicate they will be attending the event. This would be like a second mini registration used to determine project interests. The exact fields still need to be defined.
Currently the auth service makes a separate http request for each element of user data it fetches from the oauth provider (i.e. name, email, etc.). We should be able to get all of the user's information in a single http request and return a struct with all of the user's information to the caller.
The auth service is also a good example of where go's interfaces can be used. Every oauth provider should conform to an `OauthProvider` interface that exposes methods for getting the authorization url, obtaining an oauth token, and retrieving user information.
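A sketch of what that interface could look like. Method and field names are illustrative, and `fakeProvider` is only a stand-in to show the shape; a real provider would make http requests:

```go
package main

// UserInfo gathers everything fetched from the provider in one struct,
// so only a single round trip is needed.
type UserInfo struct {
	ID       string
	Username string
	Email    string
	Name     string
}

// OauthProvider is the interface every provider (GitHub, Google, ...)
// would implement: build the authorization url, exchange the code for a
// token, and fetch the user's info.
type OauthProvider interface {
	AuthorizationURL(redirectURI string) string
	ExchangeCode(code string) (token string, err error)
	GetUserInfo(token string) (UserInfo, error)
}

// fakeProvider is a stand-in implementation used for illustration.
type fakeProvider struct{}

func (fakeProvider) AuthorizationURL(redirectURI string) string {
	return "https://provider.example/authorize?redirect_uri=" + redirectURI
}

func (fakeProvider) ExchangeCode(code string) (string, error) {
	return "token-for-" + code, nil
}

func (fakeProvider) GetUserInfo(token string) (UserInfo, error) {
	return UserInfo{ID: "1", Username: "hacker", Email: "hacker@example.com", Name: "Hack"}, nil
}
```

The auth controllers would then hold an `OauthProvider` value and never care which concrete provider is behind it.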
Currently the logging in the API is the default logging provided by arbor. It logs the incoming requests to the gateway and the outgoing responses to these requests. More logging information is needed for logs to be useful in production.
We should add a logging package to the `common` package which supports different logging levels such as `ERROR`, `WARN`, `INFO`, and `DEBUG`. The log level should be configurable so that `DEBUG` logs can be disabled in production.
Microservices should use the logging package to log any errors that are encountered. It might also be useful to log other information as well. What exactly to log still needs to be defined.
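A minimal sketch of such a leveled logger; the API is illustrative, and `memorySink` exists only so the example is self-contained:

```go
package main

import (
	"fmt"
	"io"
)

// Level orders the log levels so a configured minimum can filter output.
type Level int

const (
	DEBUG Level = iota
	INFO
	WARN
	ERROR
)

var levelNames = map[Level]string{DEBUG: "DEBUG", INFO: "INFO", WARN: "WARN", ERROR: "ERROR"}

// Logger drops any message below its configured minimum level, so DEBUG
// can be disabled in production by setting min to INFO or higher.
type Logger struct {
	min Level
	out io.Writer
}

func New(min Level, out io.Writer) *Logger { return &Logger{min: min, out: out} }

func (l *Logger) log(lv Level, msg string) {
	if lv < l.min {
		return
	}
	fmt.Fprintf(l.out, "[%s] %s\n", levelNames[lv], msg)
}

func (l *Logger) Debug(msg string) { l.log(DEBUG, msg) }
func (l *Logger) Info(msg string)  { l.log(INFO, msg) }
func (l *Logger) Warn(msg string)  { l.log(WARN, msg) }
func (l *Logger) Error(msg string) { l.log(ERROR, msg) }

// memorySink is a tiny io.Writer used for illustration.
type memorySink struct{ s string }

func (m *memorySink) Write(p []byte) (int, error) { m.s += string(p); return len(p), nil }
```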
One possible implementation of this would be to have a goroutine dedicated to sending delayed messages. When a delayed message is created, it would be passed to the dedicated goroutine, where the mail would be sent at the time specified. Note that this goroutine should not busy-wait, since we don't want to use up CPU resources unnecessarily. Channels will likely be very useful in implementing this.
The delayed mail should generate the list of users and their substitutions based on the state of the mailing list at the time of sending (as opposed to when the delayed send request is created).
Currently the upload service does not set the content type for S3 uploads, which creates issues when later retrieving the file.
See ReflectionsProjections/api#14 for the fix.
The gateway is missing the route for filtering users in the user service. That should be added and restricted to `Admin` users.
The first url does not work, but the second does. The difference is the ordering of url parameters.
The issue is in the parsing of the `redirect_uri` parameter when there is a `#` in the url. We should definitely put the `redirect_uri` parameter last in the url, and we should also use a url building library instead of string concatenation.
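A sketch of building the url with `net/url` instead of string concatenation, which escapes a `#` inside `redirect_uri` correctly (the base url and the `client_id` parameter are illustrative):

```go
package main

import "net/url"

// BuildAuthorizeURL constructs the oauth authorization url with net/url,
// so redirect_uri is always percent-escaped even when it contains
// characters like '#'.
func BuildAuthorizeURL(base, clientID, redirectURI string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("client_id", clientID)
	q.Set("redirect_uri", redirectURI)
	u.RawQuery = q.Encode()
	return u.String(), nil
}
```

Note that `Values.Encode` also sorts parameters by key, so the ordering becomes deterministic instead of depending on concatenation order.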
As the number of people contributing to the API increases, it is important that we define specific contribution guidelines to make tackling issues simple for new contributors.
When a service is down the gateway should return a 503 (or maybe 504) to the user along with a json body containing an error message. There is an open issue upstream for this, arbor-dev/arbor#35. We should consider implementing this functionality there.
If a user is given an override to check in, they should have the same permissions in the API as anyone else who checked in.
`Attendee` to a user upon checkin override

This endpoint should be using arbor's `RAW` format instead of the default `JSON` format we use everywhere else.
api/gateway/services/upload.go
Line 41 in 5ef2301
Currently we have QR Code generation in the checkin service, but it makes more sense for it to be in the user service, since it is used in many places and it is a way to identify a user.
`userid`, `email`, etc.

This package was originally designed to be generic, but over time mongo-specific types became a part of the interface. The `Database` interface should be refactored to not return `*mgo.ChangeInfo`. Instead we should return our own struct that is similar to `*mgo.ChangeInfo`. Structs which implement the interface will need to populate this custom ChangeInfo-like struct. Additionally, while the `Database` interface does return a generic `error` type, users of the interface currently import `mgo.ErrNotFound` to check if the returned error was a not-found error. We should have our own NotFound error that can be returned and checked against instead. We may also need to provide other common error types as well. We will also need to find a way to abstract away the need to pass `bson.M` structs into the query. We can use `map[string]interface{}` instead.
Currently when testing the API, developers usually must manually create a token. It would be much easier if developers could generate tokens with specified information encoded into them.
`Admin` token for developers

Currently the auth service has endpoints for retrieving the roles of a user as a list and setting the list of roles the user has. The endpoint which sets the roles of a user to a given list should be removed. Instead, two new endpoints should be added: one which takes a single role and adds it to the user, and one which takes a single role and removes it from a user.
PUT /auth/roles/
PUT /auth/roles/add/
PUT /auth/roles/remove/
Note that this is an interface breaking change and will require updating other services for the new interface.
Currently there are many `http.Get`, `http.Post`, etc. calls where we never close the response body, causing the socket for the connection to remain open. Eventually services run out of available sockets and start rejecting connections.
The best long-term fix is to use a higher-level requests library that manages these issues for us, or to add wrapper functions to our common library which defer closing the body.
Currently we only add users to mailing lists once we finalize a decision for them in the decision service. It would be helpful to also have a mailing list of all registered users.
`registered` mailing list upon registration completion

Currently the only way to revoke a user's token is to change the secret used to generate and validate tokens, which has the result of invalidating everyone's tokens. We need a way to blacklist specific tokens. One potential solution would be to maintain a set of blacklisted tokens in a redis cache. This would give us higher performance than querying the database for every request, and allow us to invalidate individual tokens. An issue with this implementation is that it requires us to know the exact token we want to revoke. It would be better if we could revoke all tokens for a user generated before a certain time. This could still be stored in redis for high performance.
Currently sending a `GET /` request to the gateway just returns a simple string in response. Instead it should return the health of each microservice. If the service is available it would be considered healthy; otherwise it is unhealthy.

We should consider moving the common library into a separate repo, and then just treat it as a go dependency in the main api. This should help with future versioning.
We want to allow users to mark events as "favorites" so that mobile apps can display these events at the top of the event list for users.
On July 5th, 2018, `gopkg.in/mgo.v2` was marked as unmaintained. `github.com/globalsign/mgo` is now the primary active fork. No features seem to have been removed from the interface, so we should be able to just change the imports and be okay.
API integration for consumers would be made easier if error handling was unified across the codebase and errors were presented in a hierarchical order, e.g. `ApiError` indicates there is an unhandled error, `UnprocessableRequestError` suggests that the request a user made is not valid, etc.
My proposed solution for this is that:
- `error.New` and usage of library errors are replaced with our own error types
- errors carry a `type` field to allow for easy checking of what error was returned by the API
- errors carry a `source` field