
dtdb's Introduction

dtdb

A potential future replacement for imdt: the archive of past shows put on by DramaTech Theatre in Atlanta, GA

Development

Reference Material

The backend for this site is written primarily in Rust. It uses:

  • axum for the application server,
  • diesel (with PostgreSQL) for data access,
  • tera for templating,
  • serde for serialization,
  • chrono for dates and times, and
  • oso for authorization.

Environment Setup

The most direct way to get set up is to ensure you have Docker installed and run

docker compose up

This will start containers for the database and the application server, initialize and migrate the database, and start the application server on port 64741¹. It will also watch the current directory, and will rebuild and restart the application server on almost any change² to the application.

I want to set this up without using containers

First, get PostgreSQL running and accessible from your development environment in whatever manner works best for you. Note the connection details, then

echo "DATABASE_URL=postgres://user:pass@localhost:5432/dtdb_dev" >.env

where the credentials, host, and port are appropriate for your setup. The database name dtdb_dev is recommended, but you can name the database whatever you want. If it doesn't exist yet, it'll be created later as long as the user has permission to create a database.
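
The diesel CLI picks up .env on its own. If you want the application server to read it too, here's a minimal sketch, assuming the dotenvy crate is in use (the real app may load its configuration differently):

fn main() {
    // Load .env into the process environment if the file exists
    // (assumption: the dotenvy crate; ignore the error if there's no .env).
    dotenvy::dotenv().ok();

    let database_url =
        std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    println!("connecting to {database_url}");
}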

Next, make sure you have Rust installed, then run the following to migrate the DB:

# Install the CLI for the database abstraction library
cargo install diesel_cli --no-default-features --features postgres

# Create the database if it doesn't exist and run all the migrations
diesel setup

Finally, make sure everything's working:

cargo build
cargo test

Now, to launch the application server, you can run

cargo run -- --config config.json

Footnotes

  1. chosen because it doesn't belong to any other popular application and "\x64\x74" is "dt"

  2. It may miss changes that only impact views, static files, or migrations. If you notice that happening, docker compose restart website should force a reload.


dtdb's Issues

Write serializer functions for `chrono` types

Write a serializer function of the form fn serialize<S>(value: &T, serializer: S) -> Result<S::Ok, S::Error> where S: Serializer for each of:

  • chrono::DateTime<chrono::Utc> and
  • chrono::NaiveDate.

They should either live in data or in their own submodule of data.
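
A minimal sketch of the two functions, assuming a data::serializers submodule (the module name and output formats are placeholders):

use chrono::{DateTime, NaiveDate, Utc};
use serde::Serializer;

// Serialize a UTC timestamp as an RFC 3339 / ISO-8601 string.
pub fn serialize_datetime<S>(dt: &DateTime<Utc>, serializer: S) -> Result<S::Ok, S::Error>
where
    S: Serializer,
{
    serializer.serialize_str(&dt.to_rfc3339())
}

// Serialize a date as YYYY-mm-dd.
pub fn serialize_naive_date<S>(d: &NaiveDate, serializer: S) -> Result<S::Ok, S::Error>
where
    S: Serializer,
{
    serializer.serialize_str(&d.format("%Y-%m-%d").to_string())
}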

Implement google openid connect

Add the ability to log in with Google. Use the Account model to represent a dtdb account, with oidc_subject being the stable identifier we use to associate an OAuth2 claim with an account (according to Google's docs, sub is unique to a single Google account and will never change).
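
Once the OIDC flow yields a verified claim, associating it with an account might look like this sketch (the accounts schema and a diesel-Queryable Account model are assumed from the issue text):

use diesel::prelude::*;

// Find the dtdb account matching a verified `sub` claim, if one exists.
// Assumes an `accounts` table with an `oidc_subject` column.
fn account_for_subject(conn: &mut PgConnection, subject: &str) -> QueryResult<Option<Account>> {
    use crate::schema::accounts::dsl::*;
    accounts
        .filter(oidc_subject.eq(subject))
        .first::<Account>(conn)
        .optional()
}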

Add some rudimentary tests

I don't think we need to fully TDD it up, but we should cover a couple of basic things like "we can load all the templates", "we can build the routes", "we can start the app server", etc.

And as a norm, we should probably add tests when we start fixing bugs, but that's for the future.
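
A sketch of the "we can load all the templates" case (the templates/ glob is an assumption about the project layout):

#[cfg(test)]
mod tests {
    // Tera::new parses every template matching the glob and returns Err
    // if any of them fails, so this doubles as a "templates compile" check.
    #[test]
    fn all_templates_load() {
        tera::Tera::new("templates/**/*").expect("all templates should parse");
    }
}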

Add tera helpers for formatting dates and times

Create a set of tera helpers for formatting dates and times. Something like

format_date(d=, format=)
    takes a date in YYYY-mm-dd and formats it according to strftime

format_datetime(dt=, format=)
    same, but for a datetime in ISO-8601 with time zone

relative_datetime(dt=[,granularity=][,relative_to=])
    takes in an ISO-8601 datetime with time zone, returns a string
    like "4 hours ago" or "2 weeks from now". Uses the specified
    granularity, or the largest one that doesn't round to zero.
    Relative to the given date if specified, or to the current UTC time
    if not
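
A sketch of the first helper; the error messages and the default format string are placeholders:

use std::collections::HashMap;
use chrono::NaiveDate;
use tera::{Error, Result, Value};

// format_date(d=, format=): parse a YYYY-mm-dd date, re-render it with strftime.
pub fn format_date(args: &HashMap<String, Value>) -> Result<Value> {
    let d = args
        .get("d")
        .and_then(Value::as_str)
        .ok_or_else(|| Error::msg("format_date: missing `d` argument"))?;
    let format = args
        .get("format")
        .and_then(Value::as_str)
        .unwrap_or("%Y-%m-%d");
    let date = NaiveDate::parse_from_str(d, "%Y-%m-%d")
        .map_err(|e| Error::msg(format!("format_date: {e}")))?;
    Ok(Value::String(date.format(format).to_string()))
}

It would be registered with tera.register_function("format_date", format_date); plain functions of this signature satisfy tera's Function trait.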

Create dev importer for old imdt data

Write a script that takes in the imdt data dump (not public for PII reasons), transforms it into the form dtdb expects, and loads it into the dev database for testing purposes.

Set up GitHub CI

Make sure merging a PR blocks on cargo check, cargo fmt, and cargo test passing.

Blocks on #16

Register a url helper function with tera

Write a helper function with the signature fn (&HashMap<String, serde_json::Value>) -> tera::Result<serde_json::Value> that takes a serialized model object as one of its args and returns a url for that model. If convenient, we may wish to support other args to help constructing other kinds of urls (edit the model in question, etc.)
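
A sketch under the assumption that serialized models carry type and id fields the helper can dispatch on (both field names, and the naive pluralization, are placeholders):

use std::collections::HashMap;
use tera::{Error, Result, Value};

// url_for(model=): build a path like /shows/42 from a serialized model.
pub fn url_for(args: &HashMap<String, Value>) -> Result<Value> {
    let model = args
        .get("model")
        .ok_or_else(|| Error::msg("url_for: missing `model` argument"))?;
    let kind = model
        .get("type")
        .and_then(Value::as_str)
        .ok_or_else(|| Error::msg("url_for: model has no `type` field"))?;
    let id = model
        .get("id")
        .and_then(Value::as_i64)
        .ok_or_else(|| Error::msg("url_for: model has no numeric `id`"))?;
    Ok(Value::String(format!("/{kind}s/{id}")))
}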

Figure out how to differentiate between error types when coercing errors into responses

This one's a bit open-ended.

We have a struct called HandlerError that acts as a bridge between anyhow::Error and IntoResponse. The former is a general error that any E: std::error::Error can be converted into, which allows us to use ? on any Result type whose Err branch conforms to that trait, in any function that returns an anyhow::Result. The latter is a trait marking types that axum can convert into an HTTP response. HandlerError only exists because we can't write a trait implementation when neither the trait nor the implementing type is defined in our crate.

The current IntoResponse implementation for HandlerError is a bit rudimentary—it just converts everything into an HTTP 500 response. Things like not finding an object by id should probably 404, for example, and when we go to add authn and authz we'll probably want to be able to 401. We should look into the error itself a bit and handle it appropriately.

My suspicion is the cleanest way to do that is to:

  1. convert HandlerError into an enum,
  2. have its From<E> impl choose a variant based on what kind of (known, expected) error it is, with a fallback case for "unknown, just 500 it", and
  3. have its IntoResponse impl create an appropriate response
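
A sketch of that shape; the variant set and the diesel mapping are just examples of the idea:

use axum::http::StatusCode;
use axum::response::{IntoResponse, Response};

pub enum HandlerError {
    NotFound(anyhow::Error),
    Other(anyhow::Error),
}

impl<E: Into<anyhow::Error>> From<E> for HandlerError {
    fn from(err: E) -> Self {
        let err = err.into();
        // Map known, expected errors to specific variants; 500 everything else.
        if let Some(diesel::result::Error::NotFound) =
            err.downcast_ref::<diesel::result::Error>()
        {
            HandlerError::NotFound(err)
        } else {
            HandlerError::Other(err)
        }
    }
}

impl IntoResponse for HandlerError {
    fn into_response(self) -> Response {
        match self {
            HandlerError::NotFound(_) => StatusCode::NOT_FOUND.into_response(),
            HandlerError::Other(err) => {
                (StatusCode::INTERNAL_SERVER_ERROR, err.to_string()).into_response()
            }
        }
    }
}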

Figure out how we want to handle static assets

In development, we probably want an Option<String> config key that points to the location of the assets and routes /static to a tower_http::services::ServeDir pointed at that directory.

In production, that key can be None: we skip adding the /static route, ship the static files to some directory on the server, and point our reverse proxy at it. For example, with Caddy we might do

file_server /static/* {
    root /deploy/dtdb/current/static
}

reverse_proxy {
    to unix//var/run/dtdb.sock
}
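
The development half of that, sketched against axum and tower-http (the function and the static_dir key are assumed names):

use axum::Router;
use tower_http::services::ServeDir;

// Only mount /static when the config points at an asset directory;
// in production the reverse proxy serves the files instead.
fn with_static_routes(router: Router, static_dir: Option<String>) -> Router {
    match static_dir {
        Some(dir) => router.nest_service("/static", ServeDir::new(dir)),
        None => router,
    }
}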

Add authz policy modeling with Oso

  1. Add in the Oso authorization library (the one that evaluates policies locally, not the cloud one),
  2. create a Polar file that describes our authz policies (which can just be "allow everything for now"),
  3. use include_str! to bundle the policy file into the application binary as a constant, and
  4. initialize Oso with the bundled policy file in Application::from_config and keep a public reference to it in Application (named authz or something like it)
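
Steps 3 and 4 might look like this sketch (the policy path and names are placeholders):

use oso::{Oso, OsoError};

// Bundled at compile time; the path is relative to this source file.
const AUTHZ_POLICY: &str = include_str!("../policies/authz.polar");

fn build_authz() -> Result<Oso, OsoError> {
    let mut authz = Oso::new();
    authz.load_str(AUTHZ_POLICY)?;
    Ok(authz)
}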

Write serializers for models

Blocked on #5

For models which support it cleanly, add Serialize to their derive() list. In most cases, this will require adding an

#[serde(serialize_with = "data::whatever::serialize_naive_date")]

annotation to any date properties of the model (where the qualified name there is replaced with whatever we actually called the appropriate function for the property's type).

In some cases, included structs or enums will need their own #[derive(Serialize)].

And in some cases, like data::models::Season, we'll need a custom impl Serialize because we want to inject a few custom attributes for the view layer to use.
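
Putting the pieces together on a hypothetical model (the struct and its field names are invented for illustration):

use serde::Serialize;

#[derive(Serialize)]
pub struct Show {
    pub id: i32,
    pub title: String,
    // Uses the chrono serializer from #5 (assumed final path).
    #[serde(serialize_with = "crate::data::serializers::serialize_naive_date")]
    pub opening_night: chrono::NaiveDate,
}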
