daphne's Introduction

Daphne

Daphne is a Rust implementation of the Distributed Aggregation Protocol (DAP) standard. DAP is under active development in the PPM working group of the IETF.

Daphne currently implements:

  • draft-ietf-ppm-dap-09
    • VDAF: draft-irtf-cfrg-vdaf-08
    • Taskprov extension: draft-wang-ppm-dap-taskprov-06
    • Interop test API: draft-dcook-ppm-dap-interop-test-design-07
  • draft-ietf-ppm-dap-10 (work-in-progress)

This software is intended to support experimental DAP deployments and is not yet suitable for use in production. Daphne will evolve along with the DAP draft: Backwards compatibility with previous drafts won't be guaranteed until the draft itself begins to stabilize. API-breaking changes between releases should also be expected.

The repository contains three crates:

  • daphne (aka "Daphne") -- Implementation of the core DAP protocol logic for Clients, Aggregators, and Collectors. This crate does not provide the complete, end-to-end functionality of any party. Instead, it defines traits for the functionalities that a concrete instantiation of the protocol is required to implement. We call these functionalities "roles".

  • daphne-worker (aka "Daphne-Worker") -- Implements a backend for the Aggregator roles based on Cloudflare Workers. This crate also implements the various HTTP endpoints defined in the DAP spec.

  • daphne-worker-test -- Defines a deployment of Daphne-Worker for testing changes locally. It also implements integration tests for Daphne and Daphne-Worker.

Testing

The daphne crate relies on unit tests. The daphne-worker crate relies mostly on integration tests implemented in daphne-worker-test. See the README in that directory for instructions on running Daphne-Worker locally.

Integration tests can be run via docker-compose.

docker-compose up --build --abort-on-container-exit --exit-code-from test

For integration tests with Janus, see the DAP Interop Test Runner.

Acknowledgements

Thanks to Yoshimichi Nakatsuka who contributed significantly to Daphne during his internship at Cloudflare Research. Thanks to Brandon Pitman and David Cook for testing, reporting bugs, and sending patches.

The name "Daphne" is credited to Cloudflare Research interns Tim Alberdingk Thijm and James Larisch, who came up with the name independently.

daphne's People

Contributors

armfazh, bhalleycf, branlwyd, chris-wood, cjpatton, dependabot[bot], divergentdave, lpardue, mendess, mgalicer, nakatsuka-y, oliy, thibmeu, tholop, will118, xtuc

daphne's Issues

editorial: Use underscores in enums instead of dashes

We use dashes in enums:

  enum {
    batch-collected(0),                                                             
    report-replayed(1),
    report-dropped(2),                                                              
    hpke-unknown-config-id(3),
    hpke-decrypt-error(4),
    vdaf-prep-error(5),
    batch-saturated(6),
  } ReportShareError;

I'm not sure how we got on this kick (it might be my fault!), but I would prefer underscores:

  enum {
    batch_collected(0),                                                             
    report_replayed(1),
    report_dropped(2),                                                              
    hpke_unknown_config_id(3),
    hpke_decrypt_error(4),
    vdaf_prep_error(5),
    batch_saturated(6),
  } ReportShareError;

It's pretty immaterial, but I have a weak preference for underscores, mainly for consistency with RFC 8446.

  1. Does anyone have a strong preference for dashes?
  2. Does anyone object to landing this change in DAP-02?

Add Prometheus metrics

Prometheus metrics need to be added in order to monitor request latency, error rates, and backlog size. We would also like to replace DapLeaderProcessTelemetry, which is used for end-to-end tests.
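
As a starting point, a minimal sketch using the prometheus crate might look like the following. The metric names and labels are placeholders, not metrics Daphne defines:

  use prometheus::{Histogram, HistogramOpts, IntCounterVec, Opts, Registry};

  // Register a request-error counter and a latency histogram with a registry.
  fn register_metrics(registry: &Registry) -> prometheus::Result<(IntCounterVec, Histogram)> {
      let errors = IntCounterVec::new(
          Opts::new("dap_request_errors_total", "DAP requests that failed, by endpoint."),
          &["endpoint"],
      )?;
      let latency = Histogram::with_opts(HistogramOpts::new(
          "dap_request_latency_seconds",
          "Time spent handling a DAP request.",
      ))?;
      registry.register(Box::new(errors.clone()))?;
      registry.register(Box::new(latency.clone()))?;
      Ok((errors, latency))
  }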

Support flexible report grouping

DAP currently groups reports based on the time when they are generated.

There are several proposals for making this more flexible:

  1. #273: Grouping reports into "fixed-sized chunks"
  2. #183: Grouping reports by "client properties"

Additionally, PR #297 proposes a change in the DAP spec to support the "fixed-sized chunk" case.

Following the proposals, Daphne should add the following changes:

  • The generalized concept of a query (see the sketch after this list)
  • Methods that filter reports based on the query
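
For illustration, a "generalized query" might look roughly like the following sketch. The type and variant names (DapQuery, Interval, Id) are assumptions, not what the DAP spec or Daphne ultimately define:

  // Hypothetical sketch of a generalized query.
  enum DapQuery {
      // Select reports whose timestamps fall in a batch interval (current behavior).
      TimeInterval { batch_interval: Interval },
      // Select a Leader-chosen, fixed-size chunk of reports, identified by a batch ID.
      FixedSize { batch_id: Id },
  }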

Daphne must also prevent intersection attacks launched by malicious Collectors over this flexible grouping:

  • Implement intersection attack prevention mechanisms for all query types

Note that this prevention will abide by the DAP-01 spec, meaning that no report or aggregate share shall be collected twice.
However, as stated in #183, this renders the "client property" grouping and many other use cases (e.g., drilling) useless.
This will be targeted for future implementation.

Daphne: Expose a "mock" backend in a public module

MockAggregator in daphne/src/roles_test.rs, once finished, will be a useful tool for demonstrating the functionality the Daphne backend is supposed to provide. This should be put in a public module for others to copy-paste or refer to as needed.

While we're at it, let's write tests that exercise the protocol logic with the mock backend.

Migrate test deployment to wrangler2

We currently use wrangler 1.19 and miniflare 2.5.1 for setting up the test deployment. Since wrangler2, it's now possible to run miniflare directly. The task for this issue is to re-generate the test deployment so that it aligns more closely with modern Workers deployments.

`janus_helper` test not working

janus_helper, one of the two Janus tests implemented in the daphne_worker_test suite, fails during my testing with the following error message: thread 'main' panicked at 'failed to start container'.

Handle requests to "Collect URIs" properly

When a GET request to a Collect URI is successful, Daphne-Worker deletes the associated state. I believe this is incorrect, since subsequent GET requests are supposed to return the same results, per HTTP semantics.

Further, collect URIs are issued only to authorized senders (i.e., the Collector that presents the right API token); however, polling the URI does not require authorization, per the spec. This means an unauthenticated endpoint can change the Leader's state, which is problematic.

The short-term solution is to change Daphne-Worker so that it handles these GET requests properly by storing collect responses indefinitely. In the long term, we might consider changing the protocol so that the Leader is free to drop this state after a period of time. A rough sketch of the short-term fix follows.
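
A minimal sketch of the idempotent GET handler, assuming a CollectJob type and a collect_job_store look-up that do not exist under these names in Daphne-Worker:

  // Hypothetical sketch: serve the stored result on every GET and leave the
  // state in place so repeated polls of a collect URI stay idempotent.
  match collect_job_store.get(&collect_id) {
      Some(CollectJob::Done(collect_resp)) => {
          // Do NOT delete the stored response; later GETs must return the same result.
          Response::from_bytes(collect_resp.get_encoded())
      }
      Some(CollectJob::Pending) => Ok(Response::empty()?.with_status(202)),
      None => Response::error("unknown collect URI", 404),
  }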

JsValue::into_serde() is deprecated in wasm-bindgen 0.2.83

I got the following warning when publishing daphne_worker 0.1.2 (I can't seem to reproduce when building locally):

warning: use of deprecated associated function `worker::wasm_bindgen::JsValue::into_serde`: causes dependency cycles, use `serde-wasm-bindgen` or `gloo_utils::format::JsValueSerdeExt` instead
  --> src/durable/garbage_collector.rs:85:38
   |
85 |                         item.value().into_serde()?;
   |                                      ^^^^^^^^^^
   |
   = note: `#[warn(deprecated)]` on by default

warning: use of deprecated associated function `worker::wasm_bindgen::JsValue::into_serde`: causes dependency cycles, use `serde-wasm-bindgen` or `gloo_utils::format::JsValueSerdeExt` instead
  --> src/durable/leader_agg_job_queue.rs:88:38
   |
88 |                         item.value().into_serde()?;
   |                                      ^^^^^^^^^^

warning: use of deprecated associated function `worker::wasm_bindgen::JsValue::into_serde`: causes dependency cycles, use `serde-wasm-bindgen` or `gloo_utils::format::JsValueSerdeExt` instead
   --> src/durable/leader_col_job_queue.rs:125:38
    |
125 |                         item.value().into_serde()?;
    |                                      ^^^^^^^^^^

warning: use of deprecated associated function `worker::wasm_bindgen::JsValue::into_serde`: causes dependency cycles, use `serde-wasm-bindgen` or `gloo_utils::format::JsValueSerdeExt` instead
   --> src/durable/report_store.rs:117:76
    |
117 |                     let (key, report_hex): (String, String) = item.value().into_serde()?;
    |                                                                            ^^^^^^^^^^

warning: `daphne_worker` (lib) generated 4 warnings
    Finished dev [unoptimized + debuginfo] target(s) in 49.95s
   Uploading daphne_worker v0.1.2 (/Users/chris/github.com/cloudflare/daphne/daphne_worker)
warning: aborting upload due to dry run

Happily, serde-wasm-bindgen is maintained by Cloudflare, so it may be a matter of integrating into workers-rs: https://github.com/cloudflare/serde-wasm-bindgen
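
A sketch of the suggested replacement, using serde_wasm_bindgen::from_value in place of the deprecated call. A wrapper like this is an assumption, not existing Daphne-Worker code:

  use serde::de::DeserializeOwned;
  use worker::wasm_bindgen::JsValue;

  // Deserialize a JsValue directly, replacing `value.into_serde()?` call sites
  // with `from_js(value)?`.
  fn from_js<T: DeserializeOwned>(value: JsValue) -> Result<T, serde_wasm_bindgen::Error> {
      serde_wasm_bindgen::from_value(value)
  }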

Upgrade to hpke 0.10.0

We just landed a change in Janus to upgrade to hpke 0.10.0, which removed a too-restrictive dependency bound on zeroize from a transitive dependency. I'm filing this issue to let you know that you'll need to update this as well when bumping your Janus git dependency. (in fact, it's possible bumping Janus may unblock #119, if that's failing to resolve dependencies for the same reason)

Implement draft-wang-ppm-dap-taskprov-00

See https://github.com/wangshan/draft-wang-ppm-dap-taskprov.

  • Compute task ID (#146)
  • daphne::roles: Figure out how to store the pre-shared secret and compute the VDAF verify key (what changes are needed to the traits)
  • daphne::roles: Figure out how to store the collector's HPKE config
  • daphne::messages: Add Taskprov variant to Extension enum.
  • daphne: Add a flag to DapGlobalConfig for enabling taskprov. We should safely ignore the extension when this flag is disabled (see the sketch below).
  • daphne::vdaf: Use taskprov to compute the task config while processing a set of reports.
  • daphne_worker: Implement taskprov
  • Add a callback for deciding whether to "opt out"

cc/ @wangshan
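
For the DapGlobalConfig flag mentioned above, a minimal sketch might look like this. The field and variant names are assumptions:

  pub struct DapGlobalConfig {
      // When false, the taskprov extension is ignored rather than processed.
      pub allow_taskprov: bool,
      // ... other deployment-wide settings ...
  }

  // Hypothetical handling: process the extension only when enabled.
  fn handle_taskprov(global: &DapGlobalConfig, ext: &Extension) {
      if let Extension::Taskprov { .. } = ext {
          if global.allow_taskprov {
              // Derive the task config from the extension payload.
          }
          // Otherwise: safely ignore the extension.
      }
  }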

Upgrade workers-rs version and remove `worker-build` workaround

Our Cargo.toml currently checks out workers-rs at a commit that's ahead of the latest release (0.0.9):

 worker = { git = "https://github.com/cloudflare/workers-rs", rev = "eacdadd0c0b7d8e74963656e44b3e9c150a5a7d9", package = "worker" }

In order for the code to build, it is necessary to install the matching version of worker-build. This is accomplished in the following layer of miniflare.Dockerfile:

RUN git clone https://github.com/cloudflare/workers-rs /tmp/worker-rs && \
    cd /tmp/worker-rs && \
    git checkout eacdadd0c0b7d8e74963656e44b3e9c150a5a7d9 && \
    cargo install --path ./worker-build --force

Note that, in the future, if it becomes necessary to check out code that's ahead of the current release, we will need to add this workaround back.

Implement draft-ietf-ppm-dap-01

The next DAP draft will be 01. This issue tracks the work that needs to be done for Daphne to be in compliance.

  • Upgrade to prio 0.8.0 (implements draft-irtf-cfrg-vdaf-01)
  • Remove HMAC from agg flow requests/responses
  • Remove agg flow message variants
  • Add bearer token to agg flow requests
  • Add bearer token to collect flow requests
  • Update info/aad strings
  • Update media types
  • Add unrecognizedAggregationJob abort type (ietf-wg-ppm/draft-ietf-ppm-dap#285)
  • WON'T FIX (see #44) Make timestamps more granular (ietf-wg-ppm/draft-ietf-ppm-dap#281)
  • WON'T FIX (see #45) Conservatively constrain collect requests (ietf-wg-ppm/draft-ietf-ppm-dap#277)
  • Increase Nonce.rand from 8 bytes to 16
  • Update Janus integration tests

Changes for DAP-02

  • agg flow: Increase message lengths from 16 to 32 bits in various places
  • Remove Time from Nonce
  • Define ReportMetadata
  • Define Query and BatchSelector
  • Add report count to CollectResp
  • Add public share to Report and ReportShare
  • Upgrade to VDAF-03
  • Update AAD input to HPKE
  • "insufficientBatchSize" -> "invalidBatchSize"
  • Drop task ID from /hpke_config
  • Fixed-size queries.
  • Implement max task lifetime ietf-wg-ppm/draft-ietf-ppm-dap#304
  • Update references to DAP-02
  • ietf-wg-ppm/draft-ietf-ppm-dap#354
  • ietf-wg-ppm/draft-ietf-ppm-dap#356
  • Add test for leader rejecting expired reports: #137 (comment)
  • WON'T FIX (bumped to #165) Reject reports for epochs too far in the past or future, check for replays in all "active" epochs.
  • WON'T FIX (bumped to #177) Client: truncate the timestamp.
  • WON'T FIX "The leader MUST remove a collect job's results when the collector sends an HTTP DELETE request to the collect job URI. The leader responds with HTTP status 204 No Content for requests to a collect job URI whose results have been removed."

Helper in daphne_worker_test may lose reports in AggregateStore

We are testing Janus against Daphne in an integration test, and we have noticed an intermittent failure. (I can trigger the failure more reliably when my CPU is under high load.) We're testing against commit e1b503e, but I think the issue has not yet been addressed on main. When it receives a merge request, AggregateStore tries to read from the "agg_share" key, uses an empty aggregate share if it's not yet present, updates the share, and writes it back. I updated this method as follows with some logging, and I saw the following output leading up to a test failure.

            (DURABLE_AGGREGATE_STORE_MERGE, Method::Post) => {
                let mut agg_share: DapAggregateShare =
                    state_get_or_default(&self.state, "agg_share").await?;
                let count_before = agg_share.report_count();
                let agg_share_delta = req.json().await?;
                agg_share.merge(agg_share_delta).map_err(int_err)?;
                let count_after = agg_share.report_count();
                self.state.storage().put("agg_share", agg_share).await?;
                console_log!(
                    "aggregate store DO merge, updated from {} reports to {} reports",
                    count_before,
                    count_after
                );
                Response::from_json(&String::new())
            }
aggregate store DO merge, updated from 0 reports to 12 reports
...
aggregate store DO merge, updated from 0 reports to 13 reports
...
aggregate store DO merge, updated from 12 reports to 33 reports

Later, the leader sends an aggregate share request for 46 reports, but the Daphne helper returns an error due to a batch mismatch, as it is expecting 33 reports.

Clearly there is some issue with the transactionality of this DO's storage accesses, but it's hard for me to say whether it's a bug in Daphne or Miniflare, i.e., whether the same would occur when running on Cloudflare's real runtime. The DO API documentation says that "a series of reads followed by a series of writes (with no other intervening I/O) are automatically atomic and behave like a transaction." When the DO tries to read "agg_share" and gets an error with "No such value in storage.", does that count as a read or not for the purposes of blocking other concurrent events? In other words, does the transactional storage API handle phantom reads correctly? There is a transaction() method available, which takes a callback and runs it in an explicit transaction. It's possible the phantom read behavior is different with that. (Again, this could differ between Miniflare and Cloudflare.)

Clean up: Remove timestamps from unit tests

Currently, the get_reports unit tests implemented in roles_test.rs are required to specify a timestamp to get reports that fall within that time interval.

We should be able to remove this by implementing get_reports in such a way that the function sweeps the ReportStore and automatically fetches the requested number of reports. See below.

You simply iterate over the buckets corresponding to the task ID:

         let mut reports = Vec::new();
         'outer: for (bucket_info, bucket) in report_store.iter_mut() {
             if &bucket_info.task_id != task_id {
                 continue;
             }
             // Drain this bucket until we've collected `agg_rate` reports.
             while let Some(report) = bucket.pop() {
                 reports.push(report);
                 if reports.len() == agg_rate {
                     break 'outer;
                 }
             }
         }

You might also consider doing this in sorted order by bucket_info.batch_interval.start, so that the oldest data is prioritized for aggregation.

Originally posted by @cjpatton in #51 (review)

Performance: Concurrency improvements for Helper

  1. When handling AggregateInitializeReq in http_post_aggregate(), the Helper calls mark_aggregated(), awaits the result, then processes the reports via handle_agg_init_req(). The first call is used to filter out any reports from the request that need to be rejected, e.g., for anti-replay protection. This sequencing is fairly wasteful, since mark_aggregated() needs to wait for a response from backend storage. It would be better to run mark_aggregated() in parallel with handle_agg_init_req(), which constitutes the bulk of the heavy VDAF/HPKE operations. Once we get the result from mark_aggregated(), we would remove the rejected reports from the AggregateResp before sending it to the Leader (see the sketch after this list).
  2. The current implementation of DapAggregator::put_out_shares() for DaphneWorker serializes requests to different instances of the AggregateStore DO. These could be issued in parallel instead. Similarly for get_agg_share() and mark_collected().
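
A sketch of the first item, using the futures crate to run the two calls concurrently. The signatures, return types, and the AggregateResp field names are assumptions:

  use futures::try_join;

  // Run the replay check and the VDAF/HPKE processing concurrently.
  let (replayed, mut agg_resp) = try_join!(
      self.mark_aggregated(task_id, &report_ids),
      self.handle_agg_init_req(task_config, agg_init_req),
  )?;

  // Drop transitions for replayed reports before replying to the Leader.
  agg_resp.transitions.retain(|t| !replayed.contains(&t.nonce));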

Task provisioning

  • Each deployment has a bearer-token authenticated endpoint for configuring tasks:
    • POST /add_task: Stores task ID, task config, and bearer token to accept for the task.
      • Validate task config
      • (Optional) Encrypt secrets
    • POST /delete_task?task_id=<task_id>: Deletes a task.
  • Any changes should be a proper subset of David's interop test plan.

Securely managing secrets

Currently all secrets are stored in environment variables and are therefore not secure. All secrets should be moved to KV. Doing so will require changes to Daphne and the Daphne-Worker backend itself.

Changes to Daphne:

  • Make HpkeDecrypter::decrypt() async so that it can be implemented as an RPC (a sketch follows this list).
  • De-couple the VDAF verify key from the task config and fetch it using an async callback.
  • Merge HpkeSecretKey functionality into HpkeReceiverConfig, which also contains the corresponding HpkeConfig.
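
A minimal sketch of the async decryption hook, using the async-trait crate. The exact signature and parameter types are assumptions:

  use async_trait::async_trait;

  // Async decryption so the HPKE secret key can live behind KV or an RPC.
  #[async_trait(?Send)]
  pub trait HpkeDecrypter {
      async fn hpke_decrypt(
          &self,
          task_id: &Id,
          info: &[u8],
          aad: &[u8],
          ciphertext: &HpkeCiphertext,
      ) -> Result<Vec<u8>, DapError>;
  }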

Changes to Daphne-Worker:

  • Define endpoint for rotating HPKE secrets.
  • Fetch bearer tokens from KV (per @armfazh's suggestion in #11).
  • Make bucket key and count into a single JSON blob.

Leader: Check for replays/collected before scheduling report aggregation

Currently our Leader checks if a report pertains to a collected batch when deciding whether to accept an upload request. The following sequence of events would lead to the leader attempting to process reports in an already collected batch, which is illegal:

  1. min_batch_size+1 reports are uploaded
  2. min_batch_size reports are aggregated
  3. Aggregate is collected
  4. 1 report is aggregated (for the same batch)

Daphne-Worker: Store `HpkeReceiverConfig` list in KV

Currently the Aggregator's HPKE receiver config list is stored as a secret:
https://github.com/cloudflare/daphne/blob/main/daphne_worker/src/config.rs#L69-L73

What we want to do instead is store the list in KV. The HPKE secret keys should be encrypted under a symmetric key, which in turn is stored as a secret. Requirements:

  • Encrypt each HpkeReceiverConfig individually. Ideally only the HPKE secret key would be encrypted. That way you can retrieve an HPKE config without decrypting.
  • Store each HpkeReceiverConfig separately. Use config ID for the look-up key.
  • When a Client requests an HPKE config, always return the first config in the list.
  • We will use the same set of HPKE configs for many tasks.
  • Use https://docs.rs/chacha20poly1305/0.2.1/chacha20poly1305/struct.XChaCha20Poly1305.html with a random nonce (see the sketch after this list).
  • HpkeDecrypter::hpke_decrypt() needs to be async. (And maybe other methods on this trait, too.)
  • For now, we only need to generate and store one HpkeReceiverConfig at a time. If the config does not exist in KV, it should be created as needed.
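
A minimal sketch of the encryption step, using a recent version of the chacha20poly1305 crate (newer than the 0.2.1 docs linked above, so the API differs) with its rand/getrandom features enabled. The function name and key-handling details are assumptions:

  use chacha20poly1305::{
      aead::{Aead, AeadCore, KeyInit, OsRng},
      Key, XChaCha20Poly1305,
  };

  // Seal a serialized HPKE secret key under a symmetric key that is itself
  // stored as a Workers secret. Error handling and KV I/O are elided.
  fn seal_hpke_secret(kek: &[u8; 32], plaintext: &[u8]) -> (Vec<u8>, Vec<u8>) {
      let cipher = XChaCha20Poly1305::new(Key::from_slice(kek));
      // 24-byte nonce; random generation is safe for XChaCha20-Poly1305.
      let nonce = XChaCha20Poly1305::generate_nonce(&mut OsRng);
      let ct = cipher.encrypt(&nonce, plaintext).expect("encryption failure");
      (nonce.to_vec(), ct)
  }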

Performance: Asynchronous garbage collection

Whenever a DO instance is created, the name of the instance is sent to GarbageCollector so that the instance can be scheduled for deletion.

Centralizing garbage collection in this fashion has two major downsides:

  1. DO->DO communication incurs noticeable latency for the request that initiated it
  2. There is a risk that the garbage collector won't keep up with the rate at which garbage is created.

@oliy suggests a different approach, which ought to scale better. Once cloudflare/workers-rs#189 lands, we can have new DO instances schedule an alarm to delete themselves when they're no longer needed:

  • HelperStateStore is needed only for the aggregation sub-protocol. This would be safe to expire after, say, an hour.
  • ReportStore is probably safe to expire after, say, a few days. We'll have to coordinate expiration with the Leader/Helper behavior, since we'll need to reject reports that correspond to expired ReportStores.
  • AggregateStore will require some thought. Certainly this should not expire before ReportStore. It may be safe to keep these around for months, given the relatively low volume. Perhaps it's acceptable to garbage collect these as usual?
  • LeaderAggregationJobQueue, LeaderCollectionJobQueue -- We should probably leave as-is until we've thought through #25.

Ideas for serialization rework

We currently use libprio's codec::Decode trait for decoding all messages in the DAP protocol. Currently all fields are owned by the struct, i.e., no fields reference data somewhere else. This means that, when decoding a message from the wire, it is necessary to copy bytes into the struct fields. This is fine for short messages, but some messages are quite long. In particular, we would like to be able to decode aggregate requests and responses and have the corresponding structs hold a reference to the underlying data.
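
For illustration, a borrowed ("zero-copy") message might look roughly like the following sketch, where the struct keeps references into the wire-format buffer rather than owning copies. The type and field names are assumptions:

  struct AggregateReqRef<'a> {
      task_id: &'a [u8],
      agg_param: &'a [u8],
      report_shares: Vec<ReportShareRef<'a>>,
  }

  struct ReportShareRef<'a> {
      metadata: &'a [u8],
      public_share: &'a [u8],
      encrypted_input_share: &'a [u8],
  }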

Clean up: Awkward tension between task config version and leader endpoint

We want the serialized DapTaskConfig to match, more or less, what is shared between the Aggregators. In particular, our endpoint URL should include the DAP draft in the path. However, the Leader also needs the version in the task config itself so that it can properly produce a DapRequest with a version.

One way to resolve this might be to make the Version field in the DapRequest optional.

Versioning the endpoints

We're anticipating the need to support multiple DAP versions simultaneously. In particular, #65 will introduce changes that are incompatible with DAP-01. Thus, we'll need to gate these changes for DAP-02.

As discussed on the mailing list [1], we could version our Leader and Helper URL endpoints so that the Client (or Collector) opts in to a particular version by picking a particular endpoint. For example, to upload a report for DAP-02, the Client POSTs to /02/upload instead of /upload.

Some additional requirements:

  • We would need to plumb the draft version through DapRequest. We could add a field whose value is the following enum (a routing sketch follows this list):

      enum DapVersion {
          Draft01,
          Draft02,
      }

  • We probably should lock each task to a particular version. (Not sure if this requires a spec change.) To do so we could add the DapVersion to the DapTaskConfig structure.
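
A minimal routing sketch, assuming the DapVersion enum above and a hypothetical helper that inspects the request path:

  // Select the DAP version from the first path segment of the request URL,
  // e.g. "/02/upload" -> Draft02. Unversioned paths map to None.
  fn version_from_path(path: &str) -> Option<DapVersion> {
      match path.trim_start_matches('/').split('/').next() {
          Some("01") => Some(DapVersion::Draft01),
          Some("02") => Some(DapVersion::Draft02),
          _ => None,
      }
  }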

@nakatsuka-y what do you think of this as a first step towards solving #65?

[1] https://mailarchive.ietf.org/arch/msg/ppm/svv73HxLvTMdOqk2fLyuWmZRU6Q/

Audit DOs for atomicity issues

@divergentdave pointed out in #112 at least one instance in which DO storage transaction atomicity was broken by an await. Now that we understand this better, we need to comb through the code to see if there are any other issues like this.

Note that #73 is relevant here as well.

Janus interop: Leader hangs on collect requests (flaky)

@branlwyd is working on Daphne <-> Janus interop tests. One test exercises a flaky bug in Daphne's Leader that causes collect requests to hang indefinitely:

To reproduce, checkout https://github.com/divviup/janus/tree/bran/revise-integration-test and run

cargo test -p monolithic_integration_test daphne_janus -- --nocapture

Note that you may need to update monolithic_integration_test/artifacts/daphne_compiled. This can be done by running worker-build in our daphne_worker_test directory and copying the contents of build/worker.

The bug is likely to happen after about 50 trials or so. You should see something like this:

thread 'daphne_janus' panicked at 'assertion failed: Instant::now() < collect_job_poll_timeout', monolithic_integration_test/tests/common/mod.rs:220:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Thu Aug 04 2022 12:26:53 GMT-0700 (Pacific Daylight Time) - [/internal/process], located at: (30.2713, -97.7426), within: Texas
DAP_ENV: Hostname override applied
agg job queue: []

Notes:

  • We noticed that in cases where the bug occurs, not all of the reports get aggregated. We consistently see 45, but expect 46 in the batch.
  • Something to investigate is whether calls to DaphneWorkerConfig::durable_object are deadlocking. In fact, a mutex here may not be appropriate.
  • The bug may be an artifact of miniflare and might not manifest in the live deployment.

Scaling up the Leader

Aggregation and collect jobs are driven by a call to DapLeader::process(). This method is quite simple: It starts a single aggregation job, waits for it to finish, then completes any collect job waiting in the queue that is ready to complete. In general, it's not safe to run multiple instances of process() simultaneously for a given task. (It can be made safe by choosing the value of the ReportSelector carefully.)

The process() method is aimed primarily at driving tests. If used in a large deployment, it's likely to lead to a bottleneck. This can be avoided by running multiple aggregation jobs simultaneously. However, DapLeader will have to be refactored to support this.

taskprov: Compute task ID from task config

As described in Section 3.1 of the taskprov draft. To start, it will be necessary to create a new module in the daphne crate for the new extension. The new module will initially contain:

  1. Definition of the VdafConfig struct defined in the extension draft, including encoding/decoding code (see the messages module)
  2. Code for deriving the task ID from the (encoded) VdafConfig. Let's use ring for HKDF, since we already use ring for HMAC. A sketch follows.
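
A minimal sketch of the derivation step using ring's HKDF API. The salt and info strings are placeholders, not the values the taskprov draft specifies:

  use ring::hkdf;

  // Output-length helper required by ring's HKDF API.
  struct Len(usize);
  impl hkdf::KeyType for Len {
      fn len(&self) -> usize {
          self.0
      }
  }

  // Derive a 32-byte task ID from the encoded task config.
  fn compute_task_id(encoded_task_config: &[u8]) -> [u8; 32] {
      let salt = hkdf::Salt::new(hkdf::HKDF_SHA256, b"dap-taskprov");
      let prk = salt.extract(encoded_task_config);
      let okm = prk
          .expand(&[b"task id".as_slice()], Len(32))
          .expect("invalid length");
      let mut task_id = [0; 32];
      okm.fill(&mut task_id).expect("length mismatch");
      task_id
  }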

Leader: Consider flushing all reports before responding to CollectReq

This came up when testing against Janus. Their tests expect that the Leader waits to respond to a collect request until all reports pertaining to that request are processed. While not required by the spec, this might be a useful thing to consider implementing. One thing to consider is that reports pertaining to the batch may be uploaded at the same time, so we'll need to figure out a cut-off mechanism.

Set up continuous integration

We currently have a test script daphne_workertests/backend/tests.sh that runs unit tests and integration tests (via docker-compose). Figure out how to have GitHub run these for each PR.

Running tests via docker-compose takes a long time

The culprit is cargo install worker-build, which is triggered by the RUN wrangler build line in miniflare.Dockerfile. We should build/install worker-build before copying the source files so that this layer isn't re-run each time the files change.
