
⚠ 📦 ARCHIVED ⚠ 📦

Cachepot has been archived/superseded in favor of uniting the efforts under upstream sccache.

If you're still using cachepot, at this point, all of the relevant changes should be backported to sccache or are in progress. Feel free to submit a porting request at sccache referencing a PR in cachepot.

Thank you for using cachepot!

The former cachepot maintainers 🌻️


(cachepot mascot image)

cachepot - Shared Compilation Cache

cachepot is a ccache-like compiler caching tool. It is used as a compiler wrapper and avoids compilation when possible, storing cached results either on local disk or in one of several cloud storage backends.

It is also a fork of sccache with improved security properties and improvements throughout the code base. We contribute as much as we can back upstream, but the goals might not be a 100% match.

cachepot includes support for caching the compilation of C/C++ code, Rust, and NVIDIA's CUDA using nvcc.

cachepot also provides icecream-style distributed compilation (automatic packaging of local toolchains) for all supported compilers (including Rust). The distributed compilation system includes several security features that icecream lacks, such as authentication, transport layer encryption, and sandboxed compiler execution on build servers. See the distributed quickstart guide for more information.



Installation

There are prebuilt x86-64 binaries available for Windows, Linux (a portable binary compiled against musl), and macOS on the releases page.

If you have a Rust toolchain installed you can install cachepot using cargo. Note that this will compile cachepot from source, which is fairly resource-intensive. For CI purposes you should use prebuilt binary packages.

cargo install --git https://github.com/paritytech/cachepot

Usage

Running cachepot is like running ccache: prefix your compilation commands with it, like so:

cachepot gcc -o foo.o -c foo.c

If you want to use cachepot for caching Rust builds you can define build.rustc-wrapper in the cargo configuration file. For example, you can set it globally in $HOME/.cargo/config by adding:

[build]
rustc-wrapper = "/path/to/cachepot"

Note that you need to use cargo 1.40 or newer for this to work.

Alternatively you can use the environment variable RUSTC_WRAPPER:

RUSTC_WRAPPER=/path/to/cachepot cargo build

cachepot supports gcc, clang, MSVC, rustc, NVCC, and Wind River's diab compiler.

If you don't specify otherwise, cachepot will use a local disk cache.

cachepot works using a client-server model, where the server (which we refer to as the "coordinator") runs locally on the same machine as the client. This model lets the coordinator be more efficient by keeping some state in memory. The cachepot command will spawn a coordinator process if one is not already running, or you can run cachepot --start-coordinator to start the background process without performing any compilation.

You can run cachepot --stop-coordinator to terminate the coordinator. It will also terminate after (by default) 10 minutes of inactivity.

Running cachepot --show-stats will print a summary of cache statistics.
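For example, a typical local session using the flags described above might look like this:

cachepot --start-coordinator    # start the background coordinator explicitly
cachepot gcc -o foo.o -c foo.c  # compile through the cache
cachepot --show-stats           # print a summary of cache statistics
cachepot --stop-coordinator     # terminate the coordinator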

Some notes about using cachepot with Jenkins exist.

To use cachepot with cmake, provide the following command line arguments to cmake >= 3.4:

-DCMAKE_C_COMPILER_LAUNCHER=cachepot
-DCMAKE_CXX_COMPILER_LAUNCHER=cachepot
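For example, a full configure invocation might look like this (the source path is a placeholder):

cmake -DCMAKE_C_COMPILER_LAUNCHER=cachepot \
      -DCMAKE_CXX_COMPILER_LAUNCHER=cachepot \
      /path/to/source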

Build Requirements

cachepot is a Rust program. Building it requires cargo (and thus rustc). cachepot currently requires Rust 1.56.1. We recommend you install Rust via Rustup.

Build

If you are building cachepot for non-development purposes make sure you use cargo build --release to get optimized binaries:

cargo build --release [--no-default-features --features=s3|redis|gcs|memcached|azure]

By default, cachepot builds with support for all storage backends; individual backends can be dropped by disabling the default features and re-enabling only the backends you want. Refer to the Cargo documentation for details on how to select features with Cargo.
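For example, to build a binary that supports only the S3 and Redis backends (assuming the feature names listed above combine in the usual Cargo way):

cargo build --release --no-default-features --features="s3,redis"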

Linux

No native dependencies.

Build with cargo and use ldd to check that the resulting binary does not depend on OpenSSL anymore.

Linux and Podman

You can also build the repo with the Parity CI Docker image:

podman run --rm -it -w /shellhere/cachepot \
                    -v "$(pwd)":/shellhere/cachepot:Z \
                    -u $(id -u):$(id -g) \
                    --userns=keep-id \
                    docker.io/paritytech/cachepot-ci:staging cargo build --locked --release
# artifacts can be found in ./target/release

If you want to reproduce other steps of the CI process, you can use the following guide.

macOS

No native dependencies.

Build with cargo and use otool -L to check that the resulting binary does not depend on OpenSSL anymore.

Windows

On Windows, the binary might also depend on a few MSVC CRT DLLs that are not available on older Windows versions.

It is possible to statically link against the CRT using a .cargo/config file with the following contents:

[target.x86_64-pc-windows-msvc]
rustflags = ["-Ctarget-feature=+crt-static"]

Build with cargo and use dumpbin /dependents to check that the resulting binary does not depend on MSVC CRT DLLs anymore.


Storage Options

Local

cachepot defaults to using local disk storage. You can set the CACHEPOT_DIR environment variable to change the disk cache location. By default it will use a sensible location for the current platform: ~/.cache/cachepot on Linux, %LOCALAPPDATA%\Parity\cachepot on Windows, and ~/Library/Caches/Parity.cachepot on macOS.

The default cache size is 10 gigabytes. To change this, set CACHEPOT_CACHE_SIZE, for example CACHEPOT_CACHE_SIZE="1G".
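For example (values are illustrative):

export CACHEPOT_DIR="$HOME/.cache/cachepot"
export CACHEPOT_CACHE_SIZE="20G"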

S3

If you want to use S3 storage for the cachepot cache, you need to set the CACHEPOT_BUCKET environment variable to the name of the S3 bucket to use.

You can use AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to set the S3 credentials. Alternatively, you can set AWS_IAM_CREDENTIALS_URL to a URL that returns credentials in the format supported by the EC2 metadata service, and credentials will be fetched from that location as needed. In the absence of either of these options, credentials for the instance's IAM role will be fetched from the EC2 metadata service directly.

If you need to override the default endpoint you can set CACHEPOT_ENDPOINT. For example, to connect to MinIO storage you can set CACHEPOT_ENDPOINT=<ip>:<port>. If your endpoint requires TLS, set CACHEPOT_S3_USE_SSL=true.

You can also define a prefix that will be prepended to the keys of all cache objects created and read within the S3 bucket, effectively creating a scope. To do that use the CACHEPOT_S3_KEY_PREFIX environment variable. This can be useful when sharing a bucket with another application.
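A complete S3 setup might look like this (bucket name, prefix, and endpoint address are illustrative):

export CACHEPOT_BUCKET=my-cache-bucket        # S3 bucket to use
export CACHEPOT_S3_KEY_PREFIX=project-a       # optional: scope cache keys within the bucket
export CACHEPOT_ENDPOINT=127.0.0.1:9000       # optional: custom endpoint, e.g. MinIO
export CACHEPOT_S3_USE_SSL=true               # if the endpoint requires TLS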

Redis

Set CACHEPOT_REDIS to a Redis URL in the format redis://[:<passwd>@]<hostname>[:port][/<db>] to store the cache in a Redis instance. Redis can be configured as an LRU (least recently used) cache with a fixed maximum cache size. Set maxmemory and maxmemory-policy according to the Redis documentation. The allkeys-lru policy, which discards the least recently accessed or modified keys, fits the cachepot use case well.

Redis over TLS is supported. Use the rediss:// URL scheme (note rediss vs redis). Append #insecure to the URL to disable hostname verification and accept self-signed certificates (dangerous!). Note that this also disables SNI.
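For example (hostname and password are illustrative):

export CACHEPOT_REDIS="redis://:s3cret@redis.example.com:6379/0"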

Memcached

Set CACHEPOT_MEMCACHED to a Memcached URL in the format tcp://<hostname>:<port> ... to store the cache in a Memcached instance.
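For example, pointing at a local Memcached instance on its default port:

export CACHEPOT_MEMCACHED="tcp://localhost:11211"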

Google Cloud Storage

To use Google Cloud Storage, you need to set the CACHEPOT_GCS_BUCKET environment variable to the name of the GCS bucket. If you're using authentication, either set CACHEPOT_GCS_KEY_PATH to the location of your JSON service account credentials, or set CACHEPOT_GCS_CREDENTIALS_URL to a URL that returns the OAuth token. By default, cachepot on GCS will be read-only. To change this, set CACHEPOT_GCS_RW_MODE to either READ_ONLY or READ_WRITE.
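For example (bucket name and key path are illustrative):

export CACHEPOT_GCS_BUCKET=my-cache-bucket
export CACHEPOT_GCS_KEY_PATH=/path/to/service-account.json
export CACHEPOT_GCS_RW_MODE=READ_WRITE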

Azure

To use Azure Blob Storage, you'll need your Azure connection string and an existing Blob Storage container name. Set the CACHEPOT_AZURE_CONNECTION_STRING environment variable to your connection string, and CACHEPOT_AZURE_BLOB_CONTAINER to the name of the container to use. Note that cachepot will not create the container for you - you'll need to do that yourself.
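For example (both values are placeholders; the container must already exist):

export CACHEPOT_AZURE_CONNECTION_STRING="<your connection string>"
export CACHEPOT_AZURE_BLOB_CONTAINER=my-container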

Important: The environment variables are only taken into account when the server starts, i.e. only on the first run.


Overwriting the cache

In situations where the cache contains broken build artifacts, it can be necessary to overwrite the contents in the cache. That can be achieved by setting the CACHEPOT_RECACHE environment variable.
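For example, assuming any non-empty value enables the behaviour:

CACHEPOT_RECACHE=1 cargo build    # recompile and overwrite existing cache entries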


Debugging

You can set the CACHEPOT_ERROR_LOG environment variable to a path and set CACHEPOT_LOG to get the server process to redirect its logging there (including the output of unhandled panics, since the server sets RUST_BACKTRACE=1 internally).

CACHEPOT_ERROR_LOG=/tmp/cachepot_log.txt CACHEPOT_LOG=debug cachepot

You can also set these environment variables for your build system, for example:

CACHEPOT_ERROR_LOG=/tmp/cachepot_log.txt CACHEPOT_LOG=debug cmake --build /path/to/cmake/build/directory

Alternatively, if you are compiling locally, you can run the server manually in foreground mode by running CACHEPOT_START_SERVER=1 CACHEPOT_NO_DAEMON=1 cachepot, and send logging to stderr by setting the CACHEPOT_LOG environment variable, for example:

CACHEPOT_LOG=debug CACHEPOT_START_SERVER=1 CACHEPOT_NO_DAEMON=1 cachepot

Interaction with GNU make jobserver

cachepot provides support for a GNU make jobserver. When the server is started from a process that provides a jobserver, cachepot will use that jobserver and provide it to any processes it spawns. (If you are running cachepot from a GNU make recipe, you will need to prefix the command with + to get this behavior.) If the cachepot server is started without a jobserver present it will create its own with the number of slots equal to the number of available CPU cores.
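For example, in a GNU make recipe (a minimal sketch; note the + prefix on the recipe line):

foo.o: foo.c
	+cachepot gcc -c -o foo.o foo.c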

This is most useful when using cachepot for Rust compilation, as rustc supports using a jobserver for parallel codegen, so this ensures that rustc will not overwhelm the system with codegen tasks. Cargo implements its own jobserver (see the information on NUM_JOBS in the cargo documentation) for rustc to use, so using cachepot for Rust compilation in cargo via RUSTC_WRAPPER should do the right thing automatically.


Known Caveats

General

  • Absolute paths to files must match to get a cache hit. This means that even if you are using a shared cache, everyone will have to build at the same absolute path (i.e. not in $HOME) in order to benefit each other. In Rust this includes the source for third party crates which are stored in $HOME/.cargo/registry/cache by default.

Rust

  • Crates that invoke the system linker cannot be cached. This includes bin, dylib, cdylib, and proc-macro crates. You may be able to improve compilation time of large bin crates by converting them to a lib crate with a thin bin wrapper.
  • Incrementally compiled crates cannot be cached. By default, in the debug profile Cargo will use incremental compilation for workspace members and path dependencies. You can disable incremental compilation, as shown below.
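For example, incremental compilation can be switched off via Cargo's standard environment variable:

CARGO_INCREMENTAL=0 cargo build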

More details on Rust caveats

cachepot's People

Contributors

aidanhs, alexcrichton, barrbrain, chmanchester, cramertj, dholbert, drahnr, dushistov, dylan-dpc, emilio, f3real, flxo, froydnj, fxb, georgehahn, gittup, glandium, kornelski, luser, marwes, mathstuf, montekki, nnethercote, orium, ry, sergejparity, sigiesec, tesuji, tripleight, xanewok


cachepot's Issues

excessive RAM usage

Compiling a large set of C dependencies essentially bogged down the 16-core machine with 48 GiB of memory that runs my personal cachepot-server worker. My metrics were down for the last few days, so I am not sure what the limiting factor was or whether it was related; TBD.

cargo install --git https://github.com/jpochyla/psst.git psst-gui

which has a lot of C dependencies; other projects trigger this as well.

Define test projects for unit tests

  • cargo
  • conan
  • cmake
  • ninja
  • waf

covering the following compilers

  • rust +nightly
  • rust +stable
  • clang
  • gcc
  • nvcc

which have to link

  • static
  • dynamic

producing a target that is itself

  • bin dynamic
  • bin static
  • lib dynamic
  • lib static

**Note that these are multiple dimensions, so the number of combinations is roughly n^k.**

Fails to handle environment variable changes from build.rs

Issue

When environment variables are defined in a build script within a workspace, they are not correctly handled by the cache. This is visible in non-incremental builds (release mode with the default config).

Setup:

  • Have a build script defining an environment variable from an outside input (a file, with rerun-if-changed on it) and use it in a crate (see the sketch after this list)
  • Use that crate in a binary crate
  • Build and run in release
  • Edit the outside input
  • Build and run in release (the build script is triggered due to the rerun)
  • => The execution should use the new value but does not: a cache issue, the environment variable change is not seen
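A build script matching this setup might look like the following sketch (file and variable names are hypothetical):

// build.rs - hypothetical reproducer
use std::fs;

fn main() {
    // Re-run this script whenever the outside input changes.
    println!("cargo:rerun-if-changed=parameter.txt");
    let value = fs::read_to_string("parameter.txt").expect("parameter.txt must exist");
    // Expose the value to the crate at compile time; read it there via env!("PARAM_VALUE").
    println!("cargo:rustc-env=PARAM_VALUE={}", value.trim());
}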

Minimal reproducer

# extract archive, cd into extracted folder
> RUSTC_WRAPPER=cachepot cargo r --release
# should output: "Value is: something else"

# Edit the parameter read by the build script
> echo "anything else" > parameter.txt

> RUSTC_WRAPPER=cachepot cargo r --release
# should output: "Value is: anything else" but output the same as last time

cachepot-env-vars.tar.gz

dist: Refactor user namespace overlayfs build mode to not use `fork()`

Introduced in #128.

The original reason for that is purely technical - it is required to call unshare(CLONE_NEWUSER) in the main thread of a program. Since cachepot-dist is multi-threaded and the build is executed on a new thread, we used a hack in which we fork() and call unshare in the forked child (fork() turns the calling thread into the main thread of the forked child process).

Using fork carries a lot of subtle details we need to be aware of; the current implementation uses some async-signal-unsafe syscalls, which IIUC can hang our process in a signal handler - not ideal!

An initial idea is to provide yet another entrypoint (think cachepot-dist sandbox) to the binary that handles the sandbox setup and runs the actual build; we will have full control and can call everything on the main thread of the program.

Revamp wording

Currently the meaning of the term server is overloaded. After all, the local cachepot launches a command server, and in dist mode we also have a scheduler and a server.

I propose to rename as follows:

scheduler -> scheduler
(client) server -> connector
(dist) scheduler -> scheduler
(dist) server -> worker

This would simplify onboarding, detach the functional semantics from the process naming, and make it easier to write coherent documentation.

Don't require absolute paths for cache hits when compiling Rust

The broader issue is tracked upstream at mozilla/sccache#35.

We're mostly interested in supporting only Rust for now and it seems that groundwork has been laid already via --remap-path-prefix. This issue should probably be addressed before we can deploy cachepot as the cache hit rate will be user-specific and thus fairly low initially.

integration tests fail to set up overlay within container launch

During the impl of #9 I ran into issues trying to create a overlay mount from withing another container, which is part of the unit test harness.

This https://github.com/paritytech/sccache/blob/bernhard-podman/src/bin/sccache-dist/build.rs#L273-L307 piece of code errors out with the following error:

(shortened uuids to d9629, added newlines for readability)

 WARN 2020-11-17T09:31:07Z: sccache::dist::http::server: Res 2 error: run build failed,
caused by: Compilation execution failed,
caused by: Failed to mount overlay FS: 
overlayfs "/sccache-bits/build-dir/toolchains/d9629",
upperdir="/sccache-bits/build-dir/builds/d9629-1/upper",
workdir="/sccache-bits/build-dir/builds/d9629-1/work" -> "/sccache-bits/build-dir/builds/d9629-1/target":
Operation not permitted (os error 1) (
"/sccache-bits/build-dir/toolchains/d9629": exists,
upperdir: exists,
workdir: exists,
same-fs,
target: exists,
mapped-root)

To reproduce:

cargo t --features dist-tests test_dist_basic -- --nocapture

in branch bernhard-podman.

Context

The outer container is a rootless podman container.

podman has configurable storage backends (overlay, vfs, btrfs); the first and last were attempted without any effect.
Adding --privileged or --cap-add CAP_SYS_ADMIN was also attempted for either backend, without effect.
Relevant code: https://github.com/paritytech/sccache/blob/bernhard-podman/tests/harness/mod.rs#L354-L387

Enforce running dist test suite in CI when unprivileged

After #128 is merged, we can run the dist test suite in a new Linux user namespace, effectively gaining the capabilities to run bubblewrap while still being isolated from the parent namespace.

It'd be good to test that against both our GHA and GitLab test suite.

Couldn't set up a build environment for bubblewrap: Failed writing to gid_map
Couldn't set up a build environment for bubblewrap: Failed to mount overlay FS: (...) Operation not permitted (os error 1)

This, however, will probably be mitigated once we either migrate to fuse-overlayfs or upgrade to the 5.15.x kernel series (link)

Assure dwarf debug info stays correct in dist builds

With the recently landed PR #116 we have basic support.

One of the remaining questions is the effect on debug info paths.

If they are intact, that's awesome! If not, parsing the returned object file, extracting the paths, and manually remapping them using e.g. gimli might be an option.

Merge all of our improvements upstream

Since we forked, some improvements have been made here, but we also missed out on some of the PRs made against sccache (the project has seen more activity since the fork). Merging will be beneficial by bringing in new features/bug fixes, and it will also facilitate upstreaming some of our improvements.

Can't build stripped down S3 flavor of cachepot

I want a no-frills cachepot for my S3-powered CI environment but it won't build. My environment:

% rustc --version
rustc 1.61.0-nightly (52b34550a 2022-03-15)
% cargo --version
cargo 1.61.0-nightly (65c826642 2022-03-09)

The error:

% cargo install cachepot --version 0.1.0-rc.1 --no-default-features --features=s3
[[[ SNIP ]]]
error[E0432]: unresolved import `crate::config::WorkerUrl`
  --> /Users/xlange/.cargo/registry/src/github.com-1ecc6299db9ec823/cachepot-0.1.0-rc.1/src/dist/mod.rs:16:5
   |
16 | use crate::config::WorkerUrl;
   |     ^^^^^^^^^^^^^^^^^^^^^^^^ no `WorkerUrl` in `config`

error[E0412]: cannot find type `WorkerUrl` in module `crate::config`
   --> /Users/xlange/.cargo/registry/src/github.com-1ecc6299db9ec823/cachepot-0.1.0-rc.1/src/compiler/compiler.rs:725:23
    |
725 |     Ok(crate::config::WorkerUrl),
    |                       ^^^^^^^^^ not found in `crate::config`

Some errors have detailed explanations: E0412, E0432.
For more information about an error, try `rustc --explain E0412`.
error: could not compile `cachepot` due to 2 previous errors
warning: build failed, waiting for other jobs to finish...
error: failed to compile `cachepot v0.1.0-rc.1`, intermediate artifacts can be found at `/var/folders/mc/3pj1vp8s239fn610rbv56f440000gp/T/cargo-installTYYXXo`

Caused by:
  build failed

dist: Improve checking prerequisites for overlayfs builders

#128 introduces rootless support (which also removes the bail-if-not-root check) at the cost of having to support unprivileged user namespaces (which can be disabled at run-time or compiled out of the kernel). Rather than letting the build die directly, we should check for the relevant prerequisites first.

Unit tests failing due to different clang/env vars

Some unit tests depend on the particular version of clang.

E.g. 2db8e28 is necessary since clang 7.x in an old Ubuntu container exposes different behaviour than the clang 11.x shipped with Fedora 33. This might also be partly due to the default env flags.

TODO:

  • investigate the true reason for the different behaviour
    • compare env vars
    • test behaviour with identical env vars on different clang versions

Custom build command for `ring` fails

   Compiling jsonrpc-derive v18.0.0
error: failed to run custom build command for `ring v0.16.20`

Caused by:
  process didn't exit successfully: `/tmp/cargo-installeBJJXk/release/build/ring-f607334dffc6617b/build-script-build` (exit status: 101)
  --- stdout
  OPT_LEVEL = Some("3")
  TARGET = Some("x86_64-unknown-linux-gnu")
  HOST = Some("x86_64-unknown-linux-gnu")
  CC_x86_64-unknown-linux-gnu = None
  CC_x86_64_unknown_linux_gnu = None
  HOST_CC = None
  CC = Some("cachepot clang")
  CFLAGS_x86_64-unknown-linux-gnu = None
  CFLAGS_x86_64_unknown_linux_gnu = None
  HOST_CFLAGS = None
  CFLAGS = None
  CRATE_CC_NO_DEFAULTS = None
  DEBUG = Some("false")
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")

  --- stderr
  running "/home/bernhard/.cargo/bin/cachepot" "cachepot" "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "include" "-Wall" "-Wextra" "-pedantic" "-pedantic-errors" "-Wall" "-Wextra" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wenum-compare" "-Wfloat-equal" "-Wformat=2" "-Winline" "-Winvalid-pch" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wundef" "-Wuninitialized" "-Wwrite-strings" "-fno-strict-aliasing" "-fvisibility=hidden" "-fstack-protector" "-g3" "-DNDEBUG" "-c" "-o/tmp/cargo-installeBJJXk/release/build/ring-8c06859d85267280/out/aesni-x86_64-elf.o" "/home/bernhard/.cargo/registry/src/github.com-1ecc6299db9ec823/ring-0.16.20/pregenerated/aesni-x86_64-elf.S"
  cachepot: error: failed to execute compile
  cachepot: caused by: Compiler not supported: "error: Found argument \'-E\' which wasn\'t expected, or isn\'t valid in this context\n\nUSAGE:\n    cachepot [FLAGS] [OPTIONS] [cmd]...\n\nFor more information try --help\n"
  thread 'main' panicked at 'execution failed', /home/bernhard/.cargo/registry/src/github.com-1ecc6299db9ec823/ring-0.16.20/build.rs:656:9
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
The following warnings were emitted during compilation:

warning: cachepot: error: failed to execute compile
warning: cachepot: caused by: Compiler not supported: "error: Found argument \'-E\' which wasn\'t expected, or isn\'t valid in this context\n\nUSAGE:\n    cachepot [FLAGS] [OPTIONS] [cmd]...\n\nFor more information try --help\n"
warning: cachepot: error: failed to execute compile
warning: cachepot: caused by: Compiler not supported: "error: Found argument \'-E\' which wasn\'t expected, or isn\'t valid in this context\n\nUSAGE:\n    cachepot [FLAGS] [OPTIONS] [cmd]...\n\nFor more information try --help\n"

error: failed to compile `subkey v2.0.1 (ssh://[email protected]/paritytech/substrate.git#4f8e0cf6)`, intermediate artifacts can be found at `/tmp/cargo-installeBJJXk`

Caused by:
  build failed

`master` contains a single failing test

rev a70346110554f9f37601b9dcc1e19a3e71ede889 fails consistently for me with one test case:

RUST_BACKTRACE=1 cargo t --test sccache_cargo
    Finished test [unoptimized + debuginfo] target(s) in 1.01s
     Running target/debug/deps/sccache_cargo-8d71709abea60091

running 1 test
test test_rust_cargo ... FAILED

failures:

---- test_rust_cargo stdout ----
thread 'test_rust_cargo' panicked at 'Unexpected failure.
code-2
stderr=\`\`\`sccache: error: Server startup failed: Address in use
\`\`\`
command=`"/media/supersonic1t/projects/sccache/target/debug/sccache" "--start-server"`
code=2
stdout=\`\`\`sccache: Starting the server...
\`\`\`
stderr=\`\`\`sccache: error: Server startup failed: Address in use
\`\`\`
', /home/bernhard/.cargo/registry/src/github.com-1ecc6299db9ec823/assert_cmd-1.0.2/src/assert.rs:158:13
stack backtrace:
   0: rust_begin_unwind
             at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:493:5
   1: std::panicking::begin_panic_fmt
             at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:435:5
   2: assert_cmd::assert::Assert::success
             at /home/bernhard/.cargo/registry/src/github.com-1ecc6299db9ec823/assert_cmd-1.0.2/src/assert.rs:158:13
   3: sccache_cargo::test_rust_cargo_cmd
             at ./tests/sccache_cargo.rs:79:5
   4: sccache_cargo::test_rust_cargo
             at ./tests/sccache_cargo.rs:17:5
   5: sccache_cargo::test_rust_cargo::{{closure}}
             at ./tests/sccache_cargo.rs:16:1
   6: core::ops::function::FnOnce::call_once
             at /home/bernhard/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
   7: core::ops::function::FnOnce::call_once
             at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.


failures:
    test_rust_cargo

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 4.53s

error: test failed, to rerun pass '--test sccache_cargo'

move away from `rouille`

Currently rouille is used, which is heavily thread-based. There is an upstream effort to move to actix-web, but I am not sure this is what we want. mozilla/sccache#742 (comment)

Evaluate alternative crates to replace rouille:

  • actix-web
  • hyper
  • tide
  • ...

auth0 support

Currently we have some Mozilla-specific (https://github.com/mozilla) auth bits in our code base.

It'd be preferable not to break usability for Mozilla folks in case we ever wanted to re-unite the efforts, but we also should be wary of dragging along untested code.

Proposal:

Expand the Mozilla-specific auth of the client into something more generalized, akin to a generalized auth0 backend.

KVM-based sandboxes

For improved hardening, a stronger sandboxing mechanism might be mandated for PR-based usage.

  • firecracker
  • quark
  • katacontainers

Check that Substrate/Polkadot fully compiles using distributed compilation

The whole point of this PoC/fork is to speed up the Substrate/Polkadot builds, so we need to ensure that those projects build using sccache first.

Substrate compilation is blocked on #51 and on the following patch to the libp2p-wasm-ext crate:

diff --git a/transports/wasm-ext/src/lib.rs b/transports/wasm-ext/src/lib.rs
index 8c0e5012..f5be31f4 100644
--- a/transports/wasm-ext/src/lib.rs
+++ b/transports/wasm-ext/src/lib.rs
@@ -136,6 +136,11 @@ pub mod ffi {
     }
 }
 
+// Include the file in the dep-info, so that build accelerators such as sccache
+// can correctly prepare the required input files in order to compile the crate.
+#[cfg(feature = "websocket")]
+const _: &'static str = include_str!(concat!(env!("CARGO_MANIFEST_DIR"), "/src/websockets.js"));
+
 /// Implementation of `Transport` whose implementation is handled by some FFI.
 pub struct ExtTransport {
     inner: SendWrapper<ffi::Transport>,

In general, we need a mechanism to track required files in the dep-info directly (rust-lang/rust#84029) but the diff above works around that by using an include_str hack.

Future: don't transmit files that are already on the server (or at least reduce how many)

Similar to:

but in our case it could be even more efficient: since the main objective is to speed up Substrate and Polkadot builds, we can generate a Bloom filter that clients download from the instance running the cachepot server; before sending something, the client can check the Bloom filter to see whether the file may already be there.

Example flow:

  • the client has a local copy of the Bloom filter, fetched earlier
  • the client checks with the Bloom filter whether the file "may be on the remote server"
    • if the Bloom filter says "the file is definitely not on the remote server", we know what to do -> send it
    • otherwise the file is on the remote server with high probability (depending on how we configure the Bloom filter), so instead we send its blake3 hash. If the server turns out not to know the file, it will request it back and we will have to provide it

Alternatively, the whole procedure can be simplified/sped up by downloading a database of all hashes of all files on the server, while keeping an in-memory Bloom filter for fast, low-memory-footprint checks of "the file may be there" vs "the file is definitely not there" before consulting the bigger database of all hashes.

There also has to be a heuristic, as for small files it may be more beneficial to just keep sending them, for two reasons:

  • processing time -> sending a small file straight away may turn out to be a negligible cost
  • size of the database of available hashes -> we can lower its footprint by only tracking bigger files (in both the Bloom filter and the db/file with hashes)

Integration auth layer

Currently there is very little standing in the way of cache poisoning. This requires some invasive re-architecting:

  • user-based authentication
  • only allow toolchain uploads explicitly by certain users
  • duplicate the artifact upload logic from the sccache client to the dist server, to retain a trusted artifact pool

use serial_test

For the dist-tests, the CI currently asserts that there is one thread doing the testing. We should instead use something along the lines of serial_test, which ensures this by design rather than by convention.

Tags / Releases page don't contain binaries

Hello,

It is mentioned in the README file to use prebuilt binaries for CI. However, the artifacts created with GitHub Actions for a specific commit / tag / release are not attached to the release on the releases page.
Therefore, it's currently only possible to use cachepot by installing it via cargo install --git.

Are prebuilt binaries going to be provided soon?

Future: parallel local and remote builds, plus possibly heuristics

For very small jobs it may be faster to build locally than to upload, build remotely, and download the result.

The idea is to start the local and remote build workflows in parallel; whichever finishes faster "wins" and stops the other.

There could also be heuristics to determine which builds should not even be attempted this way, or which should not be attempted remotely/locally at all.

Running cc-rs when setting CXX=cachepot fails

RUSTC_WRAPPER=cachepot
CXX=cachepot clang++
CC=cachepot clang

fails to build with:

   Compiling unic-langid v0.9.0
The following warnings were emitted during compilation:

warning: cachepot: error: failed to execute compile
warning: cachepot: caused by: Compiler not supported: "error: Found argument \'-E\' which wasn\'t expected, or isn\'t valid in this context\n\nUSAGE:\n    cachepot [FLAGS] [OPTIONS] [cmd]...\n\nFor more information try --help\n"

error: failed to run custom build command for `minivorbis-sys v0.1.0 (/home/bernhard/.cargo/git/checkouts/psst-d28722582de7a544/105f6cc/minivorbis-sys)`

Caused by:
  process didn't exit successfully: `/tmp/cargo-installlSBzES/release/build/minivorbis-sys-685472a3e5a77a2f/build-script-build` (exit status: 1)
  --- stdout
  TARGET = Some("x86_64-unknown-linux-gnu")
  OPT_LEVEL = Some("3")
  HOST = Some("x86_64-unknown-linux-gnu")
  CC_x86_64-unknown-linux-gnu = None
  CC_x86_64_unknown_linux_gnu = None
  HOST_CC = None
  CC = Some("cachepot clang")
  CFLAGS_x86_64-unknown-linux-gnu = None
  CFLAGS_x86_64_unknown_linux_gnu = None
  HOST_CFLAGS = None
  CFLAGS = None
  CRATE_CC_NO_DEFAULTS = None
  DEBUG = Some("false")
  CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
  running: "/home/bernhard/.cargo/bin/cachepot" "cachepot" "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "./minivorbis" "-Wall" "-Wextra" "-o" "/tmp/cargo-installlSBzES/release/build/minivorbis-sys-af54e9bf1ae4ae9b/out/./minivorbis.o" "-c" "./minivorbis.c"
  cargo:warning=cachepot: error: failed to execute compile
  cargo:warning=cachepot: caused by: Compiler not supported: "error: Found argument \'-E\' which wasn\'t expected, or isn\'t valid in this context\n\nUSAGE:\n    cachepot [FLAGS] [OPTIONS] [cmd]...\n\nFor more information try --help\n"
  exit status: 2

  --- stderr


  error occurred: Command "/home/bernhard/.cargo/bin/cachepot" "cachepot" "clang" "-O3" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64" "-I" "./minivorbis" "-Wall" "-Wextra" "-o" "/tmp/cargo-installlSBzES/release/build/minivorbis-sys-af54e9bf1ae4ae9b/out/./minivorbis.o" "-c" "./minivorbis.c" with args "cachepot" did not execute successfully (status code exit status: 2).


warning: build failed, waiting for other jobs to finish...
error: failed to compile `psst-gui v0.1.0 (https://github.com/jpochyla/psst.git#105f6cc2)`, intermediate artifacts can be found at `/tmp/cargo-installlSBzES`

Caused by:
  build failed

Slowdown when local fs lru cache fills up

Once the server-side file cache hits the limit, compilation slows down significantly.

Compile requests                     503
Compile requests executed            383
Cache hits                           359
Cache hits (C/C++)                    15
Cache hits (Rust)                    344
Cache misses                          14
Cache misses (Rust)                   14
Cache timeouts                         1
Cache read errors                      0
Forced recaches                        0
Cache write errors                     0
Compilation failures                   2
Cache errors                           0
Non-cacheable compilations             0
Non-cacheable calls                  118
Non-compilation calls                  2
Unsupported compiler calls             0
Average cache write                6.028 s <<<<<<<<
Average cache read miss           29.963 s <<<<<<<<
Average cache read hit             3.808 s <<<<<<<<
Failed distributed compilations        4

Successful distributed compiles
  XXXXXXXXXXXXXX                10

Non-cacheable reasons:
crate-type                            98
unknown source language               15
-                                      4
-E                                     1

Cache location                  Redis: XXXXXXX
Cache size                            16 GiB
Max cache size                        16 GiB

MSRV policy

As of writing, we decided to enforce the same CI suite that's used by mozilla/sccache to facilitate possible patch upstreaming. We're using Rust 1.43 (a badge was added in #86), but as we slowly drift away from upstream, it might be worth revisiting this decision and either coming up with our own policy or leaving things as-is for now.

One concrete example where a newer MSRV might help is #84 (comment) (strip_prefix). That's probably not enough of a reason to bump the MSRV on its own, and we should gather more cases like that before we decide to do the bump.

Future: Optimise the number of connections from the client to the server/scheduler

As a client is spawned by Cargo for each rustc invocation, a lot of parallel client instances are generated, each of which may want a connection to the scheduler/server.

In the case of a remote setup with encrypted connections this may not be optimal.

Instead we would like a local proxy to which clients connect, and which multiplexes connections to the remote server/scheduler - reducing the overhead of creating encrypted connections.

Prune openssl

There was an attempt to remove openssl in favour of a pure Rust implementation at #67; however, we still include it via the rouille > tiny_http implementation. We track moving off rouille at #39.

The benefit, as I understand it, is to make the build and distribution easier or more portable; however, we already build and ship the binary using the openssl/vendored feature in Actions and GitLab, so I'm not sure how critical this issue is.

@drahnr @TriplEight is this something that we want to resolve in the near future?

Future plans: prefilling local cache

A mechanism to prefill the local cache.

Use case: the instance with the cachepot server may have a lot of artifacts already compiled overnight. Why not make it possible to precache them, e.g. while making morning tea? A torrent protocol could be used for that, so once all users start fetching on the same day, it fetches even faster :).

[dist] Introduce Content Addressable Storage - CAS

Currently a lot of data is transferred between the client and the server for each compilation request. A lot of this is redundant and could be avoided by using CAS, where only hashes are checked. This could be combined with a local replica of the remote set of files that are already known / cached.

Currently I see the following work items:

  • eval existing CAS crates
  • define for which data we want to use CAS
  • find a good mock-able abstraction

Improve argument parsing

Currently the argument parsing is handled by Clap and is far from optimal.

Moving to something that reduces the clutter would be preferable, e.g. structopt.

vetted toolchains only mode

Allow a mode where only vetted toolchains are supported on the backend, and the current logic of uploading additional toolchains is prohibited on the scheduler side.

tokio context issue on failed compilation unit

Attempting to compile github.com/drahnr/honggfuzz-rs @ 99fec3373f8399b4664cda47f6b120a951e405be in dist mode with the following client config:

[dist]
scheduler_url = "https://internal.yada.youknow"
toolchains = []
toolchain_cache_size = 5368709120

[dist.auth]
type = "token"
token = "s3cr3tt0ken"

leads to

RUST_BACKTRACE=1 RUST_LOG=sccache=debug SCCACHE_START_SERVER=1 SCCACHE_NO_DAEMON=1 sccache
thread 'tokio-runtime-worker' panicked at 'Cannot start a runtime from within a runtime. This happens because a function (like `block_on`) attempted to block the current thread while the thread is being used to drive asynchronous tasks.', /home/bernhard/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.25/src/runtime/enter.rs:38:5
stack backtrace:
   0: std::panicking::begin_panic
   1: tokio::runtime::enter::enter
   2: tokio::runtime::context::enter
   3: sccache::server::DistClientContainer::create_state
   4: sccache::server::DistClientContainer::get_client
   5: sccache::server::SccacheService<C>::handle_compile::{{closure}}
   6: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   7: <futures_util::stream::try_stream::try_flatten::TryFlatten<St> as futures_core::stream::Stream>::poll_next
   8: <futures_util::stream::stream::forward::Forward<St,Si,Item> as core::future::future::Future>::poll
   9: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
  10: tokio::runtime::task::core::Core<T,S>::poll
  11: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  12: tokio::runtime::task::harness::Harness<T,S>::poll
  13: std::thread::local::LocalKey<T>::with
  14: tokio::runtime::thread_pool::worker::Context::run_task
  15: tokio::runtime::thread_pool::worker::Context::run
  16: tokio::macros::scoped_tls::ScopedKey<T>::set
  17: tokio::runtime::thread_pool::worker::run
  18: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  19: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  20: tokio::runtime::task::harness::Harness<T,S>::poll
  21: tokio::runtime::blocking::pool::Inner::run
  22: tokio::runtime::context::enter
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'tokio-runtime-worker' panicked at 'called `Result::unwrap()` on an `Err` value: "PoisonError { inner: .. }"', src/server.rs:272:36
stack backtrace:
   0: rust_begin_unwind
             at /rustc/88f19c6dab716c6281af7602e30f413e809c5974/library/std/src/panicking.rs:493:5
   1: core::panicking::panic_fmt
             at /rustc/88f19c6dab716c6281af7602e30f413e809c5974/library/core/src/panicking.rs:92:14
   2: core::option::expect_none_failed
             at /rustc/88f19c6dab716c6281af7602e30f413e809c5974/library/core/src/option.rs:1329:5
   3: sccache::server::DistClientContainer::get_client
   4: sccache::server::SccacheService<C>::handle_compile::{{closure}}
   5: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
   6: <futures_util::stream::try_stream::try_flatten::TryFlatten<St> as futures_core::stream::Stream>::poll_next
   7: <futures_util::stream::stream::forward::Forward<St,Si,Item> as core::future::future::Future>::poll
   8: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
   9: tokio::runtime::task::core::Core<T,S>::poll
  10: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  11: tokio::runtime::task::harness::Harness<T,S>::poll
  12: std::thread::local::LocalKey<T>::with
  13: tokio::runtime::thread_pool::worker::Context::run_task
  14: tokio::runtime::thread_pool::worker::Context::run
  15: tokio::macros::scoped_tls::ScopedKey<T>::set
  16: tokio::runtime::thread_pool::worker::run
  17: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  18: <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once
  19: tokio::runtime::task::harness::Harness<T,S>::poll
  20: tokio::runtime::blocking::pool::Inner::run
  21: tokio::runtime::context::enter
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

Cachepot fails to start server unless run as root

Hey!
I've been trying to introduce cachepot into our CI pipelines (CircleCI) and I am having some issues that I believe to be related to permissions.
If cachepot --start-server is executed without using sudo, the command always fails, complaining that it timed out waiting for the server to start up.
If we run it with sudo, the server starts up correctly, but we then run into issues when interacting with cargo.

What could be needing elevated permissions that causes cachepot --start-server to fail (unfortunately without emitting any useful log even at trace level)?

Allow build scripts to fork, nspawn processes

  running "/home/bernhard/.cargo/bin/sccache" "cc" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-g" "-fno-omit-frame-pointer" "-m64" "-I" "include" "-Wall" "-Wextra" "-std=c1x" "-Wbad-function-cast" "-Wnested-externs" "-Wstrict-prototypes" "-pedantic" "-pedantic-errors" "-Wall" "-Wextra" "-Wcast-align" "-Wcast-qual" "-Wconversion" "-Wenum-compare" "-Wfloat-equal" "-Wformat=2" "-Winline" "-Winvalid-pch" "-Wmissing-field-initializers" "-Wmissing-include-dirs" "-Wredundant-decls" "-Wshadow" "-Wsign-compare" "-Wsign-conversion" "-Wundef" "-Wuninitialized" "-Wwrite-strings" "-fno-strict-aliasing" "-fvisibility=hidden" "-fstack-protector" "-g3" "-DNDEBUG" "-c" "-o/media/supersonic1t/projects/sccache/target/debug/build/ring-8671a635501af205/out/aes_nohw.o" "crypto/fipsmodule/aes/aes_nohw.c"
  ccache: error: Failed to fork: Resource temporarily unavailable

Why fork?

Why is cachepot a fork? Isn't security a core concern for sccache too? Or is this fork mostly about the ability to develop independently without the sccache maintainers' approval?

sccache test failure when ccache is installed

Whenever ccache is present (e.g. on Fedora), running a C or C++ unit test fails consistently. I assume CC is set, and for test cases we should probably unset it, since it interferes with sccache.

Not sure if we need detection logic for that or not.

sccache: Starting the server...
test test_sccache_command ... FAILED

failures:

---- test_sccache_command stdout ----
[2021-05-05T06:50:28Z TRACE system::harness] sccache --stop-server
[2021-05-05T06:50:28Z TRACE system] start server
[2021-05-05T06:50:28Z TRACE system] run_sccache_command_test: gcc
[2021-05-05T06:50:28Z TRACE system] fs::copy("tests/test.c", "/tmp/sccache_system_testUPZyWN/test.c")
[2021-05-05T06:50:28Z TRACE system] fs::copy("tests/test_err.c", "/tmp/sccache_system_testUPZyWN/test_err.c")
[2021-05-05T06:50:28Z TRACE system] compile
thread 'test_sccache_command' panicked at 'Unexpected failure.
code-2
stderr=```sccache: error: failed to execute compile
sccache: caused by: Compiler not supported: "/usr/bin/ccache: invalid option -- \'E\'\nUsage:\n    ccache [options]\n    ccache compiler [compiler options]\n    compiler [compiler options]          (via symbolic link)\n\nCommon options:\n    -c, --cleanup              delete old files and recalculate size counters\n                               (normally not needed as this is done\n                               automatically)\n    -C, --clear                clear the cache completely (except configuration)\n        --config-path PATH     operate on configuration file PATH instead of the\n                               default\n    -d, --directory PATH       operate on cache directory PATH instead of the\n                               default\n        --evict-older-than AGE remove files older than AGE (unsigned integer\n                               with a d (days) or s (seconds) suffix)\n    -F, --max-files NUM        set maximum number of files in cache to NUM (use\n                               0 for no limit)\n    -M, --max-size SIZE        set maximum size of cache to SIZE (use 0 for no\n                               limit); available suffixes: k, M, G, T (decimal)\n                               and Ki, Mi, Gi, Ti (binary); default suffix: G\n    -X, --recompress LEVEL     recompress the cache to level LEVEL (integer or\n                               \"uncompressed\") using the Zstandard algorithm;\n                               see \"Cache compression\" in the manual for details\n    -o, --set-config KEY=VAL   set configuration item KEY to value VAL\n    -x, --show-compression     show compression statistics\n    -p, --show-config          show current configuration options in\n                               human-readable format\n    -s, --show-stats           show summary of configuration and statistics\n                               counters in human-readable format\n    -z, --zero-stats           zero statistics counters\n\n    -h, --help                 print this help text\n    -V, --version              print version and copyright information\n\nOptions for scripting or debugging:\n        --checksum-file PATH   print the checksum (64 bit XXH3) of the file at\n                               PATH\n        --dump-manifest PATH   dump manifest file at PATH in text format\n        --dump-result PATH     dump result file at PATH in text format\n        --extract-result PATH  extract data stored in result file at PATH to the\n                               current working directory\n    -k, --get-config KEY       print the value of configuration key KEY\n        --hash-file PATH       print the hash (160 bit BLAKE3) of the file at\n                               PATH\n        --print-stats          print statistics counter IDs and corresponding\n                               values in machine-parsable format\n\nSee also the manual on <https://ccache.dev/documentation.html>.\n"

command="/media/supersonic1t/projects/sccache/target/debug/sccache" "/usr/bin/ccache" "-c" "test.c" "-o" "test.o"
code=2
stdout=``````
stderr=```sccache: error: failed to execute compile
sccache: caused by: Compiler not supported: "/usr/bin/ccache: invalid option -- 'E'\nUsage:\n ccache [options]\n ccache compiler [compiler options]\n compiler [compiler options] (via symbolic link)\n\nCommon options:\n -c, --cleanup delete old files and recalculate size counters\n (normally not needed as this is done\n automatically)\n -C, --clear clear the cache completely (except configuration)\n --config-path PATH operate on configuration file PATH instead of the\n default\n -d, --directory PATH operate on cache directory PATH instead of the\n default\n --evict-older-than AGE remove files older than AGE (unsigned integer\n with a d (days) or s (seconds) suffix)\n -F, --max-files NUM set maximum number of files in cache to NUM (use\n 0 for no limit)\n -M, --max-size SIZE set maximum size of cache to SIZE (use 0 for no\n limit); available suffixes: k, M, G, T (decimal)\n and Ki, Mi, Gi, Ti (binary); default suffix: G\n -X, --recompress LEVEL recompress the cache to level LEVEL (integer or\n "uncompressed") using the Zstandard algorithm;\n see "Cache compression" in the manual for details\n -o, --set-config KEY=VAL set configuration item KEY to value VAL\n -x, --show-compression show compression statistics\n -p, --show-config show current configuration options in\n human-readable format\n -s, --show-stats show summary of configuration and statistics\n counters in human-readable format\n -z, --zero-stats zero statistics counters\n\n -h, --help print this help text\n -V, --version print version and copyright information\n\nOptions for scripting or debugging:\n --checksum-file PATH print the checksum (64 bit XXH3) of the file at\n PATH\n --dump-manifest PATH dump manifest file at PATH in text format\n --dump-result PATH dump result file at PATH in text format\n --extract-result PATH extract data stored in result file at PATH to the\n current working directory\n -k, --get-config KEY print the value of configuration key KEY\n --hash-file PATH print the hash (160 bit BLAKE3) of the file at\n PATH\n --print-stats print statistics counter IDs and corresponding\n values in machine-parsable format\n\nSee also the manual on https://ccache.dev/documentation.html.\n"

', /home/bernhard/.cargo/registry/src/github.com-1ecc6299db9ec823/assert_cmd-1.0.2/src/assert.rs:158:13
stack backtrace:
   0: rust_begin_unwind
             at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:493:5
   1: std::panicking::begin_panic_fmt
             at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/std/src/panicking.rs:435:5
   2: assert_cmd::assert::Assert::success
             at /home/bernhard/.cargo/registry/src/github.com-1ecc6299db9ec823/assert_cmd-1.0.2/src/assert.rs:158:13
   3: system::test_basic_compile
             at ./tests/system.rs:109:5
   4: system::run_sccache_command_tests
             at ./tests/system.rs:363:5
   5: system::test_sccache_command
             at ./tests/system.rs:440:13
   6: system::test_sccache_command::{{closure}}
             at ./tests/system.rs:417:1
   7: core::ops::function::FnOnce::call_once
             at /home/bernhard/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
   8: core::ops::function::FnOnce::call_once
             at /rustc/2fd73fabe469357a12c2c974c140f67e7cdd76d0/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.


failures:
    test_sccache_command

cargo build fails with "cachepot: error: Timed out waiting for coordinator startup"

I get this error when running cargo build --release

error: failed to run "rustc" to learn about target-specific information

Caused by:
process didn't exit successfully: "/home/toto/.cargo/bin/cachepot rustc - --crate-name ___ --print=file-names --crate-type bin --crate-type rlib --crate-type dylib --crate-type cdylib --crate-type staticlib --crate-type proc-macro --print=sysroot --print=cfg" (exit status: 2)
--- stderr
cachepot: error: Timed out waiting for coordinator startup
