services's Introduction

This is not the repo you're looking for

This repository is no longer in use. Various aspects of Veraison have been split into separate repositories.

If you're looking for the main Veraison services repository, you can find it here:

https://github.com/veraison/services

Please see the project overview on the Veraison organization page for a description of how the Veraison code is organized and where to look for specific things:

https://github.com/veraison

This repository is now archived!

services's People

Contributors

aj-stein-nist, dependabot[bot], hannestschofenig, kakemone, sabreenkaur, setrofim, thomas-fossati, yogeshbdeshpande


services's Issues

Enhancement: Document and demonstrate via example how to extend / re-use an existing scheme (like PSA)

The Veraison implementation supports standard schemes such as PSA and DICE and demonstrates their usage via README.md.

This enhancement covers documenting and demonstrating how:

  • A specific scheme such as PSA can be extended by a vendor-specific profile that adds vendor-specific claims.
  • Vendor-specific plugins can be implemented, with helpful accompanying documentation.
  • Anything else required to complete an end-to-end implementation can be addressed (for example, enhancing the CoRIM/CoMID encoder/decoder and, likewise, cocli).

Allow VTS to listen on multiple networks

Currently the VTS backend listens for gRPC requests on a single address, and both the provisioning and verification services communicate with VTS through that one address; as a result, VTS does not support listening on multiple networks. To achieve a complete separation of the provisioning and verification services, we want the following (a minimal sketch follows the list):

  • One network which contains the provisioning and VTS services (called network N1)
  • One network which contains the verification and VTS services (called network N2)
  • The ability for VTS to listen on both networks N1 and N2
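
A minimal sketch of one way to do this: a single gRPC server serving on two listeners. The addresses are illustrative and the real VTS service registration is elided.

package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	srv := grpc.NewServer()
	// The real VTS service would be registered on srv here.

	// Hypothetical addresses on networks N1 and N2.
	addrs := []string{"10.0.1.5:50051", "10.0.2.5:50051"}

	for _, addr := range addrs {
		lis, err := net.Listen("tcp", addr)
		if err != nil {
			log.Fatalf("failed to listen on %s: %v", addr, err)
		}
		// A single *grpc.Server can serve on several listeners concurrently.
		go func(l net.Listener) {
			if err := srv.Serve(l); err != nil {
				log.Fatalf("serve error: %v", err)
			}
		}(lis)
	}

	select {} // block so the goroutines keep serving
}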

Policy support in the `AttestationResult`

The current policy implementation is fairly limited -- it allows overwriting the result status and the trust vector values. We may want to give the policy the ability to add new derived claims, and also make the changes made by policy more transparent. In order to do that, we need to consider how this ought to be handled in the AttestationResult (if at all).

  • Where in AttestationResult should the policy-derived claims reside? In the current implementation, the most appropriate place seems to be under "endorsed claims", but that doesn't seem quite right -- logically, there is a difference between claims directly provisioned/endorsed by an outside entity and those generated based on policy.
  • What is the right way to keep track of changes made by policy, and do they need to be exposed to the Relying Party inside the AttestationResult (if so, where)? The alternative would be to log them, either via generic application logging or through some dedicated mechanism.

provisioning executable needs configuration options

In provisioning/cmd/main.go, the PluginDir, ListenAddr, and VTSClientCfg variables are all set to hard-coded values. These need to be configurable so that users do not have to modify the source files and maintain a fork of the project.
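
A sketch of one possible approach, assuming Viper is used for configuration; the key names (plugin-dir, listen-addr, vts-addr) and defaults are illustrative assumptions, not the project's actual ones:

package main

import (
	"log"

	"github.com/spf13/viper"
)

// loadConfig reads the values that are currently hard-coded from a config
// file, falling back to defaults that preserve today's behaviour.
func loadConfig() (pluginDir, listenAddr, vtsAddr string) {
	v := viper.New()
	v.SetConfigName("config") // e.g. config.yaml next to the executable
	v.AddConfigPath(".")

	v.SetDefault("plugin-dir", "../plugins/bin")
	v.SetDefault("listen-addr", "localhost:8888")
	v.SetDefault("vts-addr", "localhost:50051")

	if err := v.ReadInConfig(); err != nil {
		log.Printf("no config file found, using defaults: %v", err)
	}

	return v.GetString("plugin-dir"), v.GetString("listen-addr"), v.GetString("vts-addr")
}

func main() {
	pluginDir, listenAddr, vtsAddr := loadConfig()
	log.Printf("plugin-dir=%s listen-addr=%s vts-addr=%s", pluginDir, listenAddr, vtsAddr)
}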

BUG: Security Vulnerability reported on OPA version 0.43.0

The following security vulnerability was reported:
OPA Compiler: Bypass of WithUnsafeBuiltins using "with" keyword to mock functions

Solution:
Upgrade github.com/open-policy-agent/opa to version 0.43.1 or later. For example:
require github.com/open-policy-agent/opa v0.43.1
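
From the module root, the dependency can be bumped with the standard Go toolchain, which updates go.mod and go.sum:

go get github.com/open-policy-agent/opa@v0.43.1
go mod tidy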

feature: logging++

Is your feature request related to a problem? Please describe
The time seems ripe for us to take a deep look at logging and make sure it is consistent across the different components, well documented, and testable.

Currently, we use hclog as well as zap. The combo works mostly fine, except when one needs to deal with both at the same time, in which case it can create some potential and actual confusion (for example, see veraison/veraison#55).

If possible, describe a solution
One thing I would like to achieve is more homogeneity, i.e., picking one library, if possible.
Another is to bundle all log-related things (e.g., the Logger interface to mock and use in testing, a set of COTS logger constructors with sensible configuration choices) into a component that can be used across the codebase. This could in future accommodate other related things, for example functions to anonymise PII.
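
A minimal sketch of such a component, assuming zap ends up being the library that is picked; the package and constructor names are illustrative:

package logging

import "go.uber.org/zap"

// Logger is the small interface the rest of the codebase would depend on; it
// is satisfied by a zap SugaredLogger and is easy to mock in tests.
type Logger interface {
	Debugw(msg string, keysAndValues ...interface{})
	Infow(msg string, keysAndValues ...interface{})
	Errorw(msg string, keysAndValues ...interface{})
}

// NewServiceLogger returns a Logger with sensible production defaults,
// named after the component that owns it.
func NewServiceLogger(name string) (Logger, error) {
	z, err := zap.NewProduction()
	if err != nil {
		return nil, err
	}
	return z.Sugar().Named(name), nil
}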

feature: Unify provisioning and VTS plugin handling

We currently have two components that extend their functionality via Scheme-specific plugins -- the provisioning Decoder and VTS. Each defines its own plugin interface and implements a loader that discovers and loads plugins implementing it. This has a number of downsides:

  • Adding support for a new attestation Scheme requires implementing two different plugins.
  • As the loaders are unaware of each other, plugins must reside in different locations.
  • As a scheme evolves, both need to be maintained in parallel

To address this, we should unify the handling of pluggable functionality in a single loader/framework. This can be achieved either by creating a single loader aware of both interfaces, allowing plugins implementing them to reside together in the same location, or potentially in the same executable (i.e. allowing one plugin to implement both interfaces).
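
A rough sketch of what a combined, scheme-level plugin interface might look like; the names and signatures below are assumptions for illustration, not the project's actual interfaces:

package scheme

// Endorsement and EvidenceContext stand in for the real project types.
type Endorsement map[string]interface{}
type EvidenceContext map[string]interface{}

// IScheme merges what are currently two separate plugin interfaces: the
// provisioning decoder side and the VTS evidence-handling side. A single
// loader could then discover one plugin per attestation scheme.
type IScheme interface {
	GetName() string
	GetSupportedMediaTypes() []string

	// Provisioning side: decode a CoRIM/CoMID payload into endorsements.
	DecodeEndorsements(data []byte, mediaType string) ([]Endorsement, error)

	// VTS side: extract evidence from a token using the matched trust anchor.
	ExtractEvidence(token []byte, trustAnchor []byte) (*EvidenceContext, error)
}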

Handler's tenantID does not match vts's DummyTenantID

The Handler code in verification/ap/handler.go sets tenantID to "0"

The VTS code in vts/trustedservices/trustedservices_grpc.go sets DummyTenantID to "0123456789"

It is my understanding that these should be the same value.

Better CoRIM plugin extractors

We need to improve the state of the CoRIM extractors. In particular:

  • ability to handle cross-triple checks
  • how to deal with extensibility of the triples
  • how to do checks or extraction from the "global" context (e.g., making sure that the profile is right)

BUG: vts policymanager getPolicy will never return ErrNoPolicy

What version of the package are you using?

aa55d15

Does this issue reproduce with the latest release?

Yes (that's latest)

What OS and CPU architecture are you using (go env)?

GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ec2-user/.cache/go-build"
GOENV="/home/ec2-user/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/ec2-user/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/ec2-user/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.19"
GCCGO="gccgo"
GOAMD64="v1"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/ec2-user/VeracruzVerifier/go.mod"
GOWORK=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build3596495490=/tmp/go-build -gno-record-gcc-switches"

What did you do?

It's complicated, since I'm calling it from Veracruz. I'm using the kvstore/Memory backend.

What did you expect to see?

getPolicy should return ErrNoPolicy when there is no policy loaded

What did you see instead?

getPolicy returns the error generated by o.Store.Get, which is not ErrNoPolicy
The error generated is from kvstore/memory.go, line 49 in my version, of the format:

return nil, fmt.Errorf("key %q not found", key)

Since getPolicy returns the underlying store error rather than ErrNoPolicy, the check:

if err == ErrNoPolicy

that is used to make sure we handle the no-policy case gracefully never matches.
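
A self-contained sketch of one possible fix: translate the store's "not found" condition into ErrNoPolicy (and check it with errors.Is rather than ==, so wrapping is tolerated). The sentinel names below are stand-ins for the project's own errors:

package main

import (
	"errors"
	"fmt"
)

// Hypothetical sentinels standing in for the project's errors.
var (
	ErrKeyNotFound = errors.New("key not found") // what the kvstore backend reports
	ErrNoPolicy    = errors.New("no policy found")
)

// getPolicy translates the store's "not found" error into ErrNoPolicy so
// callers that check for it keep working.
func getPolicy(store map[string]string, key string) (string, error) {
	val, ok := store[key]
	if !ok {
		// Wrap rather than replace, so the original cause is preserved.
		return "", fmt.Errorf("%w: key %q: %v", ErrNoPolicy, key, ErrKeyNotFound)
	}
	return val, nil
}

func main() {
	_, err := getPolicy(map[string]string{}, "tenant-0")
	if errors.Is(err, ErrNoPolicy) {
		fmt.Println("no policy loaded, applying default appraisal")
	}
}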

feature: Add tooling

Is your feature request related to a problem? Please describe

Debugging the demo implementation of the front end highlighted significant difficulties in working with the data used for testing. Part of the issue is due to the current lack of a provisioning path. However, even if that existed, most of these difficulties would remain to some extent. Specific pain points identified:

  • Difficulty in manually comparing endorsements and evidence extracted from a token.
  • Difficulty in updating and testing a policy against a specific set of endorsements and evidence.
  • Difficulty in updating the stores with modifications to endorsements/policy.

If possible, describe a solution

A few CLI tools, implemented on top of the same components as the service, that allow the user to:

  • Query and update endorsements.
  • Query and update trust anchors.
  • Query and update policies.
  • Execute a policy against specifically-crafted inputs.

As part of this, it would also make sense to implement a common configuration mechanism that would be used to resolve things such as plugin and store locations by the tools, as well as the service, removing the need for them to be provided each time on invocation.

Describe alternatives you have considered

As mentioned above, once the provisioning flows have been implemented certain things will be easier (specifically, updating the stores). Improved logging would also help here. However, it would still be useful to have a set of small, lightweight CLI tools that allow working on individual steps without having to go through the entire flow for a tighter REPL during development.

Additional context

N/A

BUG: No need to generate proto files on every build

The services repo make system runs the protoc command (as defined in proto.mk) on every make.

This is not required, as the generated files are already in the git repository.

A new user should not need to install protoc unless they are defining a new proto interface and generating a new interface file.

feature: attestation result format

Is your feature request related to a problem? Please describe

At present, the challenge-response API returns the attestation result in the form of a "naked" JSON object with a custom / free-form format.

There are two aspects to address here:

  1. moving from custom / free-form to something that is more stable;
  2. allowing attestation results to be signed by the verifier.

We should move towards the model defined by the RATS architecture bow tie diagram:

                Evidence           Attestation Results
.--------------.   CWT                    CWT   .-------------------.
|  Attester-A  |------------.      .----------->|  Relying Party V  |
'--------------'            v      |            `-------------------'
.--------------.   JWT   .------------.   JWT   .-------------------.
|  Attester-B  |-------->|  Verifier  |-------->|  Relying Party W  |
'--------------'         |            |         `-------------------'
.--------------.  X.509  |            |  X.509  .-------------------.
|  Attester-C  |-------->|            |-------->|  Relying Party X  |
'--------------'         |            |         `-------------------'
.--------------.   TPM   |            |   TPM   .-------------------.
|  Attester-D  |-------->|            |-------->|  Relying Party Y  |
'--------------'         '------------'         `-------------------'
.--------------.  other     ^      |     other  .-------------------.
|  Attester-E  |------------'      '----------->|  Relying Party Z  |
'--------------'                                `-------------------'

If possible, describe a solution

  1. Define the format of Veraison attestation results
    1. Document it publicly (e.g., in the relevant OpenAPI component's schema, and/or in an Internet draft);
    2. Register the associated MIME format (e.g., application/vnd.veraison-attestation-result+json);
    3. Adapt codebase.
  2. Define a signed envelope that can be used to carry it (maybe we could use JWT and CWT always, with optional signature)

Describe alternatives you have considered

No alternatives have been considered for the moment.

Additional context

feature: Create flexibility around freshness

Is your feature request related to a problem? Please describe

A highly available AVS may have trouble keeping track of issued nonces. The AVS implementer should have a choice of whether issued nonces are per-session (and thus may require a client retry if the AVS node terminating the session fails) or per-epoch, for instance.

If possible, describe a solution

Some kind of interface that would allow the AVS implementer to instruct the main engine about the nonce mechanism.
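
A rough sketch of what such an interface could look like; the names are illustrative assumptions, not an actual Veraison API:

package freshness

import "time"

// NonceManager abstracts how challenge nonces are issued and validated, so an
// AVS implementer can plug in a per-session store, a per-epoch scheme, or
// something else entirely.
type NonceManager interface {
	// Issue returns a nonce of the requested size for a new session.
	Issue(sessionID string, size int) ([]byte, error)

	// Validate checks that the nonce presented in evidence is one this
	// deployment considers fresh (e.g. matches the session, or falls within
	// the current epoch).
	Validate(sessionID string, nonce []byte) (bool, error)

	// Expiry reports how long an issued nonce remains acceptable.
	Expiry() time.Duration
}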

Describe alternatives you have considered

N/A

Additional context

Existing discussion about "Freshness" here: https://veraison.zulipchat.com/#narrow/stream/264037-general/topic/Freshness

feature: each service should have a versioned "is running" route

Is your feature request related to a problem? Please describe

When starting a Veraison-based service as part of my test suite, I find I need to add sleep commands to wait for the service to be up. The amount of time required for this varies by platform, and has caused intermittent failures.

Specifically, this problem concerns the Provisioning service, but it is a general problem with all services.

In addition, in preparation for the support of multiple service versions, we have no simple way to detect which versions are supported by a service.

If possible, describe a solution

If each service were to support a simple versioned GET path (for the provisioning service, it might be /endorsement-provisioning/v1), this could solve both problems. If a GET on a specific version returned non-success (404, presumably), the caller would know that version is not supported by the service. If it returned "connection refused", the caller would know that the service is not up on that URL/port.
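
A minimal sketch of such a route using only the standard library; the path and response payload are assumptions, not the actual API:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// A versioned "is running" route: a 200 here tells callers the service is
	// up and supports this API version; a 404 means the version is not
	// supported; a connection error means the service is not up at all.
	http.HandleFunc("/endorsement-provisioning/v1", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"status": "running", "version": "v1"})
	})

	log.Fatal(http.ListenAndServe(":8888", nil))
}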

Describe alternatives you have considered

We've attempted sleep, as mentioned above, but this adds unnecessary delays in our test suite and has shown intermittent failures on most platforms due to variable loads. The versioning question is also not solved with that solution.

Feature: Provide a non-plugin implementation of Veraison internal interfaces

Is your feature request related to a problem? Please describe

At present, all implementations of Veraison's internal interfaces are based on HashiCorp go-plugin. In theory, we are not tied to a specific implementation model, not even the completely decoupled, RPC-based one provided by go-plugin. We should prove this with running code.

If possible, describe a solution

Implement Veraison interfaces by means of "local" types only.

Describe alternatives you have considered

This is the alternative :-)

Additional context

No additional context.

Policy Management framework.

We need to figure out how policies are managed. Specifically:

  • Which entity(ies) is responsible for establishing attestation policies?
  • At what points can these policies be added/amended/deleted?
  • Are there auditing requirements (e.g. do we want to keep "deleted" policies in case they need to be reviewed at a future point)?
  • Is it possible for multiple policies to be applicable to a single evaluation, and if so, how/in what order should they be applied?

Feature: Use docker in CI pipelines

The CI scripts rebuild their environment from scratch, which was OK in the beginning when the external dependencies were essentially zero, but it is becoming increasingly time- and resource-consuming.

GitHub Actions allows the use of customized Docker containers, so we should take advantage of that to build our own CI image.

vts needs configurable options

Currently, the config.Store options in vts/cmd/main.go for PluginManagerCfg, TaStoreCfg, EnStoreCfg, and VtsGrpcConfig are hard-coded.

There should be a way for users of the module to configure those without modifying the source code.

Investigate the issue with arguments for Server side Methods defined in scheme_plugin.go

In the recent change discussed in PR #134, there may be an issue with passing args structs by value, especially where the args are large;
for example, ExtractEvidenceArgs is defined on line 145 and used on line 150.

A similar issue could affect getAttestationArgs, defined on line 165 and used on line 170.

We need to investigate whether passing a reference would be a better implementation and, if so, apply it consistently to all existing methods and to the methods newly defined in PR #134.

BUG: Need to have a Harmonised view of IAK Public Key

The current Veraison provisioning requirement is to have the IAK key encoded in the CoMID as follows:
the IAK public key is PEM-encoded SubjectPublicKeyInfo [RFC5280].

However, the verifier code base expects a proper PEM key with the PEM header:
"-----BEGIN PUBLIC KEY-----"

So we need to:

Agree on a consistent format between the provisioning and the verification entities.

Ensure that whatever format Veraison settles on is user-friendly for community members:

  1. Users can generate the public keys easily.

  2. Users can easily introduce the keys into the CoMID for provisioning.

  3. Users can easily access the private key to sign the attestation token containing the evidence claims.

corim-psa-decoder is base64 encoding data that is already base64 encoded

In the file provisioning/plugins/corim-psa-decoder/classattributes.go, line 36 (in my version) looks like:

o.ImplID, _ = implID.MarshalJSON()

However, the implID is already base64 encoded, and the MarshalJSON() method base64 encodes it again.
In my code (not able to do a PR because my tree has all sorts of fun in it), changing it to:

o.ImplID = implID[:]

works and gives me the expected results.
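
For context, a standalone illustration of the standard-library behaviour involved: marshalling a byte value to JSON adds a base64 layer, while slicing keeps the raw bytes:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// implID stands in for the decoded implementation ID bytes.
	implID := []byte{0xAA, 0xBB, 0xCC}

	// json.Marshal base64-encodes []byte values, so a value that is later
	// serialised again ends up doubly encoded.
	encoded, _ := json.Marshal(implID)
	fmt.Println(string(encoded)) // "qrvM"

	// Copying the raw bytes avoids the extra layer.
	raw := implID[:]
	fmt.Println(raw) // [170 187 204]
}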

BUG: generated files in `verification/api/mocks` are not checked in

What version of the package are you using?

latest

Does this issue reproduce with the latest release?

yes

What OS and CPU architecture are you using (go env)?

GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ec2-user/.cache/go-build"
GOENV="/home/ec2-user/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/ec2-user/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/ec2-user/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.19"
GCCGO="gccgo"
GOAMD64="v1"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/ec2-user/VeracruzVerifier/go.mod"
GOWORK=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build4222535396=/tmp/go-build -gno-record-gcc-switches"

What did you do?

Attempted to use services from an out-of-branch project

What did you expect to see?

Expected to be able to build my project and use the Veraison modules

What did you see instead?

go: finding module for package github.com/veraison/services/provisioning/api/mocks
github.com/veraison/services/provisioning/api tested by
	github.com/veraison/services/provisioning/api.test imports
	github.com/veraison/services/provisioning/api/mocks: no matching versions for query "latest"
github.com/veraison/services/verification/api tested by
	github.com/veraison/services/verification/api.test imports
	github.com/veraison/services/verification/api/mocks: no matching versions for query "latest"

It seems that the mocks directory and its contents are generated when running make. However, when working outside of the services tree, I cannot do that. So this would work if those generated files were checked into git.

BUG: Trusted Services errors when there are no softwareIds in the evidence context

What version of the package are you using?

latest

Does this issue reproduce with the latest release?

Yes

What OS and CPU architecture are you using (go env)?

irrelevant

What did you do?

Set up a TrustAnchor (using the provisioning process) with no CoSWIDs; thus, there is no SoftwareID.
The system is set up for the Endorsement Store to use the memory back end.

What did you expect to see?

Calls to GRPC.GetAttestation with a token for that TrustAnchor to gracefully handle the case, and not produce an error.

What did you see instead?

When GRPC.GetAttestation is called with a token for that TrustAnchor, it calls GRPC.extractEvidence, and extractEvidence returns a proto.EvidenceContext with SoftwareID set to "".
Then, back in GetAttestation, the call to o.EnStore.Get, with key set to "" returns error "The supplied key is empty".

This causes GetAttestation to return the error.

It appears that santizeK is not prepared to handle the case when the key is empty. Possible solutions are to change that behavior in santizeK, or to add a check in GetAttestation for an empty ec.SoftwareID before calling o.EnStore.Get().
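
A simplified, standalone sketch of the second suggested mitigation (guarding against an empty SoftwareID before the store lookup); the real types and call sites differ:

package main

import (
	"errors"
	"fmt"
)

// getEndorsements sketches guarding the store lookup against an empty key,
// treating "no software ID" as "no endorsements" rather than as an error.
func getEndorsements(store map[string][]string, softwareID string) ([]string, error) {
	if softwareID == "" {
		// Nothing was provisioned for this trust anchor; return an empty set
		// instead of letting the store reject the empty key.
		return nil, nil
	}
	vals, ok := store[softwareID]
	if !ok {
		return nil, errors.New("no endorsements found")
	}
	return vals, nil
}

func main() {
	endorsements, err := getEndorsements(map[string][]string{}, "")
	fmt.Println(endorsements, err) // [] <nil>: handled gracefully
}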

Ezy deploy & integration

At present, in order for the Veraison verifier to do useful work, three different services and their dependencies (e.g., stores, plugins) need to be manually instantiated.

In general, this yields a poor end-user experience. In particular, it makes creating demos, as well as putting together an integration testing environment that can be run as part of the CI, more complex than necessary.

As an incremental step to improve the status quo, we need to:

  1. automate the startup / shutdown phases of the system as a whole
  2. allow easy setup of the endorsement, trust anchor and policy stores
  3. allow easy setup of the plugins (both VTS and provisioning)
  4. allow easy manipulation of the configuration of each service
  5. document any newly introduced interface
  6. document the set up of a PSA demonstrator

Achieving these goals will give us the building blocks for quickly assembling demonstrators, and also define the SUT for a subsequent "integration testing" story.

The envisaged solution will be based on the containerisation of each service (e.g., using Docker) in a way compatible with standard orchestration engines (e.g., docker-compose, k3s, k8s).

BUG: the `.github/workflows/ci-go-cover.yml` action relies on an action that is x86_64-specific

What version of the package are you using?

Latest

Does this issue reproduce with the latest release?

Yes

What OS and CPU architecture are you using (go env)?

Macbook pro with M1 chip.

GO111MODULE=""
GOARCH="arm64"
GOBIN=""
GOCACHE="/Users/dermil01/Library/Caches/go-build"
GOENV="/Users/dermil01/Library/Application Support/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="arm64"
GOHOSTOS="darwin"
GOINSECURE=""
GOMODCACHE="/Users/dermil01/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="darwin"
GOPATH="/Users/dermil01/go"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_arm64"
GOVCS=""
GOVERSION="go1.18"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD="/Users/dermil01/services/go.mod"
GOWORK=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/h3/5y72_42d1zqg9p9l9d7xr7hh0000gp/T/go-build311892468=/tmp/go-build -gno-record-gcc-switches -fno-common"

What did you do?

Used the act tool (https://github.com/nektos/act) to attempt to run the CI script locally on my aarch64 MacBook.
In the services directory:

act

What did you expect to see?

I expected it to run to completion.

What did you see instead?

Lots of output, and then:

| [command]/run/act/actions/arduino-setup-protoc@v1/node_modules/@actions/tool-cache/scripts/externals/unzip /tmp/9eb53cc1-7422-4b31-bd21-3604e27d4cda
| qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
[ci/Test on ubuntu-latest]   ❗  ::error::The process '/run/act/actions/arduino-setup-protoc@v1/node_modules/@actions/tool-cache/scripts/externals/unzip' failed with exit code 255

As it turns out, the arduino-setup-protoc@v1 action stores some useful executables in its node_modules/@actions/tool-cache/scripts/externals/ directory. Unfortunately, at least some of them are x86_64 binaries.

I'm not sure if running CI locally is a priority for the project (there are lots of other things to do, of course), so this is not critical. Just a useful FYI.

services build-time information

We should make build-time information, including but not limited to version, available to all executables, both services and CLIs.

Re: version. One option would be for it to be computed at build time from git metadata (e.g., tags, commit hash, etc.) and assigned to a predefined global variable via the linker (go build -ldflags "-X VERSION_VAR=VERSION_VAL"), making it available to all binaries.
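
A minimal sketch of that approach; the version subpackage and variable name are illustrative:

// version/version.go
package version

// Version is overridden at link time; the default flags builds that did not
// go through the release tooling.
var Version = "unknown"

The value could then be injected at build time with something like the following (the -X flag needs the fully qualified package path of the variable; the version subpackage is assumed):

go build -ldflags "-X github.com/veraison/services/version.Version=$(git describe --always --dirty)" ./...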

Enhance Tooling to check the validity of provisioning submissions

The present provisioning infrastructure lets one provision 1. endorsements (such as trust anchors) and 2. reference values (such as software components) independently, as they may come from different supply chain actors. How does one ensure that the data submitted in 2. correctly relates to 1. and is not disjointed or incorrect information provisioned into the Veraison provisioning pipeline?

This issue tracks the enhancement required to validate this at submission time, so that the user can be sent a suitable error message if such a discrepancy is identified.

`ExtractVerifiedClaims` in `scheme-psa-iot` appears to be expecting the wrong format

When I look at the ExtractVerifiedClaims function in the scheme-psa-iot plugin for vts, I see that the code is expecting trustAnchor to be PEM encoded.

However, when I execute the code, I find that the trustAnchor input is JSON data instead. This meant that I had to essentially rewrite the function for the different data format.

Is it possible that I am using the plugin incorrectly (I am not aware of any changes I've made to that part of the code)? Or is this a bug?

BUG: `proto` module does not have `*.go` files checked in

What version of the package are you using?

dc40117

Does this issue reproduce with the latest release?

Yes

What OS and CPU architecture are you using (go env)?

GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/home/ec2-user/.cache/go-build"
GOENV="/home/ec2-user/.config/go/env"
GOEXE=""
GOEXPERIMENT=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GOMODCACHE="/home/ec2-user/go/pkg/mod"
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/home/ec2-user/go/"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GOVCS=""
GOVERSION="go1.19"
GCCGO="gccgo"
GOAMD64="v1"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/home/ec2-user/services/go.mod"
GOWORK=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build24166057=/tmp/go-build -gno-record-gcc-switches"

What did you do?

Tried to use services and its submodules outside of the services directory (as a Go dependency)

What did you expect to see?

I expected to be able to build my project

What did you see instead?

../pure/services/vts/pluginmanager/ipluginmanager.go:6:2: no required module provides package github.com/veraison/services/proto; to add it:
	go get github.com/veraison/services/proto

Running go get ... results in:

go: module github.com/veraison/services@upgrade found (v0.0.0-20220823172721-dc401170e1c9,), but does not contain package github.com/veraison/services/proto

This is happening because none of the *.go files generated by protoc are checked into the directory. This means that in order to use any of the modules that depend on the proto module, the user needs to download the project (via git clone ...), install the protoc compiler, and run make. If the files were checked in, this would not be necessary.

Add capability support to VTS APIs

VTS is intended to encapsulate high-trust services. To ensure overall system security, clients should be granted only the minimum access to VTS APIs that they need to perform their job.

Assign capabilities to VTS APIs based on the function they perform w.r.t. protected assets (e.g. "READ", "WRITE", etc).
Add a mechanism for restricting API access to client connections based on the required capabilities.
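
A rough sketch of how such checks could be enforced with a gRPC unary interceptor; the capability names, method names, and the way a client's capabilities are established are all assumptions:

package vtsauth

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// methodCaps maps full gRPC method names to the capability they require.
// The entries here are illustrative, not the real VTS method names.
var methodCaps = map[string]string{
	"/vts.VTS/GetAttestation": "READ",
	"/vts.VTS/AddRefValues":   "WRITE",
}

// CapabilityInterceptor rejects calls whose client connection has not been
// granted the capability required by the invoked method. How a client's
// capabilities are established (mTLS identity, token, network) is out of
// scope for this sketch.
func CapabilityInterceptor(clientCaps map[string]bool) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler) (interface{}, error) {
		required, ok := methodCaps[info.FullMethod]
		if ok && !clientCaps[required] {
			return nil, status.Errorf(codes.PermissionDenied,
				"missing capability %q for %s", required, info.FullMethod)
		}
		return handler(ctx, req)
	}
}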

feature: Ability to add additional provisioning and verification plugins without modifying the `services` project

Is your feature request related to a problem? Please describe

I am working to add attestation plugins for AWS Nitro enclaves (mostly for my own edification, but possibly to upstream). In general, things work well. However, I have had to make one change to existing code in the services project to enable this. It's in proto/attestation_format.proto, where I had to add an additional entry in the AttestationFormat enum, AWS_NITRO = 4;. Of course this change meant that the generated files needed to be changed as well, but this is the only manual change that was required.

It would be beneficial not to have to make this change because, apart from it, nothing else in any existing code needed to be modified. This would allow people to develop plugins outside of the services project without having to fork it.

If possible, describe a solution

I'm not sure what the best solution here would be. One possible (though awkward) solution would be to make AttestationFormat a string instead of an enum. Then plugins and the infrastructure code could match against strings.
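
A small sketch of what string-based matching might look like on the loader side (the registry shape and names are assumptions):

package schemes

import "fmt"

// Decoder stands in for a scheme-specific plugin implementation.
type Decoder interface {
	Decode(data []byte) (interface{}, error)
}

// registry is keyed by a free-form scheme name, so out-of-tree plugins
// (e.g. "AWS_NITRO") can register themselves without touching an enum in the
// services project.
var registry = map[string]Decoder{}

func Register(scheme string, d Decoder) {
	registry[scheme] = d
}

func Lookup(scheme string) (Decoder, error) {
	d, ok := registry[scheme]
	if !ok {
		return nil, fmt.Errorf("no decoder registered for scheme %q", scheme)
	}
	return d, nil
}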

Additional context

This work is in connection to the Veracruz project's use of Veraison services.

BUG: config.yaml for `provisioning/cmd/` uses `listen-addr`, code is looking for `list-addr`

What version of the package are you using?

latest

Does this issue reproduce with the latest release?

yes

What OS and CPU architecture are you using (go env)?

n/a

What did you do?

attempted to use the config.yaml file at: https://github.com/veraison/services/blob/main/provisioning/cmd/config.yaml

What did you expect to see?

It should use the listen-addr specified in the config file.

What did you see instead?

It uses the DefaultListenAddr from the code.
