openpubkey's Introduction

OpenPubkey

Overview

OpenPubkey is a protocol for leveraging OpenID Providers (OPs) to bind identities to public keys. It adds user- or workload-generated public keys to OpenID Connect (OIDC), enabling identities to sign messages or artifacts under their OIDC identity.

We represent this binding as a PK Token. This token proves control of the OIDC identity and the associated private key at a specific time, as long as a verifier trusts the OP. Put another way, the PK Token provides the same assurances as a certificate issued by a Certificate Authority (CA) but critically, does not require adding a CA. Instead, the OP fulfills the role of the CA. This token can be distributed alongside signatures in the same way as a certificate.

OpenPubkey does not add any new trusted parties beyond what is required for OpenID Connect. It is fully compatible with existing OpenID Providers (Google, Azure/Microsoft, Okta, OneLogin, Keycloak) without any changes to the OpenID Provider.

OpenPubkey is a Linux Foundation project. It is open source and licensed under the Apache 2.0 license. This project presently provides an OpenPubkey client and verifier for creating and verifying PK Tokens from Google’s OP (for users) and GitHub’s OP (for workloads).

Getting Started

Let's walk through a simple message signing example. For conciseness we omit the error handling code. The full code for this example can be found in ./examples/simple/example.go.

We start by configuring the OP (OpenID Provider) our client and verifier will use. In this example we use Google as our OP.

opOptions := providers.GetDefaultGoogleOpOptions()
opOptions.SignGQ = true // wrap the OP's RSA signature in a GQ signature
op := providers.NewGoogleOpWithOptions(opOptions)

Next we create the OpenPubkey client and call opkClient.Auth:

opkClient, err := client.New(op)
pkt, err := opkClient.Auth(context.Background())

The function opkClient.Auth opens a browser window to the OP, Google in this case, which then prompts the user to authenticate their identity. If the user authenticates successfully, the client will generate and return a PK Token, pkt.

The PK Token, pkt, along with the client's signing key can then be used to sign messages:

msg := []byte("All is discovered - flee at once")
signedMsg, err := pkt.NewSignedMessage(msg, opkClient.GetSigner())

To verify a signed message, we first verify that the PK Token pkt is issued by the OP (Google). Then we use the PK Token to verify the signed message.

pktVerifier, err := verifier.New(op)
err = pktVerifier.VerifyPKToken(context.Background(), pkt)
msg, err := pkt.VerifySignedMessage(signedMsg)

To run this example type: go run ./examples/simple/example.go.

This will open a browser window to Google. If you authenticate to Google successfully, you should see: Verification successful: alice@gmail.com (https://accounts.google.com) signed the message 'All is discovered - flee at once', where alice@gmail.com is your Gmail address.

How Does OpenPubkey Work?

OpenPubkey supports both workload identities and user identities. Let's look at how this works for users and then show how to extend OpenPubkey to workloads.

OpenPubkey and User Identities

In OpenID Connect (OIDC) users authenticate to an OP (OpenID Provider), and the OP grants the user an ID Token. These ID Tokens are signed by the OP and contain claims made by the OP about the user such as the user's email address. Important to OpenPubkey is the nonce claim in the ID Token.

The nonce claim in the ID Token is a random value sent to the OP by the user's client during authentication. OpenPubkey follows the OpenID Connect authentication protocol as usual, but sets the nonce to the cryptographic hash of the user's public key and a random value. The nonce is thus still cryptographically random from the OP's perspective, yet any party that speaks OpenPubkey can check that the ID Token commits to the user's public key.
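
To make the commitment concrete, here is a minimal sketch of the nonce computation in Go, assuming SHA3-256 over a serialized claim set as in the examples below; the library's exact serialization may differ.

package main

import (
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/sha3"
)

func main() {
	// Illustrative placeholder values; a real client uses a freshly
	// generated JWK and a cryptographically random rz.
	upk := `{"crv":"P-256","kty":"EC","x":"...","y":"..."}`
	rz := "5a3f9c..."

	// Commit to the public key, algorithm, and randomness in one hash.
	// Revealing (upk, alg, rz) later lets anyone recompute and check it.
	h := sha3.Sum256([]byte(fmt.Sprintf("upk=%s, alg=ES256, rz=%s", upk, rz)))
	nonce := base64.RawURLEncoding.EncodeToString(h[:])
	fmt.Println("nonce:", nonce)
}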

Let's look at an example where a user, Alice, leverages OpenPubkey to get her OpenID Provider, google.com, to bind her OIDC identity, alice@example.com, to her public key alice-pubkey. To do this, Alice invokes her OpenPubkey client.

  1. Alice's OpenPubkey client generates a fresh key pair for Alice, (alice-pubkey, alice-signkey), and a random value rz. The client then computes the nonce=crypto.SHA3_256(upk=alice-pubkey, alg=ES256, rz=crypto.Rand()). The value alg is set to the algorithm of Alice's key pair.
  2. Alice's OpenPubkey client then initiates the OIDC authentication flow with the OP, google.com, and sends the nonce to the OP.
  3. The OP asks Alice to consent to issuing an ID Token and to provide her credentials (i.e., username and password) to authenticate to her OP (Google).
  4. If Alice successfully authenticates, the OP builds an ID Token containing claims about Alice. Critically, this ID Token contains the nonce claim generated by Alice's client to commit to Alice's public key. The OP then signs this ID Token under its signing key and sends the ID Token to Alice.

The ID Token is a JSON Web Signature (JWS) and follows the structure shown below:

payload: {
  "iss": "https://accounts.google.com",
  "aud": "878305696756-6maur39hl2psmk23imilg8af815ih9oi.apps.googleusercontent.com",
  "sub": "123456789010",
  "email": "[email protected]",
  "nonce": 'crypto.SHA3_256(upk=alice-pubkey, alg=ES256, rz=crypto.Rand(), typ="CIC")',
  "name": "Alice Example",
  ...
} 
signatures: [
  {"protected": {"typ": "JWT", "alg": "RS256", "kid": "1234...", "typ": "JWT"},
  "signature": SIGN(google-signkey, (payload, signatures[0].protected))`
  },
]

At this point, Alice has an ID Token, signed by google.com (the OP). Anyone can download the OP's (google.com) public keys from google.com's well-known JSON Web Key Set (JWKS) URI (https://www.googleapis.com/oauth2/v3/certs) and verify that this ID Token committing to Alice's public key was actually signed by google.com. If Alice reveals the values of alice-pubkey, alg, and rz, anyone can verify that the nonce in the ID Token is the hash of those values. Thus, Alice now has an ID Token signed by Google that cryptographically binds her identity, alice@example.com, to her public key, alice-pubkey.

PK Tokens

A PK Token is simply an extension of the ID Token that bundles together the ID Token with values committed to in the ID Token nonce. Because ID Tokens are JSON Web Signatures (JWS) and a JWS can have more than one signature, we extend the ID Token into a PK Token by appending a second signature/protected header.

Alice simply sets the values she committed to in the nonce as a JWS protected header and signs the ID Token payload and this protected header under her signing key, alice-signkey. This signature acts as cryptographic proof that the user knows the secret signing key corresponding to the public key.
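
As a rough sketch of that signing step (the serialization below is illustrative, and note that real JWS ES256 signatures are raw r||s rather than the ASN.1 form ecdsa.SignASN1 produces):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	// alice-signkey: the key pair whose public half is committed in the nonce.
	aliceSignKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)

	payload := base64.RawURLEncoding.EncodeToString(
		[]byte(`{"iss":"https://accounts.google.com","nonce":"..."}`))
	cicProtected := base64.RawURLEncoding.EncodeToString(
		[]byte(`{"alg":"ES256","rz":"...","typ":"CIC","upk":"..."}`))

	// JWS signing input: BASE64URL(protected) || '.' || BASE64URL(payload).
	digest := sha256.Sum256([]byte(cicProtected + "." + payload))

	// SignASN1 is used here only to keep the sketch short.
	sig, _ := ecdsa.SignASN1(rand.Reader, aliceSignKey, digest[:])
	fmt.Println("cic signature:", base64.RawURLEncoding.EncodeToString(sig))
}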

Notice the additional signature entry in the PK Token example below (as compared to the ID Token example above):

"payload": {
  "iss": "https://accounts.google.com",
  "aud": "878305696756-6maur39hl2psmk23imilg8af815ih9oi.apps.googleusercontent.com",
  "sub": "123456789010",
  "email": "[email protected]",
  "nonce": <crypto.SHA3_256(upk=alice-pubkey, alg=ES256, rz=crypto.Rand(), typ="CIC")>,
  "name": "Alice Example",
  ...
}
"signatures": [
  {"protected": {"alg": "RS256", "kid": "1234...", "typ": "JWT"},
  "signature": <SIGN(google-signkey, (payload, signatures[0].protected))>
  },
  {"protected": {"upk": alice-pubkey, "alg": "EC256", "rz": crypto.Rand(), "typ": "CIC"},
  "signature": <SIGN(alice-signkey, (payload, signatures[1].protected))>
  },
]

The PK Token can be presented to an OpenPubkey verifier, which uses OIDC to obtain the OP's public key and verify the OP's signature on the ID Token. It then uses the values in the protected header to extract the user's public key.

OpenPubkey and Workload Identities

Just like OpenID Connect, OpenPubkey supports both user identities and workload identities.

The workload identity setting is very similar to the user identity setting, with one major difference: workload OpenID Providers, such as github.com, do not include a nonce claim in the ID Token. Unlike user identity providers, however, they allow the workload to specify the aud (audience) claim. Workload identity therefore functions the same way as user identity, except that we commit to the public key in the aud claim rather than the nonce.
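
A hedged sketch of this commitment for a GitHub Actions workload; the hash construction mirrors the nonce examples above, and the environment variable is the one GitHub Actions provides for OIDC token requests (see the "Github Actions OIDC provider" issue below):

package main

import (
	"encoding/base64"
	"fmt"
	"os"

	"golang.org/x/crypto/sha3"
)

func main() {
	// Illustrative CIC values; a real client generates upk and rz freshly.
	cic := `upk={"crv":"P-256","kty":"EC","x":"...","y":"..."}, alg=ES256, rz=5a3f...`
	commitment := sha3.Sum256([]byte(cic))

	// GitHub Actions exposes the token-request URL to the workload; the
	// commitment rides in the audience query parameter instead of a nonce.
	url := os.Getenv("ACTIONS_ID_TOKEN_REQUEST_URL")
	aud := base64.RawURLEncoding.EncodeToString(commitment[:])
	fmt.Printf("GET %s&audience=%s\n", url, aud)
}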

GQ Signatures To Prevent Replay Attacks

Although not present in the original OpenPubkey paper, GQ signatures have now been integrated so that the OpenID Provider's (OP) signature can be stripped from the ID Token and a proof of the OP's signature published in its place. This prevents the ID Token within the PK Token from being replayed against OIDC resource providers, since the original signature has been removed, without compromising any of the assurances that the original OP's signature provided.

We follow the approach specified in the following paper: Reducing Trust in Automated Certificate Authorities via Proofs-of-Authentication.

For user-identity scenarios where the PK Token is not made public, GQ signatures are not required. GQ Signatures are required for all current workload-identity use cases.

How To Use OpenPubkey

OpenPubkey is driven by its use cases. You can find all available use cases in the examples folder.

We expect this list to continue growing (and if you have an idea for an additional use case, please file an issue, raise the idea in a community meeting, or send a message in our Slack channel)!

How To Develop With OpenPubkey

As we work to get this repository ready for v1.0, you can check out the examples folder for more information about OpenPubkey's different use cases. In the meantime, we would love for the community to contribute more use cases. See below for guidance on joining our community.

Governance and Contributing

File An Issue

For feature requests, bug reports, technical questions and requests, please open an issue. We ask that you review existing issues before filing a new one to ensure your issue has not already been addressed.

If you have found what you believe to be a security vulnerability, DO NOT file an issue. Instead, please follow our security disclosure policy.

Code of Conduct

Before contributing to OpenPubkey, please review our Code of Conduct.

Contribute To OpenPubkey

To learn more about how to contribute, see CONTRIBUTING.md.

Get Involved With Our Community

To get involved with our community, see our community repo. You’ll find details such as when the next community and technical steering committee meetings are.

Join Our Slack

Find us over on the OpenSSF Slack in the #openpubkey channel.

Report A Security Issue

To report a security issue, please follow our security disclosure policy.

FAQ

See the FAQ for answers to Frequently Asked Questions about OpenPubkey.

openpubkey's People

Contributors

asamborski, dependabot[bot], errordeveloper, ethanheilman, johncmerfeld, jonnystoten, kipz, lgmugnier, lipmas, maskedeman, mrjoelkamp, nickajacks1, roceb, tg123, thebigbone, whalelines, ymarcus93

openpubkey's Issues

GQ signer/verifier

Create a standalone package for signing/verifying with GQ signatures. This can later be used to generate GQ signatures in place of OIDC provider signatures.

Make code less panicky

There are lots of calls to panic and logrus.Fatalf which can crash the calling program. We should instead make sure we're returning errors in most of these cases. (There may be good reasons to panic in a truly exceptional situation.)

The relationship between OPs and PK Tokens

I wanted to write up an issue on this because, while we in OpenPubkey have discussed the nature of the relationship between OPs (OpenID Providers) and PK Tokens, there isn't much written down.

PK Token Issuance/Validity and OpenID Providers/OIDC Clients

One can not meaningfully ask if a PK Token is valid by itself. The question of validity must always be understood in reference to both an OpenID Provider and an OIDC Client.

Some examples:

  • With the Google OP, the PK Token is only valid if the OIDC Client ID specified in the aud (audience) claim in the ID Token matches the expected value. Additionally within the context of the Google OP we must check that the CIC is committed to in the nonce claim in the ID Token.
  • Contrast this with the Github OP, where there is no set aud (audience) claim to match against in the ID Token because the CIC is committed to in the aud, and Github PK Tokens must use GQ signatures.
  • The proposed Gitlab OP implementation has a set prefix in the aud claim which must match the expected value. Unlike the Google OP or the Github OP, the CIC isn't committed to directly in the ID Token at all but is committed to by the GQ signature itself. #100

For things like zkLogin, the architectural existence of provider clients (OPs) provides an excellent point at which to transform SNARK-based ID Tokens into PK Tokens. This makes the expiration of the PK Token depend on the provider client implementation. #101

PK Token usage and OpenID Providers/OIDC Clients

Beyond just validity, using a PK Token depends on the OP.

  • GoogleOP always has an email claim.
  • AzureOP (currently not added) does not have an email claim (although it sometimes does). Instead the email can be looked up from Microsoft using the sub claim.
  • GithubOP doesn't always have an email claim.
  • zkLoginOP may or may not have an email claim but to read the email, the user must reveal a value opening the email commitment.

The following pattern fits this relationship well:

googlePkt, err := GoogleOP.Verify(pkt)
if googlePkt.email == "alice@example.com" {
...
}

Since we know that Google PK Tokens always have email addresses, if we cast PK Tokens to OP-specific subtypes we can have compile-time type checking of PK Token claims.
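
A minimal sketch of that pattern; the types and function below are hypothetical, not the current API:

package main

import "fmt"

// PKToken stands in for the library's PK Token type.
type PKToken struct{ claims map[string]string }

// GooglePKToken is a hypothetical OP-specific subtype: because Google ID
// Tokens always carry an email claim, it can expose Email as a field.
type GooglePKToken struct {
	PKToken
	Email string
}

// VerifyGoogle is a hypothetical verifier that returns the subtype only
// after checking the Google-specific rules (aud match, nonce commits to CIC).
func VerifyGoogle(pkt *PKToken) (*GooglePKToken, error) {
	email, ok := pkt.claims["email"]
	if !ok {
		return nil, fmt.Errorf("google PK Token missing email claim")
	}
	return &GooglePKToken{PKToken: *pkt, Email: email}, nil
}

func main() {
	pkt := &PKToken{claims: map[string]string{"email": "alice@example.com"}}
	googlePkt, err := VerifyGoogle(pkt)
	if err == nil && googlePkt.Email == "alice@example.com" {
		fmt.Println("claims checked via the OP-specific subtype")
	}
}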

sensitive data in `crypto` package

Identified in #48

The crypto package is used for ephemeral key generation and cic signing:

openpubkey/util/files.go

Lines 100 to 109 in cdf1ff6

func GenKeyPair(alg jwa.KeyAlgorithm) (crypto.Signer, error) {
	switch alg {
	case jwa.ES256:
		return ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	case jwa.RS256: // RSASSA-PKCS-v1.5 using SHA-256
		return rsa.GenerateKey(rand.Reader, 2048)
	default:
		return nil, fmt.Errorf("unsupported algorithm: %s", alg.String())
	}
}

// Sign over the payload from the ID token and client instance claims
cicToken, err := cic.Sign(signer, alg, idToken)
if err != nil {
	return nil, fmt.Errorf("error creating cic token: %w", err)
}

The private key is generated and stored in crypto.PrivateKey, which uses math/big's big.Int to store the private key data. Similar to #51, the big.Int Bytes() method copies the underlying data to a new byte slice, so there is no way to wipe the data from this object once it has been copied.

POP Auth

The OPK paper discusses a method for the client instance to authenticate to the MFA cosigner called "POP Auth". This protocol ensures the MFA cosigner does not blindly trust the identity presented by the PK Token (which could be presented by anybody), but instead forces the requesting party to prove they have the corresponding signing key.

The reason this hasn't been included is because the details are still being ironed out and we will add this in when the protocol is ready for implementation.

Unittest coverage of CosignerVerifier needed

We currently have no automated test code coverage on CosignerVerifier. We recently hit a bug #114 in the CosignerVerifier that could have been caught by a unittest.

This issue is to track adding unittests for CosignerVerifier which allows us to test our bugfix.

Cosigner Versioning

Changes to the Cosigner protocol and APIs are likely to present backwards compatibility problems. To address this we should:

Put the supported Cosigner protocol versions in the well-known configuration URI of the cosigner, using a URL pattern like: https://<cosigner>/.well-known/openpubkey-configuration

{
  "issuer": "https://<cosigner>",
  "jwks_uri": "https://<cosigner>/.well-known/jwks.json",
  "cosigner_versions_supported": ["1", "2", "4"]
}

Then the client can determine if any of the cosigner versions it supports match any of the cosigner versions supported by the cosigner server. If no match is found, the client outputs a meaningful error.

The client specifies the cosigner version it wants via the URI path it uses to access the API. For instance https://<cosigner>/<cosigner-version>/initAuthURI?....
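
A hedged sketch of the client-side version match, assuming the proposed openpubkey-configuration document above; none of these names are a shipped API:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// wellKnownConfig mirrors the proposed configuration document;
// "cosigner_versions_supported" is part of this proposal, not a shipped API.
type wellKnownConfig struct {
	Issuer            string   `json:"issuer"`
	JwksURI           string   `json:"jwks_uri"`
	VersionsSupported []string `json:"cosigner_versions_supported"`
}

// pickVersion returns the first client-supported version the cosigner also
// supports, or a meaningful error the caller can surface.
func pickVersion(cosignerURL string, clientVersions []string) (string, error) {
	resp, err := http.Get(cosignerURL + "/.well-known/openpubkey-configuration")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var cfg wellKnownConfig
	if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
		return "", err
	}
	supported := map[string]bool{}
	for _, v := range cfg.VersionsSupported {
		supported[v] = true
	}
	for _, v := range clientVersions {
		if supported[v] {
			return v, nil
		}
	}
	return "", fmt.Errorf("no common cosigner version: client supports %v, cosigner supports %v",
		clientVersions, cfg.VersionsSupported)
}

func main() {
	v, err := pickVersion("https://cosigner.example.com", []string{"2", "3"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using cosigner version", v)
}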

Short docs on how to use OpenPubkey library

This documentation should consist of:

  1. A short paragraph-length description in the readme of how to use OpenPubkey as a library
  2. A 4-page markdown document that walks a developer through getting started with OpenPubkey

GQ Signature flagging

Reading through #14, I had the following thoughts I wanted to record about possible ways to flag that a signature is a GQ signature.

Consider the following two approaches:

The unsigned header flag approach

We flag that an OP RSA signature has been replaced by a GQ signature by adding an unsigned header field: "sig_type": "oidc_gq"

{
	"payload": "payload",
	"signatures": [{
		"protected": {"alg":"ES256","rz":"5","upk":{"crv":"P-256","kty":"EC","x":"RI","y":"Hk"}},
		"header": {
			"sig_type": "cic"
		},
		"signature": "user signature"
	}, {
		"protected": {"alg":"RS256","kid":"6f7254101f56e41cf35c9926de84a2d552b4c6f1","typ":"JWT"},
		"header": {
			"sig_type": "oidc_gq"
		},
		"signature": "GQ signature"
	}]
}

The altered protected header approach

Alternatively, we could alter the protected header, changing the alg field to GQ256, and then sign the altered protected header and payload with the GQ signature. A verifier would then reconstruct the message the OP signed with RSA by changing the algorithm field back. It would look like:

{
	"payload": "payload",
	"signatures": [{
		"protected": {"alg":"ES256","rz":"5","upk":{"crv":"P-256","kty":"EC","x":"RI","y":"Hk"}},
		"header": {
			"sig_type": "cic"
		},
		"signature": "user signature"
	}, {
		"protected": {"alg":"GQ256","kid":"6","typ":"JWT"},
		"signature": "GQ signature"
	}]
}

Comparison

The unsigned header flag approach

Pros:

  • Results in a simpler implementation of the verifier and the signer.
  • None of the values which are signed by OP are altered or reserialized.

Cons:

  • Standard OIDC validators don't know about GQ signatures or the sig_type header, so they will likely attempt to verify the GQ signature as if it were a very large RSA signature, resulting in a hard-to-debug failure.
  • Uses an unsigned header, which means we have to worry about what happens if someone alters this header. Ideally there is no content which is not covered by a signature. This is a minor con.

The altered protected header approach

Pros:

  • Standard OIDC validators will see that alg is set to an algorithm they don't support and fail gracefully, producing a meaningful error message: Signature algorithm GQ256 is unsupported.
  • If we want to standardize GQ signatures in JWS, alg is the standard way of flagging the signing algorithm.

Cons:

  • Likely a source of bugs and a pain point for implementors as signed message content is very fragile. Changing the protected header requires serializing and deserializing it. Different JSON libraries make minor changes to whitespace or the ordering of keys that result in signatures not verifying. These signature verification issues can creep in both when generating the GQ signature and when verifying it, as they both involve deserializing the protected header, making a change and then serializing it again.

We could bypass this con by storing the base64 encoding of the OP's signature in the protected header. That seems a little abusive toward the JWS standard.

Proposed zklogin JWS

As zkLogin was inspired by OpenPubkey to use the nonce in the ID Token to commit to the user's public key upk it is worth examining if OpenPubkey can employ zkLogin zero knowledge proofs to provide anonymous or private PK Tokens. This issue examines the feasibility, benefits and disadvantages of using zkLogin proofs in OpenPubkey.

zkLogin is a protocol for using zero knowledge proofs to add privacy to OIDC.
Given an ID token idt and public key upk it creates a zero knowledge proof, pi, which proves knowledge of the ID Token and commits to certain claims in the ID Token.

Using Google as the example OP the format of a ID Token payload using a zkLogin nonce is:

payload: {
  "iss": "https://accounts.google.com",
  "aud": "878305696756-6maur39hl2psmk23imilg8af815ih9oi.apps.googleusercontent.com",
  "sub": "123456789010",
  "email": "[email protected]",
  "nonce": 'Base64(Poseidon_BN254(upk, max_epoch, rz)'
  ...
} 

The format of the nonce is very similar to that of OpenPubkey. The main differences are:

  • The ZKP friendly hash function Poseidon_BN254 is used instead of SHA3_256.
  • An expiration time of the token is specified using max_epoch, i.e., the maximum epoch beyond which the token should not be accepted.
  • The algorithm of the user's public key is specified in the nonce as a 1-byte flag attached to the user's public key upk. The public key that includes the flag in this way is called an extended public key.

The zkLogin proof commits to the user's address which is computed as:
addr = Blake2b_256(flag, iss, addr_seed)
where
addr_seed = Poseidon_BN254(claimkey, idT.[claimkey], idT.aud, Poseidon_BN254(user_salt))

If the claimkey was "email" then it would be
addr_seed = Poseidon_BN254(email, idT.email, idT.aud, Poseidon_BN254(user_salt)).

The addr serves as a pseudonymous identity, with the user_salt blinding the identifier. If you don't know the user_salt, you can't determine the email. The user knows the user_salt. This allows them to selectively reveal the identity claim in the ID Token. zkLogin generally assumes users want anonymity, so they don't reveal the user_salt and just use the addr as their identity.

This allows users to interact in public using a pseudonym, but then privately interact using their true-name identifiers.

zkLogin assumes the following is public:

  • Iss (issuer)
  • Aud (audience)
  • Addr and Addr_Seed
  • extended public key

zkLogin assumes the following is private:

  • user identity claim (email, sub)
  • user_salt

Proposed OpenPubkey zkLogin PK Format

Below is a proposal for how to structure an OpenPubkey PK Token using the objects in zkLogin. This PK Token does not reveal any identity claims about the user; sub and email are all protected.

payload: {
  "iss": "https://accounts.google.com",
  "aud": "878305696756-6maur39hl2psmk23imilg8af815ih9oi.apps.googleusercontent.com",
  "nonce": 'Base64(Poseidon_BN254(upk-ext, max_epoch, rz)'
} 
signatures: [
  {"protected": {"typ": "ZWT", "alg": "...", "kid": "1234...", "addr": 'addr_seed'},
  "signature": <ZKP(idt, salt)>
  },
  {"protected": {"typ": "ZIC", "upk-ext": alice-pubkey, "alg": "EC256", "rz": crypto.Rand(), "max_epoch": exp},
  "signature": <SIGN(alice-signkey, (payload, signatures[1].protected))>
  },
]
  • The ZWT signature enables the verification of the claims in the payload via the ZKP. It allows verification of a proof of knowledge of an ID Token with the fields supplied in the payload (iss, aud, etc.). Note that for privacy, the payload no longer contains all the fields in the original ID Token.
  • The ZIC signature allows the opening of the nonce commitment, revealing the user's public key. This is very similar to a standard CIC signature.

We now consider the case where a user wishes to expose one of their identity claims, email:

payload: {
  "iss": "https://accounts.google.com",
  "aud": "878305696756-6maur39hl2psmk23imilg8af815ih9oi.apps.googleusercontent.com",
  "nonce": 'Base64(Poseidon_BN254(upk, max_epoch, rz))'
}
signatures: [
  {"protected": {"typ": "ZWT", "alg": "...", "kid": "1234...", "addr": 'addr', "claimKey": "email", "claimValue": "alice@example.com", "user_salt": 'user_salt'},
  "signature": <ZKP(idt, salt)>
  },
  {"protected": {"typ": "ZIC", "upk": alice-pubkey, "alg": "EC256", "rz": crypto.Rand(), "max_epoch": exp},
  "signature": <SIGN(alice-signkey, (payload, signatures[1].protected))>
  },
]

Since user_salt is revealed, we can now open the addr_seed and check claimKey and claimValue. We use email in this example but other claims can be used as well.

Open Questions

  • Can this easily be adapted to the fields in github or gitlab? Is it even needed?
  • Can multiple claims be revealed atomically without altering the ZKP used in zkLogin?

Cryptography review and remediation

As noted in #3, our GQ signature implementation uses golang's math/big library, which is known to leak information via a timing side-channel. Before going 1.0 with this code base we need to perform a review and remediation of the cryptography used in this project.

At minimum this should consist of:

  1. Constant time operations tests: Unittests verifying the constant-time behavior of sensitive cryptographic operations, such as GQ signature generation and user key pair generation, see #56
  2. A review of cryptographic dependencies: Check that the cryptographic libraries we depend on are making the right choices and are protecting against threats such as ECDSA nonce reuse.
  3. A review of our implementation of GQ signatures: this should include documenting and understanding the implications of any differences between our implementation and ISO/IEC 14888-2:2008.
  4. Lifecycle and protection of secrets: Ensuring that sensitive cryptographic secrets are zeroed/deleted when no longer in use and that they do not persist after they are needed. For instance with GQ signatures, we should delete the RSA signature immediately after the GQ signature is constructed. We should carefully consider the lifecycle of our ephemeral keypairs; currently we just write them to disk, see #47, #51, #52, #53

Suggestions to grow this list are welcome. When we get closer to 1.0, many of the items on this list will be broken out into separate tickets.

Remove pkt.Compact

As discussed in #105 (Fixes JWS signing bug where JSON unmarshalling breaks verification), the function pkt.Compact is intended to produce compact tokens that can pass signature verification. #105 makes pkt.Compact safe to use by having it pull the corresponding saved token from the PK Token struct. This makes pkt.Compact redundant; the developer could just pull the saved token themselves in fewer lines of code.

Instead of pkt.Compact, developers should simply use the compact token saved to the PK Token.
Rather than:

cicToken, err := pkt.Compact(pkt.Cic)

do this instead:

cicToken := pkt.CicToken

This issue should be resolved when we create a PR to remove pkt.Compact from the PK Token and from the examples. It depends on #105 being merged first.

Github Actions OIDC provider

This one should be a bit simpler than others because we just have to call a URL with a token, both provided as environment variables, passing an audience. We will have to put the nonce in the audience because we don't have access to a full PKCE flow.

PK Token Properties

I want to have a discussion about what our goals are with the PK Token because it impacts data structures which are always nice to get as right as possible, as soon as possible.

I propose that the PK Token should have the following three properties:

Property 1: The PK Token does not require any outside information in order to verify itself

  • This isn't to say that external code shouldn't verify certain claims or parameters. For example, it would still be best practices to check that the issuer claim or a key algorithm match expected values, but a PK Token would only use its own values for verification.

Property 2: A valid PK Token should have, at a minimum:

  • A single oidc provider signature whether that be in the form of rsa or gq (or other!)
  • A single user signature over the same payload, including the client instance claims in its protected header

Property 3: The PK Token supports any number of cosigners

  • A PK Token can have zero or more cosigners without violating Property 1

Protocol Versioning

This is a very early draft to pose the questions of how we think about versioning OpenPubkey.

This issue exists to discuss if, and in what places, we might want to version OpenPubkey. The primary purpose of such versioning would be to simplify our lives. We should take as a goal that versioning must be simple and not introduce complex handshakes or version negotiation.

PK Token versioning:

We can version the PK Token using a key in the protected header of each signature we want to version. The OP signature and payload can't be versioned because the ID Token is an OIDC token and not under our control. Do we want one version for the entire PK Token, or should we instead version the CIC Signature and COS Signature separately?

MFA Cosigner API Versioning

The MFA Cosigner API uses the well-known URI and this provides an excellent point to specify parameters and versions from the cosigner to the client.

OSM and POP Auth versioning

OSM and POP Auth could be versioned at the signature or the API layer.

Design for Gitlab CI support

As discussed in Issue 26: Add GitLab CI signing example, the approach used to sign software artifacts in github can not be used with gitlab because the gitlab workload can not specify the aud (audience) claim. Instead, an aud claim is hardcoded in the workload configuration YAML. In this issue we propose the following design for gitlab and other settings where there is no client-specified nonce or audience field.

Generating a PK Token in Gitlab

To generate a PK Token:

  1. The workload creates a key pair, (upk, usk), and requests an ID Token from the OP (gitlab).
  2. On receiving the ID Token from gitlab, the workload generates the CIC (upk, alg, nonce) and signs the CIC and ID Token under the GQ signature.
  3. The workload then immediately deletes the OP's RSA signature on the ID Token.

The CIC is included as part of the GQ protected header of the ID Token. Thus, when the GQ signature signs the new protected header, it also commits to the CIC. As is currently done with GQ signatures, the original protected header of the ID Token is base64 encoded and included in the protected header signed by the GQ signature.

payload: {
  "iss": "gitlab.example.com",
  "aud": "OPENPUBKEY-PKTOKEN-1234567890",
  ...
}
signatures: [
  {"protected": {"typ": "JWT", "alg": "GQ256", "kid": Base64(origProtected),
    "cic": SHA3_256(upk=alice-pubkey, alg=ES256, rz=crypto.Rand())},
  "signature": <GQSIGN(RSA-sig, (payload, signatures[0].protected))>
  },
  {"protected": {"upk": alice-pubkey, "alg": "EC256", "rz": <rz>},
  "signature": <SIGN(alice-signkey, (payload, signatures[1].protected))>
  },
]

Verifying a Gitlab PK Token

  1. Check that the aud claim is prefixed with "OPENPUBKEY-PKTOKEN-" and that the rest of the aud claim matches the expected value (see the sketch after this list).
  2. Check that the CIC signature verifies.
  3. Check that the CIC protected header values are committed to in the cic claim in the protected header of the GQ signature, and that the GQ signature verifies under gitlab's public key.
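
An illustrative check for step 1, assuming the "OPENPUBKEY-PKTOKEN-" prefix convention proposed in this issue:

package main

import (
	"fmt"
	"strings"
)

// The prefix is the convention proposed in this issue; expectedAud is
// whatever the verifier is configured to accept.
const prefix = "OPENPUBKEY-PKTOKEN-"

func checkAud(aud, expectedAud string) error {
	if !strings.HasPrefix(aud, prefix) {
		return fmt.Errorf("aud %q was not issued for use as a PK Token", aud)
	}
	if strings.TrimPrefix(aud, prefix) != expectedAud {
		return fmt.Errorf("aud %q does not match the expected audience", aud)
	}
	return nil
}

func main() {
	fmt.Println(checkAud("OPENPUBKEY-PKTOKEN-1234567890", "1234567890")) // <nil>
}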

Security

If configured correctly, this approach offers the same security as the approach used with github. However, this approach does introduce a security risk of misconfiguration not present in the github case. If a GQ signature is not used and an attacker learns the ID Token, the attacker can compute the GQ signature from the RSA signature and insert their own public key.

We mitigate the risk of this occurring by:

  • Requiring that PK Tokens that use the GQ signature to attest to the CIC must have "OPENPUBKEY-PKTOKEN-" prefixed in the audience, and that they are not considered valid PK Tokens otherwise. This also prevents attacks in which a gitlab ID Token not intended for use as a PK Token is replayed as a PK Token. Note that this rule does not apply to other types of PK Tokens.
  • Ensuring that OpenPubkey clients always enforce the deletion of the RSA signature and only generate GQ-signed PK Tokens for gitlab.

We argue that this sort of misconfiguration is very unlikely because, outside of OpenPubkey, ID Tokens are treated as highly sensitive authentication secrets, and within OpenPubkey we prevent this misconfiguration. Someone would have to configure their gitlab to put "OPENPUBKEY-PKTOKEN-" in the aud claim, not use the OpenPubkey protocol, and then expose the ID Token to an attacker. We can not prevent people from writing their own insecure protocols.

Cosigner (Optional)

We propose a security enhancement that adds a cosigner to remove the gitlab OP as a single point of compromise. It also provides additional protection against replay attacks, however we consider this protection to be unnecessary given the other safeguards above.

The aud claim can be postfixed with a commitment to a public key, cpk, whose corresponding signing key, csk, is configured in gitlab's secret manager. This signing key can be used by the workload to sign the CIC in addition to the GQ signature.

Since this signing key's public key is committed to in the aud claim in the ID Token, an ID Token, even with an aud claim prefixed with "OPENPUBKEY-PKTOKEN-" and the RSA signature, is no longer sufficient to attest to the CIC.

payload: {
  "iss": "gitlab.example.com",
  "aud": "OPENPUBKEY-PKTOKEN-CPK"+SHA3_256(cpk),
  ...
}
signatures: [
  {"protected": {"typ": "JWT", "alg": "GQ256", "kid": Base64(origProtected),
    "cic": SHA3_256(upk=alice-pubkey, alg=ES256, rz=crypto.Rand())},
  "signature": <GQSIGN(RSA-sig, (payload, signatures[0].protected))>
  },
  {"protected": {"upk": alice-pubkey, "alg": "EC256", "rz": <rz>},
  "signature": <SIGN(alice-signkey, (payload, signatures[1].protected))>
  },
  {"protected": {"cospk": cosigner-pubkey, "alg": "EC256",
    "cic": SHA3_256(upk=alice-pubkey, alg=ES256, rz=crypto.Rand())},
  "signature": <SIGN(cosigner-signkey, (payload, signatures[2].protected))>
  },
]

Any OpenPubkey verifier configured to expect this audience claim can check this cosigner signature. This removes the gitlab OP as a single point of compromise. This approach can be adapted to JWKS URIs by changing the aud claim to be "OPENPUBKEY-PKTOKEN-CPK"+Cosigner-JWKS-URI.

Remove OidcAuth and use Auth

The original method to generate a PK Token is OidcAuth. When adding the Cosigner, I added a second method called Auth which performs OidcAuth and then if a cosigner is configured, runs the cosigner protocol. The Auth method is intended as a replacement for OidcAuth but OidcAuth is being kept around for backwards compatibility until we move it out of the docker code.

  • PRs merged moving all downstream code to Auth
  • PR to Remove OidcAuth from OpenPubkey codebase by changing it to oidcauth so it is no longer exposed. This PR should move unittests on OidcAuth to oidcauth and add coverage to Auth.

GHA Runner VM Entropy for Key Generation

Problem

The Sign() method generates ephemeral keys for signing attestations shown here:
https://github.com/openpubkey/signed-attestation/blob/f64048341ecd651ed894f094ab70213d20ee6bbb/opk.go#L30-L33

Using GenKeyPair() in utils:

openpubkey/util/files.go

Lines 100 to 109 in f662c2c

func GenKeyPair(alg jwa.KeyAlgorithm) (crypto.Signer, error) {
	switch alg {
	case jwa.ES256:
		return ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	case jwa.RS256: // RSASSA-PKCS-v1.5 using SHA-256
		return rsa.GenerateKey(rand.Reader, 2048)
	default:
		return nil, fmt.Errorf("unsupported algorithm: %s", alg.String())
	}
}

Depending upon the system on which this is running, the entropy available (and the sources it is derived from) may not be of very high quality. The quality of the entropy source is important in determining how secure the resulting signatures are, since it determines whether we are generating unpredictable private keys.

A planned use-case is running this as a GitHub Action (GHA). GHAs run on Virtual Machines. Typically, VMs are poor sources of entropy and are very deterministic unless effort is made to provision them with a quality entropy source.

There are several mitigating factors for key compromise built into the solution. However, identifying the entropy source for GHA runners, and improving it if needed, is another mitigating factor for preventing key compromise.

Entropy source for GHA runners

It appears that the ubuntu VM images have been updated to include haveged actions/runner-images#672. Unsure if that is the latest change w.r.t. entropy sources in GHA VMs.

haveged is a daemon — derived from the HAVEGE algorithm — designed to help (the kernel) gather entropy from more sources (than the kernel itself does). It is common to install it on physical hosts to gather entropy faster from their entropy-rich environment. However, it is not recommended to use it in virtual machines, since the very reasons that make them prone to entropy starvation will hinder, if not defeat, HAVEGE (or the quality — randomness — of the entropy it will gather).

Testing the VM configuration

A lot of sources point to this writeup about HAVEGE (the algorithm that haveged is based on). It discusses how it uses rdtsc (the x86 processor time stamp counter, with sub-nanosecond precision) as a source of entropy.

They highlight three possible VM configurations for how rdtsc is implemented:

For a virtual machine system, there are three choices with regards to this instruction:

  1. Let the instruction go to the hardware directly.
  2. Trap the instruction and emulate it.
  3. Disable the instruction altogether.

I created a GHA action to run the example code to gain some insight into how the GHA VMs might be configured for rdtsc, and good news! It appears that the rdtsc instruction is configured to go directly to the host hardware, which is the best possible option of the three configurations.

See the action run here https://github.com/mrjoelkamp/gha-vm-rdtsc-test/actions/runs/6720417160/job/18264356488
Looking for a value between 10 - 100.
result:

average : 34.859

This result indicates that the GHA VMs are configured to use the host's hardware rdtsc, as opposed to an emulated or unavailable one. If the test had failed (and the entropy was emulated or hardcoded), that could be a major issue.

What it doesn't answer is whether it is appropriate to use the GHA VM's entropy source (haveged) for RNG when generating cryptographic key material. We also assume that the GHA VM will continue to provide an environment in which we can securely generate key material and perform cryptographic operations.

Solution

In addition to the current Go-based KDF and crypto library, I recommend that OpenPubkey support cloud provider KMS solutions for cryptographic functions (AWS, GCP, Azure, Hashicorp, etc.). A lot of these systems are backed by Hardware Security Modules (HSMs) with high-entropy RNG.

For example:

AWS KMS key generation is performed on the AWS KMS HSMs. The HSMs implement a hybrid random number generator that uses the NIST SP800-90A Deterministic Random Bit Generator (DRBG) CTR_DRBG using AES-256. It is seeded with a nondeterministic random bit generator with 384-bits of entropy and updated with additional entropy to provide prediction resistance on every call for cryptographic material.

There are many other added benefits to supporting external KMS solutions as well:

  • Controlled execution environment that is optimized for cryptographic operations
  • Runtime access to private keys not accessible by software interface

Mitigating Factors

Ephemeral Key Lifetime

If the system only trusts the ephemeral keys to sign data within a short time frame, the compromise of an ephemeral private key may be of little significance. However, depending on how deterministic the RNG is, an attacker could potentially determine the private key ahead of ephemeral key generation with significant statistical probability.

Transparency Log

Including a transparency log to store signatures would let publishers monitor for an actor forging a signature using a compromised key outside of their control. However, this countermeasure is reactive as opposed to preventative, and requires active monitoring and alerting.

Layered Signing

The OPK One Time Signature could also mitigate this issue by providing a hash of the image in the payload that is signed by the Identity Provider. If an image was modified and signed by someone who had compromised a user signing key, the hash comparison would fail for the modified image. This requires that the verification client implement this check.

Open browser from Putty in Win10/Chrome

The "zli login --opk" command does not automatically open the browser from Putty in Win10/Chrome.
Returning:
"Login required, opening browser"

Here the specific URL should be printed, to allow it to be manually copied into a browser.

Thanks.

Check RSA Public Key verifies RSA signature when generating GQ Signature

Currently we don't check that the RSA Public Key is correct when generating a GQ signature. If you accidentally supply the wrong public key, you get the hard-to-diagnose error message "error creating GQ signature: input overflows the modulus."

Benefits:

  • We can improve error handling by simply checking that the RSA Public Key is correct and returning a more meaningful error message.
  • Sometimes GQ will succeed in generating a signature for the wrong RSA Public Key. It should not; this is a bug.

This behavior was noticed while writing PR #122, but the bug was not introduced in that PR.

Replication

95% of the time this will trigger the bug and throw the error: "input overflows the modulus"

for i := 0; i < 100; i++ {
	oidcPrivKey1, err := rsa.GenerateKey(rand.Reader, 2048)
	require.NoError(t, err)

	oidcPrivKey2, err := rsa.GenerateKey(rand.Reader, 2048)
	require.NoError(t, err)

	require.NotEqual(t, oidcPrivKey1, oidcPrivKey2)

	idToken, err := createOIDCToken(oidcPrivKey1, "test")
	require.NoError(t, err)

	signerVerifier, err := NewSignerVerifier(&oidcPrivKey2.PublicKey, 256)
	require.NoError(t, err)

	gq, err := signerVerifier.SignJWT(idToken)
	require.NoError(t, err)
	require.Nil(t, gq)
}

MFA Cosigner and CSRF

Currently the proposed MFA Cosigner example does not use CSRF cookies. I'm sure we need them, but we should think very carefully about this threat. I'm creating this ticket to document our thoughts on it.

See #54

Management of sensitive data in memory

Derived from #11

Lifecycle and protection of secrets: Ensuring that sensitive cryptographic secrets are zeroed/deleted when no longer in use and that they do not persist after they are needed. For instance with GQ signatures, we should delete the RSA signature immediately after the GQ signature is constructed. We should carefully consider the lifecycle of our ephemeral keypairs; currently we just write them to disk.

Potential ways to address this:

  1. Avoid using strings and instead use byte slices whenever handling sensitive data
    a. Strings are immutable in Go and difficult to wipe
  2. Use the memguard package with LockedBuffers or Enclave objects to handle sensitive data and use the built in data scrubbing methods
    a. This prevents sensitive data from being swapped to disk or leaked through core dumps and ensures it is cleaned up properly
  3. Minimize the lifetime of sensitive data in memory by scrubbing immediately after the data is no longer needed (a minimal sketch follows below)
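
A minimal sketch of option 3; note it can only zero this particular copy of the data, which is why options 1 and 2 also matter:

// wipe zeroes a sensitive byte slice in place. It cannot reach copies
// already made elsewhere (e.g., by library functions).
func wipe(secret []byte) {
	for i := range secret {
		secret[i] = 0
	}
}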

Document the MFA Cosigner protocol

PR #68 adds the OpenPubkey MFA Cosigner functionality. Since we now have an implementation, it makes sense to provide a more detailed protocol description and standard.

  • Create an RFC style protocol description for the MFA Cosigner
  • Update the OpenPubkey paper with this protocol description

PK Token Compact Representation

RFC-7515: JWS (JSON Web Signatures) specifies two serialization formats for JWS: JSON and Compact.

Two closely related serializations for JWSs are defined.  The JWS
Compact Serialization is a compact, URL-safe representation intended
for space-constrained environments such as HTTP Authorization headers
and URI query parameters.  The JWS JSON Serialization represents JWSs
as JSON objects and enables multiple signatures and/or MACs to be
applied to the same content.

RFC-7515: Section 7.1 defines the Compact Serialization format as:

7.1  JWS Compact Serialization

   The JWS Compact Serialization represents digitally signed or MACed
   content as a compact, URL-safe string.  This string is:

      BASE64URL(UTF8(JWS Protected Header)) || '.' ||
      BASE64URL(JWS Payload) || '.' ||
      BASE64URL(JWS Signature)

   Only one signature/MAC is supported by the JWS Compact Serialization
   and it provides no syntax to represent a JWS Unprotected Header
   value.

Note that this means the JWS Compact Serialization only supports one signature. This prevents us from using it as-is for PK Tokens as our PK Tokens are JWS with more than one signature.

While the JWS JSON Serialization supports multiple signatures, and thus can easily support our PK Tokens, the RFC says: This representation is neither optimized for compactness nor URL-safe.

It would be useful for OpenPubkey to have a compact, URL-safe serialization. I am proposing the following compact serialization format for PK Tokens.

JWS Compact Serialization for multiple signatures

(JWS Protected Header1,Signature1) through (JWS Protected HeaderN,SignatureN)

      BASE64URL(JWS Payload) || ':' ||
      BASE64URL(UTF8(JWS Protected Header1)) || ':' ||
      BASE64URL(JWS Signature1)  || ':' ||
      BASE64URL(UTF8(JWS Protected Header2)) || ':' ||
      BASE64URL(JWS Signature2)  || ':' ||
      ...
      BASE64URL(UTF8(JWS Protected HeaderN)) || ':' ||
      BASE64URL(JWS SignatureN) 

In golang:

func FromCompact(pktCom []byte) (*PKToken, error) {
	splitCom := bytes.Split(pktCom, []byte(":"))
	if len(splitCom) == 5 {
		return &PKToken{
			Payload: splitCom[0],
			OpPH:    splitCom[1],
			OpSig:   splitCom[2],
			CicPH:   splitCom[3],
			CicSig:  splitCom[4],
			CosPH:   nil,
			CosSig:  nil,
		}, nil
	} else if len(splitCom) == 7 {
		return &PKToken{
			Payload: splitCom[0],
			OpPH:    splitCom[1],
			OpSig:   splitCom[2],
			CicPH:   splitCom[3],
			CicSig:  splitCom[4],
			CosPH:   splitCom[5],
			CosSig:  splitCom[6],
		}, nil
	} else {
		return nil, fmt.Errorf("a valid PK Token should have exactly two or three (protected header, signature) pairs, but has %d segments", len(splitCom))
	}
}

func (p *PKToken) ToCompact() []byte {
	if p.Payload == nil {
		panic(fmt.Errorf("Payload can not be nil"))
	}

	var buf bytes.Buffer
	buf.WriteString(string(p.Payload))
	buf.WriteByte(':')
	buf.WriteString(string(p.OpPH))
	buf.WriteByte(':')
	buf.WriteString(string(p.OpSig))
	buf.WriteByte(':')
	buf.WriteString(string(p.CicPH))
	buf.WriteByte(':')
	buf.WriteString(string(p.CicSig))

	if p.CosPH != nil {
		buf.WriteByte(':')
		buf.WriteString(string(p.CosPH))
		buf.WriteByte(':')
		buf.WriteString(string(p.CosSig))
	}

	pktCom := buf.Bytes()
	return pktCom
}

on linux with clock in UTC executes as future time

cd examples/simple
go build
./simple

or mfa
or google

login via google oidc returns

failed to exchange token: issuedAt of token is in the future: \
    (iat: 2024-02-22 14:40:31 -0500 EST, now with offset: 2024-02-22 19:40:22 +0000 UTC)

Error using google login example

As mentioned by @lgmugnier on the community call yesterday, there's an issue with the Google login example. The login process in the browser seems to work correctly but we get an error on the library side:

❯ go run example.go login
INFO[0000] listening on http://localhost:3000/          
INFO[0000] press ctrl+c to stop                         
Error logging in: error verifying PK Token: failed to verify signature from OIDC provider: error verifying OP signature on PK Token (ID Token invalid): invalid signature (signature verification failed: square/go-jose: error in cryptographic primitive)

Update Zitadel dependency to latest version

We are currently on zitadel/oidc/v2, which has not been updated in some time. It appears that if we want updates and patches we should move to zitadel/oidc/v3.

We only use Zitadel in two places so the update should be fairly simple:

https://pkg.go.dev/github.com/zitadel/oidc/v2 Last updated Nov 16, 2023
https://pkg.go.dev/github.com/zitadel/oidc/v3 Last updated Apr 11, 2024 (the day this ticket was created)

Zitadel provides an upgrade guide for upgrading from v2 to v3.

Run unit tests against ISO/IEC 14888-2:2008 GQ test vectors

As discussed in #67, we can further validate our GQ implementation by adding test cases against the specific numerical values in the ISO standard.

This will require some refactoring of the existing code because the ISO tests use SHA-1/PSS as a format mechanism, whereas we use SHAKE-256/PKCSv15.
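
A skeleton of what such a table-driven test might look like; the vector values are placeholders to be filled in from the standard, and the SHA-1/PSS formatting mechanism would need to be made pluggable as noted above:

package gq

import "testing"

// TestISOVectors sketches a table-driven test against ISO/IEC 14888-2:2008
// GQ test vectors. All vector values below are placeholders.
func TestISOVectors(t *testing.T) {
	vectors := []struct {
		name               string
		modulus, publicExp string // hex, from the standard
		message, signature string // hex, from the standard
	}{
		// {"14888-2 Annex ...", "...", "...", "...", "..."},
	}
	for _, v := range vectors {
		t.Run(v.name, func(t *testing.T) {
			t.Skip("fill in ISO vector values and the SHA-1/PSS format mechanism")
		})
	}
}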

Remove pkt.Compact unittest

I removed pkt.Compact in #110 so it is reasonable to remove the unittest for pkt.Compact. I want to make sure we aren't losing any coverage by removing this unittest.

Apache 2.0 LICENSE and source header

We have a small amount of in-repo "paperwork" to finish up.

license headers

We have not created license headers in each source file as specified at https://www.apache.org/licenses/LICENSE-2.0:

Copyright 2024 OpenPubkey

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

SPDX-License-Identifier: Apache-2.0

NOTICE

We do not have a NOTICE file. Perhaps we don't want one and I believe we can add one later.

Resolving

This ticket can be resolved with a PR that addresses the following:

- [ ] Fill out license boilerplate

  • Create license headers in each source file

Add Allowlist match function to OP providers

We currently (as of PR #86) do not handle the case where the same provider (OP) is specified twice in an allowlist with different audiences, that is, providers sharing the same issuer but having separate audience claims. Instead, it will match on the first provider with the correct issuer and then verify using that provider.

The reason we check for matches rather than just verifying is to avoid blindly downloading a JWKS URI based on the issuer (iss) claim of an ID Token. Instead we check that the issuer is correct and then run verification.

The ideal way to solve this problem is to add a function to the (OP) Provider interface called something like Matches(idt IDToken) bool and let each provider determine how it decides whether an ID Token belongs to it.
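
A hypothetical sketch of the proposed interface addition; IDToken here stands in for however the library represents a parsed ID Token:

package providers

// IDToken stands in for the library's parsed ID Token representation.
type IDToken struct {
	Issuer   string
	Audience string
}

type Provider interface {
	// Matches reports whether this provider configuration should be used
	// to verify the given ID Token, letting each provider apply its own
	// issuer and audience rules.
	Matches(idt IDToken) bool
}

// googleProvider matches only when both issuer and configured client ID
// agree, so two allowlist entries with the same issuer but different
// audiences no longer collide.
type googleProvider struct{ clientID string }

func (g googleProvider) Matches(idt IDToken) bool {
	return idt.Issuer == "https://accounts.google.com" && idt.Audience == g.clientID
}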

GQ package uses math/big

The GQ package uses math/big for its implementation which is not safe for cryptographic use:

Note that methods may leak the Int's value through timing side-channels. Because of this and because of the scope and complexity of the implementation, Int is not well-suited to implement cryptographic operations. The standard library avoids exposing non-trivial Int methods to attacker-controlled inputs and the determination of whether a bug in math/big is considered a security vulnerability might depend on the impact on the standard library.

I had a look at how the crypto/rsa package deals with this and found that there's an internal bigmod package which is used instead of math/big. Interestingly, this has only been the case since Go 1.20, as detailed on Filippo Valsorda's blog. This bigmod package would be the ideal package to use instead of math/big, but as it's internal we cannot import it. Filippo maintains a re-exported copy of the internal library at https://github.com/FiloSottile/bigmod so that could be the easiest thing to use.

Nonce isn't checked on validation

We don't actually check that the nonce is a hash of the CIC when verifying a PK Token. The check currently reads the nonce from the PK Token and then checks that the PK Token contains the same nonce, which of course will always be true. We should instead construct it from the CIC.

We have a function to compute the nonce hash already, which is used to get the nonce when signing, but it suffers from the same JSON key order issue mentioned at the end of #15. I haven't checked, but I wouldn't be surprised if the nonces we're generating aren't actually hashes of the CIC protected headers. We should instead hash the exact bytes that are going into the JWS. This might mean that we need to do some more manual signing of the JWS rather than leaning on the jws library.
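
A sketch of the proposed fix, hashing the exact CIC protected-header bytes carried in the JWS rather than re-serializing JSON; the hashing and encoding choices here follow the README examples and may differ from the final fix:

package main

import (
	"crypto/subtle"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// verifyNonce recomputes the commitment from the raw CIC protected-header
// bytes as they appear in the JWS (avoiding any JSON round-trip) and
// compares it to the ID Token's nonce claim.
func verifyNonce(nonceClaim string, cicProtectedRaw []byte) error {
	h := sha3.Sum256(cicProtectedRaw)
	expected := base64.RawURLEncoding.EncodeToString(h[:])
	if subtle.ConstantTimeCompare([]byte(expected), []byte(nonceClaim)) != 1 {
		return fmt.Errorf("nonce does not commit to the CIC")
	}
	return nil
}

func main() {
	fmt.Println(verifyNonce("not-a-real-nonce", []byte(`{"alg":"ES256",...}`)))
}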

Proposal: Remove memguard

I'd like to propose that we remove memguard from OpenPubkey.

Some background to why we added memguard can be found in the original issues #11 and #47. The PR that added it is #48. I added a comment to that PR with some thoughts which I can expand on here.

I think there are three issues with the implementation:

Lack of complete protection

Memguard does two things for us:

  1. Provides a function to wipe byte slices, which we can use to remove secrets from memory.
  2. Provides a LockedBuffer type which prevents the contents in memory being swapped to disk.

I think the wiping byte slices is a reasonable idea, but the LockedBuffer is more problematic. If we need to pass the underlying bytes from a LockedBuffer to a library function (in the stdlib or a dependency) and that function copies the bytes, we lose the protection that the LockedBuffer is giving us. This kind of thing is very difficult to keep track of - any upgrade to a dependency or the stdlib could introduce some copying without us noticing unless we're auditing the code. For the stdlib this is even more difficult because the version of Go that an application using OpenPubkey is built with determines the version of the stdlib.

Some places where this happens are documented in #51, #52, and #53. This may not be exhaustive!

I think at the end of the day, Go isn't really suited to this sort of manual memory management.

Leaky abstraction

The way memguard needs to be used means that the OIDC client interface needs to know about memguard, which is slightly cumbersome for implementers.

This isn't the end of the world, and I do think that this added complexity of the implementation could be worth it if the result was totally bulletproof, but as it doesn't fully prevent secrets being swapped etc, I don't think the juice is worth the squeeze.

Issues debugging

I've also run into issues debugging the code with VS Code, which I think is down to awnumar/memguard#150.

Other things to note

This would be a breaking change as memguard forms part of the public-facing interface as noted above. We should make a decision on whether to do this before we release v1.0.

/cc @mrjoelkamp @EthanHeilman

GQ signature support

Currently, an OpenPubkey token contains the original signature from the OIDC provider. This arguably means that it isn't safe to share the OpenPubkey token, as the token could be used to impersonate the subject to a service that doesn't check the audience claim, or to trick an OpenPubkey client to issue a token for the subject.

We can create a GQ signature to prove that we have the original OIDC provider's signature. This can be verified using the original OIDC signing payload (header || '.' || payload) and the OIDC provider's public key. This works because GQ signature private keys are equivalent to RSA signatures.

The GQ signature can be published in place of the OIDC provider's signature in an OpenPubkey token.
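As a rough sketch of the mechanics, assuming a SignerVerifier API along the lines of the gq package's NewSignerVerifier and SignJWT (treat the exact names and signatures as illustrative, and check the current package before relying on them):

import (
	"crypto/rsa"

	"github.com/openpubkey/openpubkey/gq"
)

// gqSignIDToken (hypothetical helper) takes the compact-serialized ID Token
// and the OP's RSA public key, and returns the same token with the RSA
// signature replaced by a GQ proof of knowledge of that signature.
func gqSignIDToken(idToken []byte, opKey *rsa.PublicKey) ([]byte, error) {
	sv, err := gq.NewSignerVerifier(opKey, 256) // 256-bit security parameter
	if err != nil {
		return nil, err
	}
	// The GQ token verifies against the OP's public key and the original
	// signing payload, without ever exposing the RSA signature itself.
	return sv.SignJWT(idToken)
}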

Integration Tests

This would add a full end-to-end integration test of OpenPubkey using an OP we spin up ourselves.

sensitive data in `lestrrat-go/jwx` package

Identified in #48

The pktoken package in pktoken.go uses the "github.com/lestrrat-go/jwx/v2/jws" package as introduced in #45.

The OIDC provider signature is stored in the PKToken struct in the Op field as a jws.Signature type:

type PKToken struct {
	raw     []byte     // the original, raw representation of the object
	Payload []byte     // decoded payload
	Op      *Signature // Provider Signature
	Cic     *Signature // Client Signature
	Cos     *Signature // Cosigner Signature
}

The Signature type stores the byte slice containing the signature data and does not expose it directly, which makes it impossible to implement LockedBuffer at this level. But we can wipe the value from memory using the memguard.WipeBytes() method like so:

// wipe OIDC signature and replace with GQ signature
memguard.WipeBytes(pkt.Op.Signature())
pkt.AddSignature(gqToken, pktoken.Gq)

However, this won't provide the same data-safety features we would get from a LockedBuffer. We could gain a bit more control over the memory management by reverting PK Token signatures back to []byte (and refactoring them to LockedBuffer), but a discussion around that should occur before proceeding.

OIDC ID Token refresh in client

When available, the OpenPubkey client should support using a refresh token to request a refreshed ID Token. This refreshed ID Token will be stored in an unprotected header of the PK Token.

  1. If a refresh token is returned from the OP, the client must save this refresh token.
  2. If refresh is called on the client, the client must make a refresh call to the OP and save the refreshed ID Token.
  3. If a refreshed ID Token exists, the client must include it in the PK Token as an unprotected header value in the OP signature.
  4. A PK Token verifier can then be configured to require a refreshed ID Token in the PK Token.

This feature does not automatically refresh the ID Token; it just gives an implementer the ability to request a refresh. A rough sketch of the flow follows.
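None of the identifiers below (Refresh, RefreshTokens, SetUnprotectedHeader, the refreshToken field, the refreshed_id_token header name) exist in the current client; they only illustrate the four steps above:

// Hypothetical sketch of the refresh flow; every name here is made up.
func (c *OpkClient) Refresh(ctx context.Context) error {
	if c.refreshToken == nil { // step 1: saved when Auth first ran
		return errors.New("no refresh token was returned by the OP")
	}
	// Step 2: exchange the refresh token for a fresh ID Token at the OP.
	refreshedIDToken, err := c.op.RefreshTokens(ctx, c.refreshToken)
	if err != nil {
		return err
	}
	// Step 3: attach the refreshed ID Token as an unprotected header on
	// the OP signature of the PK Token. Step 4 happens verifier-side.
	return c.pkt.SetUnprotectedHeader("refreshed_id_token", refreshedIDToken)
}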

Add GitLab CI signing example

Given the broad use of GitLab in environments where an OIDC provider acting as a CA has significant benefits, it would be helpful to have a reference example.

Thoughts on how to handle the OP PublicKey function on OpenIdProvider

Currently the OpenIdProvider has the following PublicKey function:
PublicKey(ctx context.Context, headers jws.Headers) (crypto.PublicKey, error)

This works for now but doesn't provide the extensibility and features the project will need in the future. More specifically:

The return value doesn't provide everything we need

  • It returns crypto.PublicKey without the full JWK. There are fields in the JWK the caller may wish to inspect; if we ever want to support non-RSA OP keys we will want the alg at minimum.
  • The return does not provide information about where the PublicKey came from. Consider using this function with historic OP JWKS logs: you may wish to know when the JWKS was saved, whether it is currently on the repo, whether a JWKS oracle attested to it, etc.

It takes jws.Headers as the single selection criterion, where this header is the protected or public header of the ID Token. This is done to allow the function to look up keys by both kid and jtk. It also allows PublicKey to detect that a protected header is a GQ protected header, in which case it needs to decode the kid to find the original header and original kid.

The downsides of taking a jws.Headers:

  • It exposes the jwx library's jws.Headers object. This makes switching libraries difficult and requires anyone implementing this interface to use jwx. We are slowly moving away from exposing jwx objects externally.
  • While it is extensible (you can always add keys to the jws.Headers), it is not clear that you can or should do that. I would prefer something that makes clear exactly which fields can be used to filter PublicKeys on a provider. See the sketch below.
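One possible direction, sketched purely as a proposal (every name below is made up for illustration; nothing here exists in the codebase):

package providers // hypothetical home for this sketch

import (
	"context"
	"crypto"
	"encoding/json"
	"time"
)

// PublicKeyRecord carries the full JWK plus provenance, addressing both
// return-value gaps noted above.
type PublicKeyRecord struct {
	PublicKey crypto.PublicKey // parsed key, as returned today
	JWK       json.RawMessage  // full JWK so callers can inspect alg, use, etc.
	Source    string           // where the key came from: live JWKS, log, oracle...
	FetchedAt time.Time        // when the JWKS was retrieved or logged
}

// PublicKeyFilter makes explicit exactly which fields a provider can
// filter keys on, replacing the opaque jws.Headers argument.
type PublicKeyFilter struct {
	KeyID      string // kid from the (possibly GQ-decoded) protected header
	Thumbprint string // kid-independent lookup (see the kid discussion below)
}

type OpenIdProvider interface {
	PublicKeyRecord(ctx context.Context, filter PublicKeyFilter) (*PublicKeyRecord, error)
}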

How do we deal with missing/non-unique `kid` on JWKS endpoint?

In implementing a ledger for OIDC public keys, we've found that according to the spec, the kid is only mandatory if there is more than one key on the JWKS endpoint, and it only needs to be unique within each HTTP response from that endpoint. kid is obviously super important when trying to look up the public key given an OPK. I was wondering if adding a hash of the OP public key to the OPK header would solve this? Any other ideas?
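On the hash idea: RFC 7638 JWK thumbprints are the standard way to derive a stable, kid-independent identifier from a public key, and the lestrrat-go/jwx library already used by the project can compute them. A minimal sketch (the key here is generated locally as a stand-in for one fetched from the OP's JWKS endpoint):

package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"encoding/base64"
	"fmt"

	"github.com/lestrrat-go/jwx/v2/jwk"
)

func main() {
	// Stand-in for an OP key fetched from the JWKS endpoint.
	priv, _ := rsa.GenerateKey(rand.Reader, 2048)
	key, err := jwk.FromRaw(priv.Public())
	if err != nil {
		panic(err)
	}
	// RFC 7638 thumbprint: a deterministic hash over the key's required
	// members, usable as a kid-independent identifier in the OPK header.
	tp, err := key.Thumbprint(crypto.SHA256)
	if err != nil {
		panic(err)
	}
	fmt.Println(base64.RawURLEncoding.EncodeToString(tp))
}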

OpenPubkey single use signatures support

DOI (Docker Official Images) signing using OpenPubkey extends OpenPubkey with the notion of single-use signatures. This provides the security property that even if the signing key is compromised, it cannot be used or abused to sign another image.

To accomplish this, Docker exploits the flexibility of the CIC and specifies an extra claim in the CIC, 'att', which commits to the hash of the data to be signed.

func (sv *opkSignerVerifier) Sign(ctx context.Context, data []byte) ([]byte, error) {
	hash := s256(data) // SHA-256
	hashHex := hex.EncodeToString(hash)

	tokSigner, err := pktoken.NewSigner("", "ES256", true, map[string]any{"att": hashHex})

https://github.com/openpubkey/signed-attestation/blob/main/opk.go#L24

This results in a CIC of the form {'rz': crypto.random(), 'upk': <publickey>, 'alg': 'ES256', 'att': SHA-256(dockerImage)}

This single-use signature feature is generic. We should consider whether we want to support it as part of OpenPubkey itself.

Support for adding GQ signatures to OpenPubkey tokens

As OpenPubkey tokens are JWS, we need a way of identifying a signature as being a GQ signature. The alg will still be set to RS256 because the protected headers are fixed, so we will need to add an unprotected header to flag this.
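In JWS JSON serialization terms, that could look like the following sketch (the unprotected header name sig_type and value GQ are hypothetical choices, not a settled format):

package main

import (
	"encoding/json"
	"fmt"
)

type jwsSignature struct {
	Protected string         `json:"protected"` // unchanged; alg stays RS256
	Header    map[string]any `json:"header"`    // unprotected, not signed over
	Signature string         `json:"signature"` // base64url GQ signature
}

func main() {
	sig := jwsSignature{
		Protected: "eyJhbGciOiJSUzI1NiIsImtpZCI6Ii4uLiJ9",
		Header:    map[string]any{"sig_type": "GQ"}, // hypothetical flag
		Signature: "...base64url GQ signature...",
	}
	out, _ := json.MarshalIndent(sig, "", "  ")
	fmt.Println(string(out))
}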

sensitive data in `math/big` package

Identified in #48

GQ signing uses the math/big library for big integer math, specifically the modular inverse function:

openpubkey/gq/sign.go, lines 95 to 104 at commit 69500f1:

func (sv *signerVerifier) modInverse(b *memguard.LockedBuffer) *memguard.LockedBuffer {
	x := new(big.Int).SetBytes(b.Bytes())
	x.ModInverse(x, sv.n)
	// need to allocate memory for fixed length slice using FillBytes
	ret := make([]byte, len(b.Bytes()))
	defer b.Destroy()
	return memguard.NewBufferFromBytes(x.FillBytes(ret))
}

Sensitive data (the OIDC provider signature) is copied into memory inside the big.Int as part of the SetBytes() method. Unfortunately, the big.Int Bytes() method also copies the underlying data to a new byte slice, so there is no way to wipe the data from this object once it has been copied in.
