
keripy's Introduction

Python Implementation of the KERI Core Libraries

Project Name: keripy

PyPI: https://pypi.org/project/keri/

Installation

Local installation - build from source

Once all dependencies are installed and working, run:

$ python3 -m pip install -e ./

Then you can run

$ kli version

to get a version string similar to the following:

1.0.0

Local installation - Docker build

Run make build-keri to build your docker image.

Then run docker run --pull=never -it --entrypoint /bin/bash weboftrust/keri:1.1.10 and you can run kli version from within the running container to play with KERIpy.

Make sure the image tag matches the version used in the Makefile. We use --pull=never to ensure that Docker does not implicitly pull a remote image and instead relies on the local image tagged during make build-keri.

Dependencies

Binaries

  • python 3.12.1+
  • libsodium 1.0.18+

python packages

  • lmdb 0.98+
  • pysodium 0.7.5+
  • blake3 0.1.5+
  • msgpack 1.0.0+
  • simplejson 3.17.0+
  • cbor2 5.1.0+

$ pip3 install -U lmdb pysodium blake3 msgpack simplejson cbor2

or separately

$ pip3 install -U lmdb
$ pip3 install -U pysodium
$ pip3 install -U blake3
$ pip3 install -U msgpack
$ pip3 install -U simplejson
$ pip3 install -U cbor2

Development

Setup

  • Ensure Python 3.12.1 is present along with venv and dev header files;
  • Setup virtual environment: python3 -m venv keripy
  • Activate virtual environment: source keripy/bin/activate
  • Setup dependencies: pip install -r requirements.txt

Testing

  • Install pytest: pip install pytest

  • Run the test suites:

pytest tests/ --ignore tests/demo/
pytest tests/demo/

Building Documentation in /docs

  • Install sphinx:
    • $ pip install sphinx
    • $ pip install myst-parser
  • Build with Sphinx in /docs:
    • $ make html

keripy's People

Contributors

2byrds, alexandrei98, arsh-sandhu, blelump, clackwork, daidoji, dc7, dhh1128, haroldcarr, henkvancann, jasoncolburne, kentbull, lenkan, luffa99, m00sey, nkongsuwan, ntelfer, pfeairheller, pschlarb, psteniusubi, rc-provenant, rodolfomiranda, rubelhassan, s-a-tanjim, smithsamuelm, sweidenbach, tsterker


keripy's Issues

KERI OOBI (Out-Of-Band Introduction)

KERI OOBI

The jump start of any vacuous discovery associated with a KERI AID requires an Out-Of-Band Introduction (OOBI) to associate a given URL with an AID. The principal reason for this requirement is that KERI AIDs are pseudonymous and completely independent of internet and DNS addressing infrastructure. Thus an IP address or URL could be considered a type of Out-Of-Band Infrastructure (OOBI) for KERI. In this context an introduction is an association between a KERI AID and a URL that may include either an explicit IP address or a DNS name for its netloc. We call this a KERI OOBI (Out-Of-Band Introduction); it is a special case of Out-Of-Band Infrastructure (OOBI) with a shared acronym. For the sake of clarity, unless otherwise qualified, OOBI is used to mean this special case of an introduction and not the general case.

Moreover, because IP infrastructure is not trusted by KERI, a KERI OOBI by itself is considered insecure with respect to KERI, and any OOBI must therefore be later proven and verified using a KERI BADA mechanism. The principal use case for an OOBI is to kick-start the discovery of a service endpoint for a given AID. To reiterate, the OOBI by itself is not sufficient for discovery because the OOBI itself is insecure. The OOBI merely jump-starts authenticated discovery.

The simplest form of a KERI OOBI is a message or attachment that contains both a KERI AID and a URL. The OOBI may contain other information such as the role of the service endpoint represented by the URL. An OOBI may also include a list of URLs, thus simultaneously making an introductory association between the AID and multiple URLs. This would be a multi-OOBI. In general we may refer to a multi-OOBI as a special case of an OOBI without making a named distinction. The OOBI message itself is not signed or otherwise authenticatable by KERI but may employ some other Out-Of-Band Authentication (OOBA) mechanism (non-KERI).

A recipient of an OOBI, however, may choose to authenticate the URL via KERI as an authorized endpoint by querying the URL provided in the OOBI for supporting BADA reply messages that are KERI authenticatable.

Example OOBIs

The OOBI is intentionally simplistic to enable very low byte count introductions such as a QR code or Data Matrix or the like.

An OOBI may be returned as the result of a GET request to an IETF RFC 5785 well-known URL. For example:

 /.well-known/keri/oobi/EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM

Where EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM is the AID,
and the result of the request is either the URL or a redirection to the URL, where the URL is something like:

https://example.com/witness/witmer
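
As an illustration only, such a well-known lookup could be exercised with a plain HTTP client. The following minimal sketch (not keripy code) assumes the example host and AID above and treats a redirect target, or otherwise the response body, as the introduced URL:

# Minimal sketch, not keripy API: resolve a well-known OOBI URL for an AID.
# The host and AID are the example values from above.
import requests

aid = "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM"
well_known = f"https://example.com/.well-known/keri/oobi/{aid}"

resp = requests.get(well_known, allow_redirects=True, timeout=5)
resp.raise_for_status()
# Either we were redirected to the introduced URL or the body contains it.
url = resp.url if resp.history else resp.text.strip()
print(url)  # e.g. https://example.com/witness/witmer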

Alternatively the OOBI could be expressed as a URL that includes the AID in its path. This would be a type of self-describing OOBI URL, such as:

https://example.com/oobi/EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM/witness

where EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM is the AID and witness is the role. The AID (EID) of the endpoint provider of that role would be discovered via the proof returned by querying the URL.

A more verbose version would also include the AID (EID) of the endpoint provider.

https://example.com/oobi/EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM/witness/BrHLayDN-mXKv62DAjFLX1_Y5yEUe0vA9YPe_ihiKYHE

Where BrHLayDN-mXKv62DAjFLX1_Y5yEUe0vA9YPe_ihiKYHE is the AID (EID) of the endpoint provider.
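
For illustration, the AID, role, and optional EID can be pulled out of such a self-describing OOBI URL with ordinary URL parsing. This is a sketch, not the keripy parser:

# Minimal sketch, not keripy code: split a self-describing OOBI URL into
# its AID, optional role, and optional EID path components.
from urllib.parse import urlparse

def parse_oobi_url(url):
    parts = urlparse(url).path.strip("/").split("/")
    # expected: ["oobi", AID] or ["oobi", AID, role] or ["oobi", AID, role, EID]
    if len(parts) < 2 or parts[0] != "oobi":
        raise ValueError(f"not a self-describing OOBI URL: {url}")
    aid = parts[1]
    role = parts[2] if len(parts) > 2 else None
    eid = parts[3] if len(parts) > 3 else None
    return aid, role, eid

print(parse_oobi_url(
    "https://example.com/oobi/EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM/witness"))
# ('EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM', 'witness', None)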

A more verbose expression for an OOBI would be a KERI reply message rpy that is unsigned. The route specifies that it is an OOBI so the recipient knows to apply OOBI processing logic to the message. A list of URLs is provided so that it may provide multiple introductions. For example:

{
          "v" : "KERI10JSON00011c_",
          "t" : "rpy",
          "d": "EZ-i0d8JZAoTNZH3ULaU6JR2nmwyvYAfSVPzhzS6b5CM",
          "dt": "2020-08-22T17:50:12.988921+00:00",
          "r" : "/oobi/witness",
          "a" :
          {
             "aid":  "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
             "urls":  ["http://example.com/watcher/watson", "http://example.com/witness/wilma"]
          }
}

A service endpoint location reply message could be repurposed as an OOBI by using a special route path that includes the AID being introduced and optionally the role of the service endpoint provider as follows:

{
          "v" : "KERI10JSON00011c_",
          "t" : "rpy",
          "d": "EZ-i0d8JZAoTNZH3ULaU6JR2nmwyvYAfSVPzhzS6b5CM",
          "dt": "2020-08-22T17:50:12.988921+00:00",
          "r" : "/oobi/EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM/watcher",
          "a" :
          {
             "eid": "BrHLayDN-mXKv62DAjFLX1_Y5yEUe0vA9YPe_ihiKYHE",
             "scheme": "http",
             "url":  "http://example.com/watcher/wilma"
          }
}

This more verbose approach includes the AID (EID) of the service endpoint provider, which may allow a shortcut to authenticating the service endpoint.

OOBI Forwarding

In every case an OOBI may result in a proof for a different URL than that provided in the OOBI itself. This allows OOBI forwarding so that introductions produced as hard copies, such as QR codes, do not necessarily become stale. The recipient of the OOBI may choose to accept that proof or not. Ultimately the recipient only treats URLs as valid endpoints when they are fully KERI authenticated. The worst case is that an OOBI may be part of a DDOS attack but not a service endpoint cache poisoning attack.

OOBI Initiated KERI Endpoint Authentication (IKEA)

Upon acceptance of an OOBI the recipient queries the provided URL for proof that the URL is an authorized endpoint for the given AID. The proof format may depend on the actual role of the endpoint. A current witness for an AID is designated in the current key state's latest establishment event in the AID's KEL. Therefore merely replying with the Key State or KEL may serve as proof for a witness introduced by an OOBI. Other roles are not part of key state (i.e. are not designated in KEL establishment events) and therefore must be authorized by another mechanism. This typically will be a signed /end/role/ reply message. So the query of the OOBI URL could return as proof an associated authorizing reply message. For example:

{
          "v" : "KERI10JSON00011c_",
          "t" : "rpy",
          "d": "EZ-i0d8JZAoTNZH3ULaU6JR2nmwyvYAfSVPzhzS6b5CM",
          "dt": "2020-08-22T17:50:12.988921+00:00",
          "r" : "/end/role/add",
          "a" :
          {
             "cid":  "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
             "role": "watcher",
             "eid": "BrHLayDN-mXKv62DAjFLX1_Y5yEUe0vA9YPe_ihiKYHE"
          }
}

OOBI with MFA

An OOBI may be augmented with one or more Out-Of-Band Authentications (OOBAs) to minimize the likelihood of a DDOS OOBI attack. A given recipient may require as a precondition to accepting an OOBI one or more OOBA mechanisms such as text messages, emails, etc that together provide some degree of non-KERI based security to the OOBI. Thus an OOBI could employ out-of-band (with respect to KERI) multi-factor-authentication (MFA) to preclude any OOBI based DDOS attacks on KERI.

KERI OOBI Use in Installation Configuration

One way to pre-configure a vacuous KERI installation is to provide OOBIs in a configuration file. The bootstrap process of the installation then queries the associated URLs to retrieve the KERI authentication proofs (BADA) that then are used to populate its database securely. This simplifies the configuration file.

An alternative would be to populate the configuration file with the KERI authentication proofs themselves, but if one already had the proofs one could simply pre-populate the database with them.

The main value of an OOBI is that it is compact and is not encumbered by authentication proofs but may be used to kick-start the process of authentication (proving).
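
As a rough illustration of such a configuration file, the sketch below writes a bootstrap config whose OOBIs would be resolved at first start-up. The oobis field name and the conf.json location are illustrative assumptions, not the actual keripy configuration schema:

# Hypothetical bootstrap config, for illustration only: the "oobis" field
# name and the "conf.json" location are assumptions, not keripy's schema.
import json

bootstrap_config = {
    "oobis": [
        "https://example.com/oobi/EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM/witness",
        "https://example.com/.well-known/keri/oobi/EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
    ],
}

with open("conf.json", "w") as f:
    json.dump(bootstrap_config, f, indent=2)
# At boot the installation would GET each OOBI URL and verify the returned
# BADA proofs before populating its database.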

`exp` message

New exp message for data anchored to a given KEL, instead of rpy.

Signing image data when saving for a contact

All data must be signed at rest, so we will hash every contact image we load and sign the hash. Then we will verify the signature on the hash of the data at load time.
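
A minimal sketch of the hash-then-sign idea, using the blake3 and pysodium packages already listed as dependencies; keripy itself would use its Signer/Verfer classes and the encrypted keystore rather than raw keys like this:

# Minimal sketch, not the keripy implementation: hash an image with blake3
# and sign/verify that hash with a raw Ed25519 key pair via pysodium.
import blake3
import pysodium

pk, sk = pysodium.crypto_sign_keypair()  # illustrative raw key pair only

def sign_image(image_bytes, sk):
    digest = blake3.blake3(image_bytes).digest()  # hash the image at rest
    return pysodium.crypto_sign_detached(digest, sk)

def verify_image(image_bytes, sig, pk):
    digest = blake3.blake3(image_bytes).digest()  # recompute hash at load time
    try:
        pysodium.crypto_sign_verify_detached(sig, digest, pk)
        return True
    except ValueError:
        return False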

Obtain facility

Encapsulate lookup of service endpoints as a facility similar to DNS lookup in the obtain library.

Delegated establishment event persistence

A local Habitat should save a delegated establishment event after signing it and before receiving the anchoring event from the delegating identifier. Currently the delegated inception event is held locally in escrow. This causes issues with communication and processing of the delegating identifier's anchor.

kli agent command says it starts TCP service but none is started

I started an agent with

$ kli agent start -I 

******* Starting agent listening: http/5623, tcp/5621 .******

and presumed that there was some TCP interface started.

But I confirmed with Wireshark and the command below that it does not start any TCP listener on that port, only the admin interface.

$ sudo lsof -i -P | grep -i python
Password:
Python    95584  xxxxxxx    7u  IPv4 0xbb3c6d80318c9993      0t0    TCP *:5623 (LISTEN)

Sphinx Compatible Doc Strings

Now that we are moving toward production, it's time to clean up the Python syntax and make the code docstrings compatible with the Sphinx (readthedocs) automatic code documentation.

For reference, hio uses Sphinx-compatible docstrings. Keripy does not everywhere; some places have been updated, others not. Many functions and methods have no docstring at all. There are other Pythonic issues to address and some consistency issues with naming conventions.

Also part of this effort would be to use Python type hints in call signatures. Eventually this would not only make the documentation cleaner but also enable us to use mypy to do type checking as part of unit testing.

  • keri.app.agenting
  • keri.app.apping
  • keri.app.configing
  • keri.app.delegating
  • keri.app.directing
  • keri.app.forwarding
  • keri.app.grouping
  • keri.app.habbing
  • keri.app.httping
  • keri.app.indirecting
  • keri.app.keeping
  • keri.app.kiwiing
  • keri.app.obtaining
  • keri.app.signing
  • keri.app.specing
  • keri.app.storing
  • keri.app.watching
  • keri.core.coring
  • keri.core.eventing
  • keri.core.parsing
  • keri.core.routing
  • keri.core.scheming
  • keri.db.basing
  • keri.db.dbing
  • keri.db.escrowing
  • keri.db.koming
  • keri.db.subing
  • keri.end.ending
  • keri.end.priming
  • keri.help.helping
  • keri.kering
  • keri.peer.exchanging
  • keri.vc.handling
  • keri.vc.proving
  • keri.vc.walleting
  • keri.vdr.eventing
  • keri.vdr.issuing
  • keri.vdr.registering
  • keri.vdr.verifying
  • keri.vdr.viring

Exchange Protocol State

Exchange Protocol State

Exchange protocols like Issuance Exchange Protocol (IXP) and Presentation Exchange Protocol (PXP) require state be tracked across individual instances of the protocol. SAIDs will be added to the exn messages in the exchange protocols and the SAID of the initiating message will be used in the reply route (rr) field of all subsequent messages to track a single instance of an exchange.

Outstanding question: Should the SAID of the initial message be used in all subsequent messages for one conversation or should the SAID of the prior exn message be used to create a hashed chain for a conversation?
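
A rough sketch of the proposed tracking, with made-up SAIDs and illustrative routes (not the final IXP/PXP message definitions): the initiating exn carries its own SAID in its d field, and every later message in the same exchange instance carries that SAID in its reply-route rr field.

# Illustrative only: SAID values and routes below are made up.
offer = {
    "v": "KERI10JSON00011c_",
    "t": "exn",
    "d": "EExampleSAIDofInitiatingExnMessage0000000000",  # SAID of this message
    "r": "/pxp/offer",                                     # hypothetical route
    "a": {"m": "offer payload"},
}

agree = {
    "v": "KERI10JSON00011c_",
    "t": "exn",
    "d": "EExampleSAIDofSecondExnMessage00000000000000",
    "rr": offer["d"],   # ties this message to the initiating exchange instance
    "r": "/pxp/agree",
    "a": {"m": "agree payload"},
}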

Replace single Matter value (not couple, triple, etc.) sub dbs in Baser with MatterSuber or MatterDupSuber instances

Currently Baser has several databases whose value is a single qb64-encoded value of a Matter subclass instance. These databases can be replaced with an instance of either MatterSuber or MatterDupSuber as appropriate. This should clean up the associated code, because the Suber instances handle the conversion to/from Matter automatically, and also simplify the logic for catching missing entries in the DB (None returned versus a memoryview).

Add witness list to database for each establishment event.

Fetch witness state at any point in the KEL not merely the latest (current) witness state

Add a method to Kevery .fetchWitnessState(pre, sn)
to calculate the witness state (list of witnesses) for a given KEL with prefix pre at sequence number sn.

The history of witness states is not saved in the KEL. This is because the witness list is updated via a delta that adds or removes witnesses. To calculate the witness state in the past, one must either start at the inception event and walk forward (replay), calculating the delta from rotation events until one reaches the desired SN, or start at the end (current state) and walk backwards, calculating deltas until one reaches the desired SN. If the desired event is near the end then walking back is more efficient; if it's near the beginning then walking forward is more efficient. Interaction (ixn) events may be skipped because they do not change the witness state. So the distance, in terms of walking forward or backward, is measured by the number of intervening rotation events from the start forward to SN or from the end backwards to SN.

One use case is received copies of events with additional witness signatures that are prior to the last establishment event for the current witness state. This happens after an event is accepted as first seen (otherwise it is escrowed) but another copy of the event or a receipt is received after the event is stale (another establishment event that changes the witness state has occurred). We have no way to attach those late-arriving witness signatures because currently we don't have a way to calculate a witness state that is not relative to the current witness state of the last establishment event.

This is needed when the desired SN does not correspond to the latest witness state. Because events only have witness deltas, one has to start at inception and walk forward replaying all witness changes, or walk backward from the latest witness state to the last establishment event at or before SN and revert all changes. Probably start at the end and walk back, since the desired SN is most likely to be closer to the end than the beginning unless it is the inception event.
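
A walk-back version of .fetchWitnessState could look roughly like the sketch below (not keripy code). It assumes events is the KEL as a list of event dicts in sequence order and current_wits is the witness list of the latest establishment event; witness ordering is ignored and only hex sequence numbers and rot/drt deltas are handled.

# Minimal sketch, not the keripy implementation: compute the witness list at
# sequence number sn by walking backward from the current witness state and
# reverting the ba (adds) / br (cuts) deltas of each rotation event passed.
def witness_state_at(events, current_wits, sn):
    wits = list(current_wits)
    for evt in reversed(events):
        if int(evt["s"], 16) <= sn:        # reached the desired sequence number
            break
        if evt["t"] not in ("rot", "drt"): # interaction events don't change wits
            continue
        for added in evt.get("ba", []):    # revert this rotation's additions
            if added in wits:
                wits.remove(added)
        for cut in evt.get("br", []):      # restore what this rotation removed
            wits.append(cut)
    return wits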

inadvertent dependency on python 3.10?

I have python 3.9.12. Followed getting started instructions and saw this error after attempting the pip install step:

Collecting ordered-set>=4.1.0
  Using cached ordered_set-4.1.0-py3-none-any.whl (7.6 kB)
ERROR: Ignored the following versions that require a different python version: 0.6.0 Requires-Python >=3.10.4
ERROR: Could not find a version that satisfies the requirement hio>=0.6.0 (from keri) (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.5, 0.0.6, 0.0.8, 0.0.9, 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.2.4, 0.2.5, 0.2.6, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 0.3.7, 0.3.8, 0.3.9, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.9, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.5.6, 0.5.7, 0.5.8, 0.5.9)
ERROR: No matching distribution found for hio>=0.6.0

Tagging @pfeairheller

Partial Rotation Events

Partial Rotation

New event types with fields that allow a rotation to expose only a threshold-satisfying subset of the public keys, allowing reuse of the unexposed public keys.

Ort (partial) Rotation Event

Revised Syntax Next, n, Field

In order to support partial (ort) rotation, the next field in the previous establishment event must be changed to a list that includes, in order, the digest for each of the next public keys and the digest of the next threshold. For example, given five public keys in the next set of pre-rotated signing keys with a threshold of 3, the next field would be a list with six entries, one for each of the five public key digests and one more for the signing threshold digest, such as the following:

"n": 
  [
    "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM",
    "EYAfSVPzhzaU6JR2nmoTNZH3ULvwyZb6b5CMi0d8JZAS",
    "EnmwyZdi0d8JZAoTNZYAfSVPzhzaU6JR2H3ULvS6b5CM",
    "ETNZH3ULvS6bYAfSVPzhzaU6JR2nmwyZfi0d8JZ5s8bk",
    "EJR2nmwyZ2i0dzaU6ULvS6b5CM8JZAoTNZH3YAfSVPzh",
    "EoTNZJR2nmwyZ2i0d6ULvS6b5CM8JZAH3YAfSVPzhzaU"
  ]

This representation of the next field allows the subsequent corresponding rotation to only expose a subset of the next public keys while still enabling validators to securely verify the next forward commitment.

New Original Threshold Field

The new ort rotation adds one field, the ot field.
The ot field is the original unexposed next threshold from the prior establishment event.

With the additional field, a validator is able to verify both that the set of signatures on the rotation event satisfies the original next threshold of signatures that was part of the next digest list committed to by the prior establishment event, and that the public keys of that threshold-satisficing set of signing keys were part of the next digest or next digest list committed to by the prior establishment event, without revealing the next public keys of those signers that did not participate in the rotation.

Besides providing better fault tolerance to controller availability while still preserving post-quantum protection, partial rotation allows unused key pairs from non-participating rotation members to be reused as members of the new next pre-rotation set without exposing the associated public keys. This latter advantage applies to multi-sig thresholds where some of the members are escrow or custodial members for whom participation in every rotation may be cumbersome. The primary disadvantage of the partial rotation approach is that it is more verbose and consumes more bandwidth. The full rotation is more compact because the next list of digests is XORed together. Any given KEL may switch back and forth between partial and full rotation.

The k field of a partial, ort, or dor rotation provides the public keys of the participating signers in the same order as their appearance in the previous next n field digest list. Non-participating public keys are skipped. The ot field provides the original threshold used to compute the last digest entry in the previous n field digest list.
The kt field is the new signing threshold for the subset of public keys in the k field list.

The validator verifies the rotation against the original next digest list with the following procedure.

  • the validator ensures the digest of the ot field matches the last digest entry in the previous n field list.
  • the validator ensures that there is a corresponding entry, in order, in the previous n digest field list for the digest of each of the public keys in the k field list. This may be performed by an ordered search. The last entry of the previous n field is removed first (it's the threshold digest, i.e. not a public key digest).
  • Starting with the digest of the first member of the k field, compare it in turn, in order, starting with the first member of the previous n field list.
  • When a match is found, the search resumes at the next member of each of the k and n lists until the next corresponding match is found. Search resumes by repeating the prior step.
  • the validator ensures that the attached signatures satisfy the original threshold given by the ot field, where the signers are taken from the k field list of public keys. Attached indexed signature indexes refer to the order of appearance in the k field, not the previous n field.

To reiterate, the signatures on the rotation event must meet the original next threshold given by the ot field. The new current signing threshold is provided by the kt field and the new current public signing keys are provided by the k field. The new next digest in the n field or n field list may or may not include some or all of the digests from the previous n field list that do not have corresponding entries in the k field list.

This approach allows any threshold-satisficing set of signers to rotate to a new current set of signing keys that is a threshold-satisficing subset of the previous next threshold without requiring knowledge of all the previous next public signing keys. Those members not represented by the public key digests in the k field may be part of the new next digest or digest list because the underlying public keys were not disclosed by the rotation. This may only be applied when the previous next field, n, is a list of digests, not an XORed combination of the digests.
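
The ordered-search check described above can be sketched as follows (illustrative only, not keripy code). Here digest() stands in for the CESR digest used to build the prior n list, valid_sigs is assumed to be the list of already-verified attached signatures indexed against the k list, and ot is assumed to be a simple numeric threshold.

# Minimal sketch of the partial rotation verification procedure.
def verify_partial_rotation(prior_n, k, valid_sigs, ot):
    # The last entry of the prior n list is the digest of the original threshold.
    *key_digests, threshold_digest = prior_n
    if digest(ot) != threshold_digest:
        return False

    # Each exposed public key in k must match a later entry of the prior digest
    # list, in order; non-participating (unexposed) keys are simply skipped over.
    pos = 0
    for pub in k:
        d = digest(pub)
        while pos < len(key_digests) and key_digests[pos] != d:
            pos += 1
        if pos == len(key_digests):
            return False          # pub was not committed to by the prior event
        pos += 1

    # The attached signatures (indexed against k) must satisfy the original
    # threshold ot, not the new kt.
    return len(valid_sigs) >= int(ot)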

Partial Rotation Event

{
  "v" : "KERI10JSON00011c_",
  "t" : "ort",
  "d" : "E0d8JJR2nmwyYAfZAoTNZH3ULvaU6Z-iSVPzhzS6b5CM",
  "i" : "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM",
  "s" : "1",
  "p" : "EULvaU6JR2nmwyZ-i0d8JZAoTNZH3YAfSVPzhzS6b5CM",
  "kt": "2",
  "k" :  
    [
      "DnmwyZ-i0H3ULvad8JZAoTNZaU6JR2YAfSVPzh5CMzS6b",
      "DZaU6JR2nmwyZ-VPzhzSslkie8c8TNZaU6J6bVPzhzS6b",
      "Dd8JZAoTNnmwyZ-i0H3U3ZaU6JR2LvYAfSVPzhzS6b5CM"
    ],
  "ot": "3",
  "n" : 
    [
      "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM",
      "EYAfSVPzhzaU6JR2nmoTNZH3ULvwyZb6b5CMi0d8JZAS",
      "EnmwyZdi0d8JZAoTNZYAfSVPzhzaU6JR2H3ULvS6b5CM",
      "ETNZH3ULvS6bYAfSVPzhzaU6JR2nmwyZfi0d8JZ5s8bk",
      "EJR2nmwyZ2i0dzaU6ULvS6b5CM8JZAoTNZH3YAfSVPzh",
      "EoTNZJR2nmwyZ2i0d6ULvS6b5CM8JZAH3YAfSVPzhzaU"
    ],
  "bt": "1",
  "ba": ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
  "br": ["DH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8TNZJZAo5CM"],
  "a" : []
}

Delegated Partial Rotation Event

{
  "v" : "KERI10JSON00011c_",
  "t" : "dor",
  "d" : "E0d8JJR2nmwyYAfZAoTNZH3ULvaU6Z-iSVPzhzS6b5CM",
  "i" : "EZAoTNZH3ULvaU6Z-i0d8JJR2nmwyYAfSVPzhzS6b5CM",
  "s" : "1",
  "p" : "EULvaU6JR2nmwyZ-i0d8JZAoTNZH3YAfSVPzhzS6b5CM",
  "kt": "2",
   "k" :  
    [
      "DnmwyZ-i0H3ULvad8JZAoTNZaU6JR2YAfSVPzh5CMzS6b",
      "DZaU6JR2nmwyZ-VPzhzSslkie8c8TNZaU6J6bVPzhzS6b",
      "Dd8JZAoTNnmwyZ-i0H3U3ZaU6JR2LvYAfSVPzhzS6b5CM"
    ],
  "ot": "3",
  "n" : 
    [
      "ETNZH3ULvYawyZ-i0d8JZU6JR2nmAoAfSVPzhzS6b5CM",
      "EYAfSVPzhzaU6JR2nmoTNZH3ULvwyZb6b5CMi0d8JZAS",
      "EnmwyZdi0d8JZAoTNZYAfSVPzhzaU6JR2H3ULvS6b5CM",
      "ETNZH3ULvS6bYAfSVPzhzaU6JR2nmwyZfi0d8JZ5s8bk",
      "EJR2nmwyZ2i0dzaU6ULvS6b5CM8JZAoTNZH3YAfSVPzh",
      "EoTNZJR2nmwyZ2i0d6ULvS6b5CM8JZAH3YAfSVPzhzaU"
    ],
  "bt": "1",
  "ba": ["DTNZH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8JZAo5CM"],
  "br": ["DH3ULvaU6JR2nmwyYAfSVPzhzS6bZ-i0d8TNZJZAo5CM"],
  "a" : [],
  "di": "EJJR2nmwyYAZAoTNZH3ULvaU6Z-i0d8fSVPzhzS6b5CM"
}

Registrar Backers

Registrar Backers

Unlike witness backers, registrar backers have associated metadata that specifies the registry or ledger upon which the registrar backer anchors the KEL events.

Originally it was proposed that registrar backers be indicated by using a transferable identifier derivation code but have an inception event with a null next field. This would make the registrar identifier effectively non-transferable but provide an inception event that could anchor meta-data. However, this would break a lot of witness-related code, which expects that the witness identifiers are explicit, not merely effectively non-transferable.

This revised proposal uses explicit non-transferable identifiers for backers but adds a new config trait to indicate that the backers are registrars and adds a new seal type to indicate the anchor for the metadata.

Config Trait

A new config trait RB indicates registrar backers, i.e. ledger registry backed instead of witness pool backed.

Registrar Seal

{
  "bi": "BACDEFG8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
  "d" : "EaAoTNZH3ULvYAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8J"
}

The bi field in the seal is the non-transferable identifier of the registrar backer (backer identifier). The first seal appearing in the a field list in the containing event with that registrar backer identifier is the authoritative one for that registrar (in the event that there are multiple registrar seals for the same bi value).

The seal must appear in the same establishment event that designates the registrar backer identifier as a backer identifier in the event's backers list. Attached to the designating establishment event is a bare, bar, message that includes the registrar's metadata. Metadata could include the address used to source events onto the ledger, a service endpoint for the ledger registrar, and a corresponding ledger oracle.

The d in the seal MUST BE the SAID of the associated metadata SAD. The SAD may appear as the value of the sd seal data field in a bare, bar message.

Bare Message

The bare, bar, message provides either a solicited or unsolicited disclosure of anchored data. It includes a reference to the establishment event, and its route indicates that it contains registrar metadata.

{ 
  "v": "KERI10JSON00011c_",
  "t": "bar",
  "d": "EFGKDDA8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
  "r": "process/registrar/bitcoin",
  "a":
  {
    "d" : "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
    "i" : "EAoTNZH3ULvYAfSVPzhzS6baU6JR2nmwyZ-i0d8JZ5CM",
    "s" : "5",
    "bi": "BACDEFG8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
    "sd":  
    {
       "d": "EaAoTNZH3ULvYAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8J",
       "stuff": "meta data field"
     }  
  }
}

The r field is the route. It is the equivalent of the resource path in a ReST interface, but because KERI must support peer-to-peer asynchronous protocols (i.e. can't depend on HTTP or ReST), the messages explicitly include a route that indicates the resource being returned when unsolicited, or the return route when solicited.

In the a block:

d, i, and s indicate the event holding the backer seal: the SAID of the event, the identifier prefix of the event, and the sequence number of the event.
bi is the backer identifier from the backer seal.

sd is the seal data SAD field block. The nested d SAID of this block MUST be the d field in the associated seal.

As an option, a registrar's metadata could be updated by anchoring a new metadata seal and promulgating a new bare, bar, message whose route indicates that it is updating ledger registrar metadata. This allows registrar metadata to be updated in an interaction event.

A more secure approach to updating registrar metadata is to rotate the registrar's identifier in an establishment event and then provide the new metadata as the metadata for the new registrar identifier.

Prod Message

The prod, pro, message provides a solicitation, i.e. a disclosure request, for disclosure via a bare, bar response.

{ 
  "v": "KERI10JSON00011c_",
  "t": "pro",
  "d": "EZ-i0d8JZAoTNZH3ULaU6JR2nmwyvYAfSVPzhzS6b5CM",
  "r": "registrar/bitcoin",
  "rr": "process/registrar/bitcoin",
  "q":
  {
    "d" : "EaU6JR2nmwyZ-i0d8JZAoTNZH3ULvYAfSVPzhzS6b5CM",
    "i" : "EAoTNZH3ULvYAfSVPzhzS6baU6JR2nmwyZ-i0d8JZ5CM",
    "s" : "5",
    "bi": "BACDEFG8JZAoTNZH3ULvaU6JR2nmwyYAfSVPzhzS6b5CM",
    "sd": "EaAoTNZH3ULvYAfSVPzhzS6b5CMaU6JR2nmwyZ-i0d8J"
  }
}

The r field is the route. It is the equivalent of the resource path in a ReST interface but because KERI must support peer-to-peer asynchronous protocols (i.e. can't depend on http or ReST) the messages explicitly include a route to the resource being requested.

The rr field is the return route. Because KERI must support peer-to-peer asynchronous protocols (i.e. can't depend on HTTP or ReST), the messages explicitly include a return route so that the resource being requested is returned to the correct processor on the receiving end.

The request message, in this case the prod, pro, includes both route, r, and return route, rr, fields. The route is the path to the resource on the server host and the return route tells the server host how to return it. The r field in the corresponding bare, bar message is assigned the value of the rr field in the triggering prod, pro message.

An unsolicited bare, bar message uses a well known value for the resource as determined by agreement or the type of data being sent unsolicited. That is resource dependent.

In the q block:

d, i, and s indicate the event holding the backer seal: the SAID of the event, the identifier prefix of the event, and the sequence number of the event.
bi is the backer identifier from the backer seal.

The sd field is the SAID, d, from the backer seal, i.e. the seal data SAID.
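
A small sketch of the route / return-route pairing described above: the responder copies the prod message's rr into the r field of its bare reply and answers the query with the seal data SAD. The helper below is hypothetical, not keripy API, and SAID computation and signing are elided.

# Illustrative only: build a bare (bar) reply for a prod (pro) request.
def bare_reply_for(pro, seal_data_sad):
    return {
        "v": pro["v"],
        "t": "bar",
        "d": "<SAID of this bar message>",  # computed over the finished message
        "r": pro["rr"],                     # return route becomes the reply route
        "a": {
            "d": pro["q"]["d"],
            "i": pro["q"]["i"],
            "s": pro["q"]["s"],
            "bi": pro["q"]["bi"],
            "sd": seal_data_sad,            # SAD whose SAID equals pro["q"]["sd"]
        },
    }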

Refactor verifiage to take advantage of Habitat properties

Changed the name to verifiage instead of verifiers, as verifiers is inconsistent with .verfers, which in general usage is a list of Verfers.

def verifiage(self, pre=None):
        """
        Returns the Tholder and Verfers for the provided identifier prefix.
        Default pre is own .pre

        Parameters:
            pre(str) is qb64 str of bytes of identifier prefix.
                      default is own .pre

        """
        if not pre:
            pre = self.pre

        prefixer = coring.Prefixer(qb64=pre)
        if prefixer.transferable:
            sever = self.kevers[pre]
            verfers = sever.verfers
            tholder = sever.tholder
        else:
            verfers = [coring.Verfer(qb64=pre)]
            tholder = coring.Tholder(sith="1")

        return tholder, verfers

If we use the existing properties of Habitat, we may not need this method at all in the one place it is used: Exchanger.processEvent.

Habitat.kever is property that i

Standard test fails in test_configing.py Assertion error on line 136

I run keripy in a Docker container, created using this Dockerfile and docker build -t keripy:v0.6 .:

FROM python:3.10.5-buster

RUN apt-get update
RUN apt-get install -y ca-certificates

RUN apt-get install -y libsodium23

# Python packages
RUN pip3 install -U lmdb
RUN pip3 install -U pysodium
RUN pip3 install -U blake3
RUN pip3 install -U msgpack
RUN pip3 install -U simplejson
RUN pip3 install -U cbor2

COPY ./ /keripy
WORKDIR /keripy

# Python standard port to be able to access the container
EXPOSE 5000/tcp
EXPOSE 5000/udp

RUN pip install -r requirements.txt

# Testing capabilities
RUN pip install pytest

No errors during the creation process:

[+] Building 20.3s (19/19) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 605B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/python:3.10.5-buster 0.6s
=> [ 1/14] FROM docker.io/library/python:3.10.5-buster@sha256:e9a0687161022c5857fd71f958b0bd43aebe5beb4739210ff36f6d5 0.0s
=> [internal] load build context 0.2s
=> => transferring context: 256.03kB 0.2s
=> CACHED [ 2/14] RUN apt-get update 0.0s
=> CACHED [ 3/14] RUN apt-get install -y ca-certificates 0.0s
=> [ 4/14] RUN apt-get install -y libsodium23 2.0s
=> [ 5/14] RUN pip3 install -U lmdb 2.5s
=> [ 6/14] RUN pip3 install -U pysodium 1.7s
=> [ 7/14] RUN pip3 install -U blake3 1.1s
=> [ 8/14] RUN pip3 install -U msgpack 1.1s
=> [ 9/14] RUN pip3 install -U simplejson 1.0s
=> [10/14] RUN pip3 install -U cbor2 1.0s
=> [11/14] COPY ./ /keripy 0.3s
=> [12/14] WORKDIR /keripy 0.0s
=> [13/14] RUN pip install -r requirements.txt 6.8s
=> [14/14] RUN pip install pytest 1.8s
=> exporting to image 0.4s
=> => exporting layers 0.4s
=> => writing image sha256:a46e32ae4967a88b2c8d36609d4b009d21c0e4692a03aef995b1acbd0f9de264 0.0s
=> => naming to docker.io/library/keripy:v0.6 0.0s

Then I run the container using this command:
docker container run -i -t keripy:v0.6 /bin/bash

starts up nicely

Then the standard testing:

root@2ac5ac59a07c:/keripy# pytest tests/ --ignore tests/demo/

==================================================== test session starts ====================================================
platform linux -- Python 3.10.5, pytest-7.1.2, pluggy-1.0.0
rootdir: /keripy
collected 224 items

tests/app/test_agenting.py ... [ 1%]
tests/app/test_apping.py .. [ 2%]
tests/app/test_configing.py F. [ 3%]
tests/app/test_connecting.py .. [ 4%]
tests/app/test_credentials.py . [ 4%]
tests/app/test_delegating.py .... [ 6%]
tests/app/test_directing.py .. [ 7%]
tests/app/test_forwarding.py .. [ 8%]
tests/app/test_grouping.py ....... [ 11%]
tests/app/test_habbing.py ..... [ 13%]
tests/app/test_httping.py ... [ 14%]
tests/app/test_indirecting.py .. [ 15%]
tests/app/test_keeping.py ........ [ 19%]
tests/app/test_kiwiing.py ........... [ 24%]
tests/app/test_multisig.py . [ 24%]
tests/app/test_signing.py .. [ 25%]
tests/app/test_specing.py . [ 25%]
tests/app/test_storing.py . [ 26%]
tests/app/test_watching.py . [ 26%]
tests/app/cli/test_kli_commands.py . [ 27%]
tests/app/cli/commands/multisig/test_multisig.py . [ 27%]
tests/comply/test_direct_mode.py . [ 28%]
tests/core/test_bare.py . [ 28%]
tests/core/test_coring.py .......................... [ 40%]
tests/core/test_crypto.py ...... [ 42%]
tests/core/test_cueing.py . [ 43%]
tests/core/test_delegating.py . [ 43%]
tests/core/test_escrow.py ..... [ 45%]
tests/core/test_eventing.py ......................... [ 57%]
tests/core/test_kevery.py ... [ 58%]
tests/core/test_keystate.py . [ 58%]
tests/core/test_parsing.py .. [ 59%]
tests/core/test_partial_rotation.py . [ 60%]
tests/core/test_replay.py .. [ 61%]
tests/core/test_reply.py . [ 61%]
tests/core/test_scheming.py ... [ 62%]
tests/core/test_weighted_threshold.py . [ 63%]
tests/core/test_witness.py ... [ 64%]
tests/db/test_basing.py ...... [ 67%]
tests/db/test_dbing.py .... [ 69%]
tests/db/test_escrowing.py ... [ 70%]
tests/db/test_koming.py .......... [ 75%]
tests/db/test_subing.py ........... [ 79%]
tests/end/test_ending.py ...... [ 82%]
tests/help/test_helping.py ...... [ 85%]
tests/help/test_ogling.py .... [ 87%]
tests/peer/test_exchanging.py . [ 87%]
tests/vc/test_protocoling.py .. [ 88%]
tests/vc/test_proving.py .... [ 90%]
tests/vc/test_walleting.py . [ 90%]
tests/vdr/test_eventing.py .......... [ 95%]
tests/vdr/test_issuing.py . [ 95%]
tests/vdr/test_txn_state.py ..... [ 97%]
tests/vdr/test_verifying.py ... [ 99%]
tests/vdr/test_viring.py .. [100%]

========================================================= FAILURES ==========================================================
_______________________________________________________ test_configer _______________________________________________________

def test_configer():
    """
    Test Configer class
    """
    # Test Filer with file not dir
    filepath = '/usr/local/var/keri/cf/main/conf.json'
    if os.path.exists(filepath):
        os.remove(filepath)

    cfr = configing.Configer()  # defaults
    # assert cfr.path == filepath
    # github runner does not allow /usr/local/var
    assert cfr.path.endswith("keri/cf/main/conf.json")
    assert cfr.opened
    assert os.path.exists(cfr.path)
    assert cfr.file
    assert not cfr.file.closed
    assert not cfr.file.read()
    assert cfr.human

    # plain json manually
    data = dict(name="habi", oobi="ABCDEFG")
    wmsg = coring.dumps(data)
    assert hasattr(wmsg, "decode")  # bytes
    assert len(wmsg) == cfr.file.write(wmsg)
    assert 0 == cfr.file.seek(0)
    rmsg = cfr.file.read()
    assert rmsg == wmsg
    assert data == coring.loads(rmsg)

     # default is hjson for .human == True
    wdata = dict(name="hope", oobi="abc")
    assert cfr.put(wdata)
    rdata = cfr.get()
    assert rdata == wdata
    assert 0 == cfr.file.seek(0)
    rmsg = cfr.file.read()
    assert rmsg == b'{\n  name: hope\n  oobi: abc\n}'  # hjson

    cfr.close()
    assert not cfr.opened
    assert cfr.file.closed
    # assert cfr.path == filepath
    assert cfr.path.endswith("keri/cf/main/conf.json")
    assert os.path.exists(cfr.path)
    with pytest.raises(ValueError):
        rdata = cfr.get()

    cfr.reopen(reuse=True)  # reuse True and clear False so don't remake
    assert cfr.opened
    assert not cfr.file.closed
    # assert cfr.path == filepath
    assert cfr.path.endswith("keri/cf/main/conf.json")
    assert os.path.exists(cfr.path)
    assert (rdata := cfr.get()) == wdata  # not empty

    cfr.reopen()  # reuse False so remake but not clear
    assert cfr.opened
    assert not cfr.file.closed
    # assert cfr.path == filepath
    assert cfr.path.endswith("keri/cf/main/conf.json")
    assert os.path.exists(cfr.path)
    assert (rdata := cfr.get()) == wdata  # not empty

    cfr.reopen(reuse=True, clear=True)  # clear True so remake even if reuse
    assert cfr.opened
    assert not cfr.file.closed
    # assert cfr.path == filepath
    assert cfr.path.endswith("keri/cf/main/conf.json")
    assert os.path.exists(cfr.path)
    assert (rdata := cfr.get()) == {}  # empty
    wdata = dict(name="hope", oobi="abc")
    assert cfr.put(wdata)
    rdata = cfr.get()
    assert rdata == wdata

    cfr.reopen(clear=True)  # clear True so remake
    assert cfr.opened
    assert not cfr.file.closed
    # assert cfr.path == filepath
    assert cfr.path.endswith("keri/cf/main/conf.json")
    assert os.path.exists(cfr.path)
    assert (rdata := cfr.get()) == {}  # empty
    wdata = dict(name="hope", oobi="abc")
    assert cfr.put(wdata)
    rdata = cfr.get()
    assert rdata == wdata

    cfr.close(clear=True)
    assert not os.path.exists(cfr.path)
    with pytest.raises(ValueError):
        rdata = cfr.get()

    # Test with plain json human==False
    cfr = configing.Configer(human=False)
    # assert cfr.path == filepath
    # github runner does not allow /usr/local/var
    assert cfr.path.endswith("keri/cf/main/conf.json")
    assert cfr.opened
    assert os.path.exists(cfr.path)
    assert cfr.file
    assert not cfr.human
    assert not cfr.file.closed
    assert not cfr.file.read()

    #  .human == False
    wdata = dict(name="hope", oobi="abc")
    assert cfr.put(wdata)
    rdata = cfr.get()
    assert rdata == wdata
    assert 0 == cfr.file.seek(0)
    rmsg = cfr.file.read()
    assert rmsg == b'{\n  "name": "hope",\n  "oobi": "abc"\n}'  # plain json
    cfr.close(clear=True)
    assert not os.path.exists(cfr.path)

    # Test with altPath by using not permitted headDirPath /opt/keri to force Alt
    filepath = '/Users/samuel/.keri/cf/main/conf.json'
    if os.path.exists(filepath):
        os.remove(filepath)

    cfr = configing.Configer(headDirPath="/root/keri")
  assert cfr.path.endswith(".keri/cf/main/conf.json")

E AssertionError: assert False
E + where False = <built-in method endswith of str object at 0x7f778897d170>('.keri/cf/main/conf.json')
E + where <built-in method endswith of str object at 0x7f778897d170> = '/root/keri/keri/cf/main/conf.json'.endswith
E + where '/root/keri/keri/cf/main/conf.json' = <keri.app.configing.Configer object at 0x7f778e3829b0>.path

tests/app/test_configing.py:136: AssertionError
===================================================== warnings summary ======================================================
src/keri/core/coring.py:197
/keripy/src/keri/core/coring.py:197: DeprecationWarning: invalid escape sequence '-'
B64REX = b'^[A-Za-z0-9-_]*\Z'

../usr/local/lib/python3.10/site-packages/apispec/utils.py:11
/usr/local/lib/python3.10/site-packages/apispec/utils.py:11: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils import version

tests/app/test_credentials.py:25
/keripy/tests/app/test_credentials.py:25: PytestCollectionWarning: cannot collect test class 'TestDoer' because it has a init constructor (from: tests/app/test_credentials.py)
class TestDoer(doing.DoDoer):

tests/app/test_multisig.py:21
/keripy/tests/app/test_multisig.py:21: PytestCollectionWarning: cannot collect test class 'TestDoer' because it has a init constructor (from: tests/app/test_multisig.py)
class TestDoer(doing.DoDoer):

tests/app/cli/commands/multisig/test_multisig.py:80
/keripy/tests/app/cli/commands/multisig/test_multisig.py:80: PytestCollectionWarning: cannot collect test class 'TestDoer' because it has a init constructor (from: tests/app/cli/commands/multisig/test_multisig.py)
class TestDoer(doing.DoDoer):

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================== short test summary info ==================================================
FAILED tests/app/test_configing.py::test_configer - AssertionError: assert False
=================================== 1 failed, 223 passed, 5 warnings in 79.14s (0:01:19) ====================================

root@2ac5ac59a07c:/keripy# pytest tests/demo/

==================================================== test session starts ====================================================
platform linux -- Python 3.10.5, pytest-7.1.2, pluggy-1.0.0
rootdir: /keripy
collected 5 items

tests/demo/test_demo.py ..... [100%]

===================================================== 5 passed in 7.13s =====================================================
root@2ac5ac59a07c:/keripy#

Cardano as a KERI backer

Sam went over some of the details of implementing a KERI backer on a distributed ledger like Cardano. We should link the recording of that here. This ticket will allow us to work out the implementation details over time.
RootsID would like to craft the design/proposal over the next month.

Add Support for Habery which is manager of multiple habitats

A given keystore and KEL LMDB database may be shared by multiple Habitats. Each Habitat has a controller prefix AID. By default, the name of the keystore and key DB directory path currently includes the name of the first Habitat created, but it does not need to be that way. For a DB shared by multiple Habitats, the DB file directory path should use a name that represents the component or application, which could be different from any of the associated Habitats. So we need a naming convention, and we also need to set up the Doers etc. so that they follow that convention when multiple Habitats share the same DBs.

If Habitats are set up asynchronously, there might be a case where, in order to use the same database, a later Habitat with the same name but a different AID is created and clobbers a previous Habitat with the same name.

An example is the MultiSig setup.

Possible loss of witness receipts when escrowing out-of-order events

The definition of escrowOOEvent is as follows:

def escrowOOEvent(self, serder, sigers, seqner=None, diger=None):

And it is being called from processEvent inside the Tevery that takes wigers as an argument:

def processEvent(self, serder, sigers, *, wigers=None, seqner=None, diger=None, firner=None, dater=None):

It seems that an event placed in the out-of-order escrow would then lose any witness receipts that were attached to it (for example, an issuer sending a KEL anchoring event for a TEL to an issuee who has not seen other events in the KEL).

@SmithSamuelM should this escrow method also persist witness receipts?

Remove redundant context manager

existingHab appears redundant. Just calling openHab with temp=False and reload=True should be equivalent to existingHab, so we should modify openHab so that it accepts a parameter set that is equivalent to existingHab (see the sketch after the code below).

@contextmanager
def openHab(name="test", salt=b'0123456789abcdef', temp=True, **kwa):
    """
    Context manager wrapper for Habitat instance.
    Defaults to temporary database and keeper.
    Context 'with' statements call .close on exit of 'with' block

    Parameters:
        name(str): name of habitat to create
        salt(bytes): passed to habitat to use for inception
        temp(bool): indicates if this uses temporary databases

    """

    with basing.openDB(name=name, temp=temp) as db, \
            keeping.openKS(name=name, temp=temp) as ks:
        salt = coring.Salter(raw=salt).qb64
        hab = Habitat(name=name, ks=ks, db=db, temp=temp, salt=salt,
                      icount=1, isith=1, ncount=1, nsith=1, **kwa)

        yield hab


@contextmanager
def existingHab(name="test", **kwa):
    """
    Context manager wrapper for existing Habitat instance.
    Will raise exception if Habitat and database has not already been created.
    Context 'with' statements call .close on exit of 'with' block

    Parameters:
        name(str): name of habitat to create
    """

    with basing.openDB(name=name, temp=False, reload=True) as db, \
            keeping.openKS(name=name, temp=False) as ks:
        hab = Habitat(name=name, ks=ks, db=db, temp=False, create=False, **kwa)
        yield hab
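
A possible consolidation is sketched below (an assumption about the eventual API, not the final change): openHab grows create and reload parameters so that openHab(name, temp=False, create=False, reload=True) behaves like existingHab.

# Sketch only: hypothetical merged openHab; parameter handling details would
# need to match Habitat's actual behavior when create=False.
@contextmanager
def openHab(name="test", salt=b'0123456789abcdef', temp=True,
            create=True, reload=False, **kwa):
    """
    Context manager wrapper for Habitat instance.
    With temp=False, create=False, reload=True this is equivalent to existingHab.
    """
    with basing.openDB(name=name, temp=temp, reload=reload) as db, \
            keeping.openKS(name=name, temp=temp) as ks:
        salt = coring.Salter(raw=salt).qb64
        hab = Habitat(name=name, ks=ks, db=db, temp=temp, salt=salt,
                      create=create, icount=1, isith=1, ncount=1, nsith=1, **kwa)
        yield hab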

Gossip protocol

Let me introduce some initial thoughts into the information dissemination via the gossip protocol. The thoughts collected below aren't complete and shall be further discussed.

Discoverability

One of the next features to be addressed is gossiping, especially among witnesses, watchers, and other network actors, or simply nodes. For a node to gossip some information, it needs other nodes to gossip to. Here we have two cases:

  • the overall number of the various node types (witnesses, watchers, etc.) is managed under a federated model, where they are constrained to a certain number;
  • the overall number of the various node types in the network is dynamic, and nodes need to discover other nodes to gossip to.

Depending on which of these assumptions holds, the problem space may be defined in a different way. Let's start with the latter, the dynamic number of nodes.

When a node bootstraps for the first time, its view of other nodes is either empty or preconfigured (i.e. coming from the node config) with a certain number of other known nodes. However, a node must also be able to find out (or be notified) about other nodes to gossip to. The easiest solution is to allow nodes to gossip their view of other nodes, so that when a new node joins the network, it first communicates with a node that already participates in the network. Later on, that node gossips info about the new node to the network. Nodes gossip info about other nodes in the form of OOBIs.

Death detection

If the number of nodes in the network is dynamic, we must assume that new nodes may be established and existing nodes may be shut down. To avoid gossiping to a node that is already considered dead, some kind of death-detection mechanism is needed. To be discussed.

Recoverability

When a node goes out of sync with other nodes, i.e. it lost its internet connection for some time, the node needs to re-sync to get the latest data. Other nodes don't know what state the current node is in, and the current node doesn't know how far behind it is compared to the other nodes. Two approaches to consider:

  • push/pull model, where nodes are able to gossip to each other to eventually converge;
  • push model only, where other nodes are gossiping as usual, so that node who was offline awaits until there are some new gossips.

What to gossip

There are several data types to be considered:

  • witness receipts;
  • identifiers KSN messages so that if the KEL changes, nodes are able to re-sync using the query mode.
  • ?

When to gossip

An event-driven gossiping mechanism, where a node gossips to other nodes only if a new change has happened on its side, i.e. a new event appended to its KEL, a new receipt issued, etc.

How to gossip

Nodes gossip via UDP, so they don't care whether the message has been delivered.

Witnesses push notifications via SSE

Hi,

After today's meeting some more questions came to my mind regarding the SSE concept. Assuming Witnesses will be the primary role providing subscriptions via push notifications over SSE:

  • Who would be able to subscribe: anyone holding an OOBI? What if there are plenty of such "anyones"? Would it then make sense to introduce some kind of auth mechanism to limit subscribers to those who are "known" in a given EGF, or should it be use-case specific?
  • How to unsubscribe? A similar mechanism to /end/role/add and /end/role/cut?
  • Gossip mentioned in #158 in a push mode would operate in a similar fashion to SSE, however nodes would converge slower to the same state due to the nature of gossip.

Aside note:
In this section there is a warning that unless HTTP/2 is in use, the limit of max opened connections for SSE is 6.

Support for many Legal Entities using handful of Keripy instances (endpoints)

Use Case: Provenant as a custodial service wants to serve many Legal entities and their representatives.

Integration Pattern: Provenant as a service provider will sign up Legal entities and their representatives and integrate with keripy behind the scenes to provide KERI core functionality.

Motivation: Provenant wants to use a pool of keripy agents (ip:ports) to serve many Legal Entities and NOT one per legal entity.

Examples of Legal Entities are: Coke, Starbucks, Nike, Subway …

Assumptions:

The Provenant Custodial Service runs with a few instances (say 3 for now) but logically one URL endpoint (for example, https://www.provenant.com/custodial-service); behind the scenes we proxy the few actual instances.

keripy runs as an agent (service) with a few instances, each with its own URL/port (say 3 for now)

Provenant keeps track of the keripy running instances

Typical scenario

Create an Entity to represent Coke in Provenant Service/storage

Integrate with keripy using REST APIs, Provenant needs to use one of the keripy exposed PORT endpoints (tracked by Provenant) to do the following

POST “boot” to init (create) the wallet for Coke (wallet name)

PUT “boot” to unlock the wallet for Coke (wallet name)

POST to trigger the inception event for the unlocked wallet “Coke” with an alias for the first AID, say “Coke”

Trigger other events that keripy provides using the other REST APIs

As each Legal Entity signs up, Provenant manages which of the keripy PORT instances it selects to do the work described in step #2

Currently keripy is bound to a PORT once a wallet is “booted”/unlocked for a Legal Entity (wallet name in keripy world)

To use the same PORT for another Legal Entity, we need a way for the agent to unbind (“lock”) the current key store and bind (“unlock”) the key store for the other Legal Entity (alias in keripy world)

We experimented with the new GET “lock” API recently introduced in keripy

After an agent “locked” the current entity wallet it was bound to, unlocking another wallet threw an exception and killed the running agent service

Replace expose, `exp` message with pair of messages, bare, `bar` and prod, `pro`

The expose message was a placeholder for BADA logic for disclosing (i.e. exposing, unhiding) data anchored with a digest in a seal in an event in a KEL.
We also need a message to request the disclosure (unhiding), so a new pair of messages was created.

To disclose, one bares (unhides) with the bare, bar message, and one prompts that disclosure (unhiding) with the prod, pro message.

KERI OOBA Protocol to MFA Public AID

An entity that is standing up an AID as a public root-of-trust needs some way to establish the linkage of that entity with its AID in a purely public manner.
One way to do this is to prove that the entity also controls other, already linked identifiers under the entity's control. Those identifiers may be well known as belonging to that entity, such as:

  • a github project or repo
  • a website dns
  • a social media account

This process is a form of MFA via multiple associations with other identifiers that are under the control of the entity.

These other identifiers each have their own mechanisms of control and authentication which individually may be relatively weak but in combination may be very strong. This is a form of multi-factor authentication via multi-factor association.
With respect to KERI these are all out of band and therefore may correctly be called OOBAs (out-of-band authentication or out-of-band association mechanisms).

We need to define a standard protocol, vis-a-vis KERI, for confirming OOBAs.
The OOBA protocol is different from the OOBI protocol because an OOBI (or OOBI forward) is verified via KERI, whereas an OOBA is not verifiable via KERI.

An OOBA request (query) may be blind with respect to the AID because the querier may not yet know the AID. For example the URL that points to the OOBA endpoint might be a fixed configuration parameter (i.e. can't include the AID yet) for KERI software that is meant to bootstrap the KERI software from which the public AID root-of-trust is to be discovered.

This means an OOBA may be a bare URL such as a .well-known or may be a URL plus some OOBA specific authentication information (not KERI).

An OOBA query of its URL MUST return as a reply the associated AID or an OOBI with that AID such as a service endpoint (witness) that can then be verified and that OOBA. The reply to the query MUST include the AID and MUST be signed with the current keys of the AID. This proves that the reply was authorized by the controller of the AID and makes a provable association between the control of both identifiers (OOBA ID and AID).

Think of an OOBA as an authenticated forward to an OOBI. Its different from an OOBI forward because the purpose of the OOBA is to provide some degree of authentication as part of a multi-factor authentication.
The OOBA's authentication mechanism is out-of-band with respect to KERI.

An OOBA may itself be a .well-known URL so its authentication mechanism is implied as well known already.

Watcher Network

Change the logic for discovery of a KEL by the controller of another controller's key to start with watchers, not witnesses.

Witness Receipts Tevery Potential Bug

Potential bug

Also, while creating the Tevery I noticed that the Kevery is dropping witness
receipts when it places an event in the out-of-order escrow. Is that intentional?
Is this because in replay mode (where the witness signatures will be attached)
the events will never be out of order and for async messages receipts are handled elsewhere?

Return Value of sign Function

I tried using the sign function in the keeping.py file, but that function appears to return only the address of the sigers or cigars objects.
Can someone please explain how to retrieve the list from that?

Code is here:

def sign(self, ser, pubs=None, verfers=None, indexed=True, indices=None):
    """
    Returns list of signatures of ser, as Sigers if indexed else as Cigars,
    each with .verfer assigned.

    Parameters:
        ser is bytes serialization to sign
        pubs is list of qb64 public keys used to look up the private keys
        verfers is list of Verfers for the public keys
        indexed is Boolean, True means use offset into pubs/verfers/signers
            for index and return Siger instances. False means return Cigar
            instances.
        indices is list of int indexes (offsets) to use for indexed signatures
            that may differ from the order of appearance in the pubs or verfers
            lists. This allows witness indexed sigs or controller multi-sig
            where the parties do not share the same manager or ordering so
            the default ordering in pubs or verfers is wrong for the index.
            If provided, the length of indices must match pubs/verfers/signers
            else raises ValueError. If not provided and indexed is True then
            the default index is the offset into pubs/verfers/signers.

    If neither pubs nor verfers is provided then raises ValueError.
    If pubs is provided then verfers is ignored, otherwise verfers is used.
    Looks up the private key in the keeper database for each public key and
    raises ValueError if missing, then signs ser with each private key and
    returns the list of Sigers if indexed else the list of Cigars.
    """
    signers = []

    if pubs is None and verfers is None:
        raise ValueError("pubs or verfers required")

    if pubs:
        for pub in pubs:
            verfer = coring.Verfer(qb64=pub)  # needed to know if nontrans
            raw = self.keeper.getPri(key=pub)
            if raw is None:
                raise ValueError("Missing prikey in db for pubkey={}".format(pub))
            signer = coring.Signer(qb64b=bytes(raw),
                                   transferable=verfer.transferable)
            signers.append(signer)

    else:
        for verfer in verfers:
            pub = verfer.qb64
            raw = self.keeper.getPri(key=pub)
            if raw is None:
                raise ValueError("Missing prikey in db for pubkey={}".format(pub))
            signer = coring.Signer(qb64b=bytes(raw),
                                   transferable=verfer.transferable)
            signers.append(signer)

    if indices and len(indices) != len(signers):
        raise ValueError("Mismatch length indices={} and resultant signers "
                         "list={}".format(len(indices), len(signers)))

    if indexed or indices:
        sigers = []
        for i, signer in enumerate(signers):
            if indices:
                i = indices[i]  # get index from indices
            sigers.append(signer.sign(ser, index=i))  # assigns .verfer to siger
        return sigers
    else:
        cigars = []
        for signer in signers:
            cigars.append(signer.sign(ser))  # assigns .verfer to cigar
        return cigars
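
For what it's worth, the "address" in the output is just Python's default repr of the objects in the returned list: sign returns ordinary Siger (indexed) or Cigar (unindexed) instances, each carrying the signature bytes and qb64 form as attributes. A minimal sketch of consuming that list, assuming mgr is an existing keeping.Manager and verfers came from a prior mgr.incept(...) call (setup not shown here):

ser = b'{"hello": "world"}'                   # bytes serialization to sign

sigers = mgr.sign(ser=ser, verfers=verfers)   # indexed=True -> list of Siger instances
for siger in sigers:
    print(siger.index)                        # offset assigned to this signature
    print(siger.qb64)                         # qualified base64 signature text
    assert siger.verfer.verify(sig=siger.raw, ser=ser)  # verify with attached verfer

cigars = mgr.sign(ser=ser, verfers=verfers, indexed=False)  # list of Cigar instances
print(cigars[0].qb64)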

Add additional coverage for command line interface.

The file test_kli_commands.py demonstrates how to test kli commands, yet we only have tests for a handful of the commands available to the kli.

Additional tests need to be added for commands not yet tested. The current tests demonstrate how to capture and test against standard output.

Tests for commands that require witnesses could be added using the code from test_multisig.py to launch witnesses for tests.

How to Verify signed message from a separate code

I want to do verification. I have one issuer application that signs my credentials; code:
sig = keeping.Manager(keeper=kpr, salt=salt).sign(bytes(json.dumps(data),'utf-8'),pubs=digers,verfers=verfers)

How can I verify the signature generated above in a separate application?
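
A hedged sketch of one way to do the check in the verifying application, assuming it receives the exact signed bytes, the qb64 signatures (e.g. siger.qb64 from the issuer side), and the issuer's current qb64 public keys (for example obtained by resolving the issuer's KEL); the function and argument names here are illustrative, not keripy API. For KERI messages proper, the usual answer is to attach the signatures to the event and let the parser/Kevery verify them against the KEL, but the raw primitives look like this:

from keri.core import coring

def verify_signatures(ser, sigs_qb64, pubs_qb64):
    # ser is the exact bytes that were signed
    # sigs_qb64 are qb64 indexed signatures (siger.qb64 from the signing side)
    # pubs_qb64 are the signer's current qb64 public keys (verfer.qb64)
    verfers = [coring.Verfer(qb64=pub) for pub in pubs_qb64]
    sigers = [coring.Siger(qb64=sig) for sig in sigs_qb64]
    return all(verfers[siger.index].verify(sig=siger.raw, ser=ser)
               for siger in sigers)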

Demo directory contains lots of stuff that doesn't meet the syntax requirements of KLI

Some familiarity with KERI and KLI.

Installed in a virtualenv, ran the test cases, and generally the library works great. Thanks so much.

demo-script.sh does run and seems to report what I would expect.

Now I'm trying to learn the details of controllers, witnesses, and watchers, but lots of demo samples don't really run due to API mismatches. It looks like the KLI syntax has changed and these need updating. If someone wants to give me a digest of what changed, I don't mind creating the PR.

For example. Below I ran start_agent.sh (after init)

$ kli init --name gleif --nopasscode
KERI Keystore created at: /usr/local/var/keri/ks/gleif
KERI Database created at: /usr/local/var/keri/db/gleif
KERI Credential Store created at: /usr/local/var/keri/reg/gleif

$ ./scripts/demo/start-agent.sh
Identifier prefix does not exist, running incept
usage: kli incept [-h] --name NAME [--base BASE] --alias ALIAS [--config CONFIG] --file FILE [--passcode BRAN] [--aeid AEID]
kli incept: error: the following arguments are required: --alias/-a
usage: kli vc registry incept [-h] --name NAME [--registry-name REGISTRY_NAME] [--no-backers NO_BACKERS] [--backers ]
[--establishment-only ESTABLISHMENT_ONLY] [--base BASE] --alias ALIAS [--passcode BRAN]
kli vc registry incept: error: the following arguments are required: --alias/-a
usage: kli [-h] command ...
kli: error: unrecognized arguments: --name gleif

It seems like name and alias have gone through some transformation, but it's not always clear in the KLI commands whether name refers to the name of the database and keystore or a new name for what I am creating.

KSN Message refactor as attribute section of new reply message

Fields that already appear in the reply message may be removed from the KSN payload that goes into the attributes field of the reply.

Likewise the KSN used for storing the latest key state for the Kever can be simplified, since it is no longer an over-the-wire structure.

After our call I thought about the KSN escrow database artifacts.
So to be clear I am talking about the new KSN that is actually embedded in the rpy message.

Unlike the service endpoints, KSN replies serve a temporary function that is obsoleted as soon as the events noticed by the KSN are accepted into the appropriate KEL. There is no replay attack vulnerability in that case
because the logic is to first check the Kever (KEL) state to see if the event has already been accepted, and only then go through the ksn query and ksn reply. This means that the ksn reply escrow should have a clean-up or prune process (Doer) that runs periodically to prune its branches. The reason is that, given multiple sources for a ksn for a given AID, only the source that gets accepted may be known at the time of acceptance. To reiterate, once the key events are accepted into the KEL the KSN reply escrow is moot and may be deleted. Otherwise the database will grow without bound.
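
A rough sketch of what such a prune process could look like as a hio Doer; the escrow and key-state lookups used here (db.ksns.getItemIter, db.ksns.rem, db.kevers) are illustrative names for this sketch, not established keripy API.

from hio.base import doing

class KsnEscrowPruneDoer(doing.Doer):
    """
    Hypothetical clean-up Doer that periodically prunes KSN reply escrow
    entries whose events have already been accepted into the matching KEL,
    so the escrow database does not grow without bound.
    """
    def __init__(self, db, tock=60.0, **kwa):
        self.db = db
        super(KsnEscrowPruneDoer, self).__init__(tock=tock, **kwa)

    def recur(self, tyme):
        for keys, ksn in self.db.ksns.getItemIter():   # illustrative escrow iterator
            kever = self.db.kevers.get(ksn.pre)        # accepted key state, if any
            if kever is not None and kever.sn >= ksn.sn:
                self.db.ksns.rem(keys)                 # event already accepted: moot, prune
        return False                                   # False means not done, recur again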

Update KLI to support Doist scriptability

Update all commands in the kli command package to return a list of Doers, and have the kli.py main method read that list of Doers and execute directing.runController() with it to run the command.

This will allow any testing harness to call the handler method registered for any command and execute it by passing the returned Doers to a top-level Doist, which will make scripting multiple commands together easier for tests.
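
A minimal sketch of the pattern, assuming each subcommand registers a handler via argparse set_defaults and that kli.py hands the returned Doers to directing.runController; the command name and Doer contents here are placeholders.

import argparse

from hio.base import doing
from keri.app import directing

def handler(args):
    # hypothetical subcommand handler: build the Doers that do the work and
    # return them instead of executing inline
    return [doing.Doer(tock=0.0)]   # placeholder; real commands return their own Doers

parser = argparse.ArgumentParser(prog="example-command")
parser.set_defaults(handler=handler)

def main(argv=None):
    args = parser.parse_args(argv)
    doers = args.handler(args)                        # every command returns a list of Doers
    directing.runController(doers=doers, expire=0.0)  # kli main runs them with a Doist

if __name__ == "__main__":
    main()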

Tried to install Keripy following instructions in README.md

The installation runs smoothly on Mac OS X (see attachment for details)

But in the end I get two errors I can't decipher. See attachments.

Screenshot 2022-01-27 at 16 35 59

Screenshot 2022-01-27 at 16 33 55

Screenshot 2022-01-27 at 16 31 31

My goal is to set up a test and demo environment and present how KERI works to others like Phil did for IIW.

New format for next and next threshold in Establishment Events Enables Partial Rotation of all Rotation events

The new format for establishment events is defined here
https://hackmd.io/Tfm63kdnRdmGxcVgrne5Uw

Additional validation logic by multi-sig group members prior to signing.

  • Ensure that the digest of the member's next public key is included in the next digest list (the n field) before signing, if the group member expects to be part of the next set of signers. Add a flag to the event signing method to indicate this expectation. A group member may willingly accept not being part of the next set of signers, for example when transferring control of the identifier to someone else, and would still sign. But if they expect to be part of the next set of signers and their public key digest does not show up in the next list, then it may be a surreptitious attempt to wrest control away from them and they should not sign.
  • Update the UX/UI to display the result of this check prior to signing, to allow the signer to indicate willingness to sign in spite of no longer being included.

Additional validation logic for rotation event

  • Ensure that the set of signatures satisfies both the previous next threshold nt from the previous establishment event and the current key threshold kt of the current rotation event being validated. Support for partial rotation means that the prior nt and current kt do not have to be the same but must both be satisfiable. The prior nt must be satisfied by a subset of the public keys committed to in the prior n field list. The current kt must be satisfied by a superset composed of the prior nt satisfying set combined with any additional needed public keys from the current k field (see the sketch below).
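
A rough sketch of that dual-threshold check, assuming the rotation's indexed signatures have already been cryptographically verified; the function and variable names are illustrative, only Tholder, Diger, and the sith/n-field digest convention come from keripy.

from keri.core import coring

def satisfies_partial_rotation(prior_nt, prior_n_digs, kt, keys, sigers):
    # prior_nt, prior_n_digs: nt threshold and n digest list from the prior est event
    # kt, keys: signing threshold and k key list of the rotation being validated
    # sigers: verified indexed signatures on the rotation event
    indices = [siger.index for siger in sigers]

    # prior nt must be satisfied by the subset of signers whose current public
    # keys were committed to (as digests) in the prior n field list
    rotating = [i for i in indices
                if coring.Diger(ser=keys[i].encode("utf-8")).qb64 in prior_n_digs]
    if not coring.Tholder(sith=prior_nt).satisfy(rotating):
        return False

    # current kt must be satisfied by the full set of verified signatures,
    # which may add public keys that were not in the prior n list
    return coring.Tholder(sith=kt).satisfy(indices)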

Multi-sig group membership

There are two classes of members in a multi-sig group.

A) Member of the current set of signing keys, i.e. public key in k field list
B) Member of the next set of signing keys, i.e. public key in n field list

A Member may be in both A and B or in only one of A or B. Membership in B means ultimate authority because only members of B may participate in the next rotation event which may change membership of both A and B. Membership only in A means current signing authority only and no authority to rotate.

In addition, a former member may be thought of as having alumni status once removed from a multi-sig group, but no longer has any authority over the identifier controlled by the multi-sig group.

The UX/UI for a given establishment event may want to indicate the group membership status and any proposed change in status prior to approving a new proposed establishment event.

Change Exchange Protocol logic empty value

keri.peer.exchanging.py

def exchange(route, payload, date=None, modifiers=None, version=coring.Version,
             kind=coring.Serials.json):
    """
    Create an `exn` message with the specified route and payload
    Parameters:
        route (string) to destination route of the message
        payload (dict) body of message to deliver to route
        date (str) Iso8601 formatted date string to use for this request
        modifiers (dict) equivalent of query string of uri, modifiers for the request that are not
                         part of the payload
        version (Version) is Version instance
        kind (Serials) is serialization kind

    """
    vs = coring.Versify(version=version, kind=kind, size=0)
    ilk = eventing.Ilks.exn
    dt = date if date is not None else helping.nowIso8601()

    ked = dict(v=vs,
               t=ilk,
               dt=dt,
               r=route,
               d=payload,
               q=modifiers
               )

    if modifiers is None:
        del ked["q"]

    return eventing.Serder(ked=ked)  # return serialized ked

Suggest changing this to follow the convention of having an empty value for a field instead of optionally appearing fields.
The logic for validating message format then does not need a switch on optional field presence. Instead the field is required but may be empty.

Logic for using a field must already check the field value anyway given the field is present, so it is less logic to require all fields and then deal with empty values when provided.

The empty value depends on the field value type:
if str type then empty str = ""
if dict type then empty dict = {}
if list type then empty list = []
and so forth

def exchange(route, payload, date=None, modifiers=None, version=coring.Version,
             kind=coring.Serials.json):
    """
    Create an `exn` message with the specified route and payload
    Parameters:
        route (string) to destination route of the message
        payload (dict) body of message to deliver to route
        date (str) Iso8601 formatted date string to use for this request
        modifiers (dict) equivalent of query string of uri, modifiers for the request that are not
                         part of the payload
        version (Version) is Version instance
        kind (Serials) is serialization kind

    """
    vs = coring.Versify(version=version, kind=kind, size=0)
    ilk = eventing.Ilks.exn
    dt = date if date is not None else helping.nowIso8601()

    ked = dict(v=vs,
               t=ilk,
               dt=dt,
               r=route,
               d=payload,
               q=modifiers if modifiers else {},
               )
    return eventing.Serder(ked=ked)  # return serialized ked

Delegated Superseding Recovery Rules

Cooperative Recovery of Delegated Pre-Rotated Keys

Superseding Recovery

Supersede means that, after an event has already been accepted as first seen
into a KEL, a different event with the same sequence number is accepted
that supersedes the pre-existing event at that sn. This enables the recovery of
events signed by compromised keys. The result of superseding recovery is that
the KEL is forked at the sn of the superseding event. All events in the
superseded branch of the fork still exist but, by virtue of being superseded,
are disputed. The set of superseding events in the superseding fork forms the authoritative
branch of the KEL. All the already seen superseded events in the superseded fork
still remain in the KEL and may be viewed in order of their original acceptance
because the database stores all accepted events in order of acceptance and
denotes this order using the first seen ordinal number, fn.
The fn is not the same as the sn (sequence number).
Each event accepted into a KEL has a unique fn but multiple events due to
recovery forks may share the same sn.

Superseding Rules for Recovery at given SN (sequence number)

A0. Any rotation event may supersede an interaction event at the same sn. (existing rule)
A1. A non-delegated rotation may not supersede another rotation at the same sn. (modified rule)
A2. An interaction event may not supersede any event. (existing rule)

(B. and C. below provide the new rules)

B. A delegated rotation may supersede another delegated rotation at the same sn under either of the following conditions:
B1. The superseding rotation's delegating event is later than (sn is higher) the superseded rotation's delegating event in the delegator's KEL.
B2. The sn of the superseding rotation's delegating event is the same as the sn of the superseded rotation's delegating event in the delegator's KEL and the superseding rotation's delegating event is a rotation and the superseded rotation's delegating event is an interaction.

C. If neither A nor B is satisfied, then recursively apply rules A. and B. to the delegating events of those delegating events and so on until either A. or B. is satisfied, or the root KEL of the delegation has been reached.
C1. If neither A. nor B. is satisfied by recursive application on the delegator's KEL (i.e. the root KEL of the delegation has been reached without satisfaction) then the superseding rotation is discarded. The terminal case of the recursive application will occur at the root KEL which by definition is non-delegated, wherefore either A. or B. must be satisfied, or else the superseding rotation must be discarded.
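
As a rough reading of rules A–C (not keripy code), the check might look like the sketch below. Here new and old stand for the superseding and already-accepted events at the same sn, with illustrative ilk/sn attributes, and delegating() is a hypothetical lookup of an event's delegating (seal source) event in the delegator's KEL; the handling of rule C is simplified to a direct recursion on the delegating events.

ROTS = ("rot", "drt")   # non-delegated and delegated rotation ilks

def may_supersede(new, old, delegating):
    # Rules A
    if new.ilk not in ROTS:
        return False                          # A2: interaction may not supersede
    if old.ilk == "ixn":
        return True                           # A0: any rotation supersedes an interaction
    if new.ilk == "rot":
        return False                          # A1: non-delegated rotation vs rotation

    # Rules B: both are delegated rotations, compare their delegating events
    dnew, dold = delegating(new), delegating(old)
    if dnew.sn > dold.sn:
        return True                           # B1: later delegating event wins
    if dnew.sn == dold.sn:
        if dnew.ilk in ROTS and dold.ilk == "ixn":
            return True                       # B2: rotation delegation beats interaction
        # Rule C (simplified): undecided at this level, recurse up the delegation
        # chain; terminates at the non-delegated root KEL where rule A decides (C1).
        return may_supersede(dnew, dold, delegating)
    return False                              # superseding delegation is earlier: discard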
