
indy-plenum's Introduction


Announcements

April 12 2023

The project branches have changed.

The main branch now contains the Ubuntu 20.04 work stream, and the previous main branch containing the Ubuntu 16.04 work stream has been moved to the ubuntu-16.04 branch. We encourage everyone to switch to using the new code and appreciate your patience while we stabilize the workflows and documentation on this new branch.

The following changes were made to the branches:

  • main (default) renamed to ubuntu-16.04
    • This retargeted the associated PRs.
  • ubuntu-20.04-upgrade set as the default branch.
  • ubuntu-20.04-upgrade (default) renamed to main

Plenum Byzantine Fault Tolerant Protocol

Plenum is the heart of the distributed ledger technology inside Hyperledger Indy. As such, it provides features somewhat similar in scope to those found in Fabric. However, it is special-purposed for use in an identity system, whereas Fabric is general purpose.

Technical Overview of Indy Plenum

Refer to our documentation site at indy.readthedocs.io for the most current documentation and walkthroughs.

Please find the general overview of the system in Overview of the system.

Plenum's consensus protocol, which is based on RBFT, is described in the consensus protocol diagram.

More documentation can be found in docs.

Other Documentation

  • Please have a look at aggregated documentation at indy-node-documentation which describes workflows and setup scripts common for both projects.

Indy Plenum Repository Structure

  • plenum:
    • the main codebase for plenum including Byzantine Fault Tolerant Protocol based on RBFT
  • common:
    • common and utility code
  • crypto:
    • basic crypto-related code (in particular, indy-crypto wrappers)
  • ledger:
    • Provides a simple, python-based, immutable, ordered log of transactions backed by a merkle tree.
    • This is an efficient way to generate verifiable proofs of presence and data consistency.
    • The scope of concerns here is fairly narrow; it is not a full-blown distributed ledger technology like Fabric, but simply the persistence mechanism that Plenum needs.
  • state:
    • state storage using python 3 version of Ethereum's Patricia Trie
  • stp:
    • secure transport abstraction
    • it has ZeroMQ implementations
  • storage:
    • key-value storage abstractions
    • contains a leveldb implementation as the main key-value storage used in Plenum (for ledger, state, etc.)
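The ledger component's merkle-tree-backed log can be illustrated with a minimal sketch (not Plenum's actual implementation): each transaction is hashed into a leaf, and the leaves are combined pair-wise up to a single root, so any change to the history changes the root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each transaction into a leaf, then combine pair-wise upward.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txns = [b"txn1", b"txn2", b"txn3", b"txn4"]
root = merkle_root(txns)
# Appending a transaction changes the root, so tampering is detectable.
assert merkle_root(txns + [b"txn5"]) != root
```

Real merkle-tree implementations (including Plenum's) additionally produce audit paths: the sibling hashes along the way to the root, which let a verifier prove presence of a single leaf without the whole log.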

Dependencies

  • Plenum makes extensive use of coroutines and the async/await keywords in Python, and as such, requires Python version 3.5.0 or later.
  • Plenum also depends on libsodium, an awesome crypto library, which needs to be installed separately.
  • Plenum uses ZeroMQ as a secure transport
  • indy-crypto
    • A shared crypto library
    • It's based on AMCL
    • In particular, it contains BLS multi-signature crypto needed for state proofs support in Indy.

Contact Us

  • Bugs, stories, and backlog for this codebase are managed in Hyperledger's Jira. Use project name INDY.
  • Join us on Hyperledger's Rocket.Chat in the #indy and/or #indy-node channels to discuss.

How to Contribute

How to Start Working with the Code

The preferred method of setting up the development environment is to use the devcontainers. All configuration files for VSCode and Gitpod are already placed in this repository. If you are new to the concept of devcontainers in combination with VSCode here is a good article about it.

Simply clone this repository and VSCode will most likely ask you to open it in the devcontainer, provided you have the correct extension ("ms-vscode-remote.remote-containers") installed. If VSCode doesn't ask, open the command palette and use the Remote-Containers: Rebuild and Reopen in Container command.

If you want to use Gitpod simply use this link or if you want to work with your fork, prefix the entire URL of your branch with gitpod.io/# so that it looks like https://gitpod.io/#https://github.com/hyperledger/indy-plenum/tree/main.

Note: Be aware that the config files for Gitpod and VSCode are currently only used in the main branch!

Please have a look at Dev Setup in indy-node repo. It contains common setup for both indy-plenum and indy-node.

indy-plenum's People

Contributors

adenishchenko, alexandershekhovcov, andkononykhin, artemkaaas, artobr, ashcherbakov, askolesov, dhh1128, dsurnin, hadleym, jasonalaw, khagesh, lovesh, mac-arrap, michaeldboyd, mzk-vct, nemqe, pschlarb, rajeshkalaria80, ravitezu, rytmarsh, sergey-shilov, sharma-rohit, skhoroshavin, sovbot, spivachuk, swcurran, toktar, udosson, wadebarnes


indy-plenum's Issues

Timing related bug in view change protocol

It looks like Plenum has a timing-related bug in the view change protocol.

Potential steps to reproduce

  • create a test pool with 4 nodes
  • pause 2 nodes, neither of which is a primary. If using a docker environment:
    • use the docker pause command, so the nodes are frozen and no explicit disconnection events happen
    • pause Node3 and Node4; they are guaranteed not to be primaries initially
  • wait for 30 minutes; during that time
    • the master primary will send a freshness batch (probably a couple of times)
    • the working nodes will receive and store these batches, but won't be able to order them because of the lack of consensus
    • after about 10 minutes the working nodes (including the primary) should realize that consensus is lost and start sending votes for a view change (INSTANCE_CHANGE messages), but because of the lack of consensus the view change won't start
  • after 30 minutes, unpause the paused nodes
    • they will realize that consensus has been lost for too long, and also vote for a view change
    • the view change will start and a NEW_VIEW message with the previously unordered freshness batches will be created, but ordering will fail, complaining about an incorrect batch time
    • so the next view change will happen, with the same result
    • so the pool will enter a perpetual view-change cycle even though all nodes are up and healthy
  • restarting all nodes at once should break the cycle and put the pool back into a healthy state

The actual steps when I caught this were longer, but based on my preliminary analysis the steps above should also suffice.

Cause and potential fix

  • there is indeed a safeguard on batch time during normal ordering, so that a malicious primary cannot create batches too far in the future or the past
  • however, this safeguard also applies to batches that are reordered during a view change; if for whatever reason the view change takes longer than the safeguard window, the batches cannot be reordered (their timestamps cannot be altered), so the view change will never be able to finish
  • a potential fix would be either different time-safeguard logic for the reordering phase, or disabling the safeguard during reordering (though a thorough analysis of the safety of such a change should be performed first)
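The safeguard and the proposed relaxation can be sketched as follows (hypothetical names and window value; Plenum's actual check lives in its batch validation code):

```python
import time

ACCEPTABLE_DEVIATION = 300  # seconds; hypothetical safeguard window

def batch_time_ok(batch_ts: float, now: float, reordering: bool = False) -> bool:
    """Reject batches whose timestamp is too far from local time, but relax
    the check when re-ordering during a view change, since those timestamps
    are fixed and may legitimately be old by the time reordering happens."""
    if reordering:
        # Sketch of the potential fix: old timestamps are expected here,
        # so only reject timestamps from the future.
        return batch_ts <= now + ACCEPTABLE_DEVIATION
    return abs(batch_ts - now) <= ACCEPTABLE_DEVIATION

now = time.time()
assert batch_time_ok(now - 10, now)                      # normal ordering: fine
assert not batch_time_ok(now - 3600, now)                # too old: rejected
assert batch_time_ok(now - 3600, now, reordering=True)   # accepted on reorder
```

This illustrates why a long view change wedges the pool under the current logic: the same `abs(batch_ts - now)` style check that stops a malicious primary also rejects perfectly valid batches being replayed an hour later.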

Remove Jenkinsfile.ci Pipeline

GitHub Actions based workflows have been developed, on both the main branch and the ubuntu-20.04-upgrade branch, to replace all of the functionality of the Jenkinsfile.ci pipeline.

Jenkinsfile.ci can safely be removed from the ubuntu-20.04-upgrade branch so when the branch is eventually merged into the main branch the file will be removed there as well.

This should be done after #1545 is merged.

Ubuntu 20.04: Upgrade RocksDB

For the Ubuntu 20.04 version of Plenum, upgrade to RocksDB 5.17, which is the version supported on Ubuntu 20.04; https://packages.ubuntu.com/search?searchon=sourcenames&keywords=rocksdb

Background:

  • Plenum is currently dependent on RocksDB 5.8.8.
  • When RocksDB 5.17 is used without any code changes several tests hang due to issues encountered by the code under test with the updated version of RocksDB. See #1546 for additional details.
  • Therefore updates to the code are required to support RocksDB 5.17
  • As an interim solution the RocksDB 5.8.8 package built by the Build 3rd Party Dependencies job (of the GHA Workflow) will be used. Refer to #1546 (comment) for details.

Update release documentation to reflect new GHA release workflows

The new release workflow is defined here; #1590

The PR contains a workflow diagram, docs/source/diagrams/release-workflow.puml, and some summary documentation regarding the GHA workflows, .github/workflows/README.md, which reference the diagram.

The official release documentation for indy-plenum should be updated to reference and describe the new process in more detail.

The release document for indy-plenum/node (all, or the majority of it) lives in the indy-node project here: https://github.com/hyperledger/indy-node/blob/master/docs/source/ci-cd.md. That said, there should be release documentation in the indy-plenum project itself describing its release process, or at least referring to the combined release process in indy-node. The former is preferable, since releases of indy-plenum can technically be done independently of indy-node.

Make it possible to perform huge catch-up

It looks like Indy Node/Plenum currently cannot catch up successfully when a node is too far behind (for example, a new node joining a 3-year-old pool with tens of thousands of transactions). While a workaround is possible (just copy the data), it would be much better if the node could catch up using the normal process.

More details (including logs with failed catch-up) can be found here: https://sovrin.atlassian.net/browse/SN-18

Acceptance criteria

  • New node with empty ledger should be able to join existing pool containing any number of transactions

Add virtual development environments to the `ubuntu-20.04-upgrade` branch

Add Visual Studio Code and Gitpod virtual development environments to the indy-plenum project's ubuntu-20.04-upgrade branch. Ubuntu 16.04 is not supported as a DevContainer so we're not going to go through the effort of trying to support virtual development on 16.04.

This has already been done for hyperledger/indy-node and can be used as a reference:

Outstanding issues from the above:

Ubuntu 20.04: Reduce (pinned) dependencies to a minimum

With the upgrade of Indy Plenum to run on Ubuntu 20.04, the dependency management should be improved as well.
An overview of all installed, referenced, and required PyPI packages can be found here: Hyperledger Indy-Plenum | Dependency management Ubuntu 20

  • Remove all packages that are not used anymore
  • Update all dependencies to the most recent version (if possible)
  • Remove as many pinned dependencies as possible
  • setup.py in the feature branch ubuntu-20.04-upgrade reflects the current state of all required PyPI packages (install_requires & test_require)

Test sovrin-client ERROR TypeError: Can't mix strings and bytes in path components

I installed sovrin-node with $ pip install -U --no-cache-dir sovrin-client (following the guide from https://www.evernym.com).
However, when I run sovrin, I get an error. The error output is as follows:

$ sovrin
Loading module /usr/local/lib/python3.5/dist-packages/config/config-crypto-example1.py
Module loaded.

Sovrin-CLI (c) 2017 Evernym, Inc.
Node registry loaded.
    EvernymV1: 52.33.22.91:9721
    EvernymV2: 52.38.24.189:9723
    RespectNetwork: 34.200.79.65:9729
    BULLDOG: 52.56.74.57:9746
Type 'help' for more information.
Running Sovrin 0.3.23


Saved keyring "Default" restored (/home/pi/.sovrin/keyrings/no-env/default.wallet)
Active keyring set to "Default"
sovrin> prompt ALICE
ALICE> connect test
Active keyring "Default" saved (/home/pi/.sovrin/keyrings/no-env/default.wallet)
Current active keyring got moved to 'test' environment. Here is the detail:
    keyring name: Default
    old location: /home/pi/.sovrin/keyrings/no-env/default.wallet
    new location: /home/pi/.sovrin/keyrings/test/default.wallet

Saved keyring "Default" restored (/home/pi/.sovrin/keyrings/test/default.wallet)
Active keyring set to "Default"
Error while running coroutine shell: TypeError("Can't mix strings and bytes in path components",)
Traceback (most recent call last):
  File "/usr/local/bin/sovrin", line 78, in <module>
    run_cli()
  File "/usr/local/bin/sovrin", line 56, in run_cli
    looper.run(cli.shell(*commands))
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 254, in run
    return self.loop.run_until_complete(what)
  File "/usr/lib/python3.5/asyncio/base_events.py", line 466, in run_until_complete
    return future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 293, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 245, in wrapper
    raise ex
  File "/usr/local/lib/python3.5/dist-packages/stp_core/loop/looper.py", line 233, in wrapper
    results.append(await coro)
  File "/usr/local/lib/python3.5/dist-packages/plenum/cli/cli.py", line 1119, in shell
    self.parse(c)
  File "/usr/local/lib/python3.5/dist-packages/plenum/cli/cli.py", line 1902, in parse
    r = action(matchedVars)
  File "/usr/local/lib/python3.5/dist-packages/sovrin_client/cli/cli.py", line 1611, in _connectTo
    self._buildClientIfNotExists(self.config)
  File "/usr/local/lib/python3.5/dist-packages/plenum/cli/cli.py", line 543, in _buildClientIfNotExists
    self.newClient(clientName=name, config=config)
  File "/usr/local/lib/python3.5/dist-packages/sovrin_client/cli/cli.py", line 382, in newClient
    client = super().newClient(clientName, config=config)
  File "/usr/local/lib/python3.5/dist-packages/plenum/cli/cli.py", line 1039, in newClient
    config=config)
  File "/usr/local/lib/python3.5/dist-packages/sovrin_client/client/client.py", line 51, in __init__
    sighex)
  File "/usr/local/lib/python3.5/dist-packages/plenum/client/client.py", line 85, in __init__
    if self.exists(self.stackName, basedirpath):
  File "/usr/local/lib/python3.5/dist-packages/plenum/client/client.py", line 211, in exists
    os.path.exists(os.path.join(basedirpath, name))
  File "/usr/lib/python3.5/posixpath.py", line 89, in join
    genericpath._check_arg_types('join', a, *p)
  File "/usr/lib/python3.5/genericpath.py", line 145, in _check_arg_types
    raise TypeError("Can't mix strings and bytes in path components") from None
TypeError: Can't mix strings and bytes in path components

I don't know how to solve this problem.

Python version:3.5
Pip version:9.0.1
OS version:Debian 9

THANKS

Fix issue with Debian package checksums changing with each build

The checksums of the indy-plenum Debian packages change with each build, even when there are no code changes. If the code does not change, the checksum of the package should be the same each time it's built.

The biggest contributor to this issue is fpm (the tool used to build the packages). It includes a dummy changelog that has date and time stamps in it that change with each build.

The other contributors are the ordering of the content in the PKG-INFO and __manifest__.json files. Changes have recently been made to keep the contents of __manifest__.json ordered. However, the tests_require section in setup.py still creates an unordered list that can change the order of the content in PKG-INFO from build to build.
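One way to make the metadata deterministic is to sort the dependency list before it is serialized into PKG-INFO; a minimal sketch (the package names and the serialization helper are illustrative, not Plenum's actual setup.py):

```python
def serialize_requires(reqs):
    """Emulate how a requirement list ends up in package metadata:
    a sorted list serializes identically on every build."""
    return "\n".join(sorted(reqs))

# The same unordered set can be observed in different orders on
# different builds (package names here are illustrative).
run1 = ["pytest-xdist", "flake8", "pytest"]   # order seen on one build
run2 = ["flake8", "pytest", "pytest-xdist"]   # order seen on another build
assert serialize_requires(run1) == serialize_requires(run2)
```

Applying the same `sorted(...)` treatment to tests_require in setup.py would remove the remaining source of PKG-INFO churn, just as ordering __manifest__.json removed that file's churn.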

getting started without documentation?

@jasonalaw Thank you for the presentation and discussion at BYU OIT today.

Using Plenum/ledger outside of Sovrin might accomplish some of your outreach goals. For example, I'm after a distributed, append-only, ordered object repository (avoiding an RDBMS) to use as an event store. More docs would be nice, especially compared to some of the other Hyperledger projects' docs.

No module named 'indy.pool'

Hi,

I am trying to run python -m plenum.test under the indy-plenum project, but got the error: No module named 'indy.pool'. Please suggest where to compile and install the module. Thank you!

/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/_pytest/config.py:329: in _getconftestmodules
return self._path2confmods[path]
E KeyError: local('/sovrin/build/indy-plenum/plenum/test')

During handling of the above exception, another exception occurred:
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/_pytest/config.py:360: in _importconftest
return self._conftestpath2mod[conftestpath]
E KeyError: local('/sovrin/build/indy-plenum/plenum/test/conftest.py')

During handling of the above exception, another exception occurred:
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/_pytest/config.py:366: in _importconftest
mod = conftestpath.pyimport()
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/py/_path/local.py:668: in pyimport
__import__(modname)
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/pytest/assertion/rewrite.py:213: in load_module
py.builtin.exec_(co, mod.__dict__)
plenum/test/conftest.py:16: in <module>
from indy.pool import create_pool_ledger_config, open_pool_ledger, close_pool_ledger
E ImportError: No module named 'indy.pool'

During handling of the above exception, another exception occurred:
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/py/_path/common.py:377: in visit
for x in Visitor(fil, rec, ignore, bf, sort).gen(self):
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/py/_path/common.py:429: in gen
for p in self.gen(subdir):
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/py/_path/common.py:418: in gen
dirs = self.optsort([p for p in entries
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/py/_path/common.py:419: in
if p.check(dir=1) and (rec is None or rec(p))])
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/_pytest/main.py:411: in _recurse
ihook = self.gethookproxy(path)
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/_pytest/main.py:315: in gethookproxy
my_conftestmodules = pm._getconftestmodules(fspath)
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/_pytest/config.py:343: in _getconftestmodules
mod = self._importconftest(conftestpath)
/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/_pytest/config.py:368: in _importconftest
raise ConftestImportFailure(conftestpath, sys.exc_info())
E _pytest.config.ConftestImportFailure: ImportError("No module named 'indy.pool'",)
E File "/root/.virtualenvs/indyVenv/lib/python3.5/site-packages/pytest/assertion/rewrite.py", line 213, in load_module
E py.builtin.exec_(co, mod.__dict__)
E File "/sovrin/build/indy-plenum/plenum/test/conftest.py", line 16, in <module>
E from indy.pool import create_pool_ledger_config, open_pool_ledger, close_pool_ledger

Align local and GHA build processes

The Ubuntu 16.04 and 20.04 builds follow a similar pattern; the 16.04 pattern is used as the example here since the 20.04 files are currently in a pending PR.

The GHA workflows introduced new Dockerfiles to create the build/test images. The Jenkins/local build processes also have a Dockerfile to create the image used for builds. The associated build-*-docker.sh scripts are not used by the GHA workflows, but the other build scripts are used by both processes.

Having a way to build the project locally in docker is handy, however there should only be one Dockerfile defining the build image.

Consolidate the docker files for the two build processes so there is only a single one shared by the two processes.

These updates should be developed on the branch containing the updates for Ubuntu 20.04. Currently ubuntu-20.04-upgrade, and done after this PR is merged; #1545

Python library `distro==1.8.0` does not work with `fpm`

Python library distro==1.8.0 does not work with fpm and therefore breaks the build-scripts/ubuntu-2004/build-3rd-parties.sh script, without first doing something like this:

sed -i 's|build_from_pypi distro|build_from_pypi distro 1.7.0|g' ./build-scripts/ubuntu-2004/build-3rd-parties.sh

It appears that this is due to the removal of setup.py which fpm requires. Some projects appear to be trying to replace fpm with https://github.com/upciti/wheel2deb as a solution.

Incorrect ViewChange messages consensus calculating

The simulation test for ViewChange sometimes fails

def test_new_view_combinations(random):

because of a problem with getting a checkpoint during the phase of collecting ViewChange messages in the method calc_checkpoint(), which receives a list of ViewChange messages as a parameter.
If it's a 4-node pool and the list contains the following ViewChange messages

  • ViewChange_1 | checkpoints_ends: 10, 20 | stable_checkpoint: 10
  • ViewChange_2 | checkpoints_ends: 0, 10 | stable_checkpoint: 0
  • ViewChange_3 | checkpoints_ends: 0 | stable_checkpoint: 0

Then we don't have a strong consensus of 3 (n-f = 4-1) checkpoints with the same checkpoint end. This means that the node can't finish a view change.

Expected problem: with low probability, one or more nodes may not finish a View Change and after a short period will just start a new one.
With an incredibly low probability, the pool can freeze in endless view changes, but this can be fixed by a POOL_RESTART transaction.

We don't think there is a big chance of facing this case, but we need to remember it and fix it.
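The failing quorum check can be reproduced with a toy model of the checkpoint selection (simplified and with hypothetical message shapes; the real logic lives in calc_checkpoint()):

```python
from collections import Counter

N, F = 4, 1          # a 4-node pool tolerates 1 fault
QUORUM = N - F       # 3 matching checkpoint ends required

def calc_checkpoint(view_change_msgs):
    """Return the highest checkpoint end supported by a strong quorum,
    or None when no quorum exists (simplified model of the real method)."""
    counts = Counter(end for msg in view_change_msgs
                     for end in msg["checkpoints_ends"])
    candidates = [end for end, n in counts.items() if n >= QUORUM]
    return max(candidates) if candidates else None

# The three messages from the issue: checkpoint end 10 appears only
# twice and end 0 only twice, so no end reaches the quorum of 3.
msgs = [
    {"checkpoints_ends": [10, 20]},
    {"checkpoints_ends": [0, 10]},
    {"checkpoints_ends": [0]},
]
assert calc_checkpoint(msgs) is None   # no strong consensus -> view change stalls
```

A fourth message containing checkpoint end 10 would push that end to the quorum of 3 and let the view change finish, which is why the failure only shows up for unlucky message combinations.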

plenum/test/node_catchup/test_node_catchup_with_connection_problem hanging on Ubuntu 20.04

When running on Ubuntu 20.04, the following tests hang and never complete:

plenum/test/node_catchup/test_node_catchup_with_connection_problem.py::test_catchup_with_lost_ledger_status

  • When all four iterations are run, the test hangs on the fourth iteration.
  • When the fourth iteration (lost_count=4) is run on its own, the test passes.
  • Details of the investigation below.

plenum/test/node_catchup/test_node_catchup_with_connection_problem.py::test_catchup_with_lost_first_consistency_proofs

  • On the first iteration.
  • Cause has not been investigated.

plenum/test/node_catchup/test_node_catchup_with_connection_problem.py::test_cancel_request_cp_and_ls_after_catchup

  • On the first iteration.
  • Cause has not been investigated.

Investigation into the hang issue with plenum/test/node_catchup/test_node_catchup_with_connection_problem.py::test_catchup_with_lost_ledger_status

The tests are hanging on this line:

Thinking it could be an issue with RocksDB or the Python wrapper, I tried building the wrapper straight from source (git+https://github.com/twmht/python-rocksdb.git#egg=python-rocksdb) to get the new close method that is not included in the released PyPI version. The latest code causes a seg fault on close, so I also tried git+https://github.com/alexreg/python-rocksdb.git@fix_close_segfault#egg=python-rocksdb, which fixes the seg-fault issue. My thought was that the RocksDB instances were not getting closed/disposed properly. None of this made any difference; the tests still hung.

If you modify the code to run only 3 iterations rather than 4, you avoid the hang and the tests pass. If you modify the code to run just the 4th iteration, the tests also pass.

Steps to reproduce:

Using https://github.com/WadeBarnes/indy-plenum/blob/20.04-test-debugging

MINGW64 /c/indy-plenum (20.04-test-debugging)
$ docker build -t plenum-build:2004 -f .github/workflows/build/Dockerfile.ubuntu-2004 .
MINGW64 /c/indy-plenum (20.04-test-debugging)
$ docker build -t indy-plenum-test:2004 -f .github/workflows/build/Dockerfile.test-2004 .
MINGW64 /c/indy-plenum (20.04-test-debugging)
$ docker run --rm -it --name plenum-testing --volume='//c/indy-plenum:/home/indy/indy-plenum:Z' indy-plenum-test:2004 bash
root@cd83db811641:/home/indy/indy-plenum# python3 -m pytest -l -v --log-cli-level=WARNING --disable-warnings plenum/test/node_catchup/test_node_catchup_with_connection_problem.py::test_catchup_with_lost_ledger_status

Result:
On the fourth iteration of plenum/test/node_catchup/test_node_catchup_with_connection_problem.py::test_catchup_with_lost_ledger_status it will hang on root:kv_store_rocksdb.py:30 Init KeyValueStorageRocksdb -> open

WARNING  root:compact_merkle_tree.py:57 <- _update
PASSED                                                                                                                                                                                                           [ 75%] 
plenum/test/node_catchup/test_node_catchup_with_connection_problem.py::test_catchup_with_lost_ledger_status[4]
---------------------------------------- live log call ----------------------------------------
WARNING  root:test_node_catchup_with_connection_problem.py:44 lost_count: 4
WARNING  root:test_node_catchup_with_connection_problem.py:45 txnPoolNodeSet: [Alpha, Beta, Gamma, Delta]
WARNING  root:test_node_catchup_with_connection_problem.py:46 looper: <stp_core.loop.looper.Looper object at 0x7f6629910220>
WARNING  root:test_node_catchup_with_connection_problem.py:47 sdk_pool_handle: 2
WARNING  root:test_node_catchup_with_connection_problem.py:48 sdk_wallet_steward: (5, 'MSjKTWkPLtYoPEaTF1TUDb')
WARNING  root:test_node_catchup_with_connection_problem.py:49 tconf: <module 'indy_config.py' from '/tmp/pytest-of-root/pytest-1/tmp0/etc/indy/indy_config.py'>
WARNING  root:test_node_catchup_with_connection_problem.py:50 tdir: /tmp/pytest-of-root/pytest-1/tmp0
WARNING  root:test_node_catchup_with_connection_problem.py:51 allPluginsPath: ['/home/indy/indy-plenum/plenum/test/plugin/stats_consumer']
WARNING  root:test_node_catchup_with_connection_problem.py:52 monkeypatch: <_pytest.monkeypatch.MonkeyPatch object at 0x7f662817fca0>

...

WARNING  root:kv_store_rocksdb_int_keys.py:23 -> Init KeyValueStorageRocksdbIntKeys
WARNING  root:kv_store_rocksdb.py:20 -> Init KeyValueStorageRocksdb
WARNING  root:kv_store_rocksdb.py:30 Init KeyValueStorageRocksdb -> open

Frozen ledgers should propagate to new nodes in the pool

According to HIPE 0162 Frozen Ledgers, frozen ledgers do not participate in catch up. New validators added to a pool with frozen ledgers do not receive the history of the frozen ledgers.

Theoretically, someone could freeze a ledger with history and lose that history due to evolution of the validator pool, which would prevent them from being able to perform a third party audit of the ledger history. This can be avoided by propagating the history of frozen ledgers to new nodes in the pool.

View change with large number

When a request contains an integer larger than a 64-bit integer can hold (9223372036854775807), the ujson package throws an uncaught ValueError, which results in a view change.

Stacktrace:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/stp_zmq/zstack.py", line 565, in processReceived
    msg = self.deserializeMsg(msg)
  File "/usr/local/lib/python3.5/dist-packages/stp_zmq/zstack.py", line 835, in deserializeMsg
    msg = json.loads(msg)
ValueError: Value is too big

zstack.py

    @staticmethod
    def deserializeMsg(msg):
        if isinstance(msg, bytes):
            msg = msg.decode()
>>      msg = json.loads(msg)
        return msg
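A defensive variant would catch the oversized-integer error instead of letting it escape (a sketch, not zstack's actual code; it uses the stdlib json module, which parses arbitrary-precision integers, and treats unparseable input as a bad message):

```python
import json  # stdlib json handles arbitrary-precision ints, unlike ujson

def deserialize_msg(msg):
    """Deserialize a wire message, returning None rather than raising
    when the payload cannot be parsed."""
    if isinstance(msg, bytes):
        msg = msg.decode()
    try:
        return json.loads(msg)
    except ValueError:
        # ujson raises "Value is too big" for ints > 2**63 - 1;
        # reject the message instead of crashing into a view change.
        return None

assert deserialize_msg(b'{"n": 1}') == {"n": 1}
assert deserialize_msg(b'{"n": 9223372036854775808}') == {"n": 9223372036854775808}
assert deserialize_msg(b"not json") is None
```

Either fallback (stdlib json for big ints, or dropping the malformed message) keeps a single oversized integer from triggering a pool-wide view change.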

Upgrade and unpin remaining dependencies

An overview of all installed, referenced, and required PyPI packages can be found here: Hyperledger Indy-Plenum | Dependency management Ubuntu 20

As discussed here, Issue #1544, there are still several dependencies that need to be upgraded following the initial Ubuntu 20.04 release.

Where possible the dependencies should be upgraded individually to reduce the scope of the work, and a separate issue linking back to this one should be created to track the work.

Running Error

Hey guys.

I was trying to install plenum like you explained in the introduction. Unfortunately I get an error when I try to run it.

The error message:
(evernym) musquash@ubuntu:~$ start_plenum_node Alpha

530 DEBUG Using selector: EpollSelector
535 DEBUG Starting ledger...
Keys exists for remote role EvernymV1
Keys exists for remote role EvernymV2
Keys exists for remote role WSECU
Keys exists for remote role BIG
Keys exists for remote role RespectNetwork
540 INFO Looper shutting down now...
551 INFO Looper shut down in 0.011 seconds.
Traceback (most recent call last):
File "/home/musquash/evernym/bin/start_plenum_node", line 22, in <module>
node = Node(selfName, nodeRegistry=None, basedirpath=keepDir)
File "/home/musquash/evernym/lib/python3.5/site-packages/plenum/server/node.py", line 113, in __init__
self.poolManager.nodeReg)
File "/home/musquash/evernym/lib/python3.5/site-packages/plenum/common/stacked.py", line 302, in __init__
self._name = stackParams["name"]
TypeError: 'NoneType' object is not subscriptable
607 DEBUG Close <_UnixSelectorEventLoop running=False closed=False debug=True>
(evernym) musquash@ubuntu:~$

I am using a xubuntu vm (20GB and 1 GB ram, Ubuntu 16.06)

I hope you can help me.

Verify transaction types in genesis files

It turned out Indy Node/Plenum doesn't verify transaction types in genesis transactions. Given that there are actually two genesis transaction files in Indy Plenum (pool and domain), this can lead to situations where the files are mixed up and Indy Node doesn't give any warning. It might also be sensible to perform some additional validation of those transactions (for example, the static-validation part).

More details can be found here: https://sovrin.atlassian.net/browse/SN-18

Acceptance criteria

  • When building ledgers from genesis transaction files Indy Plenum/Node should perform at least minimal validation of these transactions
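Minimal validation could check each genesis transaction's type against the set allowed for the ledger being built; a sketch with illustrative type codes (the real codes and txn layout live in plenum.common.constants and the genesis file format):

```python
# Illustrative txn type codes; real values are defined in
# plenum.common.constants (these are assumptions for the sketch).
POOL_TXN_TYPES = {"0"}      # e.g. NODE
DOMAIN_TXN_TYPES = {"1"}    # e.g. NYM

def validate_genesis(txns, allowed_types):
    """Raise if any genesis transaction has a type not allowed for the
    ledger being built, which catches mixed-up pool/domain files early."""
    for i, txn in enumerate(txns):
        t = txn.get("txn", {}).get("type")
        if t not in allowed_types:
            raise ValueError(f"genesis txn #{i} has unexpected type {t!r}")

pool_txns = [{"txn": {"type": "0"}}]
validate_genesis(pool_txns, POOL_TXN_TYPES)          # correct file: passes
try:
    validate_genesis(pool_txns, DOMAIN_TXN_TYPES)    # mixed-up file: rejected
    raise AssertionError("should have failed")
except ValueError:
    pass
```

Running a check like this while building ledgers from the genesis files would turn a silent mix-up into an immediate, explicit error at startup.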

Ubuntu 20.04: create CD for Ubuntu 20.04

Create CD for Ubuntu 20.04 for the feature branch ubuntu-20.04-upgrade

Acceptance criteria:

  • All existing tests are launched via GitHub Actions, and they pass (if they pass locally)
  • All jobs (steps) pass
  • Reports from launches are available
  • Indy Plenum deb packages are published to a public repository (focal as the channel name for Ubuntu 20.04 Focal Fossa)

README.md out of date - init_plenum_raet_keep parameters

The keep key-generation commands listed in the README have incorrect parameters:

init_plenum_raet_keep --name Alpha --seeds 000000000000000000000000000Alpha Alpha000000000000000000000000000 --force

The current code (installed using pip) asks for a --seed parameter instead, and will only accept a single seed:

init_plenum_raet_keep --name Alpha --seed 000000000000000000000000000AlphaAlpha000000000000000000000000000 --force

I'm not sure what the implications of the change are: whether it previously used the two seeds to generate the key pair, whereas it now uses a single seed.
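One observation that can be checked directly: the single --seed value in the new command is just the two old seed values concatenated, so the same material is being supplied either way.

```python
# Seeds copied verbatim from the two commands above.
old_seeds = [
    "000000000000000000000000000Alpha",
    "Alpha000000000000000000000000000",
]
new_seed = "000000000000000000000000000AlphaAlpha000000000000000000000000000"

# The new single-seed form is the concatenation of the two old seeds.
assert "".join(old_seeds) == new_seed
```

Whether the key-generation code actually splits the combined value back into two seeds, or derives the key pair differently, is exactly the open question above.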

Run sovrin error

I installed sovrin-node with $ pip install -U --no-cache-dir sovrin-client (following the guide from https://www.evernym.com).
However, when I run sovrin, I get an error. The error output is as follows:

(sovrin) [xzc@xzc ~]$ sovrin
Loading module /home/xzc/anaconda3/lib/python3.6/site-packages/config/config-crypto-example1.py
Module loaded.
Traceback (most recent call last):
  File "/home/xzc/anaconda3/bin/sovrin", line 37, in <module>
    from sovrin_client.cli.cli import SovrinCli
  File "/home/xzc/anaconda3/lib/python3.6/site-packages/sovrin_client/cli/cli.py", line 17, in <module>
    from plenum.cli.cli import Cli as PlenumCli
  File "/home/xzc/anaconda3/lib/python3.6/site-packages/plenum/cli/cli.py", line 82, in <module>
    from plenum.server.node import Node
  File "/home/xzc/anaconda3/lib/python3.6/site-packages/plenum/server/node.py", line 71, in <module>
    from plenum.server.monitor import Monitor
  File "/home/xzc/anaconda3/lib/python3.6/site-packages/plenum/server/monitor.py", line 24, in <module>
    pluginManager = PluginManager()
  File "/home/xzc/anaconda3/lib/python3.6/site-packages/plenum/server/notifier_plugin_manager.py", line 35, in __init__
    self.importPlugins()
  File "/home/xzc/anaconda3/lib/python3.6/site-packages/plenum/server/notifier_plugin_manager.py", line 79, in importPlugins
    plugins = self._findPlugins()
  File "/home/xzc/anaconda3/lib/python3.6/site-packages/plenum/server/notifier_plugin_manager.py", line 105, in _findPlugins
    for pkg in pip.utils.get_installed_distributions()
AttributeError: module 'pip' has no attribute 'utils'

However, when I go to the Python console (by typing python) and import pip.utils, it works fine:

(sovrin) [xzc@xzc ~]$ python
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19) 
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pip.utils
>>> pip.utils
<module 'pip.utils' from '/home/xzc/.virtualenvs/sovrin/lib/python3.6/site-packages/pip/utils/__init__.py'>
>>> pip.utils.get_installed_distributions()
[wheel 0.31.1 (/home/xzc/.virtualenvs/sovrin/lib/python3.6/site-packages), setuptools 39.2.0 (/home/xzc/.virtualenvs/sovrin/lib/python3.6/site-packages), pip 9.0.1 (/home/xzc/.virtualenvs/sovrin/lib/python3.6/site-packages)]

Python version:3.6.4
Pip version:9.0.1
OS version:Centos7
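For what it's worth, the AttributeError typically means pip.utils was never imported explicitly (import pip alone does not pull in its submodules), and in pip >= 10 the module was removed entirely. A hypothetical compatibility shim (illustrative names, not plenum's actual fix) could fall back to pkg_resources:

```python
try:
    # Importing the submodule explicitly works on pip < 10;
    # `import pip` alone does not make `pip.utils` available.
    from pip.utils import get_installed_distributions
except ImportError:
    # pip >= 10 removed pip.utils; pkg_resources (part of setuptools)
    # exposes the same information via the active working set.
    import pkg_resources

    def get_installed_distributions():
        return list(pkg_resources.working_set)

names = sorted(d.project_name for d in get_installed_distributions())
```

Either branch yields distribution objects with a project_name attribute, so caller code such as the plugin scanner keeps working regardless of the pip version.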

THANKS

request to remove stale branches from this repo

Can we remove branches other than Master and Stable from this repo and have them moved to forks? I would make a PR to do this, but I'm not sure which ones are still being used and which are stale.

GitHub Action CI/CD Enhancements

The following is a list of what the existing Jenkins Pipelines do that the GitHub Actions Workflows don't (yet).

We'll be focusing on the features and functionality of the Jenkinsfile.cd Pipeline, since the GitHub Actions Workflows have incorporated all of the features and functionality of the Jenkinsfile.ci Pipeline.

The flow of the pipeline is set up in Jenkinsfile.cd, but the execution is controlled by the testAndPublish script in the private sovrin-foundation/jenkins-shared repository; therefore anyone working on these enhancements will need to be granted read-only access to that repository in order to follow the code. The scripts automate the release process described in the Indy-Node Release Workflow. The same scripts are used for both indy-node and indy-plenum.

  • Configure auto-merge on PRs containing changes to setup.py and no other files.
  • Update the release version on release candidate (isRC) PRs.
  • Conditionally build (for release candidates) or repack (for releases) artifacts.
  • Promote/copy artifacts (deb packages) to different locations in the repository.
  • Optionally run system tests (this feature is used for indy-node, but not indy-plenum).
  • Create a release PR for (off) RC PRs.
  • Notify a mailing list that a new RC release is waiting for approval, and then wait for the release to be approved.
  • Merge approved release candidate PRs into the release branch.
  • Notify a mailing list that a new release is available.
  • Rollback release commits on PRs when the release is not approved.

When developing the enhancements, a separate issue should be created to track the work and linked back to this issue. Feature enhancements should be limited to the smallest set of related features in order to limit the scope of the work.

Cannot run tests on Mac because of _lzma module

Here's the stack trace I'm getting when trying to run tests from the master build. I believe it is specific to my Python build, because it's not able to find this standard package, which is supposed to be built into python3. I am currently using a Python 3 (3.5.2) instance installed through pyenv. Any suggestions on how to resolve this?

Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pydev/pydev_run_in_console.py", line 52, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/kyle/garage/indy/indy-plenum/plenum/test/primary_election/test_primary_election_case2.py", line 3, in <module>
    from stp_core.loop.eventually import eventually
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/kyle/garage/indy/indy-plenum/stp_core/loop/eventually.py", line 9, in <module>
    from stp_core.common.log import getlogger
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/kyle/garage/indy/indy-plenum/stp_core/common/log.py", line 6, in <module>
    from stp_core.common.logging.CompressingFileHandler import CompressingFileHandler
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/kyle/garage/indy/indy-plenum/stp_core/common/logging/CompressingFileHandler.py", line 4, in <module>
    import lzma
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/kyle/.pyenv/versions/3.5.2/lib/python3.5/lzma.py", line 26, in <module>
    from _lzma import *
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
    module = self._system_import(name, *args, **kwargs)
ImportError: No module named '_lzma'
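This error usually means the interpreter was compiled without the xz headers available (a common pyenv-on-macOS pitfall), so the _lzma C extension was never built. A quick check, with the likely remedy noted in a comment (assuming Homebrew and pyenv):

```python
# Check whether this interpreter was built with lzma support.
try:
    import lzma
    print("lzma is available:", lzma.decompress(lzma.compress(b"test")) == b"test")
except ImportError:
    # Likely fix (macOS + pyenv): install the xz headers and rebuild Python,
    # e.g. `brew install xz` followed by re-running `pyenv install 3.5.2`.
    print("this Python build lacks the _lzma extension")
```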

Details on CurveZMQ

Hi,
I am working on a use case using Hyperledger Indy and Aries, and I have the queries below.

  1. The official Indy docs state that client-client and node-node communication happens over CurveZMQ.
    Reference link: https://github.com/hyperledger/indy-plenum/blob/master/docs/source/main.md
    (a) In this regard, who exactly is a client? Is it another node participating in the Indy pool, or an external client outside the pool, such as a mobile app?
    (b) Related to this, can you detail how public keys for CurveZMQ are managed in the Indy codebase, and how mobile or web agents (used as connectors) obtain the server's public key?
    As the official CurveZMQ documentation says, "To start a secure connection the client needs the server permanent public key."

http://curvezmq.org/page:read-the-docs

Please help me out with the above queries, as I am a little stuck on this.
Hoping for a positive response.
Thanks in advance!
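To illustrate the quoted requirement, here is a minimal CurveZMQ setup sketch using pyzmq (an assumption for illustration; the socket types, port, and flow are illustrative and differ from indy-plenum's actual transport code). The key point is that the client must be given the server's long-term public key out of band before it can connect:

```python
import zmq

# Generate long-term key pairs (Z85-encoded, 40 bytes each).
server_public, server_secret = zmq.curve_keypair()
client_public, client_secret = zmq.curve_keypair()

ctx = zmq.Context.instance()

server = ctx.socket(zmq.REP)
server.curve_secretkey = server_secret
server.curve_publickey = server_public
server.curve_server = True  # this socket acts as the CURVE server
server.bind("tcp://127.0.0.1:5556")

client = ctx.socket(zmq.REQ)
client.curve_secretkey = client_secret
client.curve_publickey = client_public
# The crucial step: the client must already know the server's public key.
client.curve_serverkey = server_public
client.connect("tcp://127.0.0.1:5556")
```

As I understand it, in Indy the validators' transport keys are made known through the pool ledger (the NODE genesis transactions), which is how both other nodes and external clients, such as SDK-based agents, learn them.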

Zeno cli name

I suggest changing /scripts/cli to /scripts/zeno, since the CLI installed as /usr/local/bin/cli might be ambiguous, but /usr/local/bin/zeno won't be.

IC_queue is getting huge

We are experiencing an issue with a huge IC_queue. Our validator_info response is around 7 MB in size. The issue already started in March, when lots of view changes were triggered before a view change could complete. The view_no to be voted on increased from 8044 to 10931. Then a node voted for a view change to 8045 again, which was successful. However, every node still carries the huge IC_queue. Restarting does not help, since the IC_queue is persisted.

IC - Instance Change

I've attached (parts of) the log from the view_change_trigger_service.py.
extract.txt

Looking at the code, it seems that this is the only point where instance change messages get removed.

Is there a way to flush the IC_queue without deleting the Indy node's data directory? In this case, about 10 stewards would need to do it.
Can we do something to prevent such a build-up of (unsuccessful) view changes?

I've reported this issue/asked the questions also on Rocketchat.

Thanks for taking the time to look into this!

Network Details

There were 13 validator nodes in March; today there are 15.

Software

"Software": {
                    "indy-node": "1.12.4",
                    "sovrin": "unknown",
                    "Indy_packages": [
                        "hi  indy-node                           1.12.4                                        amd64        Indy node",
                        "hi  indy-plenum                         1.12.4                                        amd64        Plenum Byzantine Fault Tolerant Protocol",
                        "hi  libindy-crypto                      0.4.5                                         amd64        This is the shared crypto libirary for Hyperledger Indy components.",
                        "hi  python3-indy-crypto                 0.4.5                                         amd64        This is the official wrapper for Hyperledger Indy Crypto library (https://www.hyperledger.org/projects).",
                        ""
                    ],
                    "OS_version": "Linux-4.15.0-1092-azure-x86_64-with-Ubuntu-16.04-xenial",
                    "Installed_packages": [
                        "jsonpickle 0.9.6",
                        "intervaltree 2.1.0",
                        "pyzmq 18.1.0",
                        "packaging 19.0",
                        "ioflo 1.5.4",
                        "base58 1.0.0",
                        "python-rocksdb 0.6.9",
                        "orderedset 2.0",
                        "indy-crypto 0.4.5-23",
                        "sha3 0.2.1",
                        "portalocker 0.5.7",
                        "distro 1.3.0",
                        "setuptools 38.5.2",
                        "rlp 0.5.1",
                        "psutil 5.4.3",
                        "indy-plenum 1.12.4",
                        "python-dateutil 2.6.1",
                        "semver 2.7.9",
                        "Pympler 0.5",
                        "sortedcontainers 1.5.7",
                        "Pygments 2.2.0",
                        "libnacl 1.6.1",
                        "timeout-decorator 0.4.0",
                        "six 1.11.0",
                        "indy-node 1.12.4"
                    ]
                },

There is some variation in the exact OS_version among the validator nodes.

'NoneType' object has no attribute 'split'

You might have a problem:

(env) > $ plenum                                                                                                         [±master ✓]

Plenum-CLI (c) 2016 Evernym, Inc.
Node registry loaded.
    Alpha: 127.0.0.1:9701
    Beta: 127.0.0.1:9703
    Gamma: 127.0.0.1:9705
    Delta: 127.0.0.1:9707
Type 'help' for more information.
plenum> new node all
plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
Delta added replica Delta:0 to instance 0 (master)
Delta added replica Delta:1 to instance 1 (backup)
Delta listening for other nodes at 127.0.0.1:9707
Delta disconnected node is joined
Delta disconnected node is joined
Delta looking for Alpha at 127.0.0.1:9701
Delta starting key sharing
plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
Beta added replica Beta:0 to instance 0 (master)
Beta added replica Beta:1 to instance 1 (backup)
Beta listening for other nodes at 127.0.0.1:9703
Beta disconnected node is joined
Beta disconnected node is joined
Beta looking for Alpha at 127.0.0.1:9701
Beta starting key sharing
plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
Gamma added replica Gamma:0 to instance 0 (master)
Gamma added replica Gamma:1 to instance 1 (backup)
Gamma listening for other nodes at 127.0.0.1:9705
Gamma disconnected node is joined
Gamma disconnected node is joined
Gamma starting key sharing
plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
Alpha added replica Alpha:0 to instance 0 (master)
Alpha added replica Alpha:1 to instance 1 (backup)
Alpha listening for other nodes at 127.0.0.1:9701
Alpha first time running; waiting for key sharing...
Alpha starting key sharing
Alpha looking for Gamma at 127.0.0.1:9705
Gamma now connected to Alpha
Alpha now connected to Gamma
Alpha msg validated ({'ledgerType': 1, 'txnSeqNo': 0, 'merkleRoot': '47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='}, 'Gamma')
Gamma msg validated ({'ledgerType': 1, 'txnSeqNo': 0, 'merkleRoot': '47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='}, 'Alpha')
plenum> status
Nodes: Beta, Delta, Gamma, Alpha
Clients: No clients are running. Try typing 'new client <name>'.
f-value (number of possible faulty nodes): 1
Instances: 2
Error while running coroutine shell: AttributeError("'NoneType' object has no attribute 'split'",)
Traceback (most recent call last):
  File "/home/crunch/projects/route/env/bin/plenum", line 53, in <module>
    run_cli()
  File "/home/crunch/projects/route/env/bin/plenum", line 49, in run_cli
    looper.run(cli.shell(*commands))
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/common/looper.py", line 251, in run
    return self.loop.run_until_complete(what)
  File "/usr/lib/python3.5/asyncio/base_events.py", line 466, in run_until_complete
    return future.result()
  File "/usr/lib/python3.5/asyncio/futures.py", line 293, in result
    raise self._exception
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/common/looper.py", line 242, in wrapper
    raise ex
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/common/looper.py", line 230, in wrapper
    results.append(await coro)
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/cli/cli.py", line 966, in shell
    self.parse(c)
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/cli/cli.py", line 1277, in parse
    r = action(matchedVars)
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/cli/cli.py", line 976, in _simpleAction
    self.getStatus()
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/cli/cli.py", line 711, in getStatus
    format(Replica.getNodeName(mPrimary)))
  File "/home/crunch/projects/route/env/lib/python3.5/site-packages/plenum/server/replica.py", line 234, in getNodeName
    return replicaName.split(":")[0]
AttributeError: 'NoneType' object has no attribute 'split'
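The crash happens because the status command asks for the master primary's node name before any primary has been elected, so Replica.getNodeName receives None. A hypothetical defensive version (illustrative only, not the project's actual patch):

```python
# Hypothetical guard for Replica.getNodeName: a replica name like "Alpha:0"
# maps to node "Alpha", but right after startup no primary may be elected
# yet, in which case the replica name is None.
def get_node_name(replica_name):
    if replica_name is None:
        return None  # election still in progress
    return replica_name.split(":")[0]
```

The CLI could then print something like "no primary elected yet" instead of raising AttributeError.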

What is the use of the clients added to the Test Network ?

Hi,

I am new to Indy and I am trying to better understand the structure of the whole system.
I understand what a Trustee, a Steward, and a Trust Anchor are.

I was reading the indy-plenum/plenum/common/test_network_setup.py script and I saw this piece of code:

for cd in client_defs:
    txn = Member.nym_txn(cd.nym, verkey=cd.verkey, creator=trustee_def.nym,
                         seq_no=seq_no,
                         protocol_version=genesis_protocol_version)

What are these clients for? What can they do?

Thanks in advance for your answer !

Rename master branch

Summary

We should rename the branch master to main and use that going forward for our work.

From Problematic Terminology in Open-Source on master/slave terminology in software:

Use of this term is problematic. It references slavery to convey meaning about the relationship between two entities.

Removing this terminology from our workflow is a small gesture to include more people from marginalized groups in this project.

(I’m open to names other than main)

Technical Steps

  • create main branch from master
  • make main the default GitHub branch
  • modify github/central to use main for release notes reloading
  • redirect PRs to main in hyperledger/indy-plenum
  • move branch protections from master to main
  • modify docs to reference main instead of master
  • delete master branch to avoid confusion?

Feedback?

Tests using the sdk fail on the master branch

The majority of tests on the master branch that require indy-sdk fail on the call to open_pool_ledger, which calls indy_open_pool_ledger. The easiest way to reproduce is to run plenum/test/sdk/test_sdk_bindings.py. The same tests work on older branches; the difference is the format of the genesis files.
Failing tests have this genesis file (new txn format):

{"reqSignature":{},"txn":{"data":{"data":{"alias":"Alpha","blskey":"p2LdkcuVLnqidf9PAM1josFLfSTSTYuGqaSBx2Dq72Z5Kt2axQicYyqkQ6ZfcwzHpmLevmcXVwD4EC32wTbusvYxb5D1MJBfu67SQqxRTcK7pRBQXYiaPrzqUo9odhAgrwNPSbHcBJM6s5cNUPvjZZDuSJvhjC7tKFV9FGqyX4Zs4u","client_ip":"127.0.0.1","client_port":6152,"node_ip":"127.0.0.1","node_port":6151,"services":["VALIDATOR"]},"dest":"JpYerf4CssDrH76z7jyQPJLnZ1vwYgvKbvcp16AB5RQ"},"metadata":{"from":"MSjKTWkPLtYoPEaTF1TUDb"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"b1a96dd646bccaa24cef7a3db22a6f995f05658f4f1c3272913e258c03e6fb24"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Beta","blskey":"2JY8jXAiy3ffLu1ggSaiFTBpmb9X7wUZEedg7G3mJSU1vCnqFzYAofGR9SGEvb1C3p88Kdm2CPAdMyMc5v9KxL26vfeeHzRa2N5EHwV1JpPH5kcdYYkFhgNf8wxFAvJ9vPS1aCVms41ZC17GeovJLh4L2iACNd7ttPyS5M6a9Uux9oz","client_ip":"127.0.0.1","client_port":6154,"node_ip":"127.0.0.1","node_port":6153,"services":["VALIDATOR"]},"dest":"DG5M4zFm33Shrhjj6JB7nmx9BoNJUq219UXDfvwBDPe2"},"metadata":{"from":"E4rYSWBUA12j5ScG6mie1p"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"703390318bd55aef50b7823d2b90a846debff99e6e3d401a24a921b733912a6d"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Gamma","blskey":"1JwRChBPGQTtp4m4aNBRrf2kG3mzgxtRUTAscx8iV9uDih34pKWnEA54CoNq3DhAgEURQCN6VKrSZUb6zzzLBHhQt7HBdw2kbfUR3Fap2jqE6TEDamFQpqced2GRVcDo5wgVVKydsf1rFundAk7jMSk7mLrf7zBBN9xBx2yaUkvueN","client_ip":"127.0.0.1","client_port":6156,"node_ip":"127.0.0.1","node_port":6155,"services":["VALIDATOR"]},"dest":"AtDfpKFe1RPgcr5nnYBw1Wxkgyn8Zjyh5MzFoEUTeoV3"},"metadata":{"from":"QxzxUA7gePtb9t46n1YgsC"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"a8ab3d5805c9214bc66b794f599cbccd5a5958dc5a6a322ee81e3a68344c6db7"},"ver":"1"}
{"reqSignature":{},"txn":{"data":{"data":{"alias":"Delta","blskey":"4kkk7y7NQVzcfvY4SAe1HBMYnFohAJ2ygLeJd3nC77SFv2mJAmebH3BGbrGPHamLZMAFWQJNHEM81P62RfZjnb5SER6cQk1MNMeQCR3GVbEXDQRhhMQj2KqfHNFvDajrdQtyppc4MZ58r6QeiYH3R68mGSWbiWwmPZuiqgbSdSmweqc","client_ip":"127.0.0.1","client_port":6158,"node_ip":"127.0.0.1","node_port":6157,"services":["VALIDATOR"]},"dest":"4yC546FFzorLPgTNTc6V43DnpFrR8uHvtunBxb2Suaa2"},"metadata":{"from":"WMStfRmANynUmdpa1QYKDw"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"18833da39fb9b7f8c917fe0220daf9cf12e6524df8fb16e39f04dbe827e2d200"},"ver":"1"}

Passing tests have this genesis file (old txn format):

{"data":{"alias":"Alpha","blskey":"p2LdkcuVLnqidf9PAM1josFLfSTSTYuGqaSBx2Dq72Z5Kt2axQicYyqkQ6ZfcwzHpmLevmcXVwD4EC32wTbusvYxb5D1MJBfu67SQqxRTcK7pRBQXYiaPrzqUo9odhAgrwNPSbHcBJM6s5cNUPvjZZDuSJvhjC7tKFV9FGqyX4Zs4u","client_ip":"127.0.0.1","client_port":6144,"node_ip":"127.0.0.1","node_port":6143,"services":["VALIDATOR"]},"dest":"JpYerf4CssDrH76z7jyQPJLnZ1vwYgvKbvcp16AB5RQ","identifier":"MSjKTWkPLtYoPEaTF1TUDb","type":"0"}
{"data":{"alias":"Beta","blskey":"2JY8jXAiy3ffLu1ggSaiFTBpmb9X7wUZEedg7G3mJSU1vCnqFzYAofGR9SGEvb1C3p88Kdm2CPAdMyMc5v9KxL26vfeeHzRa2N5EHwV1JpPH5kcdYYkFhgNf8wxFAvJ9vPS1aCVms41ZC17GeovJLh4L2iACNd7ttPyS5M6a9Uux9oz","client_ip":"127.0.0.1","client_port":6146,"node_ip":"127.0.0.1","node_port":6145,"services":["VALIDATOR"]},"dest":"DG5M4zFm33Shrhjj6JB7nmx9BoNJUq219UXDfvwBDPe2","identifier":"E4rYSWBUA12j5ScG6mie1p","type":"0"}
{"data":{"alias":"Gamma","blskey":"1JwRChBPGQTtp4m4aNBRrf2kG3mzgxtRUTAscx8iV9uDih34pKWnEA54CoNq3DhAgEURQCN6VKrSZUb6zzzLBHhQt7HBdw2kbfUR3Fap2jqE6TEDamFQpqced2GRVcDo5wgVVKydsf1rFundAk7jMSk7mLrf7zBBN9xBx2yaUkvueN","client_ip":"127.0.0.1","client_port":6148,"node_ip":"127.0.0.1","node_port":6147,"services":["VALIDATOR"]},"dest":"AtDfpKFe1RPgcr5nnYBw1Wxkgyn8Zjyh5MzFoEUTeoV3","identifier":"QxzxUA7gePtb9t46n1YgsC","type":"0"}
{"data":{"alias":"Delta","blskey":"4kkk7y7NQVzcfvY4SAe1HBMYnFohAJ2ygLeJd3nC77SFv2mJAmebH3BGbrGPHamLZMAFWQJNHEM81P62RfZjnb5SER6cQk1MNMeQCR3GVbEXDQRhhMQj2KqfHNFvDajrdQtyppc4MZ58r6QeiYH3R68mGSWbiWwmPZuiqgbSdSmweqc","client_ip":"127.0.0.1","client_port":6150,"node_ip":"127.0.0.1","node_port":6149,"services":["VALIDATOR"]},"dest":"4yC546FFzorLPgTNTc6V43DnpFrR8uHvtunBxb2Suaa2","identifier":"WMStfRmANynUmdpa1QYKDw","type":"0"}

It looks like indy-sdk is not able to parse the new txn format.
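Comparing the two samples, the new format nests the payload under txn.data and moves the author from the top-level identifier field to txn.metadata.from. A small sketch (node_info is a hypothetical helper, not part of either codebase) that reads both formats makes the difference concrete:

```python
import json

def node_info(genesis_line):
    """Extract (alias, dest, author) from a genesis txn in either format."""
    txn = json.loads(genesis_line)
    if "txn" in txn:  # new format ("ver": "1"): payload nested under "txn"
        payload = txn["txn"]
        return (payload["data"]["data"]["alias"],
                payload["data"]["dest"],
                payload["metadata"]["from"])
    # old flat format: everything at the top level
    return (txn["data"]["alias"], txn["dest"], txn["identifier"])
```

An indy-sdk build that only walks the old flat layout would fail to find these fields in the new files, which matches the observed open_pool_ledger failures.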

By using the libindy.so and libindy-crypto.so builds from the master branch of the respective repositories, I get the error _load_cdll: Can't load libindy: .... libindy.so: undefined symbol: crypto_pwhash while running the plenum tests (try running the file mentioned above). I have libsodium18 and libsodium18-dev installed; do I need a newer version of libsodium for crypto_pwhash?
