0Chain is a decentralized blockchain-based storage platform with built-in privacy and security compliance. It provides high performance, enforceable SLAs, choice of providers for all enterprise grade applications.

Home Page: https://0chain.net/

License: Other



TestNet Setup with Docker Containers


Initial Setup

Directory Setup for Miners and Sharders

In the git/0chain directory, run the following command:

./docker.local/bin/init.setup.sh

Setup Network

Set up a network called testnet0 so the node containers can talk to each other.

Note: The config file should provide the IP addresses of the nodes as assigned in this network.

./docker.local/bin/setup_network.sh
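
To confirm the network was created, the standard Docker network commands can be used (nothing 0chain-specific here; testnet0 is the network name mentioned above):

docker network ls | grep testnet0
docker network inspect testnet0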

Building the Nodes

  1. Open 5 terminal tabs. Use the first one for building the containers, staying in the git/0chain directory. Use the next 3 for the 3 miners, each in its respective miner directory created above under docker.local. Use the 5th terminal in the sharder1 directory.

1.1) First build the base containers, zchain_build_base and zchain_run_base

Use the -m1 flag to build for the Apple M1 chip

./docker.local/bin/build.base.sh
  2. Building the miners and sharders. From the git/0chain directory use:

2.1) To build the miner containers

Use the -m1 flag to build for the Apple M1 chip

./docker.local/bin/build.miners.sh

2.2) To build the sharder containers

Use the -m1 flag to build for the Apple M1 chip

./docker.local/bin/build.sharders.sh

This builds the single sharder.

2.3) Syncing time (the host and the containers can drift apart by a few seconds, which throws validation errors because transactions are only accepted within 5 seconds of creation). This step is needed periodically, whenever you see the validation error.

./docker.local/bin/sync_clock.sh
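
To check whether the clocks have actually drifted before re-running the sync, compare the host time with a container's time (a minimal sketch; miner1_miner_1 is a guess at the default compose container name and may differ on your setup):

date -u
docker exec miner1_miner_1 date -u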

Configuring the nodes

  1. Use ./docker.local/config/0chain.yaml to configure the blockchain properties. The default options are set up for running the blockchain quickly in development.

1.1) If you want the logs to appear on the console - change logging.console from false to true

1.2) If you want the debug statements in the logs to appear - change logging.level from "info" to "debug"

1.3) If you want to change the block size, set the value of server_chain.block.size

1.4) If you want to adjust the network relay time, set the value of network.relay_time
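
If you prefer making the logging switches from the command line, sed one-liners like the following work (a sketch only; it assumes the keys appear in 0chain.yaml exactly as console: false and level: "info", so verify the file afterwards):

# GNU sed shown; on macOS use: sed -i '' ...
sed -i 's/console: false/console: true/' docker.local/config/0chain.yaml
sed -i 's/level: "info"/level: "debug"/' docker.local/config/0chain.yaml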

Note: Remove sharder72 and miner75 from docker.local/config/b0snode2_keys.txt and docker.local/config/b0mnode5_keys.txt respectively if you are joining a local network.

Starting the nodes

  1. Starting the nodes. On each of the miner terminals, use the commands below (note the .. at the beginning: these commands are run from within the docker.local/<miner|sharder><i> directories, and bin is one level above, relative to these directories).

Start sharder first because miners need the genesis magic block. On the sharder terminal, use

../bin/start.b0sharder.sh

Wait until Cassandra has started and the sharder is ready to listen for requests.

On the respective miner terminal, use

../bin/start.b0miner.sh
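
To follow a node while it starts, you can tail its log from the host; the paths follow the layout described under Log files below (miner1 is just an example):

tail -f docker.local/miner1/log/0chain.log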

Re-starting the nodes

To reflect a change in the config files 0chain.yaml and sc.yaml, just restart the miner or sharder to pick up the new configuration. If you change code locally or pull updates from GitHub, you need to rebuild:

git pull
docker.local/bin/build.base.sh && docker.local/bin/build.sharders.sh && docker.local/bin/build.miners.sh

If you have run the chain before, make sure there are no leftover files or processes from the previous run:

docker stop $(docker ps -a -q)
docker.local/bin/clean.sh
docker.local/bin/init.setup.sh
docker.local/bin/sync_clock.sh

Then go to individual miner/sharder:

../bin/start.b0sharder.sh (start sharders first!)
../bin/start.b0miner.sh

Running on systems with SELinux enabled

The herumi library used for BLS threshold signatures requires this SELinux boolean to be turned on:

setsebool -P selinuxuser_execheap 1
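
You can verify the current value of the boolean with the standard SELinux tooling before and after the change:

getsebool selinuxuser_execheap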

If you are curious about the reasons for this, this thread sheds some light on the topic:

herumi/xbyak#9

Setting up Cassandra Schema

The following is no longer required as the schema is automatically loaded.

  1. Start the sharder service, which also brings up the Cassandra service. To run commands on Cassandra, use the following command:

../bin/run.sharder.sh cassandra cqlsh
  2. To create the zerochain keyspace, do the following:
../bin/run.sharder.sh cassandra cqlsh -f /0chain/sql/zerochain_keyspace.sql
  3. To create the tables, do the following:
../bin/run.sharder.sh cassandra cqlsh -k zerochain -f /0chain/sql/txn_summary.sql
  4. When you want to truncate existing data (use caution), do the following:
../bin/run.sharder.sh cassandra cqlsh -k zerochain -f /0chain/sql/truncate_tables.sql
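
To sanity-check that the keyspace and tables were created, you can run a single statement through cqlsh (the -e flag is standard cqlsh; the wrapper script passes the extra arguments through, as in the commands above):

../bin/run.sharder.sh cassandra cqlsh -k zerochain -e "DESCRIBE TABLES;"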

Generating Test Transactions

There is no need to generate the test data separately. In development mode, the transaction data is automatically generated at a certain rate based on the block size.

However, you can use the block explorer to submit transactions, view the blocks and confirm the transactions.

Monitoring the progress

  1. Use the block explorer to see the progress of the blockchain.

  2. In addition, use the '/_diagnostics' link on any node to view internal details of the blockchain and the node.

Troubleshooting

  1. Ensure the port mapping is all correct:
docker ps

This should display a few containers, including ones with the images miner1_miner, miner2_miner and miner3_miner, with ports mapped like "0.0.0.0:7071->7071/tcp".

  2. Confirm that the servers are up and running. From a browser, visit the status page of each miner.

Similarly, the corresponding links can be used to see the status of the sharders.
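
From the command line, curl works just as well; for example, for miner1 (port 7071 comes from the port mapping shown by docker ps above, and /_diagnostics is the endpoint mentioned under Monitoring the progress):

curl -s http://localhost:7071/_diagnostics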

  3. Connecting to the redis servers running within the containers (make sure you are within the appropriate miner directories):

Default redis (used for clients and state):

../bin/run.miner.sh redis redis-cli

Redis used for transactions:

../bin/run.miner.sh redis_txns redis-cli
  4. Connecting to the cassandra instance used by the sharder (make sure you are within the appropriate sharder directories):
../bin/run.sharder.sh cassandra cqlsh

Dependencies for local compilation

You need to install rocksdb and herumi/bls; refer to docker.local/build.base/Dockerfile.build_base for the necessary steps.

For local compilation, running go build from a submodule folder should be enough, e.g.

cd code/go/0chain.net/miner
go build

You can pass the development tag if you want to simulate n2n delays. You also need the bn256 tag to build the same code as in production:

go build -tags "bn256 development"
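
The same tags apply to the other submodules; for example, a local sharder build would look like this (a sketch; it assumes rocksdb and herumi/bls from the build base image are already installed locally):

cd code/go/0chain.net/sharder
go build -tags "bn256 development"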

Debugging

Debug builds of 0chain

If you want to run a debug 0chain build you can follow the details contained in the 0chain/local folder.

Only one miner and one sharder can run on any single machine, so you will need at least three machines for a working 0chain.

Log files

The logs of the nodes are stored in the log directory (/0chain/log in the container and docker.local/miner|sharder[n]/log on the host). 0chain.log contains all the logs related to the protocol, and n2n.log contains all the node-to-node communication logs. The typical issues to debug are errors in the log and why certain things have not happened, which requires reviewing the timestamps of a sequence of events in the network. Here is an example set of commands for some debugging.

Find errors in all the miner nodes (from git/0chain):

grep ERROR docker.local/miner*/log/0chain.log

This gives a set of errors in the log. Say an error indicates a problem with a specific block, say abc; then

grep abc docker.local/miner*/log/0chain.log

gives all the logs related to block 'abc'

To get the start time of all the rounds

grep 'starting round' docker.local/miner*/log/0chain.log

This gives the start timestamps that can be used to correlate the events and their timings.
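
A couple of other plain grep patterns are often handy: counting errors per node, and pulling a few lines of context around a match (abc is the example block from above):

grep -c ERROR docker.local/miner*/log/0chain.log
grep -B2 -A2 abc docker.local/miner*/log/0chain.log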

Unit tests

0chain unit tests verify the behaviour of individual parts of the program. A config for the base docker image can be provided at run time to execute the general unit tests.

(Diagram: unit testing UML)

Getting started

Prerequisites

Docker and Git must be installed to run the tests.

Install Git using the following command:

sudo apt install git

Docker installation instructions can be found here.

Cloning the repository and Building Base Image

Clone the 0chain repository:

git clone https://github.com/0chain/0chain.git

Navigate to the 0chain folder and run the script to build the base docker image for unit testing:

cd 0chain
./docker.local/bin/build.base.sh

The base image includes all the dependencies required to test the 0chain code.

Running Tests

Now run the script containing the unit tests.

./docker.local/bin/unit_test.sh 

The list of packages is optional; if provided, only the tests from those packages are run. The command for running unit tests with specific packages is:

./docker.local/bin/unit_test.sh [<packages>]
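
For example, to run only the block package tests that appear in the sample output below (a sketch; the script is assumed to accept package paths in the 0chain.net/... form shown in that output):

./docker.local/bin/unit_test.sh 0chain.net/chaincore/block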

Testing Steps

Unit testing happens over a series of steps one after the other.

Step 1: FROM zchain_build_base

This FROM step does the required preparation and specifies the base image (and thus the underlying OS) to build on. Here we use the base image created in the build phase.

Step 2: ENV SRC_DIR=/0chain

The SRC_DIR variable is a reference to a file path which contains the code from your pull request. Here the /0chain directory is specified, as it is the one that was cloned.

Step 3: Setting the GO111MODULE variable to on

GO111MODULE is an environment variable that can be set to change how Go imports packages. It was introduced to help ensure a smooth transition to the module system.

GO111MODULE=on forces the use of Go modules even if the project is in your GOPATH. It requires a go.mod file to work.

Note: The default behavior in Go 1.16 is now GO111MODULE=on
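
Outside the Dockerfile, the same setting can be applied per command with standard Go tooling, for example:

GO111MODULE=on go build ./...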

Step 4: COPY ./code/go/0chain.net $SRC_DIR/go/0chain.net

This step copies the code from the source path to the destination path.

Step 5: RUN cd $SRC_DIR/go/0chain.net && go mod download

The RUN command is an image build step which allows installing applications and packages required for testing, while go mod download downloads the specific module versions listed in the go.mod file.

Step 6: RUN cd $GOPATH/pkg/mod/github.com/valyala/[email protected]. && chmod -R +w . && make clean libzstd.a

This step builds the gozstd package after granting write permissions to its directory. gozstd is a Go wrapper for zstd and provides Go bindings for the libzstd C library. make clean runs first to remove all previously compiled object files before libzstd.a is rebuilt.

Step 7: WORKDIR $SRC_DIR/go

This step defines the working directory for running the unit tests, which is the image's copy of 0chain/code/go/0chain.net/. For all the general unit tests that run, their code coverage is reported in the terminal like this:

ok      0chain.net/chaincore/block      0.128s  coverage: 98.9% of statements

The above output shows that 98.9% of code statements were covered by tests.
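
These lines are ordinary go test coverage output, so with the local-compilation dependencies installed you can reproduce the same kind of report outside Docker (a sketch using the build tags described earlier):

cd code/go/0chain.net
go test -tags bn256 -cover ./chaincore/block/...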

Here is a sample output for all the unit test cases:

?       0chain.net/chaincore    [no test files]
ok      0chain.net/chaincore/block      0.128s  coverage: 98.9% of statements
?       0chain.net/chaincore/block/magicBlock   [no test files]
ok      0chain.net/chaincore/chain      0.254s  coverage: 6.0% of statements
?       0chain.net/chaincore/chain/state        [no test files]
ok      0chain.net/chaincore/client     0.328s  coverage: 30.8% of statements
?       0chain.net/chaincore/config     [no test files]
?       0chain.net/chaincore/diagnostics        [no test files]
ok      0chain.net/chaincore/httpclientutil     2.048s  coverage: 91.7% of statements
ok      0chain.net/chaincore/node       0.011s  coverage: 8.9% of statements
ok      0chain.net/chaincore/round      0.048s  coverage: 97.1% of statements
ok      0chain.net/chaincore/smartcontract      0.032s  coverage: 9.1% of statements                                                                             
ok      0chain.net/chaincore/smartcontractinterface     0.032s  coverage: 97.3% of statements
?       0chain.net/chaincore/state      [no test files]
ok      0chain.net/chaincore/threshold/bls      9.912s  coverage: 1.1% of statements
ok      0chain.net/chaincore/tokenpool  10.034s coverage: 100.0% of statements
ok      0chain.net/chaincore/transaction        0.029s  coverage: 0.4% of statements [no tests to run]
ok      0chain.net/chaincore/wallet     6.600s  coverage: 40.0% of statements
?       0chain.net/conductor    [no test files]
?       0chain.net/conductor/conductor  [no test files]
?       0chain.net/conductor/conductrpc [no test files]
?       0chain.net/conductor/config     [no test files]
?       0chain.net/conductor/sdkproxy   [no test files]
?       0chain.net/conductor/utils      [no test files]
?       0chain.net/core [no test files]
?       0chain.net/core/build   [no test files]
ok      0chain.net/core/cache   0.004s  coverage: 100.0% of statements
ok      0chain.net/core/common  0.238s  coverage: 87.4% of statements
ok      0chain.net/core/datastore       0.033s  coverage: 92.0% of statements
ok      0chain.net/core/ememorystore    1.018s  coverage: 91.7% of statements
ok      0chain.net/core/encryption      1.290s  coverage: 95.3% of statements
?       0chain.net/core/encryption/keys [no test files]
ok      0chain.net/core/logging 0.069s  coverage: 96.5% of statements
ok      0chain.net/core/memorystore     0.281s  coverage: 93.8% of statements
?       0chain.net/core/metric  [no test files]
ok      0chain.net/core/persistencestore        0.036s  coverage: 73.5% of statements
ok      0chain.net/core/util    22.237s coverage: 76.5% of statements
ok      0chain.net/miner        0.303s  coverage: 8.0% of statements
?       0chain.net/miner/miner  [no test files]
?       0chain.net/miner/mocks  [no test files]
?       0chain.net/mocks        [no test files]
?       0chain.net/mocks/core/datastore [no test files]
?       0chain.net/mocks/core/encryption        [no test files]
ok      0chain.net/sharder      0.168s  coverage: 20.8% of statements
ok      0chain.net/sharder/blockdb      0.004s  coverage: 79.3% of statements
ok      0chain.net/sharder/blockstore   0.045s  coverage: 79.7% of statements
?       0chain.net/sharder/sharder      [no test files]
?       0chain.net/smartcontract        [no test files]
?       0chain.net/smartcontract/faucetsc       [no test files]
ok      0chain.net/smartcontract/interestpoolsc 0.030s  coverage: 45.0% of statements
ok      0chain.net/smartcontract/minersc        0.104s  coverage: 30.9% of statements
?       0chain.net/smartcontract/multisigsc     [no test files]
?       0chain.net/smartcontract/multisigsc/test        [no test files]
?       0chain.net/smartcontract/setupsc        [no test files]
ok      0chain.net/smartcontract/storagesc      1.877s  coverage: 58.8% of statements
ok      0chain.net/smartcontract/vestingsc      0.034s  coverage: 81.8% of statements
ok      0chain.net/smartcontract/zrc20sc        0.030s  coverage: 23.3% of statements

Creating The Magic Block

First build the magic block image.

Use the -m1 flag to build for the Apple M1 chip

./docker.local/bin/build.magic_block.sh

Next, set the configuration file. To do this, edit the docker.local/build.magicBlock/docker-compose.yml file. On line 13 there is a flag "--config_file"; set it to the magic block configuration file you want to use.

To create the magic block.

./docker.local/bin/create.magic_block.sh

The magic block and the DKG summary JSON files will appear in docker.local/config under the names given in the configuration file.

The magic_block_file setting in the 0chain.yaml file needs to be updated with the name of the newly created magic block.

Update the miner config file so it points to the new DKG summaries. To do this, edit the docker.local/build.miner/b0docker-compose.yml file. On line 55 there is a flag "--dkg_file"; set it to the DKG summary files created with the magic block.
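
A quick way to check that the generated files ended up where 0chain.yaml and the compose files expect them (the glob patterns are only a guess; the actual file names depend on your magic block configuration):

ls docker.local/config/*magic_block* docker.local/config/*dkg*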

Initial states

The balance of the various nodes is set up in an initial_state.yaml file. This file is a list of node ids and token amounts.

The initial state yaml file is passed as a command line argument when running a sharder or miner; failing that, the 0chain.yaml network.inital_states entry is used to find the initial state file.

An example, which can be used with the preset ids, can be found at 0chain/docker.local/config/initial_state.yaml.

Miscellaneous

Cleanup

  1. If you want to restart the blockchain from the beginning
./docker.local/bin/clean.sh

This cleans up the directories within docker.local/miner* and docker.local/sharder*

Note: this script can take a while if the blockchain generated a lot of blocks as the script deletes the databases and also all the blocks that are stored by the sharders. Since each block is stored as a separate file, deleting thousands of such files will take some time.

  2. If you want to get rid of old unused docker resources:
docker system prune

Minio

  • You can use the inbuilt Minio support to store blocks on the cloud.

You have to update the minio_config file with the cloud credentials. The file can be found at docker.local/config/minio_config.txt. The following order is used for the content:

CONNECTION_URL
ACCESS_KEY_ID
SECRET_ACCESS_KEY
BUCKET_NAME
REGION
  • Your minio config file is then used in the docker-compose when starting the sharder node:
--minio_file config/minio_config.txt
  • You can either update the settings in the same file given above or create a new one with your config and use that as:
--minio_file config/your_new_minio_config_file.txt

Note: Do not forget to put the file in the same config folder OR mount your new folder.

  • Apart from the private connection config, there are other options in the 0chain.yaml file to manage minio settings.

Sample config

minio:
  # Enable or disable minio backup, Do not enable with deep scan ON
  enabled: false 
  # In Seconds, The frequency at which the worker should look for files, Ex: 3600 means it will run every 3600 seconds
  worker_frequency: 3600 
  # Number of workers to run in parallel; just to make execution faster we can have multiple workers running simultaneously
  num_workers: 5 
  # Use SSL for connection or not
  use_ssl: false 
  # How old the block should be to be considered for moving to cloud
  old_block_round_range: 20000000 
  # Delete local copy of block once it's moved to cloud
  delete_local_copy: true
  • In minio the folders do not get deleted and will cause a slight increase in volume over time.

Integration tests

Integration testing combines individual 0chain modules and tests them as a group. Integration testing evaluates the compliance of a system with specific functional requirements and usually occurs after unit testing.

For integration testing, a conductor, which is an RPC (Remote Procedure Call) server, is implemented to control the behaviour of nodes. To know more about the conductor, refer to the conductor documentation.

Architecture

A conductor requires the nodes to be built in a certain order to control them during the tests. A config file is defined in conductor.config.yaml which contains important details such as details of all nodes used and custom commands used in integration testing.

(Diagram: integration testing architecture)

For running multiple test cases, the conductor uses a test suite which contains multiple sets of tests. A test suite can be categorized into 3 types of tests:

standard tests - Check whether the chain continues to function properly despite bad miner and sharder participants

view-change tests - Check whether addition and removal of nodes works

blobber tests - Check whether storage functions continue to work properly despite bad or lost blobbers, and confirm expected storage function failures

Below is an example of conductor test suite.

# Under `enable` is the list of sets that will be run.
enable: 
  - "Miner down/up"
  - "Blobber tests"

# Test sets defines the test cases it covers.
sets: 
  - name: "Miner down/up" 
    tests:
      - "Miner: 50 (switch to contribute)"
      - "Miner: 100 (switch to share)"
  - name: "Blobber tests"
    tests:
      - "All blobber tests"

# Test cases defines the execution flow for the tests.
tests: 
  - name: "Miner: 50 (switch to contribute)"
    flow: 
    # Flow is a series of directives.
    # The directive can either be built-in in the conductor 
    # or custom command defined in "conductor.config.yaml"
      - set_monitor: "sharder-1" # Most directives refer to nodes by name; these are defined in `conductor.config.yaml`
      - cleanup_bc: {} # A sample built-in command that triggers stop on all nodes and clean up.
      - start: ['sharder-1']
      - start: ['miner-1', 'miner-2', 'miner-3']
      - wait_phase: 
          phase: 'contribute'
      - stop: ['miner-1']
      - start: ['miner-1']
      - wait_view_change:
          timeout: '5m'
          expect_magic_block:
            miners: ['miner-1', 'miner-2', 'miner-3']
            sharders: ['sharder-1']
  - name: "Miner: 100 (switch to share)"
    flow:
    ...
  - name: "All blobber tests"
    flow:
      - command:
          name: 'build_test_blobbers' # Sample custom command that executes `build_test_blobbers`
    ...
...

Getting Started

Prerequisites

Docker and Git must be installed to run the tests .

Install Git using the following command:

sudo apt install git

Docker installation instructions can be found here.

Cloning the repository and Building Base Image

Clone the 0chain repository:

git clone https://github.com/0chain/0chain.git

Build miner docker image for integration test

(cd 0chain && ./docker.local/bin/build.miners-integration-tests.sh)

Build sharder docker image for integration test

(cd 0chain && ./docker.local/bin/build.sharders-integration-tests.sh)

NOTE: The miner and sharder images are designed for integration tests only. If you want to run the chain normally, rebuild the original images.

(cd 0chain && ./docker.local/bin/build.sharders.sh && ./docker.local/bin/build.miners.sh)

Confirm that view change rounds are set to 50 in 0chain/docker.local/config.yaml:

    start_rounds: 50
    contribute_rounds: 50
    share_rounds: 50
    publish_rounds: 50
    wait_rounds: 50

Running standard tests

Run miners test

(cd 0chain && ./docker.local/bin/start.conductor.sh miners)

Run sharders test

(cd 0chain && ./docker.local/bin/start.conductor.sh sharders)

Running view-change tests

  1. Set view_change: true in 0chain/docker.local/config.yaml
  2. Run the view-change tests:
(cd 0chain && ./docker.local/bin/start.conductor.sh view-change-1)
(cd 0chain && ./docker.local/bin/start.conductor.sh view-change-2)
(cd 0chain && ./docker.local/bin/start.conductor.sh view-change-3)

Running blobber tests

Blobber tests require cloning the services below.

blobber

git clone https://github.com/0chain/blobber.git

Refer to conductor documentation

zboxcli

git clone https://github.com/0chain/zboxcli.git

zwalletcli

git clone https://github.com/0chain/zwalletcli.git

0dns

git clone https://github.com/0chain/0dns.git

Confirm that all the cloned directories exist:

0chain/
blobber/
zboxcli/
zwalletcli/
0dns/

Install zboxcli

(cd zboxcli && make install)

Install zwalletcli

(cd zwalletcli && make install)

Patch 0dns for the latest 0chain network configuration.

(cd 0dns && git apply --check ../0chain/docker.local/bin/conductor/0dns-local.patch)
(cd 0dns && git apply ../0chain/docker.local/bin/conductor/0dns-local.patch)

Patch blobbers for the latest blobber tests

(cd blobber && git apply --check ../0chain/docker.local/bin/conductor/blobber-tests.patch)
(cd blobber && git apply ../0chain/docker.local/bin/conductor/blobber-tests.patch)

Add ~/.zcn/config.yaml as follows

block_worker: http://127.0.0.1:9091
signature_scheme: bls0chain
min_submit: 50
min_confirmation: 50
confirmation_chain_length: 3
max_txn_query: 5
query_sleep_time: 5

Apply if on Ubuntu 18.04

docker/for-linux#563 (comment)

This bug is related to Ubuntu 18.04. It concerns the docker-credential-secretservice package required by docker-compose and used by Docker. A docker process (a build, for example) can sometimes fail due to the bug. Some tests run internal docker builds and can fail because of it.

Run blobber tests

(cd 0chain && ./docker.local/bin/start.conductor.sh blobber-1)
(cd 0chain && ./docker.local/bin/start.conductor.sh blobber-2) 

Adding new Tests

New tests can easily be added to the conductor. Check Updating conductor tests in the conductor documentation for more information.

Enabling or Disabling Tests

Check Temporarily disabling tests in the conductor documentation for more information.

Supported Conductor Commands

Check the supported directives in the conductor documentation for more information.

Creating Custom Conductor Commands

Check Custom Commands in the conductor documentation for more information.
