
avalanche-rs's People

Contributors

angel-petrov · dependabot[bot] · dimitrovmaksim · exdx · gyuho · hexfusion · nuttymoon · richardpringle · rkuris · rodrigovillar · stephenbuttolph


avalanche-rs's Issues

[subnet] Incorrect if condition in `get_validator_set`

While researching the Rust implementation, we noticed there's a check in `get_validator_set` for whether `resp.validators` is empty, in order to update the `public_key` variable. But if `resp.validators` is empty, wouldn't the loop be skipped in the first place? We assume it should instead check whether `validator.public_key` is empty, since the key appears to be returned as a byte array, and if the validator does not have a public key, the array would be empty. Are our assumptions correct, or are we missing something?
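To make the suspected bug concrete, here is a std-only sketch (the `Validator` type and `count_keyed` function are illustrative stand-ins, not the SDK's actual types): an emptiness check on the whole collection inside a loop over that collection is dead code, since an empty collection never enters the loop; the per-validator field is presumably what should be checked.

```rust
// Illustrative stand-in types; the real SDK types differ.
#[derive(Default)]
struct Validator {
    // Empty byte array when the validator has no registered public key.
    public_key: Vec<u8>,
}

fn count_keyed(validators: &[Validator]) -> usize {
    let mut keyed = 0;
    for validator in validators {
        // Suspected intent: check the per-validator key bytes,
        // not whether the outer list is empty.
        if !validator.public_key.is_empty() {
            keyed += 1;
        }
    }
    keyed
}

fn main() {
    let validators = vec![
        Validator { public_key: vec![1, 2, 3] },
        Validator::default(),
    ];
    // Only the validator with non-empty key bytes is counted.
    assert_eq!(count_keyed(&validators), 1);
    println!("keyed validators: {}", count_keyed(&validators));
}
```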

Write proc-macro to automatically declare and implement the `Rpc` trait.

Right now, one has to create an `Rpc` trait, define all the methods, and then write the implementation. It would be much nicer to have a single `impl` block that both defines the trait and provides the implementation. This should be fairly easy to accomplish with our own proc-macro.

TODO:
write example of desired output
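As a starting point for the desired-output example, here is a hand-expanded, sync, std-only sketch; the `#[rpc]` attribute name and the `PingService`/`ping` identifiers are hypothetical, not the SDK's actual API:

```rust
// Hypothetical ergonomics the proc-macro would enable:
//
//   #[rpc] // <- the proc-macro we would write; name is illustrative
//   impl PingService {
//       fn ping(&self) -> String { "pong".to_string() }
//   }
//
// ...which would expand to roughly the trait + impl pair
// that currently has to be written out by hand:

trait Rpc {
    fn ping(&self) -> String;
}

struct PingService;

impl Rpc for PingService {
    fn ping(&self) -> String {
        "pong".to_string()
    }
}

fn main() {
    let svc = PingService;
    assert_eq!(svc.ping(), "pong");
}
```

The real methods would likely be async (tonic-based), but the expansion shape is the same.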

Fix the linting job

The lint job fails because clippy and rustfmt are not installed, but the job still shows up as passing.

It would be much better if the commands were specified directly in the jobs and if they were made simpler.

For instance, we should only be running `cargo clippy --all --all-features --examples --tests -- -Dwarnings`. If anyone wants to see what's failing, it's easy for them to check out the .github directory and see the commands all in one place without being led to separate files.

And, in order to check specific rules, those rules can be specified in lib.rs for each individual crate. For example, just place `#![deny(clippy::pedantic)]` at the top of lib.rs instead of calling `cargo clippy -- -D clippy::pedantic`.
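A minimal sketch of what the inlined job could look like (job name, action versions, and step ordering are illustrative, not the repo's actual workflow):

```yaml
# Hypothetical inlined lint job; ensures clippy/rustfmt are actually
# installed so a missing component fails the job instead of silently passing.
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: rustup component add clippy rustfmt
      - run: cargo fmt --all -- --check
      - run: cargo clippy --all --all-features --examples --tests -- -Dwarnings
```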

[flake] avalanchego_conformance_sdk unit test::key

See https://github.com/ava-labs/avalanche-rs/actions/runs/6040807095/job/16392431819 for an example flake.

Compiling avalanchego-conformance v0.0.0 (/home/runner/work/avalanche-rs/avalanche-rs/tests/avalanchego-conformance)
    Finished test [unoptimized + debuginfo] target(s) in 2m 36s
     Running unittests src/lib.rs (target/debug/deps/avalanchego_conformance-70b5c63bbd0deef4)

running 8 tests
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
thread 'tests::key::bls::generate_bls_signature' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test tests::key::bls::generate_bls_signature ... FAILED
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
thread 'tests::key::certificates::load_certificate_to_node_id' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
test tests::key::certificates::load_certificate_to_node_id ... FAILED
thread 'tests::key::certificates::generate_certificate_to_node_id' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
test tests::key::certificates::generate_certificate_to_node_id ... FAILED
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
thread 'tests::key::secp256k1::generate' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
test tests::key::secp256k1::generate ... FAILED
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
thread 'tests::key::secp256k1::recover_hash_public_key' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
test tests::key::secp256k1::recover_hash_public_key ... FAILED
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
thread 'tests::key::secp256k1::load' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
test tests::key::secp256k1::load ... FAILED
thread 'tests::packer::build_vertex::build_vertex' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
test tests::packer::build_vertex::build_vertex ... FAILED
[2023-08-31T18:03:21Z INFO  avalanchego_conformance_sdk] creating a new client with http://127.0.0.1:22342/
thread 'tests::ping' panicked at 'called `Result::unwrap()` on an `Err` value: tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("tcp connect error", Os { code: 111, kind: ConnectionRefused, message: "Connection refused" })))', /home/runner/work/avalanche-rs/avalanche-rs/avalanchego-conformance-sdk/src/lib.rs:54:72
test tests::ping ... FAILED

successes:

successes:

failures:

failures:
    tests::key::bls::generate_bls_signature
    tests::key::certificates::generate_certificate_to_node_id
    tests::key::certificates::load_certificate_to_node_id
    tests::key::secp256k1::generate
    tests::key::secp256k1::load
    tests::key::secp256k1::recover_hash_public_key
    tests::packer::build_vertex::build_vertex
    tests::ping

test result: FAILED. 0 passed; 8 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.01s

error: test failed, to rerun pass `-p avalanchego-conformance --lib`

Develop community roadmap

There are currently no public-facing documents that indicate the roadmap for this project. For the purposes of the community, there should be a clear list of issues and milestones that the project hopes to achieve. This way, issues can be picked up by external contributors and the project can continue to grow.

JSON serialization inconsistencies

There seem to be some inconsistencies between the Go and Rust implementations with regard to JSON serialization.

AvalancheGo serializes a tx to JSON via `json.Marshal(tx)`, which looks like this:

{
  "unsignedTx": {
    "networkID": 1337,
    "blockchainID": "11111111111111111111111111111111LpoYY",
    "outputs": [
      {
        "assetID": "28VTWcsTZ55draGkmjdcS9CFFv4zC3PbvVjkyoqxzNC7Y5msRP",
        "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
        "output": {
          "addresses": ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"],
          "amount": 19999999899000000,
          "locktime": 0,
          "threshold": 1
        }
      }
    ],
    "inputs": [
      {
        "txID": "CjisYCwF4j7zSyC25MWR21e5xxzJgwdaLEuf7oBXSGpe3oaej",
        "outputIndex": 0,
        "assetID": "28VTWcsTZ55draGkmjdcS9CFFv4zC3PbvVjkyoqxzNC7Y5msRP",
        "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
        "input": { "amount": 19999999900000000, "signatureIndices": [0] }
      }
    ],
    "memo": "0x",
    "validator": {
      "nodeID": "NodeID-JHD1JDUZYkiMdWPMDA9UJhWXR4wf25t39",
      "start": 1711142673,
      "end": 1711142913,
      "weight": 20,
      "subnetID": "CjisYCwF4j7zSyC25MWR21e5xxzJgwdaLEuf7oBXSGpe3oaej"
    },
    "subnetAuthorization": { "signatureIndices": [0] }
  },
  "credentials": [
    {
      "signatures": [
        "0x3265999bb1a3390f09ca28a75e293faeb2f5670d26d9e99dfef9db7ccf08ccbb35f8fe144e63c91f24d0eca57d9db9d8cf7c223362e58b89eb0d317d4c35ac1c00"
      ]
    },
    {
      "signatures": [
        "0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
      ]
    }
  ],
  "id": "2EhnpHqoBEJYFmfBiBmvoykWLFvxAjMXuuLGjvSCuCWvMaHbo1"
}

AvalancheGo generates these bytes for the above: 00000000000d0000053900000000000000000000000000000000000000000000000000000000000000000000000194b45aa6e4464a9ad3fc4e73c7947ba1632d7c820baf0ebbfc7548c4823f26bd0000000700470de4d97cdcc0000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c000000011aa63be8467b5f45765fb172c6ed6644a6edb66ca041167e60a0781662fade580000000094b45aa6e4464a9ad3fc4e73c7947ba1632d7c820baf0ebbfc7548c4823f26bd0000000500470de4d98c1f00000000010000000000000000bd8ad0a918e6791a321e03355e707609ecf856de0000000065fdf7110000000065fdf80100000000000000141aa63be8467b5f45765fb172c6ed6644a6edb66ca041167e60a0781662fade580000000a00000001000000000000000200000009000000013265999bb1a3390f09ca28a75e293faeb2f5670d26d9e99dfef9db7ccf08ccbb35f8fe144e63c91f24d0eca57d9db9d8cf7c223362e58b89eb0d317d4c35ac1c0000000009000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

But the only structure I can come up with that this Rust SDK will deserialize looks like this:

{
  "base_tx": {
    "networkID": 1337,
    "blockchainID": "11111111111111111111111111111111LpoYY",
    "outputs": [
      {
        "assetID": "28VTWcsTZ55draGkmjdcS9CFFv4zC3PbvVjkyoqxzNC7Y5msRP",
        "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
        "output": {
          "addresses": ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"],
          "amount": 19999999899000000,
          "locktime": 0,
          "threshold": 1
        }
      }
    ],
    "inputs": [
      {
        "txID": "2c1CbR7FGYdeFPB4WaeWphHZrChLH7TQ92FRW6U4mCWTnaxVsB",
        "outputIndex": 0,
        "assetID": "28VTWcsTZ55draGkmjdcS9CFFv4zC3PbvVjkyoqxzNC7Y5msRP",
        "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
        "input": {
          "amount": 19999999900000000,
          "signatureIndices": [0]
        }
      }
    ],
    "memo": "0x"
  },
  "validator": {
    "validator": {
      "node_id": "NodeID-111111111111111111116DBWJs",
      "start": 1710879913,
      "end": 1710966313,
      "weight": 20
    },
    "subnet_id": "2c1CbR7FGYdeFPB4WaeWphHZrChLH7TQ92FRW6U4mCWTnaxVsB"
  },
  "subnet_auth": { "sig_indices": [0] },
  "creds": []
}
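Note the mismatches are not just casing conventions: some keys differ only in style ("subnetAuthorization" vs what snake_case would give), but others differ in name entirely ("unsignedTx" vs "base_tx", "signatureIndices" vs "sig_indices"), so explicit `#[serde(rename = "...")]` attributes on the Rust structs would presumably be needed. A quick std-only check (the converter function is illustrative, not SDK code) shows why a mechanical case conversion cannot reconcile the two formats:

```rust
/// Convert a Go-style camelCase key to the snake_case name a plain serde
/// derive would expect. Illustrative helper, not part of the SDK.
fn camel_to_snake(key: &str) -> String {
    let mut out = String::new();
    for c in key.chars() {
        if c.is_ascii_uppercase() {
            out.push('_');
            out.push(c.to_ascii_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    // Case conversion handles some keys...
    assert_eq!(camel_to_snake("subnetAuthorization"), "subnet_authorization");
    // ...but consecutive capitals break ("nodeID" does not become "node_id")...
    assert_eq!(camel_to_snake("nodeID"), "node_i_d");
    // ...and "unsignedTx" vs "base_tx" is a semantically different name,
    // so only an explicit rename can bridge it.
    assert_eq!(camel_to_snake("unsignedTx"), "unsigned_tx");
}
```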

20+ mins is kind of a crazy amount of time for unit tests to run

We should look into what's causing the tests to take so long, and see if there's any way to speed them up or split them up.

We should only be running tests on a sub-crate that actually had changes applied; that alone would probably get us a nice speed-up right off the bat.

Expected outcomes to close this issue:

  • Identify slow tests and create individual issues specifying that they need to be sped up (probably one issue per slow test)
  • Split up sub-crate tests based on diff.
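One possible shape for the diff-based split, sketched with a path-filter action (the crate name, directory layout, and job names are illustrative assumptions about this repo):

```yaml
# Hypothetical: run a sub-crate's tests only when its files change.
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      avalanche-types: ${{ steps.filter.outputs.avalanche-types }}
    steps:
      - uses: actions/checkout@v4
      - id: filter
        uses: dorny/paths-filter@v2
        with:
          filters: |
            avalanche-types:
              - 'crates/avalanche-types/**'   # assumed path
  test-avalanche-types:
    needs: changes
    if: needs.changes.outputs.avalanche-types == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo test -p avalanche-types
```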
