sygmaprotocol / sygma-relayer
License: GNU Lesser General Public License v3.0
Add substrate message handler for the executor module
Add a message handler that will create proposal data for the execution method on the substrate pallet
Add registration functionality for the message handler
Add unit tests
We are able to register the message handler
We are able to handle the message and create proposal data for the execution method on the substrate pallet
Unit tests added
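A minimal sketch of what such a handler could look like, assuming a message type carrying deposit fields; all names here (Message, Proposal, FungibleTransferMessageHandler) are illustrative, not the repo's actual types.

```go
package executor

import "fmt"

// Message is a simplified cross-chain message as produced by the listener.
type Message struct {
	Source       uint8
	Destination  uint8
	DepositNonce uint64
	ResourceID   [32]byte
	Payload      []interface{}
}

// Proposal holds the data passed to the execution method on the substrate pallet.
type Proposal struct {
	OriginDomainID uint8
	DepositNonce   uint64
	ResourceID     [32]byte
	Data           []byte
}

// FungibleTransferMessageHandler converts a fungible transfer message into
// proposal data for the substrate pallet execution method.
func FungibleTransferMessageHandler(m *Message) (*Proposal, error) {
	if len(m.Payload) != 2 {
		return nil, fmt.Errorf("malformed payload, expected 2 fields, got %d", len(m.Payload))
	}
	amount, ok := m.Payload[0].([]byte)
	if !ok {
		return nil, fmt.Errorf("wrong payload amount format")
	}
	recipient, ok := m.Payload[1].([]byte)
	if !ok {
		return nil, fmt.Errorf("wrong payload recipient format")
	}
	return &Proposal{
		OriginDomainID: m.Source,
		DepositNonce:   m.DepositNonce,
		ResourceID:     m.ResourceID,
		Data:           append(amount, recipient...),
	}, nil
}
```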
Expand the communication layer implementation so that broadcast is reliable.
Based on this reliable broadcast specification.
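A compact sketch of the echo step that makes best-effort gossip reliable: every relayer re-sends a message the first time it sees it, so delivery no longer depends on the original sender staying honest and online. The Peer interface and message shape are placeholders, not the actual libp2p wiring.

```go
package comm

import "sync"

type Peer interface {
	Send(msgID string, payload []byte) error
}

// ReliableBroadcaster re-echoes every first-seen message to all known peers.
type ReliableBroadcaster struct {
	mu    sync.Mutex
	seen  map[string]bool
	peers []Peer
}

func NewReliableBroadcaster(peers []Peer) *ReliableBroadcaster {
	return &ReliableBroadcaster{seen: map[string]bool{}, peers: peers}
}

// OnReceive delivers a message exactly once and echoes it onward, so a
// message received by any honest relayer eventually reaches all of them.
func (b *ReliableBroadcaster) OnReceive(msgID string, payload []byte, deliver func([]byte)) {
	b.mu.Lock()
	if b.seen[msgID] {
		b.mu.Unlock()
		return
	}
	b.seen[msgID] = true
	b.mu.Unlock()

	for _, p := range b.peers {
		_ = p.Send(msgID, payload) // best-effort echo; retries handled elsewhere
	}
	deliver(payload)
}
```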
Implement deposit(fungibleTransfer) event handler
Parse the deposit event and create a message
Pass the message into the message channel
Add unit tests
We are able to parse substrate deposit events (fungibleTransfer)
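A sketch of the handler flow: parse the fungible transfer deposit event into a message and hand it to the message channel. The event field names are assumptions patterned on the EVM counterpart.

```go
package listener

// DepositEvent mirrors the fields of the substrate Deposit(fungibleTransfer)
// event; names are assumptions.
type DepositEvent struct {
	DestDomainID uint8
	ResourceID   [32]byte
	DepositNonce uint64
	CallData     []byte
}

type Message struct {
	Source       uint8
	Destination  uint8
	DepositNonce uint64
	ResourceID   [32]byte
	Payload      [][]byte
}

// HandleFungibleTransfer parses a deposit event into a message and passes it
// into the message channel consumed by the router.
func HandleFungibleTransfer(sourceID uint8, evt DepositEvent, msgChan chan *Message) {
	msgChan <- &Message{
		Source:       sourceID,
		Destination:  evt.DestDomainID,
		DepositNonce: evt.DepositNonce,
		ResourceID:   evt.ResourceID,
		Payload:      [][]byte{evt.CallData},
	}
}
```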
Since we decided that our first production-ready Generic Message Passing should include fee support along with tx batching, we need to plan the next work and design accordingly.
We need to work on an SoW to understand all possible problems we could face.
Then, based on this SoW, we should create all the necessary tasks.
Currently, we have only stable and latest tag images, but we also need tagged versions on releases so we and partners can roll back and use fixed versions without accidentally pulling new changes.
Sygma commit (or docker tag):
chainbridge-solidity version:
Go version:
Replace e2e and local-setup images
[] e2e tests and local setup should now use the truffle-migrated image from docker
[] Add local setup documentation to the docs
Currently, the example app is no longer needed (the production app and the example app are mostly the same). It would be better to use the same app for both, since we use the example app for e2e tests and this would make for a better test.
The resource.json file has incorrect resource IDs for all the assets.
Export metrics for:
Use OpenTelemetry to export the specified metrics.
The deposit count is provided from core; the error rate should be added to chain write methods; the total number of relayers and available relayers should be added from the communication health check method.
The time between event and execution can be calculated if the starting time is put into a map keyed by deposit nonce and destination, and then read after the execution, as sketched below.
Check metrics after running e2e tests
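A hedged sketch of recording these metrics with the OpenTelemetry Go metric API; the metric names and the map key format are assumptions, and provider/exporter setup (Prometheus, OTLP) is omitted.

```go
package metrics

import (
	"context"
	"fmt"
	"time"

	"go.opentelemetry.io/otel"
)

// Example shows the recording calls only; without a configured MeterProvider
// the otel package returns a no-op meter.
func Example() {
	ctx := context.Background()
	meter := otel.Meter("relayer")

	depositCount, _ := meter.Int64Counter("deposit.count")
	writeErrors, _ := meter.Int64Counter("chain.write.errors")
	execLatency, _ := meter.Float64Histogram("deposit.execution.seconds")

	// On deposit: count it and remember when it was observed, keyed by
	// destination domain and deposit nonce.
	start := map[string]time.Time{}
	key := fmt.Sprintf("%d-%d", 2 /* destination */, 42 /* nonce */)
	depositCount.Add(ctx, 1)
	start[key] = time.Now()

	// On execution: record the time between event and execution.
	execLatency.Record(ctx, time.Since(start[key]).Seconds())

	// On a failed chain write:
	writeErrors.Add(ctx, 1)
}
```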
Keeping startBlock in sharedConfig makes it impossible to change this value for only one service in order to resync it if necessary. This property should be strictly related to particular services and should be removed from sharedConfig.
[] Update shared config specification
[] Make necessary changes to relayer code
[] Make sure that no other services depend heavily on this. If they do, create more issues to handle this
Add pipeline to the repository that will enable releasing new versions of relayers with generated CHANGELOG file.
We can use the release-please plugin to set up this flow.
Add Executor module for substrate; the executor should be able to send an extrinsic
Details can be found in the SoW research doc.
Unit test
Relayer should be able to sign and send an extrinsic to the substrate pallet
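A rough sketch of the executor shape, assuming a client that signs and submits extrinsics (see the "Add substrate client" issue below); the pallet method name is a placeholder.

```go
package executor

// client abstracts the substrate client that signs and submits extrinsics.
type client interface {
	SignAndSubmit(method string, args ...interface{}) error
}

type Proposal struct {
	OriginDomainID uint8
	DepositNonce   uint64
	ResourceID     [32]byte
	Data           []byte
}

type Executor struct {
	client client
}

// Execute submits the MPC-signed proposal batch to the execution method on
// the substrate pallet. "SygmaBridge.execute_proposal" is a placeholder.
func (e *Executor) Execute(proposals []Proposal, signature []byte) error {
	return e.client.SignAndSubmit("SygmaBridge.execute_proposal", proposals, signature)
}
```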
By mistake, a Println was left in the permissionless generic deposit handler.
Sygma commit (or docker tag):
chainbridge-solidity version:
Go version:
Implement bridgePallet functions
Implement bridgePallet functions:
Add unit tests
Bridge pallet functions implemented and tested
We need to encrypt/decrypt the topology file that is saved to ChainSafe Storage
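A minimal sketch using AES-GCM from the Go standard library; where the 32-byte key comes from (env var, KMS) is left open.

```go
package topology

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"errors"
	"io"
)

// EncryptTopology seals the topology file bytes with AES-256-GCM, prepending
// the random nonce to the ciphertext.
func EncryptTopology(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 32 bytes for AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// DecryptTopology reverses EncryptTopology.
func DecryptTopology(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(data) < gcm.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, ciphertext := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}
```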
To avoid spamming RPC endpoints accidentally, we should set indexing from the latest block as the default behavior if no start block is set.
Our current defaults are really low and are impacting throughput, as transactions don't appear on chain because the gas price is too low.
Update max gas price, gas limit and gas multiplier to some sane values.
Go through the config and check everything.
The MultiLocation is hardcoded for evm -> substrate transfers.
Relayer should parse the MultiLocation from the deposit data and pass it to the substrate execution method, as sketched below.
When transferring tokens from evm -> substrate, the MultiLocation is hardcoded in the message-handler.
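A sketch of the message-handler change, assuming the standard fungible deposit data layout (32-byte amount, 32-byte recipient length, recipient bytes) where the recipient is the SCALE-encoded MultiLocation; the layout is an assumption here.

```go
package handlers

import (
	"errors"
	"math/big"
)

// ParseMultiLocation extracts the SCALE-encoded MultiLocation from fungible
// deposit data instead of using a hardcoded value. Assumed layout:
// amount (32 bytes) | recipient length (32 bytes) | recipient bytes.
func ParseMultiLocation(depositData []byte) ([]byte, error) {
	if len(depositData) < 64 {
		return nil, errors.New("invalid deposit data")
	}
	recipientLen := new(big.Int).SetBytes(depositData[32:64]).Int64()
	if int64(len(depositData)) < 64+recipientLen {
		return nil, errors.New("invalid recipient length")
	}
	// These bytes are passed to the substrate execution method, which
	// decodes them as a MultiLocation.
	return depositData[64 : 64+recipientLen], nil
}
```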
Sygma commit (or docker tag):
chainbridge-solidity version:
Go version:
Add substrate connection
Implement the substrate connection so we are able to fetch data from substrate
Implement a connection struct with methods:
Add unit tests
We are able to establish a connection and pull data from the substrate chain
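A minimal connection sketch using the go-substrate-rpc-client package (an assumption; any substrate RPC client would work similarly), establishing a websocket connection and pulling the latest block.

```go
package main

import (
	"fmt"

	gsrpc "github.com/centrifuge/go-substrate-rpc-client/v4"
)

func main() {
	// The endpoint is a placeholder for the configured substrate node.
	api, err := gsrpc.NewSubstrateAPI("ws://127.0.0.1:9944")
	if err != nil {
		panic(err)
	}

	hash, err := api.RPC.Chain.GetBlockHashLatest()
	if err != nil {
		panic(err)
	}
	block, err := api.RPC.Chain.GetBlock(hash)
	if err != nil {
		panic(err)
	}
	fmt.Println("latest block:", block.Block.Header.Number)
}
```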
For relaying partners to generate a libp2p identity private key in protobuf format with ease, we should add a command to generate a keypair and print out the peerID and private key in base64 format.
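A sketch of what the command body could do, using the go-libp2p crypto helpers; MarshalPrivateKey produces the protobuf encoding.

```go
package main

import (
	"encoding/base64"
	"fmt"

	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/libp2p/go-libp2p/core/peer"
)

func main() {
	// Generate an Ed25519 identity keypair.
	priv, _, err := crypto.GenerateKeyPair(crypto.Ed25519, -1)
	if err != nil {
		panic(err)
	}
	// MarshalPrivateKey returns the protobuf-encoded private key.
	raw, err := crypto.MarshalPrivateKey(priv)
	if err != nil {
		panic(err)
	}
	id, err := peer.IDFromPrivateKey(priv)
	if err != nil {
		panic(err)
	}
	fmt.Println("peerID:", id.String())
	fmt.Println("private key (base64):", base64.StdEncoding.EncodeToString(raw))
}
```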
Based on changes made to the generic cross-chain message format we need to refactor relayers so they can process this new format. In addition, relayers need to use the information on the maximum fee from the message itself when executing on the destination.
Refactor PermissionlessGenericDepositHandler and PermissionlessGenericMessageHandler to process the new message format.
Use the maxFee parameter from the cross-chain message when executing a generic request on the destination chain.
The maxFee parameter is used on execution.
Instead of manually specifying each domain we should have one config param for all domains to avoid needing to update terraform scripts each time.
Check if it is easy to make it a map and merge it with the shared config. If not, make it a list.
Update devops task files and standalone script for deployment accordingly.
We should avoid being dependent on the Storage API for fetching topology, as we can just pull it from IPFS, which should be more resistant to failure.
Fetch topology from IPFS (preferably create an IPNS domain for it).
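A sketch of fetching the topology through a public IPFS HTTP gateway; the gateway URL and CID are placeholders, and with IPNS the path would be /ipns/&lt;name&gt; instead.

```go
package topology

import (
	"fmt"
	"io"
	"net/http"
)

// FetchTopology pulls the topology file from an IPFS gateway by CID.
func FetchTopology(gateway, cid string) ([]byte, error) {
	resp, err := http.Get(gateway + "/ipfs/" + cid)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}
```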
It is pretty difficult to debug some situations while the relayer is running. The following improvements are suggested to increase the general verbosity of some relayer actions:
1. [Info] On startup the relayer should log all SYG_DOM_N env values (except the private key; in this log, convert the private key to its corresponding address).
2. [Info] Event-processing logs should appear not on every iteration but every 5 minutes, with the range of blocks parsed during that time and the number of events found. When an event is found, the logs should remain as they are today.
3. [Info] Print the network topology on startup and after a refresh.
4. [Info] Print the topology URL on startup and on key refresh.
5. [Info] When the relayer is not part of the MPC group (a situation where the relayer has been deployed by partners but has not yet been added to the topology map, and hence has no peers), it should log a notice about this every 5 minutes.
[] Logs have been added
To get more continuous insight into libp2p communication, we want to move the communication check that is currently happening on application startup into the function invoked when the /health endpoint is called.
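A minimal sketch of the change, assuming the existing communication check can be wrapped as a function; the names here are illustrative.

```go
package health

import "net/http"

// CommunicationCheck runs the libp2p communication check previously done
// only on startup; the real implementation lives in the comm package.
type CommunicationCheck func() error

// Handler wires the check into the /health endpoint, so every probe reports
// current connectivity instead of a one-off startup result.
func Handler(check CommunicationCheck) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if err := check(); err != nil {
			http.Error(w, err.Error(), http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
}
```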
Avoid using domainID to sign EIP712 data to fix the overflow of chainID into domainID.
Sygma commit (or docker tag):
chainbridge-solidity version:
Go version:
We need to start adding E2E Substrate-related tests to our relayer code.
[] E2E tests are passing
We ran a script that executes 10,000 deposit requests simultaneously. This resulted in relayers batching a huge number of bridging requests into one MPC signing, or, what is actually problematic, into one executeProposals call. As you can see here, this fails on the destination, as the gas limit makes it impossible to execute so many transfers in one transaction.
We should limit the number of requests that can be batched into one MPC signing (execution). We can implement this on relayers: once the relayer processes more than X requests, it starts a new MPC signing and continues to process requests from this batch of blocks, as sketched below. Currently, we are batching all requests (without generic) that came in the last N blocks (where N is the number of blocks that relayers are processing in batch).
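A sketch of the limiting step; maxBatchSize is an assumed config value, not an existing parameter.

```go
package executor

// Proposal stands in for the repo's proposal type.
type Proposal struct{ DepositNonce uint64 }

// chunkProposals splits the proposals collected from the last N blocks into
// batches of at most maxBatchSize, each batch getting its own MPC signing
// and executeProposals call.
func chunkProposals(proposals []*Proposal, maxBatchSize int) [][]*Proposal {
	var batches [][]*Proposal
	for maxBatchSize > 0 && len(proposals) > maxBatchSize {
		batches = append(batches, proposals[:maxBatchSize])
		proposals = proposals[maxBatchSize:]
	}
	if len(proposals) > 0 {
		batches = append(batches, proposals)
	}
	return batches
}
```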
Run a large number of deposits simultaneously and check that all executions are successful.
The LoadPeers unit test has flaky behavior and sometimes reorders peers.
Make the test reproducible.
Sygma commit (or docker tag):
chainbridge-solidity version:
Go version:
Create a new example of the local setup with the fee oracle instead of the basic fee handler.
docker-compose.yml should include a fee oracle server that works for the test token on the geth nodes.
Add Substrate chain support at the app level along with the corresponding configuration
Details can be found in the SoW research doc.
Unit tests
Relayer should support the substrate chain type besides the EVM chain type
Relayer should be able to load the substrate chain config from the configuration file
Relayer should be able to launch after reading the substrate chain configuration file
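A sketch of what the configuration entry could look like, mirroring the EVM chain config; all field names are assumptions for illustration.

```go
package config

import "encoding/json"

// SubstrateConfig mirrors the EVM chain config shape; exact fields are
// assumptions.
type SubstrateConfig struct {
	ID         uint8  `json:"id"`
	Name       string `json:"name"`
	Type       string `json:"type"` // "substrate" selects this chain type
	Endpoint   string `json:"endpoint"`
	StartBlock uint64 `json:"startBlock"`
}

// parseSubstrateConfig decodes one raw chain entry when its type is "substrate".
func parseSubstrateConfig(raw json.RawMessage) (*SubstrateConfig, error) {
	var c SubstrateConfig
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	return &c, nil
}
```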
Relayers need to be refactored so they support the new shared configuration.
Add substrate client
Add Client package that should be able to sign and submit extrinsics
Add unit tests if possible
Substrate client package added
The client is able to submit and sign extrinsics
Unit tests added
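A condensed sketch of signing and submitting with go-substrate-rpc-client (the library choice is an assumption); System.remark and the Alice dev key are placeholders for the real pallet call and relayer key.

```go
package main

import (
	"fmt"

	gsrpc "github.com/centrifuge/go-substrate-rpc-client/v4"
	"github.com/centrifuge/go-substrate-rpc-client/v4/signature"
	"github.com/centrifuge/go-substrate-rpc-client/v4/types"
)

func main() {
	api, err := gsrpc.NewSubstrateAPI("ws://127.0.0.1:9944")
	if err != nil {
		panic(err)
	}
	meta, err := api.RPC.State.GetMetadataLatest()
	if err != nil {
		panic(err)
	}

	// Placeholder call; the relayer would build the pallet execution call here.
	call, err := types.NewCall(meta, "System.remark", []byte("hello"))
	if err != nil {
		panic(err)
	}
	ext := types.NewExtrinsic(call)

	genesisHash, _ := api.RPC.Chain.GetBlockHash(0)
	rv, _ := api.RPC.State.GetRuntimeVersionLatest()

	// Read the signer's current nonce from System.Account storage.
	key, _ := types.CreateStorageKey(meta, "System", "Account", signature.TestKeyringPairAlice.PublicKey)
	var accountInfo types.AccountInfo
	if _, err := api.RPC.State.GetStorageLatest(key, &accountInfo); err != nil {
		panic(err)
	}

	o := types.SignatureOptions{
		BlockHash:          genesisHash, // immortal era anchors to genesis
		Era:                types.ExtrinsicEra{IsMortalEra: false},
		GenesisHash:        genesisHash,
		Nonce:              types.NewUCompactFromUInt(uint64(accountInfo.Nonce)),
		SpecVersion:        rv.SpecVersion,
		Tip:                types.NewUCompactFromUInt(0),
		TransactionVersion: rv.TransactionVersion,
	}
	if err := ext.Sign(signature.TestKeyringPairAlice, o); err != nil {
		panic(err)
	}
	hash, err := api.RPC.Author.SubmitExtrinsic(ext)
	if err != nil {
		panic(err)
	}
	fmt.Println("extrinsic submitted:", hash.Hex())
}
```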
Implement E2E test that makes deposit from Substrate to EVM and from EVM to Substrate.
Check that balances are correct after the transfers.
[] E2E tests
[] E2E Tests are passing including deposits EVM <> Substrate
All relayers need to start processing each domain on a specific block (divisible by the block interval), as this is how we make sure that all relayers are processing the same batches of blocks. This works as described, except when relayers are set to start from the latest block via the --latest flag.
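A sketch of the fix: when starting from the latest block, snap it down to the nearest multiple of the block interval so every relayer begins on the same batch boundary.

```go
package listener

// snapToBatchStart rounds a head block down to the nearest multiple of the
// block interval, e.g. snapToBatchStart(1037, 5) == 1035.
func snapToBatchStart(latestBlock, blockInterval uint64) uint64 {
	if blockInterval == 0 {
		return latestBlock
	}
	return latestBlock - latestBlock%blockInterval
}
```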
Redundancy is needed when invoking RPC endpoints for interaction with the chain.
Firstly, expand the configuration so it can accept an array of RPC endpoints. Then design and implement a mechanism that will dial the next endpoint from the array if multiple requests time out.
Add unit tests for this functionality.
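A sketch of the failover mechanism; the config field name and rotation policy are assumptions.

```go
package rpc

import "fmt"

// FailoverCaller rotates through the configured endpoints, moving to the
// next one when a request fails or times out.
type FailoverCaller struct {
	endpoints []string // from config, e.g. "endpoints": ["https://...", ...]
	current   int
}

// Call invokes fn against the current endpoint and rotates on failure until
// every endpoint has been tried once.
func (f *FailoverCaller) Call(fn func(endpoint string) error) error {
	if len(f.endpoints) == 0 {
		return fmt.Errorf("no endpoints configured")
	}
	var lastErr error
	for range f.endpoints {
		if lastErr = fn(f.endpoints[f.current]); lastErr == nil {
			return nil
		}
		f.current = (f.current + 1) % len(f.endpoints)
	}
	return fmt.Errorf("all endpoints failed: %w", lastErr)
}
```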
For relaying partners to be able to run their own relayers, we should make the docker images built on release public.
Switch to dockerhub and make them public there.
Remove unused functions in /evm/calls/contracts/bridge.
All unused functions are removed.
We want to make generic handler permissionless, where each developer can use Sygma infrastructure to execute cross-chain calls without needing to contact the Sygma team to register it beforehand.
As a result, we are implementing v1.0.0 of the generic handler as a starting point - see the issue for solidity changes.
For more details on implementation and more context check this notion page.
Implement GenericDepositHandler and register it on relayer initialization (example and app). We need a new implementation that takes into consideration the newly defined format of depositData.
Implement GenericMessageHandler and register it on relayer initialization (example and app). We need a new implementation that takes into consideration the newly defined format of depositData.
GenericDepositHandler and GenericMessageHandler process the new depositData format.
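A sketch of the registration wiring, keyed by resource ID; the names are assumptions patterned on how the other handlers are set up.

```go
package listener

// DepositHandlerFunc turns raw calldata for a deposit into handler-specific
// deposit data.
type DepositHandlerFunc func(resourceID [32]byte, calldata []byte) ([]byte, error)

// DepositHandlers routes deposits to handlers by resource ID.
type DepositHandlers struct {
	handlers map[[32]byte]DepositHandlerFunc
}

func NewDepositHandlers() *DepositHandlers {
	return &DepositHandlers{handlers: map[[32]byte]DepositHandlerFunc{}}
}

func (d *DepositHandlers) Register(resourceID [32]byte, h DepositHandlerFunc) {
	d.handlers[resourceID] = h
}

// On relayer initialization (example and app), the generic resource would be
// wired up roughly like this:
//
//	depositHandlers.Register(genericResourceID, GenericDepositHandler)
```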
Add substrate event-listener
Add an event-listener module for listening to substrate events
Add unit tests
Relayer is able to listen to substrate events
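A sketch using go-substrate-rpc-client's storage subscription on System.Events, the standard way to stream events; decoding the raw bytes into typed events is omitted.

```go
package main

import (
	"fmt"

	gsrpc "github.com/centrifuge/go-substrate-rpc-client/v4"
	"github.com/centrifuge/go-substrate-rpc-client/v4/types"
)

func main() {
	api, err := gsrpc.NewSubstrateAPI("ws://127.0.0.1:9944")
	if err != nil {
		panic(err)
	}
	meta, err := api.RPC.State.GetMetadataLatest()
	if err != nil {
		panic(err)
	}
	// Events are stored under the System.Events storage key.
	key, err := types.CreateStorageKey(meta, "System", "Events", nil)
	if err != nil {
		panic(err)
	}
	sub, err := api.RPC.State.SubscribeStorageRaw([]types.StorageKey{key})
	if err != nil {
		panic(err)
	}
	defer sub.Unsubscribe()

	for set := range sub.Chan() {
		for _, change := range set.Changes {
			// Decode change.StorageData into typed events here.
			fmt.Printf("block %v: %d bytes of raw events\n", set.Block.Hex(), len(change.StorageData))
		}
	}
}
```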
Add e2e tests for dynamic fee calculation for GMP.
Since the relayer now supports batch event processing for EVM, it should also support it for substrate.
Details can be found in the SoW research doc.
Unit test
E2E test with local substrate and EVM nodes, manually sending a token transfer extrinsic
Relayer should be able to batch substrate events
The batch event count or block count should be configurable, as in the sketch below
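A sketch of the batch loop, with the batch size coming from configuration (name assumed).

```go
package listener

// blockBatches splits the [start, head] range into inclusive batches of at
// most batchSize blocks, e.g. blockBatches(0, 9, 5) -> [0,4], [5,9].
func blockBatches(start, head, batchSize uint64) [][2]uint64 {
	var batches [][2]uint64
	if batchSize == 0 {
		batchSize = 1
	}
	for from := start; from <= head; from += batchSize {
		to := from + batchSize - 1
		if to > head {
			to = head
		}
		batches = append(batches, [2]uint64{from, to})
	}
	return batches
}
```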
We should have binaries of each version stored as release assets, for us and partners to be able to run the relayer and CLI commands related to the relayer.
Add build and binaries to the release CI pipeline.
For our v2 iteration of the generic handler, we need generic bridge requests not to be batched (one request per MPC signing). More on the reasoning behind this can be found inside technical documentation.
Implement a new deposit handler that will process generic requests one by one.
Add unit tests for the new deposit handler. Expand generic handler e2e tests with a case where multiple generic requests are sent in the same block.
We realized that our relayers, after working for some time, get to this state where they are not able to open streams toward other relayers (peers).
I would say it is related to the connection number limit as described in the discussion below:
Error on dial: system: cannot reserve connection: resource limit exceeded
This needs more investigation, and generally checking that all connections are being closed once we are not using them anymore. In conjunction with this, I realized that if you observe the memory usage diagram on Datadog for our relayers (check a one-month period), we have some kind of memory leak. This is likely related to connection management.
We first need to validate how connections are being managed by adding some additional logging (see the sketch below) and then evaluate what the next step is.
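A sketch of the validation step: periodically log open libp2p connections and their stream counts via the host API, to confirm whether connections are leaking.

```go
package comm

import (
	"log"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
)

// logConnections periodically reports how many connections and streams the
// host is holding, to spot connections that never get closed.
func logConnections(h host.Host, interval time.Duration) {
	for range time.Tick(interval) {
		conns := h.Network().Conns()
		log.Printf("open libp2p connections: %d", len(conns))
		for _, c := range conns {
			log.Printf("  peer=%s streams=%d", c.RemotePeer(), len(c.GetStreams()))
		}
	}
}
```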
Unfortunately, it is hard to define the exact steps to reproduce this. It happens in our dev environment after relayers work for some time.