peggo's People

Contributors

adamewozniak, albertchon, alexanderbez, dependabot[bot], facundomedica, gorgos, gsk967, hsiang-xxs, jim380, mankenavenkatesh, nhannamsiu, rafilxtenfen, robert-zaremba, toteki


peggo's Issues

If a single broadcasted ethereum transaction fails, the whole batch is ignored

TOB-UMEE-032

The Peggo orchestrator broadcasts Ethereum events as Cosmos messages in batches of 10 (at least by default). If a single message fails to run on the Umee side, then all of the others will also be ignored (according to the comments shown in figure 32.1).


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Peggo still processes batches whose nonce is below the latest nonce on Ethereum

Summary of Bug

Peggo still processes batches whose nonce is below the latest batch nonce on Ethereum.
(I don't know whether Peggo uses eth_call to test a transaction before sending it, but a safety check would be better.)

Example:
Peggo processed batches with nonces 71 and 73 (see screenshot), while the latest batch nonce on Ethereum is 72.

How to fix

In batch_relaying.go, change:

if batch.Batch.BatchNonce <= latestEthereumBatch.Uint64() {
	continue
}

to

if batch.Batch.BatchNonce <= latestEthereumBatch.Uint64() {
	break
}

Version

Please provide the output of the following commands:

  • $ peggo version: latest upgrade is 0.2.5
  • $ go version: go version go1.17 linux/amd64
  • $ uname -a: Linux 5.4.0-97-generic

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

dev: MockQueryClient implement GetPendingIbcAutoForwards

Summary

  • The gravitytypes.QueryClient added a new method and we should implement the mocked data accordingly as the method works on the gravity client side

Problem Definition

  • We are not testing the full query client

Proposal

  • Check how gravity is implementing that method to write a mocked version of it

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Peggo docs

Summary

  • Update and explain every flag on peggo
  • Add spec context around explaining the functionality and what peggo does

Problem Definition

  • Give validators a better understanding of how to run Peggo

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

published peggo: version command fails

Running $ peggo version against the published version of peggo yields the following:

$ ./peggo version
Error: open go.mod: no such file or directory

However, the command works fine when building from source, i.e., after make install while inside the peggo source directory:

$ peggo version
version: v0.2.5
commit: 7c549ce213d0f3764dcb58f115d8efc199367efe
sdk: v0.45.1
go: go1.17.3 darwin/amd64

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Peggo orchestrator uses only a single price oracle in IsBatchProfitable

TOB-UMEE-033

The Peggo orchestrator relays Ethereum batches only if they are profitable (figure 33.1). Profitability is determined using the ETH/USD price fetched from a single source, the CoinGecko API. This creates a single point of failure for the Peggo relayer: if that API is compromised, an attacker will be able to choose whether the batches get relayed.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

bug: deploy-erc20-raw uses strconv.Atoi instead of strconv.ParseUint

Summary of Bug

From Trail of Bits first weekly report. ID: TOB-UMEE-001

 - denomDecimals, err := strconv.Atoi(args[4])
 + denomDecimals, err := strconv.ParseUint(args[4], 10, 8)
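The fix matters because strconv.Atoi silently accepts negative and oversized values that would later be truncated when cast to a small unsigned type, while strconv.ParseUint with bitSize 8 rejects them. A minimal sketch (the parseDenomDecimals helper name is hypothetical, not Peggo's code):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseDenomDecimals illustrates the fix: strconv.ParseUint with
// bitSize 8 rejects negative values and values above 255, both of
// which strconv.Atoi would accept without error.
func parseDenomDecimals(s string) (uint8, error) {
	v, err := strconv.ParseUint(s, 10, 8)
	if err != nil {
		return 0, err
	}
	return uint8(v), nil
}

func main() {
	for _, arg := range []string{"18", "-6", "300"} {
		d, err := parseDenomDecimals(arg)
		fmt.Println(arg, d, err)
	}
}
```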

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Default eth-blocks-per-loop is too high / Transactions are too big

Summary of Bug

This affects any syncing validators that are started long after the bridge started. They will see lots of claims, and Peggo will try to relay them in a single tx, which causes the claims to never be sent; they just get stuck until we lower the number of ETH blocks to scan.

I see 2 candidate solutions here:

  1. Reduce the default amount of blocks.
  2. Somehow know the max amount of data that can be sent and split the messages into several transactions.

I prefer option 2, as it would give us the opportunity to keep performance high while ensuring that we won't try to send huge txs.
Option 1 is a bit of a gamble: you never know how many events there will be in a block.

Special thanks to our community member Merak who helped me debug this 🤝


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Implement LogicCalls relaying and signing

Summary

This issue tracks the progress of implementing logic calls on Peggo. This is the only feature we are missing to be 100% compatible with Gravity Bridge. At the moment there's no usage for it, but there will be some (probably related to bridging other assets like NFTs).


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Peggo doesn't warn about non-local unencrypted URL schemes

TOB-UMEE-030

The peggo orchestrator command accepts the --tendermint-rpc and --cosmos-grpc flags specifying the corresponding RPC URL addresses, but it neither rejects nor warns the user when unencrypted non-local URL schemes (such as http://) are provided to those flags. This allows an attacker connected to the same local network as the system Peggo runs on to perform man-in-the-middle attacks, intercepting and modifying the device's network traffic and thereby influencing its operations.
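A stdlib-only sketch of the kind of check the finding asks for: flag URLs that combine an unencrypted scheme with a non-loopback host. The function name and the exact scheme list are illustrative, not Peggo's actual code.

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strings"
)

// warnInsecureEndpoint returns a warning message when the URL uses an
// unencrypted scheme against a non-loopback host, which is the exact
// situation the finding describes. Empty string means the URL is fine.
func warnInsecureEndpoint(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	insecure := u.Scheme == "http" || u.Scheme == "tcp" || u.Scheme == "ws"
	host := u.Hostname()
	local := host == "localhost" || strings.HasPrefix(host, "127.") || host == "::1"
	if ip := net.ParseIP(host); ip != nil && ip.IsLoopback() {
		local = true
	}
	if insecure && !local {
		return fmt.Sprintf("WARNING: %s uses an unencrypted scheme on a non-local host", raw), nil
	}
	return "", nil
}

func main() {
	msg, _ := warnInsecureEndpoint("http://10.0.0.5:26657")
	fmt.Println(msg)
}
```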


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

ERC20s that block transfer to particular addresses enable DoS/Censorship

By default OpenZeppelin’s ERC20 will revert any tx going to address(0). Let’s give this some thought and see if we should add this check in the Peggy module (according to Justin’s comment we should).

I think this one needs to be solved in the Peggy module instead of Peggo.

Ref: code-423n4/2021-08-gravitybridge-findings#8


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

chore: Increase test coverage to 65%

Summary

Increase test coverage of peggo to ~65%

Problem Definition

Peggo doesn't have enough test coverage. To ensure it is stable as we make changes we need better test coverage.

Proposal

Write more tests covering the areas that aren't covered. Run a test coverage report and pick the least covered parts and proceed to increase their coverage.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Lack of prioritization for Peggo orchestrator messages

TOB-UMEE-031

The Peggo orchestrator's messages have no priority over other transactions when it comes to being included in a block. As a result, the orchestrator transactions may not be included in earlier blocks when the network is highly congested. This is a similar problem to the one described in finding TOB-UMEE-020.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

test: Re-add test for HashingTest.sol

Summary

This test was commented out.

Problem Definition

This tests the hashing procedure we use for checkpoints

Proposal

Re-write the necessary functions in the contract and the test.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Investigate the right amount of messages per transaction

At the moment we are sending events (claims) from Ethereum to Umee in batches of 10 messages per batch.
I have some concerns about this, because every Eth->Umee tx will create a message; so if there are a thousand txs going from Ethereum to Umee, that would mean 1000 msgs per validator, or 100 txs (of 10 msgs each) per validator.

These messages are on top of batch request messages, batch confirm msgs, valset confirm msgs, etc.

Concerns:

  • slow network during high traffic events, especially going from eth to umee.
  • that orchestrators take all of the block gas for themselves

Not concerns:

  • batch timeouts. These are given by an Ethereum block height, so if Umee hasn't seen that block yet, the batch won't time out (even if that block has already happened on the Ethereum network).
  • TXs going from Umee to Eth won't be affected. Only the destruction of sent tokens will be delayed.

Possible solutions:

  • Increasing block gas limit
  • Increase the number of msgs per tx (this is a double-edged solution: it can reduce gas usage a bit, but it also occupies more space in a block)
  • Batch deposits (I don't see a practical way to do this atm)

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Peggo does not validate ethereum addresses format

TOB-UMEE-028

There are multiple code paths in the Peggo codebase where Ethereum addresses are parsed with go-ethereum's HexToAddress function (figure 28.1), which does not return an error when an incorrectly formatted address is passed to it.
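go-ethereum does offer common.IsHexAddress for validating the format before calling HexToAddress; the sketch below is a stdlib-only approximation of that check, for illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// isHexAddress approximates go-ethereum's common.IsHexAddress: a valid
// address is an optional 0x prefix followed by exactly 40 hex digits.
// Calling HexToAddress without a check like this silently mangles bad input.
var addrRe = regexp.MustCompile(`^(0x)?[0-9a-fA-F]{40}$`)

func isHexAddress(s string) bool {
	return addrRe.MatchString(s)
}

func main() {
	fmt.Println(isHexAddress("0x5409ED021D9299bf6814279A6A1411A7e866A631")) // valid
	fmt.Println(isHexAddress("0x1234"))                                     // too short
}
```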


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

SafeMath library is not always used in Gravity

This one seems to be solved on Althea and PeggyJV (at least there are mentions of SafeMath in Gravity.sol)

https://github.com/PeggyJV/gravity-bridge/blob/main/solidity/contracts/Gravity.sol#L30
https://github.com/althea-net/cosmos-gravity-bridge/blob/main/solidity/contracts/Gravity.sol#L46

Ref: code-423n4/2021-08-gravitybridge-findings#60


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Implement and test Valset rewards

These are the rewards given to validators when relaying a valset update. We're not using it at the moment, so this issue is here to track anything related to this in Peggo.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Tx will not resync nonce if nonce too low or too high

Summary of Bug

In SendTx in eth_committer.go, there is a part of the code that works like this:

txHashRet, err := e.evmProvider.SendTransactionWithRet(opts.Context, signedTx)

case strings.Contains(err.Error(), "nonce is too low"),
	strings.Contains(err.Error(), "nonce is too high"),
	strings.Contains(err.Error(), "the tx doesn't have the correct nonce"):

	if resyncUsed {
		e.logger.Error().
			Str("from_address", e.fromAddress.Hex()).
			Int64("nonce", nonce).
			Msg("nonces synced, but still wrong nonce for address")
		err = errors.Wrapf(err, "nonce %d mismatch", nonce)
		return err
	}

	resyncNonces(e.fromAddress)
	resyncUsed = true
	// try again with updated nonce
	nonce, _ = e.nonceCache.Get(e.fromAddress)
	opts.Nonce = big.NewInt(nonce)
	continue

This means that if I send a tx and get a nonce-related error in response, it resyncs the nonce and sends the tx again. If the response still contains a nonce error, it returns the error.

The problem is that the RPC responds with "nonce too low" and "nonce too high", but the code checks for "nonce is too low" and "nonce is too high".

So the code above does not work at all when the RPC responds with "nonce too low" or "nonce too high": the nonce is never resynced and stays the same, and no tx commits successfully until the nonce value is eventually updated. Once it is updated, all pending txs are sent in one go, which is too late: they all fail because other nodes have already submitted a higher nonce (the batch nonce, not the ETH nonce), causing heavy losses.


Version

Please provide the output of the following commands:

  • $ peggo version: I modified some code, so peggo version can't be used
  • $ go version: go1.17 linux/amd64
  • $ uname -a: Linux ubuntu10 5.4.0-92-generic

gravity wars issue : link


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Outdated and vulnerable dependencies

TOB-UMEE-035

Both Umee and Peggo have outdated and vulnerable dependencies. The table below shows the problematic package versions for Umee; if an entry is highlighted in yellow, it was also detected in Peggo's dependencies. (Check the audit document.)


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Make it clear that ETH gas price is in wei, not in Gwei

At the moment there's no indication of the unit:

      --eth-gas-price int          The Ethereum gas price to include in the transaction; If zero, gas price will be estimated
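For reference, the conversion an operator must currently do by hand (a small sketch; gweiToWei is not a Peggo function):

```go
package main

import (
	"fmt"
	"math/big"
)

// The flag value is interpreted in wei; 1 gwei = 10^9 wei.
func gweiToWei(gwei int64) *big.Int {
	return new(big.Int).Mul(big.NewInt(gwei), big.NewInt(1_000_000_000))
}

func main() {
	// An operator wanting 30 gwei must pass --eth-gas-price=30000000000.
	fmt.Println(gweiToWei(30))
}
```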

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

feat: support fee-payer

Summary

By default, validators will use their validator operator address as their orchestrator address (ref: https://github.com/umee-network/umee/blob/main/x/peggy/keeper/msg_server.go#L40-L49).

Now, while this is technically OK, I presume many validators will not want to do this, especially since many of them are actually multi-sigs, and using this account as their orchestrator account would be seen as a potential security concern.

Validators can instead choose to use another account as their orchestrator address. However, they may not want to continuously top off this account with funds to pay for bridge relayer fees. In fact, they may not want this account to have many funds at all.

Thus, we should utilize the x/feegrant module in the SDK. Nothing really needs to be done from our side apart from:

  1. Support an (optional) fee-payer field in the configuration.
  2. Populate FeeGranter in the client.Context with fee-payer (if non-empty)

The terms are slightly confusing, but fee-payer == FeeGranter


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Tweak loops and wait durations

Loop durations shouldn't be based on arbitrary numbers, but based on multipliers of known "constants" (or approximate constants).

Constants we can use:

  • Ethereum average block time
  • Cosmos average block time

These values are hardcoded into the Peggy parameters in the genesis.

Loops:

Eth Oracle

Retrieves events from Ethereum and sends them over to Cosmos. This one should run slower than the average Eth block time. I suggest 2x or 3x. Edit: I'll put this a bit higher, 5x for now, to reduce the number of calls to the geth node. If it's too slow we can just modify it again.

Batch Requester

Takes whatever is in the outgoing tx pool and creates a batch; the batch will only be created if it's more profitable than the one that's already waiting in the batch queue. This should be the slowest loop, given that all validators will be calling it and the outgoing tx pool is one for the entire network. It should be a multiplier of the Cosmos average block time, because that's where txs are created. I suggest we start with a value like 60x (5m) and see how it goes.
I assume we'll end up using a much higher value like 120x (10m) or 240x (20m). I don't know how to estimate this with a technical method, but when you have 200 (unsynchronized) validators calling this endpoint, the total number of calls the endpoint receives is much higher than the number each individual validator makes.

Eth Signer

This gets any unsigned batches and signs them. This should be based on the cosmos block time, we should allow at least 1 block to pass before re-checking. I suggest 2x.

Relayer

For this one we want to allow at least 1 Eth block to pass between calls.
Faster checking means a validator could be more competitive while trying to be the first to detect the contract state changes and sending new batches; but that would be at the expense of possibly more invalid TXs and more RPC calls to the Ethereum node and the Cosmos node.

Proposal

Given that the only loop that could provide with some competitive advantage is the Relayer loop we should consider adding a flag to modify this one with a default value of 3x. Could be something like: --relayer-loop-multiplier=1.2

The other loops can be hardcoded as multiples of the base times.

EDIT:
We need to have --requester-loop-multiplier to make e2e tests go faster. A sensible default for a network would be 60 blocks (5m), but for an e2e test we want this to go as fast as possible.
On production, this flag won't have much use, although we might want to adjust it from time to time according to the number of TXs.
Low volume of TXs: means fewer TXs in the outgoing pool; we might want to make this value larger so we don't create too many small batches.
High volume of TXs: means more TXs in the outgoing pool; we might want to make this value smaller so we don't let too many TXs pile up in the outgoing pool.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Support prometheus

Summary

  • Helps to keep track of metrics

Problem Definition

Proposal


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Add TMKMS and Web3Signer support to Peggo

Summary

Adding TMKMS and Web3Signer support to Peggo would allow runners of the Peggo bridge to run it more securely, separating concerns: generation of transactions (non-sensitive) and signing of said txs (sensitive).

Problem Definition

Running Peggo in its current state requires having soft keys (both ETH and Umee) on the server instance where Peggo runs. This poses a security concern, especially in bigger teams where multiple people manage separate networks. From an operational perspective, it would be better if the Peggo instance itself did not (ideally) handle transaction signing or store soft keys.

Proposal

We propose using TMKMS and Web3Signer as signer interfaces for Peggo.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Refactor SendEthereumClaims to make it more readable

Summary

The function SendEthereumClaims (orchestrator/cosmos/broadcast.go) goes through all the scanned Eth events and relays them over to Umee. Right now it works perfectly but has poor readability.

Proposal

The proposed solution adds some overhead but offers way more readability. Instead of tracking 4 different lists of events, let's add them to a single list using a wrapper (SortableEvent) and iterate through them.

// SortableEvent exists with the only purpose to make a nicer sortable slice
type SortableEvent struct {
	EventNonce         uint64
	DepositEvent       *wrappers.PeggySendToCosmosEvent
	WithdrawEvent      *wrappers.PeggyTransactionBatchExecutedEvent
	ValsetUpdateEvent  *wrappers.PeggyValsetUpdatedEvent
	ERC20DeployedEvent *wrappers.PeggyERC20DeployedEvent
}

allevents := []SortableEvent{}
// add all events to allevents
// sort allevents by event nonce
// remove all events whose nonce is equal to or lower than lastClaimEvent
for _, event := range allevents {
	if event.DepositEvent != nil {
		if err := s.sendDepositClaims(ctx, event.DepositEvent); err != nil {
			log.WithError(err).Errorln("broadcasting MsgDepositClaim failed")
			return err
		}
	}

	if event.WithdrawEvent != nil {
		if err := s.sendWithdrawClaims(ctx, event.WithdrawEvent); err != nil {
			log.WithError(err).Errorln("broadcasting MsgWithdrawClaim failed")
			return err
		}
	}

	// do the same for valsets
	// do the same for erc20 deployed
}

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

For the PriceFeed all tokens have 18 decimals

Summary of Bug

In the function CheckFeeThreshold the exponent is hardcoded, meaning that if we have tokens that do not have 18 decimals, the calculated price will be off by a lot. That value should be queried from the token's contract, using the "decimals" attribute from the ERC20 standard. Then we can keep that value in memory because it's fixed and won't change.
Ref: https://github.com/umee-network/peggo/blob/main/orchestrator/coingecko/coingecko.go#L137


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Create CI and Release Pipelines

  • Create Github workflow actions that test and lint at the very minimum. We can use umee for inspiration or source code.
  • Create a Release workflow using goreleaser. We can probably just copy the umee configuration.

FindLatestValset only looks in the last 2000 blocks

Summary of Bug

We encountered this edge case on Umeemania.
The network didn't send valset updates for a while and then because validators were running in ValsetRelayModeMinimum, they didn't think it was necessary to send a new valset update.

We should change this so that if it doesn't find any valset update in the last 2000 blocks, it sends one anyway, even though it's not really necessary in terms of power change.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

feat: Make panic message show the address of the validator with empty power

Summary

When there is a validator with empty power, have the panic message show the validator's address. This is useful for debugging.

Problem Definition

When a node panics because of a validator with empty power, all we see is:

May 14 03:43:26 alley umeed[1407250]: panic: generated invalid valset: invalid members: member 5: power: empty

This isn't helpful in debugging because we don't have information that specifically identifies the validator.

Proposal

Show the address and any other identifying information for the validator in the panic message.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Batches won't be created if just one token doesn't reach the minimum batch fee

Summary of Bug

During the internal testnet I found that moving the price of a single token had effects on how the other tokens batches were created. This can impact speed and in some cases even halt the bridge.

How to replicate

We have tokens (ERC20) A, B, and C, and the batch fees list (obtained with UnbatchedTokensWithFees) orders them that way (A, B, C).
If token A has less than the minimum batch fee, txs of tokens B and C will be halted, even if they have more in fees.

How to solve

Do not return here: https://github.com/umee-network/peggo/blob/main/orchestrator/main_loops.go#L249, so the loop can keep checking the other tokens.

Ref:
https://github.com/umee-network/peggo/blob/main/orchestrator/main_loops.go#L235
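The fix can be sketched like this: skip just the underfunded token (continue) rather than aborting the whole loop (return). The types and names below are hypothetical, for illustration only.

```go
package main

import "fmt"

// tokenFees pairs a token with its accumulated unbatched fees in USD.
type tokenFees struct {
	Token  string
	FeeUSD float64
}

// tokensToBatch returns the tokens whose fees clear the minimum. The
// key line is the continue: with a return instead, a single cheap token
// at the head of the list would halt batching for everything after it.
func tokensToBatch(fees []tokenFees, minFeeUSD float64) []string {
	var out []string
	for _, f := range fees {
		if f.FeeUSD < minFeeUSD {
			continue // was effectively a return, halting tokens B and C
		}
		out = append(out, f.Token)
	}
	return out
}

func main() {
	fees := []tokenFees{{"A", 2}, {"B", 50}, {"C", 120}}
	fmt.Println(tokensToBatch(fees, 10))
}
```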


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Go 1.18

  • Update to go 1.18

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Chore: Cleanup Peggo

A braindump of changes I'd like to see in Peggo:

  • Update README
  • Remove dependencies on github.com/InjectiveLabs/sdk-go
  • Remove metrics usage (we can replace this with prom later)
  • Remove statsd usage
  • Replace CLI framework with cobra (#3)
  • Migrate commands from umee-network/umee#177
  • Use zerolog instead of the current logger
  • Groom for idiomatic logging (e.g. proper formatting, grammar, capitalization, etc...)
  • Remove hard-coded references to injective
  • Remove all fmt.Println() in favor of fmt.Fprint* for non-logger diagnostic output
  • Remove all errors.Wrap(...) in favor of fmt.Errorf("...: %w", err)
  • Fix all golangci-lint errors (per $ make lint)
  • Examine Orchestrator implementation
    • Do we need to use an interface?
    • Are there methods or functions we can cleanup or remove?
    • etc...

Nonce too low error

Summary of Bug

This is an error we've been seeing sporadically for a while. It doesn't look dangerous at all, and a simple restart fixes it, but I would like to know why it happens in the first place.

5:00PM ERR sendTransaction failed error="nonce too low" module=ethCommiter tx_hash=0x9a29ff7ea9c26def63ced06a3cc59f906b8d4157ef85f4be336379f9bbe9ea25 tx_hash_ret=0x0000000000000000000000000000000000000000000000000000000000000000

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Refactor and add logging to fee related code

Proposal

  • In relation to the rest of the code, what's related to fees is under-logged. Let's add more logs!
  • Some code is repeated, let's clean it up a bit

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Cache ERC20 queries (denom, decimals, etc.)

Summary

There are some queries that can be cached given that whatever they are querying for is something that doesn't change. Some examples: ERC20ToDenom, GetERC20Decimals

Note: use a thread-safe map (not sync.Map)
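A minimal sketch of the proposed cache: a plain map guarded by an RWMutex, as the note prefers over sync.Map. The type and its value shape are illustrative; real code would cache whatever ERC20ToDenom and GetERC20Decimals return.

```go
package main

import (
	"fmt"
	"sync"
)

// erc20Cache stores immutable per-token data (here: decimals, keyed by
// token address) after the first successful query. Since the values
// never change, entries never need invalidation.
type erc20Cache struct {
	mu   sync.RWMutex
	data map[string]uint8
}

func newERC20Cache() *erc20Cache {
	return &erc20Cache{data: make(map[string]uint8)}
}

func (c *erc20Cache) Get(addr string) (uint8, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[addr]
	return v, ok
}

func (c *erc20Cache) Set(addr string, decimals uint8) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[addr] = decimals
}

func main() {
	c := newERC20Cache()
	c.Set("0xabc", 6)
	d, ok := c.Get("0xabc")
	fmt.Println(d, ok)
}
```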


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Peggo's current block calculations may overflow

TOB-UMEE-027

Description
Peggo calculates a delayed block number in a few code paths by subtracting a delay value from the latest block number. This subtraction may overflow and cause Peggo to operate incorrectly when Peggo is run against a blockchain node whose latest block number is lower than the delay value.
The severity of this finding is set to informational, as the issue is unlikely to happen in practice, and if it occurs, it is quite easy to spot and wait for the specific block number.
This happens in the following code paths:

  • In the gravityOrchestrator.GetLastCheckedBlock method (figure 27.1)
  • In the gravityOrchestrator.CheckForEvents method
  • In the gravityOrchestrator.EthOracleMainLoop method
  • In the gravityRelayer.FindLatestValset method

Recommendations
Short term, fix the delayed-block-calculation integer overflows in Peggo by making Peggo wait until the minimum block number required to work has passed on the blockchain Peggo is configured with.
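A guarded version of the subtraction could look like this (a sketch with hypothetical names, not the recommended patch itself):

```go
package main

import "fmt"

// delayedBlock clamps at zero because block numbers are unsigned:
// latest - delay wraps around to a huge value when latest < delay,
// which would make Peggo scan from a bogus block number.
func delayedBlock(latest, delay uint64) uint64 {
	if latest < delay {
		return 0
	}
	return latest - delay
}

func main() {
	fmt.Println(delayedBlock(100, 12)) // normal case
	fmt.Println(delayedBlock(5, 12))   // would otherwise wrap to ~2^64
}
```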


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Add a profit multiplier/percentage to batch relaying

Summary

Validators don't have an easy way to set how much they want to earn. They should be able to add a percentage of profit on top of the fees. This will provide more control over the orchestrator, add some competition, and improve incentives.

Proposal

Add a flag like --relayer-profit-mult or --relayer-profit-percentage that receives a number. Then profitability will be calculated using that value so the batch will be profitable only if batch fees are over X% greater than gas cost. I'm not very good at math, I'll need @alexanderbez to help me define this.
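The check being proposed is simple enough to sketch: a batch is relayed only when its fees exceed the estimated gas cost by the configured margin. The flag name, function, and formula below are suggestions from this issue, not an implemented API.

```go
package main

import "fmt"

// isProfitable applies a profit multiplier: with profitMult = 1.1, the
// batch is relayed only when fees cover the gas cost plus 10%.
func isProfitable(batchFeesUSD, gasCostUSD, profitMult float64) bool {
	return batchFeesUSD >= gasCostUSD*profitMult
}

func main() {
	// e.g. --relayer-profit-mult=1.1 (hypothetical flag)
	fmt.Println(isProfitable(120, 100, 1.1)) // fees clear cost + 10%
	fmt.Println(isProfitable(105, 100, 1.1)) // margin not reached
}
```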


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Timing Configurations

There are three timing inputs that I'd like us to examine to decide whether they should be externally configurable:


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Support Multiple Ethereum RPCs

Summary

Currently, we support providing an Ethereum RPC to Peggo's orchestrator command via --eth-rpc. However, RPCs can fail and many operators are using and/or running multiple RPCs for redundancy and liveness purposes.

Problem Definition

In order to switch to a new ETH RPC if one goes down or becomes unavailable, the operator would have to update the --eth-rpc flag value and restart Peggo, making this a tedious and manual process. If the operator has no alerting or monitoring in place, it could even be too late.

Proposal

Peggo should support taking a list of ETH RPCs instead of a single one, --eth-rpcs=addr1,addr2,.... Peggo will connect to the first one. If any request ever fails, or at least fails X number of times, it'll automatically try to connect to and use the next RPC, in a round-robin fashion.
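The failover logic could be sketched as below. Real code would wrap an Ethereum client; here only the endpoint rotation is shown, and all names are hypothetical.

```go
package main

import (
	"errors"
	"fmt"
)

// rpcPool rotates through configured endpoints in round-robin fashion
// after the current endpoint fails maxFails consecutive times.
type rpcPool struct {
	endpoints []string
	current   int
	fails     int
	maxFails  int
}

func (p *rpcPool) Endpoint() string { return p.endpoints[p.current] }

// ReportFailure records a failed request and rotates to the next
// endpoint once the failure threshold is reached. It returns the
// (possibly new) endpoint to use.
func (p *rpcPool) ReportFailure() (string, error) {
	if len(p.endpoints) == 0 {
		return "", errors.New("no endpoints configured")
	}
	p.fails++
	if p.fails >= p.maxFails {
		p.current = (p.current + 1) % len(p.endpoints) // round-robin
		p.fails = 0
	}
	return p.Endpoint(), nil
}

func main() {
	// e.g. --eth-rpcs=addr1,addr2 (proposed flag, not implemented)
	p := &rpcPool{endpoints: []string{"addr1", "addr2"}, maxFails: 3}
	for i := 0; i < 3; i++ {
		p.ReportFailure()
	}
	fmt.Println(p.Endpoint()) // rotated after three consecutive failures
}
```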


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Duplicate signature errors with new block time

Summary of Bug

5:07PM ERR failed to (sync) broadcast batch tx error="failed to CalculateGas: rpc error: code = InvalidArgument desc = failed to execute message; message index: 0: duplicate signature: duplicate: invalid request" module=cosmos_client size=1 tx_response=null

I think this is due to sending a tx without waiting long enough for the previous one to get processed. We might need to add a multiplier to the block time.

Note: this doesn't stop the bridge from working, it just prints the error once in a while but keeps working because it retries right away.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Upgrade to price-feeder 0.2.3

It requires some fixes, given that the APIs have changed.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

feat: Re-check batch fees before relaying

Summary

I would like for each validator to check a batch's total fees again before relaying it (with some degree of tolerance/slippage).

Problem Definition

Currently, we only check fees when creating a batch and not when relaying it; this can create some undesired situations:

  • A batch was profitable at the moment of requesting it but it's not profitable anymore (not very likely).
  • A misconfigured validator is requesting unprofitable batches that other validators will pick up and relay at a loss.
  • What some validators consider profitable others might not (this is the most likely case).

Proposal

Adding a fee check before relaying batches would fix the issues listed above while also giving place to some interesting features. This fee check can be paired with a slippage parameter, so the batches are still relayed even if the price of the relayed asset goes down a little between the request and the relay events.

Note: this won't affect the signing of batches, a validator will sign all valid batches but won't relay unprofitable ones.

This could also enable:

  • Validators can be in control of what they are going to relay.
  • Opens up to some degree of competition between validators, because they'll try to request a batch and relay it before others. That is because if validator A has a minimum batch fee of $100 and validator B has a minimum batch fee of $110, validator A will be requesting and relaying batches more often than validator B because its minimum will be reached first.
  • If a validator decides to do "cheap labor", it will be free to do so without compromising the other validators.

A follow-up idea: add the option to have a minimum batch fee per token. So a group could subsidize the transactions of a specific token to improve its bridging speed/cost.


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned

Do not retry any reverting transactions on Ethereum

Summary of Bug

If we get an error including "execution reverted" we shouldn't retry it, just skip it as it will keep failing.

9:16PM ERR Transaction with same batch input data is already present in mempool module=peggy_relayer
9:16PM ERR failed to estimate gas cost error="execution reverted: New batch nonce must be greater than the current nonce" module=peggy_relayer
9:16PM ERR failed to relay tx batches; retrying... error="execution reverted: New batch nonce must be greater than the current nonce" loop=RelayerMainLoop module=peggy_relayer retry=6

For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned
