
cdk-validium-node's People

Contributors

agnusmor, amonsosanz, arnaubennassar, arr552, christophercampbell, cool-develope, dependabot[bot], djpolygon, dpunish3r, elias-garcia, estensen, fgimenez, github-actions[bot], joanestebanr, kind84, konradit, mikelle, mt-polygon-technology, najeal, obrezhniev, omahs, psykepro, rachit77, rebelartists, stefan-ethernal, tclemos, toniramirezm, vcastellm, xavier-romero, zkronos73


cdk-validium-node's Issues

no forkID received

After running make build-docker, the containers zkevm-sync, zkevm-sequencer, .... are logging error getting forkIDs. Error: error: no forkID received. It should receive at least one, please check the configuration.... Is there a misconfiguration somewhere? I used the default file test.node.config.toml.
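
For context, this error typically indicates that the synchronizer could not find any forkID information on L1 for the configured contracts, e.g. because the L1 endpoint, contract addresses, or genesis block number in the network configuration don't match the actual deployment. A generic go-ethereum sanity check, in which every value is a placeholder (this is not the node's own code):

// Generic sanity check (placeholders throughout, not the node's code): ask the
// configured L1 RPC for any logs emitted by the rollup contract since the
// configured genesis block. Zero logs usually means a wrong contract address
// or genesis block number, which is one way to end up with "no forkID received".
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8545") // placeholder L1 endpoint
	if err != nil {
		log.Fatal(err)
	}
	query := ethereum.FilterQuery{
		FromBlock: big.NewInt(1), // placeholder: genesis block number from the network config
		Addresses: []common.Address{
			common.HexToAddress("0x0000000000000000000000000000000000000000"), // placeholder rollup contract address
		},
	}
	logs, err := client.FilterLogs(context.Background(), query)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %d logs for the configured rollup contract\n", len(logs))
}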

The Metric "SequencesSentToL1" Does Not Work

The metric counter "SequencesSentToL1" is incremented in the sequence-sender, but the sequence-sender does not expose any metrics externally.

The sequencer does expose this metric, but the counter is never incremented in its logic.

As a result, the metric is always 0 and does not work as expected.
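
For context, a counter only becomes visible externally if the process that increments it also registers it and serves a metrics endpoint. A minimal, generic sketch using prometheus/client_golang (not the node's own metrics package; the metric name and port are illustrative):

// Minimal generic sketch with prometheus/client_golang (not the node's own
// metrics package; the metric name is illustrative). Incrementing a counter
// only matters if the same process also registers it and exposes /metrics.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var sequencesSentToL1 = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "sequences_sent_to_l1_total",
	Help: "Illustrative counter for sequences sent to L1.",
})

func main() {
	prometheus.MustRegister(sequencesSentToL1)

	sequencesSentToL1.Inc() // incrementing alone is not enough...

	// ...the same process must also expose the registry over HTTP.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9091", nil))
}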

Stuck sending batch to L1

I got this error from the cdk-validium-sync component. I'm using Goerli as L1.

ERROR	synchronizer/synchronizer.go:284	error checking reorgs. Retrying... Err: missing required field 'maxFeePerDataGas' for txdata 
/src/log/log.go:140 github.com/0xPolygon/cdk-validium-node/log.appendStackTraceMaybeArgs()
/src/log/log.go:249 github.com/0xPolygon/cdk-validium-node/log.Errorf()
/src/synchronizer/synchronizer.go:284 github.com/0xPolygon/cdk-validium-node/synchronizer.(*ClientSynchronizer).syncBlocks()
/src/synchronizer/synchronizer.go:261 github.com/0xPolygon/cdk-validium-node/synchronizer.(*ClientSynchronizer).Sync()
/src/cmd/run.go:312 main.runSynchronizer()
pid=1 version=fbd595da
github.com/0xPolygon/cdk-validium-node/synchronizer.(*ClientSynchronizer).syncBlocks
	/src/synchronizer/synchronizer.go:284
github.com/0xPolygon/cdk-validium-node/synchronizer.(*ClientSynchronizer).Sync
	/src/synchronizer/synchronizer.go:261
main.runSynchronizer
	/src/cmd/run.go:312

[BUG?] Modifying the coinbase causes a sequencer validation error

Some errors:

Different field StateRoot. Virtual: stateA, Trusted: stateB
Different field Coinbase. Virtual: coinbaseA, Trusted: coinbaseB
https://github.com/0xPolygon/cdk-validium-node/blob/v0.6.7%2Bcdk/synchronizer/actions/etrog/processor_l1_sequence_batches.go#L371

When submitting to L1, the node uses "Coinbase" from the configuration file as the coinbase:

https://github.com/0xPolygon/cdk-validium-node/blob/v0.6.7%2Bcdk/sequencesender/sequencesender.go#L195

When generating a batch, it uses the "sequencer" address as the coinbase:

https://github.com/0xPolygon/cdk-validium-node/blob/v0.6.7%2Bcdk/sequencer/batch.go#L307

The sequencer address is defined here:

https://github.com/0xPolygon/cdk-validium-node/blob/v0.6.7%2Bcdk/sequencer/sequencer.go#L58

https://github.com/0xPolygon/cdk-validium-node/blob/v0.6.7%2Bcdk/sequencer/sequencer.go#L102

https://github.com/0xPolygon/cdk-validium-node/blob/v0.6.7%2Bcdk/sequencer/finalizer.go#L109

After setting Coinbase equal to the sequencer address, the error disappeared. Here are some clues to help.

If the coinbase must equal the trusted sequencer, why make it configurable at all? It can be read directly from the contract.
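
On that last point, the trusted sequencer address is a public variable on the rollup contract, so it can indeed be read from L1 instead of being configured separately. A hedged go-ethereum sketch of such a read (the RPC URL and contract address are placeholders; the getter name trustedSequencer is assumed from the public contract variable and is not quoted from this repo):

// Illustrative only: read the trusted sequencer address straight from the
// rollup contract instead of trusting a separately configured Coinbase.
// The RPC endpoint and contract address are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("https://l1-rpc.example") // placeholder L1 endpoint
	if err != nil {
		log.Fatal(err)
	}
	parsed, err := abi.JSON(strings.NewReader(`[{"inputs":[],"name":"trustedSequencer","outputs":[{"type":"address"}],"stateMutability":"view","type":"function"}]`))
	if err != nil {
		log.Fatal(err)
	}
	rollup := common.HexToAddress("0x0000000000000000000000000000000000000000") // placeholder rollup contract address
	contract := bind.NewBoundContract(rollup, parsed, client, client, client)

	var out []interface{}
	if err := contract.Call(&bind.CallOpts{Context: context.Background()}, &out, "trustedSequencer"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("trusted sequencer:", out[0].(common.Address))
}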

Support ephemeral DA backends

Certain DA providers use ephemeral storage which only persists preimages temporarily (e.g., EigenDA only stores blobs for two weeks after dispersal). This ephemeral DA type is incompatible with the existing CDK DA interface, causing forced-DA verifier nodes to fail syncing and continually death-loop while trying to read the first, now nonexistent, batch from DA. This was demonstrated in an experiment where we tuned down the blob expiration limit for a simulated EigenDA proxy backend.

Some MAX_DA_WINDOW time value, defining the earliest point from which a verifier can sync chain state from external DA, would alleviate this issue. This value would likely be best defined as a mutable public variable in the DA consensus contract.

NOTE: The existing forced-DA node doesn't actually sync from DA when spun up; instead it death-loops trying to establish a connection to an intentionally broken trusted-sequencer URL. Running these experiments required a few slight modifications to the node software to force the synchronizer to read only from DA, reflecting the intended forced behavior.
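
A hypothetical sketch of what the proposed window could look like on the verifier side (names and types are illustrative, not the node's actual synchronizer code): the verifier starts syncing from the first batch whose DA blob is still within the retention window instead of retrying expired ones forever.

// Hypothetical sketch (illustrative names, not the node's actual code): pick
// the first batch whose DA blob should still be retrievable given the
// backend's retention window (e.g. ~2 weeks for EigenDA).
package synchronizer

import "time"

type batchRef struct {
	Number      uint64
	SequencedAt time.Time // L1 timestamp at which the batch was sequenced
}

func firstRetrievableBatch(batches []batchRef, maxDAWindow time.Duration, now time.Time) (uint64, bool) {
	for _, b := range batches {
		if now.Sub(b.SequencedAt) <= maxDAWindow {
			return b.Number, true
		}
	}
	return 0, false // everything expired; state must come from elsewhere (e.g. a snapshot)
}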

ERC20: insufficient allowance when trying to send TX to L1

Hello,

I set up a private chain on top of L1 (Sepolia) and tried to bridge some Sepolia ETH to my private chain.
The first part of the bridging transaction succeeded on L1, but I'm having the following issue with cdk-validium-eth-tx-manager when it tries to submit the second part to L1:

cdk-validium-eth-tx-manager  | 2024-02-13T21:39:36.664Z	ERROR	ethtxmanager/ethtxmanager.go:514	failed to estimate gas: execution reverted: ERC20: insufficient allowance	{"pid": 1, "version": "v0.0.2-hotfix1", "monitoredTx": "sequence-from-1-to-1"}
| github.com/0xPolygon/cdk-validium-node/ethtxmanager.(*Client).ReviewMonitoredTx
| 	/src/ethtxmanager/ethtxmanager.go:514
| github.com/0xPolygon/cdk-validium-node/ethtxmanager.(*Client).monitorTxs
| 	/src/ethtxmanager/ethtxmanager.go:347
| github.com/0xPolygon/cdk-validium-node/ethtxmanager.(*Client).Start
| 	/src/ethtxmanager/ethtxmanager.go:217
| 2024-02-13T21:39:36.665Z	ERROR	ethtxmanager/ethtxmanager.go:349	failed to review monitored tx: failed to estimate gas: execution reverted: ERC20: insufficient allowance	{"pid": 1, "version": "v0.0.2-hotfix1", "monitoredTx": "sequence-from-1-to-1", "createdAt": "2024-02-13T20:03:31.179Z"}
| github.com/0xPolygon/cdk-validium-node/ethtxmanager.(*Client).monitorTxs
| 	/src/ethtxmanager/ethtxmanager.go:349
| github.com/0xPolygon/cdk-validium-node/ethtxmanager.(*Client).Start
| 	/src/ethtxmanager/ethtxmanager.go:217
| 2024-02-13T21:39:37.665Z	INFO	ethtxmanager/ethtxmanager.go:263	found 1 monitored tx to process	{"pid": 1, "version": "v0.0.2-hotfix1"}

Can you please advise?
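
For reference (general background, not an official fix): "ERC20: insufficient allowance" on the sequencing transaction usually indicates that the sequencer's L1 account has not approved enough of the MATIC/POL fee token for the rollup/validium contract; the test setup handles this with the cdk-validium-approve service. A minimal go-ethereum sketch of such an approval, where every address, the private key, the amount, and the endpoint are placeholders:

// Illustrative only: approve the fee token for the rollup contract with
// go-ethereum. All addresses, the private key and the amount are placeholders.
package main

import (
	"log"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("https://rpc.sepolia.example") // placeholder L1 endpoint
	if err != nil {
		log.Fatal(err)
	}
	key, err := crypto.HexToECDSA("...") // placeholder sequencer private key
	if err != nil {
		log.Fatal(err)
	}
	auth, err := bind.NewKeyedTransactorWithChainID(key, big.NewInt(11155111)) // Sepolia chain ID
	if err != nil {
		log.Fatal(err)
	}
	erc20ABI, err := abi.JSON(strings.NewReader(`[{"inputs":[{"type":"address"},{"type":"uint256"}],"name":"approve","outputs":[{"type":"bool"}],"stateMutability":"nonpayable","type":"function"}]`))
	if err != nil {
		log.Fatal(err)
	}
	token := common.HexToAddress("0x...")  // placeholder fee token (MATIC/POL) address
	rollup := common.HexToAddress("0x...") // placeholder rollup/validium contract address
	contract := bind.NewBoundContract(token, erc20ABI, client, client, client)

	amount := new(big.Int).Mul(big.NewInt(1000), big.NewInt(1e18)) // placeholder allowance
	tx, err := contract.Transact(auth, "approve", rollup, amount)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("approve tx sent:", tx.Hash())
}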

Sequencer reprocesses the full batch because `TimestampResolution` is too small

Problem

supernets2-sequencer                         | 2023-08-28T13:22:50.388Z INFO    sequencer/finalizer.go:1173     Closing batch: 1, because of timestamp resolution.      {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.388Z INFO    sequencer/finalizer.go:360      Closing batch: 1, because deadline was encountered.     {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.461Z INFO    sequencer/finalizer.go:756      storeProcessedTx: storing processed txToStore: 0x7b0e0a23d8eed819fe836d2807a49f0b3611ab9041f0f69cb37d2211bfd2c9a0       {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.462Z DEBUG   sequencer/dbmanager.go:172      Storing tx 0x7b0e0a23d8eed819fe836d2807a49f0b3611ab9041f0f69cb37d2211bfd2c9a0   {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.468Z DEBUG   state/helper.go:62      8 1000000000 624194 <nil> 0 3229 1001   {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.471Z INFO    sequencer/dbmanager.go:214      StoreProcessedTxAndDeleteFromPool: successfully stored tx: 0x7b0e0a23d8eed819fe836d2807a49f0b3611ab9041f0f69cb37d2211bfd2c9a0 for batch: 1      {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.471Z INFO    sequencer/finalizer.go:470      waiting for pending transactions to be stored took: 83.563887ms {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.471Z INFO    sequencer/finalizer.go:1083     reprocessFullBatch: BatchNumber: 1, OldStateRoot: 0xd88680f1b151dd67518f9aca85161424c0cac61df2f5424a3ddc04ea25adecc7, Ger: 0x0000000000000000000000000000000000000000000000000000000000000000   {"pid": 1, "version": "e7df769"}
supernets2-sequencer                         | 2023-08-28T13:22:50.472Z INFO    sequencer/finalizer.go:1092     reprocessFullBatch: Tx position 0. TxHash: 0x7b0e0a23d8eed819fe836d2807a49f0b3611ab9041f0f69cb37d2211bfd2c9a0   {"pid": 1, "version": "e7df769"}

Proposed Solution

After changing TimestampResolution from 10s to 60s, the issue is mitigated.
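
A simplified, hypothetical sketch of the relationship (illustrative names, not the sequencer's actual code): the finalizer closes the open batch once it has been open longer than TimestampResolution, and, as the log above shows, the close is followed by a reprocessFullBatch pass, so a small resolution means frequent closes and frequent full reprocessing.

// Hypothetical, simplified deadline check (illustrative names only).
// With resolution = 10s this fires six times as often as with 60s,
// and each close is followed by a full-batch reprocess.
func closeBatchByTimestampResolution(batchOpenedAt time.Time, resolution time.Duration) bool {
	return time.Since(batchOpenedAt) >= resolution
}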

nonce too low: next nonce 185, tx nonce 184

Hi,

I'm seeing transactions stuck forever in eth-tx-manager with the following error message:

cdk-validium-eth-tx-manager  | 2024-02-22T22:41:10.052Z	ERROR	ethtxmanager/ethtxmanager.go:416	failed to send tx 0xa530490da3ce9897aed17b0446880c0166b4654a8bc43b2c442d87daffd47d18 to network: nonce too low: next nonce 185, tx nonce 184	{"pid": 1, "version": "v0.0.3-hotfix6", "owner": "sequencer", "monitoredTxId": "sequence-from-11-to-12", "createdAt": "2024-02-22T22:40:39.207Z", "from": "0x1E30d96d79b3dA314f855880122b3F285a5bc1CC", "to": "0xB2995479B92AeFE322173fA823C1A4DbD655aDe8"}
cdk-validium-eth-tx-manager  | github.com/0xPolygonHermez/zkevm-node/ethtxmanager.(*Client).monitorTx
cdk-validium-eth-tx-manager  | 	/src/ethtxmanager/ethtxmanager.go:416
cdk-validium-eth-tx-manager  | github.com/0xPolygonHermez/zkevm-node/ethtxmanager.(*Client).monitorTxs.func1
cdk-validium-eth-tx-manager  | 	/src/ethtxmanager/ethtxmanager.go:269

I'm using version v0.0.3-hotfix6 (I checked out the tag and built it myself). I think there are probably two bugs:

  1. An invalid nonce is assigned to begin with.
  2. The nonce is not updated if the assigned nonce didn't work on the first try.

I also tried the most recent ethtxmanager/ethtxmanager.go from the develop branch. Neither approach worked.

I don't think this condition can ever trigger if the nonce is incorrect from the very beginning (v0.0.3-hotfix6):

	if !confirmed && hasFailedReceipts && allHistoryTxsWereMined {

In my case, all of these variables are always false.

Can you please advise?
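
A hedged illustration of the second point (generic go-ethereum, not the node's actual ethtxmanager code): fetching the pending nonce accounts for transactions already sitting in the mempool, so a follow-up transaction doesn't reuse a nonce that is already taken.

// Generic go-ethereum sketch (not the node's ethtxmanager): choose the next
// nonce from the pending state, which already counts transactions in the
// mempool, instead of reusing a stale nonce computed earlier.
package txmgr

import (
	"context"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func nextNonce(ctx context.Context, client *ethclient.Client, from common.Address) (uint64, error) {
	// A nonce snapshot taken before earlier txs were mined goes stale and
	// produces "nonce too low" on resend; PendingNonceAt reflects both mined
	// and currently pending transactions for the account.
	return client.PendingNonceAt(ctx, from)
}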

Consider bigger batches

  • The zkEVM limits batch size for L1 reasons that do not apply here. Consider increasing the batch limit after exploring the implications.

[BUG] When stress testing in a localnet, different L2 blocks sometimes have the same timestamp

When I stress test, there is a chance that different L2 blocks get the same timestamp.

The log is shown in the attached screenshots.

Block 62 (0x3e) is as follows:
{ "jsonrpc": "2.0", "id": 1, "result": { "parentHash": "0x4505c38054bf7513c6fedc1bedf68145ff7d7c889492ecdb984e5d882673d900", "sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347", "miner": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266", "stateRoot": "0x58354543fd1049901c75a23a2b70e3c013a7ae1b88b9224a1a2c3d98b42aeda7", "transactionsRoot": "0xa0377fb2b2af4ce63dca3bf99cdedbc216f290826960cfbe2a6df4f29cf8216d", "receiptsRoot": "0x84cd6ea60cefffe0982e6df603133a2b17c64938930bba7a3491789d261618e8", "logsBloom": "0x00000008840001000080400240000000004200000005010000000000201802000000000840000200000000000000000000000008000040800000004000000000308020800000000008010008000400440000001008000010000000000100000020000000000000000180000000000000000400000000000000a020100000001000000000005000000000004040080000000010004000000000000000000100000000040000100800000000000000000000000000004000000000800000000000080080820010400080a0000040004000000188080000000080100000000000802000008000000020101000000084000000000008080008000000100800100000", "difficulty": "0x0", "totalDifficulty": "0x0", "size": "0xc45", "number": "0x3e", "gasLimit": "0x4000000000000", "gasUsed": "0x82497", "timestamp": "0x667bc656", "extraData": "0x", "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "nonce": "0x0000000000000000", "hash": "0xdf2c8db96d6e671b13050254dfdb32928cf57d727803bd59629f8fde21792e33", ......... } }

Block 63 (0x3f) is as follows:
{ "jsonrpc": "2.0", "id": 1, "result": { "parentHash": "0xdf2c8db96d6e671b13050254dfdb32928cf57d727803bd59629f8fde21792e33", "sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347", "miner": "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266", "stateRoot": "0xe3879c0822c85b437e0fac91fa2a5c728b3436feee92c9b98c5689de1c72137b", "transactionsRoot": "0x80dbb956f8852c6409b2e6e499d643d6f6bccd0e0a9334bbc9d8d3c33aaf09fe", "receiptsRoot": "0x58560235c9b8022ef3985c818392f023230cc49a3b127f2c963749fa2251276b", "logsBloom": "0x0480000a840001040080400240000000004200020005010000000000201802000040000840000200000000200000000000000008000040820000004000000400308020800000000008010808000400440000001008000090000000000100000020000040000000000180000000000000000400010008000000a0201000000010000400000050000000000040408800000000100040000000000000002001000200000400001008400000000000000000000000000040000000008000000000000801a0820010400080a08000c0004000000188080000080880100000000000802000008000000020101000040084000004000008080008000000100810100000", "difficulty": "0x0", "totalDifficulty": "0x0", "size": "0x85ee", "number": "0x3f", "gasLimit": "0x4000000000000", "gasUsed": "0x68c5b9", "timestamp": "0x667bc656", "extraData": "0x", "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "nonce": "0x0000000000000000", "hash": "0xcaa1c31ff83e67caf8e120e9e7b52574ee08d3ab264e2720401e8d6f26678e50", ......... } }

Blocks 62 and 63 have the same timestamp (0x667bc656).

RPC node and explorer showing invalid genesis block info.

Hello,

I created a genesis.json file and passed it to the node via:
--network custom --custom-network-file /app/genesis.json

However, the bridge service fails with "no contract found at address" when it tries to eth_getCode for L2PolygonBridgeAddresses.

This address (along with all the other contracts and bytecodes) is present in genesis.json, but my guess is that the info from genesis.json should somehow be imported into the state DB, and that isn't happening.

Can you please advise?

CDK not running with the custom network.

The node cannot create blocks or synchronize, and the zkProver does not bind port 50071 after it runs.

Output Parameters

{
 "realVerifier": false,
 "trustedSequencerURL": "http://127.0.0.1:8123",
 "networkName": "only",
 "version": "0.0.1",
 "trustedSequencer": "0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63",
 "chainID": 728697,
 "trustedAggregator": "0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63",
 "trustedAggregatorTimeout": 604799,
 "pendingStateTimeout": 604799,
 "forkID": 6, // Can i change to Sepolia ChainId ? e.g: 11155111 
 "admin": "0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63",
 "cdkValidiumOwner": "0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63",
 "timelockAddress": "0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63",
 "minDelayTimelock": 3600,
 "salt": "0x0000000000000000000000000000000000000000000000000000000000000000",
 "initialCDKValidiumDeployerOwner": "0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63",
 "maticTokenAddress": "0x58F13A23bcdCE71efF2346FB2003413Baa1E4508",
 "cdkValidiumDeployerAddress": "0x44F3A6F2ef386E7714604126DDB3eb3Ef5f735F6",
 "deployerPvtKey": "deployment address privatekey",
 "maxFeePerGas": "",
 "maxPriorityFeePerGas": "",
 "multiplierGas": "",
 "setupEmptyCommittee": true,
 "committeeTimelock": false
}


How to build:

Setup Database

docker run -d -e POSTGRES_USER=xxxxx -e POSTGRES_PASSWORD=xxxxx -e POSTGRES_DB=postgres -p 5432:5432 postgres:15

Import DB : PGPASSWORD=xxxxx psql -h localhost -U xxxxx -d postgres -p 5432 -a -q -f ./db/scripts/single_db_server.sql
Create DB DAC : PGPASSWORD=xxxxx psql -h localhost -U xxxxx -d postgres -p 5432 -c "CREATE DATABASE committee_db;"
Create DB BRIDGE : PGPASSWORD=xxxxx psql -h localhost -U xxxxx -d postgres -p 5432 -c "CREATE DATABASE bridge_db;"

logs zkevm-node

2024-03-13T11:58:35.250Z	INFO	synchronizer/synchronizer.go:354	L1 state fully synchronized	{"pid": 102908, "version": "v0.0.3"}
2024-03-13T11:58:35.705Z	INFO	ethtxmanager/ethtxmanager.go:255	found 0 monitored tx to process	{"pid": 102908, "version": "v0.0.3"}
2024-03-13T11:58:36.013Z	INFO	sequencesender/sequencesender.go:84	getting sequences to send	{"pid": 102908, "version": "v0.0.3"}
2024-03-13T11:58:36.014Z	INFO	sequencesender/sequencesender.go:198	no batches to be sequenced	{"pid": 102908, "version": "v0.0.3"}
2024-03-13T11:58:36.014Z	INFO	sequencesender/sequencesender.go:90	waiting for sequences to be worth sending to L1	{"pid": 102908, "version": "v0.0.3"}
2024-03-13T11:58:36.134Z	WARN	pool/pool.go:586	No suggested min gas price since: 2024-03-13 11:53:36.133447242 +0000 UTC	{"pid": 102908, "version": "v0.0.3"}
github.com/0xPolygonHermez/zkevm-node/pool.(*Pool).pollMinSuggestedGasPrice
	/root/cdk-validium/cdk-validium-node/pool/pool.go:586
github.com/0xPolygonHermez/zkevm-node/pool.(*Pool).StartPollingMinSuggestedGasPrice.func1
	/root/cdk-validium/cdk-validium-node/pool/pool.go:172

logs zkevm-dac

2024-03-13T11:48:49.181Z	INFO	db/migrations.go:15	running migrations up	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:49.236Z	INFO	db/migrations.go:37	successfully ran 3 migrations	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:50.958Z	INFO	synchronizer/init.go:33	starting search for start block of contract 0x3Cf749F3436e6F90fea577f6F34B58260701947D	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:56.071Z	INFO	sequencer/tracker.go:30	starting sequencer address tracker	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:56.239Z	INFO	sequencer/tracker.go:35	current sequencer addr: 0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:56.407Z	INFO	sequencer/tracker.go:40	current sequencer url: http://127.0.0.1:8123	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:56.407Z	INFO	synchronizer/reorg.go:45	starting block reorganization detector	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:57.081Z	INFO	synchronizer/batches.go:89	starting number synchronizer, DAC addr: 0x41A7DbF1a08E8cf2d76367eC3b4076868E943B63	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:57.081Z	INFO	synchronizer/batches.go:125	starting event producer	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:57.081Z	INFO	rpc/server.go:79	http server started: 0.0.0.0:8444	{"pid": 103234, "version": "v0.0.3"}
2024-03-13T11:48:57.082Z	INFO	synchronizer/batches.go:184	starting event consumer	{"pid": 103234, "version": "v0.0.3"}

logs zkProver

aggregatorClientMockThread() failed calling readerWriter->Read(&aggregatorMessage)
aggregatorClientMockThread() channel broken; will retry in 5 seconds
aggregatorClientMockThread() got: get_status_request { }
aggregatorClientMockThread() sent: get_status_response { status: STATUS_IDLE last_computed_end_time: 1710330366 current_computing_start_time: 1710330366 version_proto: "v0_0_1" version_server: "0.0.1" pending_request_queue_ids: "d01133c3-8dda-4b5e-b9f7-cbdc7c2ddb2f" pending_request_queue_ids: "2beef33b-237d-4eed-9982-6252ab7dc031" pending_request_queue_ids: "fc12cef9-40fc-4569-afcd-36eecb5614c9" prover_name: "test-prover" prover_id: "f5e76ec1-368b-496a-bfad-664d0d4e2bee" number_of_cores: 8 total_memory: 16366884 free_memory: 2250840 fork_id: 6 }
aggregatorClientMockThread() got: get_status_request { }
aggregatorClientMockThread() sent: get_status_response { status: STATUS_IDLE last_computed_end_time: 1710330366 current_computing_start_time: 1710330366 version_proto: "v0_0_1" version_server: "0.0.1" pending_request_queue_ids: "2cc0379f-ae3d-4a8b-8a96-8f6e7181b823" pending_request_queue_ids: "ce485b95-b07e-45a7-ac64-89308cb7e65c" pending_request_queue_ids: "04df05ca-11c2-4bde-b6e3-dabdbfc81898" prover_name: "test-prover" prover_id: "f5e76ec1-368b-496a-bfad-664d0d4e2bee" number_of_cores: 8 total_memory: 16366884 free_memory: 2250840 fork_id: 6 }
aggregatorClientMockThread() got: get_status_request { }

logs postgresql

2024-03-13 12:03:35.331 UTC [60] LOG:  checkpoint starting: time
2024-03-13 12:03:36.651 UTC [60] LOG:  checkpoint complete: wrote 14 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.307 s, sync=0.004 s, total=1.321 s; sync files=12, longest=0.003 s, average=0.001 s; distance=42 kB, estimate=13648 kB
2024-03-13 12:03:37.134 UTC [6211] FATAL:  password authentication failed for user "postgres"
2024-03-13 12:03:37.134 UTC [6211] DETAIL:  Role "postgres" does not exist.
        Connection matched pg_hba.conf line 100: "host all all all scram-sha-256"
2024-03-13 12:03:39.364 UTC [6212] FATAL:  password authentication failed for user "postgres"
2024-03-13 12:03:39.364 UTC [6212] DETAIL:  Role "postgres" does not exist.

docker-compose (v1) vs docker compose (v2) issue

There seems to be a mix of Docker Compose v1 and v2 usage. As the binary to execute changes, this can cause issues in the code: docker-compose is the v1 binary, while docker compose is the v2 subcommand.

As docker-compose has not received updates since July 2023, I'm assuming the desired behavior is to use only v2, which matches what's mentioned in the README.md file on line 79: docker compose ps.

Usage of Compose v2 in this repo, in these locations:

docs/ci/actions.md:73:* External docker images used in the [docker compose file]. For each image the
docs/ci/actions.md:74:code compares the digest existing in the docker compose file with the digest
docs/ci/actions.md:95:[docker compose file]: ../../docker-compose.yml
docs/ci/opsman.md:8:container name (as defined in the [docker compose file]) and a variadic parameter
docs/ci/opsman.md:26:[docker compose file]: ../../docker-compose.yml
README.md:79:docker compose ps

Usage of Compose v1 in this repo, in these locations:

docs/snap_restore.md:82:docker-compose up -d zkevm-sh
docs/snap_restore.md:83:docker-compose exec zkevm-sh /bin/sh
tools/executor/main.go:30:              cmd := exec.Command("docker-compose", "down", "--remove-orphans")
tools/executor/main.go:37:      cmd := exec.Command("docker-compose", "up", "-d", "executor-tool-db")
tools/executor/main.go:43:      cmd = exec.Command("docker-compose", "up", "-d", "executor-tool-prover")
tools/executor/README.md:73:docker-compose up -d zkevm-sync
test/Makefile:1:DOCKERCOMPOSE := docker-compose -f docker-compose.yml
test/Makefile:83:RUNDACDB := docker-compose up -d cdk-validium-data-node-db
test/Makefile:84:STOPDACDB := docker-compose stop cdk-validium-data-node-db && docker-compose rm -f cdk-validium-data-node-db

As an example, one issue (though there may be more) is that if you follow the steps in the README file and only have Compose v2 installed, it will not work unless you change line 1 of test/Makefile from this:

DOCKERCOMPOSE := docker-compose -f docker-compose.yml

to this:

DOCKERCOMPOSE := docker compose -f docker-compose.yml

Pulling zkprover-mock error

The compose file gives an error when I run docker-compose pull:

$ docker-compose pull

WARNING: The DOCKERGID variable is not set. Defaulting to a blank string.
Pulling grafana-db                                  ... done
Pulling cdk-validium-sequencer                      ... done
Pulling cdk-validium-sequence-sender                ... done
Pulling cdk-validium-json-rpc                       ... done
Pulling telegraf                                    ... done
Pulling grafana                                     ... done
Pulling cdk-validium-aggregator                     ... done
Pulling cdk-validium-sync                           ... done
Pulling cdk-validium-eth-tx-manager                 ... done
Pulling cdk-validium-l2gaspricer                    ... done
Pulling cdk-validium-state-db                       ... done
Pulling cdk-validium-pool-db                        ... done
Pulling cdk-validium-event-db                       ... done
Pulling cdk-validium-explorer-l1                    ... done
Pulling cdk-validium-explorer-l1-db                 ... done
Pulling cdk-validium-explorer-l2                    ... done
Pulling cdk-validium-explorer-json-rpc              ... done
Pulling cdk-validium-explorer-l2-db                 ... done
Pulling cdk-validium-mock-l1-network                ... done
Pulling cdk-validium-prover                         ... done
Pulling zkprover-mock                               ... error
Pulling cdk-validium-approve                        ... done
Pulling cdk-validium-permissionless-db              ... done
Pulling cdk-validium-permissionless-node-forced-DAC ... done
Pulling cdk-validium-permissionless-node            ... done
Pulling cdk-validium-data-node-db                   ... done
Pulling cdk-validium-data-availability              ... done
Pulling dac-setup-committee                         ... done
Pulling cdk-validium-permissionless-prover          ... done
Pulling cdk-validium-metrics                        ... done

It seems the image is missing.

Halting finalizer, error: batch sanity check error

I attempted to conduct a test involving abnormal restarts. Following several restart attempts, I encountered an error message: "Halting finalizer, error: batch sanity check error." This resulted in the halting of blocks and the suspension of all transactions.

I think the problem occurs when the system is interrupted right while the state root is being saved, so the new state root is not persisted to the database.
Is there any way to fix this problem?

I tested on version 0.6.4+cdk.
error log:

2024-04-04T04:26:43.430Z        ERROR   sequencer/finalizer.go:792      halting finalizer, error: batch sanity check error. Check previous errors in logs to know which was the cause%!(EXTRA string=
/home/runner/work/cdk-validium-node/cdk-validium-node/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()
/home/runner/work/cdk-validium-node/cdk-validium-node/log/log.go:251 github.com/0xPolygonHermez/zkevm-node/log.Errorf()
/home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/finalizer.go:792 github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).Halt()
/home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/batch.go:358 github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).batchSanityCheck.func1()
/home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/batch.go:421 github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).batchSanityCheck()
/home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/batch.go:56 github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).processBatchesPendingtoCheck()
/home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/finalizer.go:149 github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).Start()
)       {"pid": 1, "version": "0.6.4+cdk"}
github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).Halt
        /home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/finalizer.go:792
github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).batchSanityCheck.func1
        /home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/batch.go:358
github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).batchSanityCheck
        /home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/batch.go:421
github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).processBatchesPendingtoCheck
        /home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/batch.go:56
github.com/0xPolygonHermez/zkevm-node/sequencer.(*finalizer).Start
        /home/runner/work/cdk-validium-node/cdk-validium-node/sequencer/finalizer.go:149

Unable to Settle Transactions on L1 via Sequencer and DAC

I have successfully built and deployed all CDK-validium-node components using cdk-validium-binary. Individually, each component (rpc, txManager, sequencer, etc.) is functioning correctly. I am able to perform transactions on L2 (CDK) without any issues.

However, the batch of transactions is not settling on L1 via the sequencer and DAC. Upon inspecting the SequenceSender component of CDK, I encountered the following error:

error getting signatures and addresses from the data committee: too many members failed to send their signature
error when trying to get signature from 0xc920737273a002b541bB1d8298Db197E9CbEd1ae: -32000 unauthorized%!(EXTRA string=

Configuration Details:

CDK-validium-node Version: v0.0.3-RC2

Issue Analysis:

The error suggests an "unauthorized" access issue when the sequencer attempts to request and access the DAC.

Steps to Reproduce:

  • Deploy CDK

  • Perform transactions on L2 (CDK).

  • Observe the failure to settle transactions on L1 via the sequencer and DAC.

Expected Behavior:

Transactions should be successfully settled on L1 via the sequencer and DAC.

Additional Context:

I have meticulously reviewed the configuration details and ensured that all individual components are working seamlessly. The issue seems to be related to unauthorized access when the SequenceSender interacts with the DAC.

Additionally, I am attaching screenshots of the logs for both the SequenceSender and the CDK data availability node for further reference:

Screenshot 2024-01-30 at 4 08 58 PM
Screenshot 2024-01-30 at 4 32 41 PM
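
For context, a hedged sketch of the kind of check a DAC member typically performs (generic go-ethereum crypto, not the data-availability repo's actual code): it recovers the signer of the request and compares it against the address it considers authorized, which is why a mismatched sequencer key surfaces as "unauthorized".

// Generic sketch (not cdk-data-availability's actual code): recover the signer
// of a signed payload and compare it with the address the DAC member expects.
// A request signed by any other key would be rejected as unauthorized.
package dac

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

func isAuthorized(payloadHash []byte, signature []byte, expectedSequencer common.Address) bool {
	pub, err := crypto.SigToPub(payloadHash, signature)
	if err != nil {
		return false
	}
	return crypto.PubkeyToAddress(*pub) == expectedSequencer
}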

rpc error: batch requests are disabled

Problem

I was testing the quickstart and saw my L2 tx go through in MetaMask. However, it doesn't show up in the explorer.

Root cause analysis

I checked that my node config has:

[RPC]
...
MaxRequestsPerIPAndSecond = 5000
...

however, I still get this error:

zkevm-explorer-l2  | Retrying.
zkevm-explorer-l2  | 2024-01-16T07:04:06.434 application=indexer fetcher=block_catchup first_block_number=2 last_block_number=0 missing_block_count=3 shrunk=false [info] Index had to catch up.
zkevm-explorer-l2  | 2024-01-16T07:04:06.434 application=indexer fetcher=block_catchup [info] Checking if index needs to catch up in 2500ms.
zkevm-explorer-l2  | 2024-01-16T07:04:08.947 application=indexer fetcher=block_catchup first_block_number=2 last_block_number=0 [error] ** (EthereumJSONRPC.DecodeError) Failed to decode Ethereum JSONRPC response:
zkevm-explorer-l2  | 
zkevm-explorer-l2  |   request:
zkevm-explorer-l2  | 
zkevm-explorer-l2  |     url: http://zkevm-explorer-json-rpc:8124
zkevm-explorer-l2  | 
zkevm-explorer-l2  |     body: [{"id":0,"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x2",true]},{"id":1,"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x1",true]},{"id":2,"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x0",true]}]
zkevm-explorer-l2  | 
zkevm-explorer-l2  |   response:
zkevm-explorer-l2  | 
zkevm-explorer-l2  |     status code: 400
zkevm-explorer-l2  | 
zkevm-explorer-l2  |     body: batch requests are disabled
...

I can also reproduce it with curl and get the "batch requests are disabled" response:

curl -X POST -H "Content-Type: application/json" --data '[
  {
    "id": 0,
    "jsonrpc": "2.0",
    "method": "eth_getBlockByNumber",
    "params": ["0x2", true]
  },
  {
    "id": 1,
    "jsonrpc": "2.0",
    "method": "eth_getBlockByNumber",
    "params": ["0x1", true]
  },
  {
    "id": 2,
    "jsonrpc": "2.0",
    "method": "eth_getBlockByNumber",
    "params": ["0x0", true]
  }
]' http://localhost:8124
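
The same batch can also be reproduced from Go with go-ethereum's rpc client; a sketch mirroring the curl above (the endpoint is the one from the compose file):

// Sketch: reproduce the batched eth_getBlockByNumber calls with go-ethereum's
// rpc client. If the server has batch requests disabled, BatchCall returns an
// error (here, the HTTP 400 "batch requests are disabled" response).
package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8124")
	if err != nil {
		log.Fatal(err)
	}
	blocks := make([]map[string]interface{}, 3)
	batch := []rpc.BatchElem{
		{Method: "eth_getBlockByNumber", Args: []interface{}{"0x2", true}, Result: &blocks[0]},
		{Method: "eth_getBlockByNumber", Args: []interface{}{"0x1", true}, Result: &blocks[1]},
		{Method: "eth_getBlockByNumber", Args: []interface{}{"0x0", true}, Result: &blocks[2]},
	}
	if err := client.BatchCall(batch); err != nil {
		log.Fatal(err) // e.g. 400 Bad Request: batch requests are disabled
	}
	for _, e := range batch {
		fmt.Println(e.Method, e.Error)
	}
}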

The RPC service is defined in the compose file as:

  zkevm-explorer-json-rpc:
    container_name: zkevm-explorer-json-rpc
    image: hermeznetwork/cdk-validium-node:v0.0.3-RC2
    ports:
      - 8124:8124
      - 8134:8134 # needed if WebSockets enabled
    environment:
      - ZKEVM_NODE_STATE_DB_HOST=zkevm-state-db
      - ZKEVM_NODE_POOL_DB_HOST=zkevm-pool-db
      - ZKEVM_NODE_RPC_PORT=8124
      - ZKEVM_NODE_RPC_WEBSOCKETS_PORT=8134
    volumes:
      - ./config/node/config.toml:/app/config.toml
      - ./config/node/genesis.config.json:/app/genesis.json
    command:
      - "/bin/sh"
      - "-c"
      - "/app/zkevm-node run --network custom --custom-network-file /app/genesis.json --cfg /app/config.toml --components rpc --http.api eth,net,debug,zkevm,txpool,web3"

This seems alright, so I don't know why I got this error. Any ideas?

v0.0.2-hotfix1 sequencer storing tx issue

Hey there,

After changing to v0.0.2-hotfix1 the maxFeePerDataGas issue is resolved; however, when performing a transaction on L2, the sequencer crashes when trying to store the tx:

2024-02-01T04:37:47.798Z        DEBUG   sequencer/dbmanager.go:172      Storing tx 0x0ec1d4c9cd53b8581863877caea32d0f1cd0470a675806c446f41ee1ac7e1d78   {"pid": 7, "version": "v0.0.2-hotfix1"}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0xf58240]

goroutine 209 [running]:
github.com/ethereum/go-ethereum/trie.(*StackTrie).hash(0xc00061aeb0, 0xc0006435c0, {0x0, 0x0, 0x0})
        /go/pkg/mod/github.com/ethereum/[email protected]/trie/stacktrie.go:413 +0x3a0
github.com/ethereum/go-ethereum/trie.(*StackTrie).Hash(0xc0002a0700?)
        /go/pkg/mod/github.com/ethereum/[email protected]/trie/stacktrie.go:468 +0x39
github.com/ethereum/go-ethereum/core/types.DeriveSha({0x1bfdba0, 0xc00081b920}, {0x1c00710, 0xc00061aeb0})
        /go/pkg/mod/github.com/ethereum/[email protected]/core/types/hashing.go:133 +0x37f
github.com/ethereum/go-ethereum/core/types.NewBlock(0x868de7d531c0b178?, {0xc00064c388?, 0x1, 0x1}, {0x28c1220, 0x0, 0x0}, {0xc00064c390, 0x1, 0x1}, ...)
        /go/pkg/mod/github.com/ethereum/[email protected]/core/types/block.go:230 +0x119
github.com/0xPolygon/cdk-validium-node/state.(*State).StoreTransaction(0xc0006d2000, {0x1c04e38, 0xc00004e040}, 0x4e?, 0xc000704b40, {0x78, 0xa1, 0x6, 0x22, 0x39, ...}, ...)
        /src/state/transaction.go:933 +0x4c5
github.com/0xPolygon/cdk-validium-node/sequencer.(*dbManager).StoreProcessedTxAndDeleteFromPool(0xc0006cc140, {0x1c04e38, 0xc00004e040}, {{0xe, 0xc1, 0xd4, 0xc9, 0xcd, 0x53, 0xb8, ...}, ...})
        /src/sequencer/dbmanager.go:178 +0x172
github.com/0xPolygon/cdk-validium-node/sequencer.(*finalizer).storeProcessedTx(0xc000966300, {0x1c04e38, 0xc00004e040}, {{0xe, 0xc1, 0xd4, 0xc9, 0xcd, 0x53, 0xb8, ...}, ...})
        /src/sequencer/finalizer.go:782 +0x14c
github.com/0xPolygon/cdk-validium-node/sequencer.(*finalizer).storePendingTransactions(0xc000966300, {0x1c04e38, 0xc00004e040})
        /src/sequencer/finalizer.go:216 +0x270
created by github.com/0xPolygon/cdk-validium-node/sequencer.(*finalizer).Start
        /src/sequencer/finalizer.go:187 +0x2e5

What's the suggested workflow to edit the test docker image for the L1?

My goal is to edit the Solidity contract that verifies a zkevm proof on the L1 for test purposes.

The first step is to identify which of the docker containers in test/docker-compose.yml corresponds to the L1. In the README I see:

L2 RPC endpoint: http://localhost:8123
L2 Chain ID: 1001
L1 RPC endpoint: http://localhost:8545
L1 Chain ID: 1337

And in test/docker-compose.yml I see

zkevm-mock-l1-network:
  container_name: zkevm-mock-l1-network
  image: 0xpolygon/cdk-validium-contracts:elderberry-fork.9-geth1.13.11
  ports:
    - 8545:8545

Docker container name zkevm-mock-l1-network strongly hints that this container is the L1. This suggestion is backed by the use of port 8545. The associated docker image name 0xpolygon/cdk-validium-contracts:elderberry-fork.9-geth1.13.11 suggests to me that this image is built in repo cdk-validium-contracts. In particular, I expect to be able to clone this repo, edit its contents, rebuild its docker image, then use it as a drop-in replacement for container zkevm-mock-l1-network in the present repo.

Over in the cdk-validium-contracts repo I see a couple of proof verification contracts.

I'm able to edit these contracts and rebuild the docker image as per docs. So far so good!

The docker image I built is named hermeznetwork/geth-cdk-validium-contracts. Unfortunately, this name does not match any image name appearing in test/docker-compose.yml. At this point it's not clear which of the containers in test/docker-compose.yml is the intended home of the image I just built.

I edited test/docker-compose.yml to point container zkevm-mock-l1-network at my new image hermeznetwork/geth-cdk-validium-contracts. But that broke the test: 8/13 containers exited immediately, most of which contained fatal log messages like no contract code at given address.

I see here that the new image hermeznetwork/geth-cdk-validium-contracts specifies chain ID 1001:

https://github.com/0xPolygon/cdk-validium-contracts/blob/d549046514e931c2969ca97e935e711930f32e6b/deployment/deploy_parameters.json.example#L7

From the above cited README I see that chain ID 1001 is the L2, not the L1. This suggests to me that this image is intended for the L2, not the L1.

At this point I decide to seek help: what's the suggested workflow to edit the test docker image for the L1?

Cannot process any tx when AccountQueue limit is reached

System information

zkEVM Node version: CDK: v0.3.1-RC2
OS & Version: Ubuntu
Network: local devnet setup with https://github.com/Snapchain/zkValidium-quickstart/

Steps to reproduce the behaviour

We were using the polygon-cli tool to load test:

polycli loadtest --verbosity 700 --chain-id 1001  --concurrency 1 --requests 6000 --rate-limit 6000 --mode i --rpc-url http://<REDACTED>:8123/  --private-key "<REDACTED>" --legacy 

and got:

ERR Recorded an error while sending transactions error="nonce too high" nonce=5614 request time=552

The error originates from https://github.com/0xPolygon/cdk-validium-node/blob/develop/pool/pool.go#L496, where AccountQueue is configured in the node config file (64 was used in the quickstart).

Expected behaviour

When the AccountQueue limit is hit, the chain should simply ignore new transactions from that sender (and not add them to storage) until the older ones are processed.

The chain should continue to process other senders' transactions normally.

Actual behaviour

The chain cannot process any transaction from any account. When I send a transaction from other accounts, it's stuck in "pending" status in my MetaMask.
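
A hypothetical sketch of the expected per-sender behaviour described above (illustrative names, not the repo's pool code): the queue limit should only gate the offending sender's transactions, so other senders keep being processed.

// Hypothetical pool-side admission check (illustrative, not the repo's actual
// pool code): the AccountQueue limit only rejects the offending sender's
// transaction; it must never block other senders or already-queued txs.
package pool

import "errors"

func acceptTx(txNonce, senderCurrentNonce, senderQueuedTxs, accountQueue uint64) error {
	if txNonce < senderCurrentNonce {
		return errors.New("nonce too low")
	}
	if senderQueuedTxs >= accountQueue || txNonce >= senderCurrentNonce+accountQueue {
		// Reject only this sender's tx; processing for everyone else continues.
		return errors.New("account queue limit reached for this sender")
	}
	return nil
}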
