
bsc-snapshots's Introduction

bsc-snapshots

1.Legacy Full Node

The snapshots listed below are all in PBSS & PebbleDB mode. If you need a hash-based snapshot, check out FAQ Q2.

1.1.Endpoints

Path-Based-State-Scheme (recommended)

Multi-Databases-PBSS (new feature). Multi-database support is a new feature in bsc v1.4.6. When the user runs a node with one of the multi-database snapshots below, the feature is enabled automatically.
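
For reference, starting a node on the extracted data directory might look like the following (illustrative only; the config path, the ${BSC_DataDir} variable, and the cache size are assumptions that depend on your own setup, not part of the snapshot itself):

./geth --config ./config.toml --datadir ${BSC_DataDir} --cache 8000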

1.2.Usage

Step 1: Preparation

  • Make sure your hardware meets the suggested requirements.
  • A disk with enough free storage, at least twice the size of the snapshot (see the quick check below).
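
A quick way to sanity-check free space before downloading (a minimal sketch, assuming the snapshot will be stored on the filesystem of the current directory):

# available space on the filesystem of the current directory
df -h .
# rule of thumb: available space should be at least 2x the compressed snapshot size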

Step 2: Download && Uncompress

  • Copy the above snapshot URL.
  • Download: wget -O geth.tar.lz4 "<paste snapshot URL here>". It will take one or two hours to download the snapshot; you can put it in the background with nohup wget -O geth.tar.lz4 "<paste snapshot URL here>" &

If you need to speed up the download, use aria2c:

aria2c -o geth.tar.lz4 -s14 -x14 -k100M https://pub-c0627345c16f47ab858c9469133073a8.r2.dev/{filename}

But aria2c may occasionally fail, in which case you need to rerun the download command. To make this convenient, you can use the following script: save it to a file download.sh and run nohup ./download.sh "<paste snapshot URL here>" <your dir> &

#!/bin/bash
# Retry an aria2c download until it succeeds; aria2c resumes partial downloads
# from its .aria2 control file, so re-running is safe.
if [ $# -eq 1 ]; then
        dir=$(pwd)
elif [ $# -eq 2 ]; then
        dir=$2
else
        echo "Usage: $0 <uri> [dir]"
        exit 1
fi
uri=$1
filename=$(basename "$uri")
status=-1
while (( status != 0 ))
do
        if pgrep -x aria2c > /dev/null; then
                # Another aria2c is already running; wait before re-checking.
                sleep 10
                continue
        fi
        aria2c -d "$dir" -o "$filename" -s14 -x14 -k100M "$uri"
        status=$?
        echo "aria2c exited with status $status."
        case $status in
                3)
                        echo "Resource not found on the server."
                        exit 3
                        ;;
                9)
                        echo "No space left on device."
                        exit 9
                        ;;
                *)
                        continue
                        ;;
        esac
done
echo "Download succeeded."
exit 0
  • Uncompress: tar -I lz4 -xvf geth.tar.lz4. It will take more than two hours to uncompress. You can put it in the background with nohup tar -I lz4 -xvf geth.tar.lz4 &
  • You can combine the above steps by running a script:
wget -O geth.tar.lz4  "<paste snapshot URL here>"
tar -I lz4 -xvf geth.tar.lz4
  • If you do not need to store the archive for use with other nodes, you may also extract it while downloading to save time and disk space:
wget -q -O - <snapshot URL> | tar -I lz4 -xvf -

Step 3: Replace Data

  • First, stop the running bsc client, if any, with kill {pid}, and make sure it has fully shut down.
  • Consider backing up the original data: mv ${BSC_DataDir}/geth/chaindata ${BSC_DataDir}/geth/chaindata_backup; mv ${BSC_DataDir}/geth/triecache ${BSC_DataDir}/geth/triecache_backup
  • Replace the data: mv server/data-seed/geth/chaindata ${BSC_DataDir}/geth/chaindata; mv server/data-seed/geth/triecache ${BSC_DataDir}/geth/triecache
  • Start the bsc client again and check the logs (the steps are consolidated in the sketch below).
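
Putting Step 3 together, a minimal sketch (assuming ${BSC_DataDir} points at your node's data directory and the archive was extracted into ./server in the current directory):

# stop the client first; find the pid with e.g. pgrep geth
kill {pid}
# back up the original data
mv ${BSC_DataDir}/geth/chaindata ${BSC_DataDir}/geth/chaindata_backup
mv ${BSC_DataDir}/geth/triecache ${BSC_DataDir}/geth/triecache_backup
# move the snapshot data into place
mv server/data-seed/geth/chaindata ${BSC_DataDir}/geth/chaindata
mv server/data-seed/geth/triecache ${BSC_DataDir}/geth/triecache
# restart the client and watch the logs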

1.3.About the Snapshot with Multi-Database

The chaindata of the snapshot is divided into three stores: BlockStore, TrieStore, and OriginalStore.

TrieStore: all trie nodes of the current state, plus historical state data for roughly the most recent 90,000 blocks, are stored here. The data lives in ${BSC_DataDir}/geth/chaindata/state.

BlockStore: block-related data is stored here, including headers, bodies, receipts, difficulties, number-to-hash and hash-to-number indexes, and historical block data. The data lives in ${BSC_DataDir}/geth/chaindata/block.
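
After extraction, you can confirm the layout described above (a sketch; the exact contents will differ):

ls ${BSC_DataDir}/geth/chaindata
# expect the sub-databases described above, e.g. block/ and state/,
# alongside the remaining chaindata files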

If the user intends to keep all databases on the same disk, they can simply start the client by following the steps above after extracting the snapshot file, without any additional startup parameters.

If the user wants to store different databases on different disks, move the folder corresponding to TrieStore or BlockStore to another directory, then create a symbolic link with the same name as the folder, using an absolute path, inside the chaindata directory. For example:

mv ${BSC_DataDir}/geth/chaindata/state <move-directory>
ln -s <move-directory>  ${BSC_DataDir}/geth/chaindata/state

After the symbolic link is created, start the bsc client again and check the logs. Because the trie store is the largest, we recommend placing the trie database on a separate disk for better performance; the block store can be relocated the same way, as sketched below.
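
The same pattern applies to the block store; a sketch, where <block-directory> is a hypothetical destination on another disk:

mv ${BSC_DataDir}/geth/chaindata/block <block-directory>
ln -s <block-directory> ${BSC_DataDir}/geth/chaindata/block
# verify the symbolic links before restarting the client
ls -l ${BSC_DataDir}/geth/chaindata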

2.Snapshots Provided by Community

Special thanks to BNB48Club for contributing another snapshot dump; you can also refer there to download.

3.Erigon-BSC Snapshot (Archive Node)

3.1.Endpoints

To make uploads and downloads more granular and avoid errors with very large files, the snapshot is split into several chunks; please download all of them and concatenate them at the end.

a.Endpoint (Testnet): updated every 6 months

erigon version v1.1.10, Block: 35851654

SHA256 = 7c59f6846eba146a5668e44d3863545375ee52c6c70d3707ab55c2d8fdfdc6bb

testnet_erigon_DB_20231211.tar.lz4

b.Endpoint (Mainnet): updated every three weeks

erigon version v1.2.8

SHA256(mdbx.dat) = 31fdebebe89ab25bf5842fa2055428a14281c8279ce2499eef93fece5a94deea

erigon_data_20240520.lz4.000 md5=5983392858e54bf40e05853294399303

erigon_data_20240520.lz4.001 md5=14ac48c4f866c6f754ac3a2c661d6eb4

erigon_data_20240520.lz4.002 md5=c7c95f96200d35b81e52d50f8ed52ebf

erigon_data_20240520.lz4.003 md5=2f2d918363c992bb9254dd6504d8a541

erigon_data_20240520.lz4.004 md5=9cd5075e8e611a35c44eec3cd31305f0

erigon_data_20240520.lz4.005 md5=0bd256afaf42bfb4b163354142cfda0d

erigon_data_20240520.lz4.006 md5=4f31de846555841326fa04ccd00b5b67

erigon_data_20240520.lz4.007 md5=e2750053a2656b6cbaf743459480c144

3.2.Usage

Step 1: Preparation

Step 2: Download && Concatenate && Uncompress

sudo yum install aria2   # the package provides the aria2c binary
for i in {000..007}; do  # download each chunk of the split archive
        aria2c -s14 -x14 -k100M "https://pub-60a193f9bd504900a520f4f260497d1c.r2.dev/erigon_data_20240416.lz4.${i}"
done
cat erigon_data_20240416.lz4.* > erigon_data_20240416.lz4   # concatenate the chunks
lz4 -d erigon_data_20240416.lz4 mdbx.dat                    # decompress into mdbx.dat

Step 3: Replace Data And Restart erigon

  • Stop the running erigon client with kill {pid}

  • Back up the original data: mv ${erigon_datadir}/chaindata/mdbx.dat ${erigon_datadir}/chaindata/mdbx.dat_backup

  • Replace the data: mv ${erigon_snapshot_dir}/erigon/chaindata/mdbx.dat ${erigon_datadir}/chaindata/mdbx.dat

  • Replace the torrent: mv ${erigon_torrent_dir}/snapshots ${erigon_datadir}/

  • Start the erigon client again and check logs

  • mainnet command sample:

./build/bin/erigon --p2p.protocol=66 --txpool.disable --metrics.addr=0.0.0.0 --log.console.verbosity=dbug --db.pagesize=16k --datadir ${erigon_dir}/data --private.api.addr=localhost:9090 --chain=bsc --metrics --log.dir.path ${erigon_dir}/log
  • testnet command sample:
./build/bin/erigon --txpool.disable --networkid=97 --db.pagesize=16k --p2p.protocol=66 --datadir ./data --chain=chapel --sentry.drop-useless-peers --nat=any --log.dir.path ./log --http.port=8545 --private.api.addr=127.0.0.1:9090 --http --ws --http.api=web3,net,eth,debug,trace,txpool --http.addr=0.0.0.0 --torrent.download.rate=256mb --metrics --metrics.addr=0.0.0.0


bsc-snapshots's Issues

403:Forbidden

wget https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20210615.zip?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Expires=1626406650&Signature=dbE4Dyiq8KHcjbiNCJIftInZmvQ%3D

--2021-06-18 14:47:55-- https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20210615.zip?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO
Resolving s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)... 52.219.68.219
Connecting to s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)|52.219.68.219|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-06-18 14:47:55 ERROR 403: Forbidden.

Snap sync VERY slow. 🐌 🐌 🐌

I've downloaded the latest snapshot, but sync is VERY slow.

t=2021-11-05T18:14:30-0300 lvl=info msg="Imported new chain segment" blocks=5 txs=2426 mgas=395.531 elapsed=8.935s mgasps=44.263 number=12,251,260 hash=0x78cefacb792348786ec1a2263d813a77c7d961be8683ae4107829dfb4542190f age=5d6h25m dirty="1.02 GiB"

I'm using a 4 TB NVMe disk and snapshot sync mode.

Is there any way to speed up synchronization?

How about a mirror in Latin America?

Hey Guys,

800 GB is a lot to transfer from the US or EU to Latin America. A mirror here would help us get the snapshot data quickly.

In Brazil, Amazon is connected to an IX point, so downloading from Amazon Latin America is very fast and cheap.

Can't download snapshot with wget

Here is the link I use:
wget -O geth.tar.gz https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211030.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Signature=uiz5Mvjx3fPSyKWLAAtoMQOZIW0%3D&Expires=1638223921

Here is the wget log:


root@Ubuntu-2004-focal-64-minimal ~ # tail -f wget-log
--2021-11-04 20:00:43--  https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211030.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO
Resolving s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)... 52.219.1.94
Connecting to s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)|52.219.1.94|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-11-04 20:00:44 ERROR 403: Forbidden.

Use `tar` format instead of `zip` to enable streaming extraction and save disk space

The zip format does not support streaming decompression, which means it takes 2x disk space: one copy for the zip and one for the unzipped data. Even though the zip file can be removed after unzipping, the peak disk usage is still 2x. The extra space requirement is a problem, especially on a VPS.

I suggest creating the snapshot with tar. If necessary, tar can be combined with gz or bz2 for compression. Users can then use the following command to download and extract files at the same time, without wasting disk space on the tar file:

wget -O- https://link/to/snapshot.tar.gz | tar xz

download link for Snapshot Bsc-20210819.zip giving 403

When I try to download the latest snapshot, I receive a 403 error. This is the command I try to execute:

wget -O geth.zip https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20210819.zip?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Expires=1632019054&Signature=EEcYvH0Ev4oAFUR90UdfM7WpmLY%3D

Expiration comes too quickly on the EU geth snapshot

The snapshot provided through the EU endpoint expired only a few hours after release:

<Error>
<Code>ExpiredToken</Code>
<Message>The provided token has expired.</Message>
<Token-0>
(...)
</Token-0>
<RequestId>0D65W7BRJANT9XYB</RequestId>
<HostId>x/DEGv+P2cScgsX0J+VC40OtvoO1EA2LC6jqzqm3N5RfuWMaujX4IkhLe2QG+qdnPlaTQMM9Rvc=</HostId>
</Error>

20211002 is incomplete

Only 11 GB when downloaded. Extraction fails with:

server/data-seed/geth/chaindata/16228103.ldb
server/data-seed/geth/chaindata/16299900.ldb
server/data-seed/geth/chaindata/16233099.ldb
server/data-seed/geth/chaindata/16346472.ldb
server/data-seed/geth/chaindata/16320242.ldb

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

How to clean up the redundant node data

My node data is growing very fast and now occupies almost 1 TB of disk space, but the snapshot downloaded from GitHub is only 500 GB. How can I clean up the large amount of redundant data? Please advise, thank you!

Imported new chain segment (FOREVER)

./build/bin/geth --ws --rpc --config ./config.toml --datadir /node/bsc --cache 65536 --rpc.allow-unprotected-txs --txlookuplimit 0 --syncmode fast --snapshot=false

Impossible to get synced. Server with 4 TB NVMe, only syncing one block at a time. The age parameter grows older with each log line.

t=2021-11-09T10:05:03-0300 lvl=info msg="Imported new chain segment" blocks=1 txs=445 mgas=75.629 elapsed=29.430s mgasps=2.570 number=12,502,211 hash=0x35fa6e41ab7666460cf5099a37f660ce7d58153c07e43e6ef58087c031acbdae age=45m36s dirty="65.21 MiB"
t=2021-11-09T10:05:25-0300 lvl=info msg="Imported new chain segment" blocks=1 txs=449 mgas=68.740 elapsed=21.045s mgasps=3.266 number=12,502,212 hash=0xb6b7d464b9ae810f361ecb64468c5b7910a8bad03e92a87c124d2733090d2b48 age=45m55s dirty="72.22 MiB"
t=2021-11-09T10:05:53-0300 lvl=info msg="Imported new chain segment" blocks=1 txs=567 mgas=85.426 elapsed=28.797s mgasps=2.966 number=12,502,213 hash=0x63fb582fea846d8f0c1e1d39428693241b1e7849721abc320f4f6a7d86ecf65b age=46m20s dirty="81.05 MiB"

Snapshot 20211005 fails

# wget -O geth.tar.gz "https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211005.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Signature=l2X%2FbLd%2BggkqG%2FJ4DuhmSr1PfWc%3D&Expires=1636045309"
--2021-10-05 19:49:41--  https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211005.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Signature=l2X%2FbLd%2BggkqG%2FJ4DuhmSr1PfWc%3D&Expires=1636045309
Resolving s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)... 52.219.8.72
Connecting to s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)|52.219.8.72|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4096 (4.0K) [application/x-tar]
Saving to: ‘geth.tar.gz’

geth.tar.gz                               100%[====================================================================================>]   4.00K  --.-KB/s    in 0s      

2021-10-05 19:49:42 (102 MB/s) - ‘geth.tar.gz’ saved [4096/4096]

# tar zxvf geth.tar.gz

gzip: stdin: unexpected end of file
server/data-seed/geth/
server/data-seed/geth/chaindata/
server/data-seed/geth/chaindata/16234653.ldb
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

Can't download latest snapshot with wget

wget -O geth.tar.gz https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211026.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Signature=icbYs7FozimvlNSPJC2kfZfBd6I%3D&Expires=1637876547

results in:

Resolving s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)... 3.5.158.162
Connecting to s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)|3.5.158.162|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-10-26 22:16:25 ERROR 403: Forbidden.

If I just click the S3 link, it starts to download.

Discussion on snapshot creation

Hi,
I was just wondering: how is this snapshot created?
We already have a node running; how can we create a snapshot ourselves? If we can create a snapshot internally within our network, we can save the time and data-transfer cost of downloading one.

Also, if I spin up a new node using the older chaindata, syncing is very slow on the new node. How can we make sure this does not happen?

Europe Snapshot is corrupted

gzip: stdin: invalid compressed data--format violated
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

msg="Aborting state snapshot generation"

Hello,
My node is synced with 1.1.4, but it's unstable: it loses synchronization every minute, then re-syncs again.

I got this log:

msg="Aborting state snapshot generation"
msg="Resuming state snapshot generation"

Any idea why?

SyncMode after snapshot

Can't download the latest snapshot.
Also, what sync mode should I use with the snapshot? fast?

Edit: It was my fault; the link needs to be pasted in quotes. I still want to know about the sync mode to use after the snapshot.

Snapshot is unavailable

Commit: 979c1cc

$ curl "https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211027.tar.gz\?AWSAccessKeyId\=AKIAYINE6SBQPUZDDRRO\&Signature\=SxziZSEURl9bcrjEjuE7qYDIGwA%3D\&Expires\=1637946832"                         (pancake-eu/bsc)
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>WF8RT67YW1YPR4HQ</RequestId><HostId>13VRAd0nlPCggglUwpRIkQKv1SAztSMv7/E8D1WNm8dNerKnbEQtXfs3Uy3HZLW28/xOEmtVjHY=</HostId></Error>

Latest snapshot segfaults

Latest snapshot from Nov 4 2021 (first link, https://tf-dex-prod-public-snapshot.s3.amazonaws.com/geth-20211103.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Signature=Tz8cB%2Fp7hN4%2FmqUh5zQcW7xKYuc%3D&Expires=1638571196) produces a segfault on startup.

Alpine Linux, geth 1.1.3

INFO [11-04|19:43:36.417] Smartcard socket not found, disabling    err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [11-04|19:43:36.417] Set global gas cap                       cap=25,000,000
INFO [11-04|19:43:36.417] Allocated trie memory caches             clean=2.40GiB dirty=4.00GiB
INFO [11-04|19:43:36.417] Allocated cache and file handles         database=/ethereum/geth/chaindata cache=6.40GiB handles=524,288
INFO [11-04|19:43:37.333] Opened ancient database                  database=/ethereum/geth/chaindata/ancient readonly=false
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x7a81e5]
goroutine 1 [running]:
github.com/ethereum/go-ethereum/core/rawdb.NewDatabaseWithFreezer(0x1b79480, 0xc000450a00, 0xc0002eb940, 0x20, 0x170df29, 0x11, 0x52e200, 0xc000450a00, 0x0,
    github.com/ethereum/go-ethereum/core/rawdb/database.go:198 +0x405
github.com/ethereum/go-ethereum/core/rawdb.NewLevelDBDatabaseWithFreezer(0xc0002eb920, 0x18, 0x1999, 0x80000, 0xc0002eb940, 0x20, 0x170df29, 0x11, 0x2668800,
    github.com/ethereum/go-ethereum/core/rawdb/database.go:264 +0xf1
github.com/ethereum/go-ethereum/node.(*Node).OpenDatabaseWithFreezer(0xc000dba4e0, 0x17037f4, 0x9, 0x1999, 0x80000, 0x0, 0x0, 0x170df29, 0x11, 0x224e600, ...
    github.com/ethereum/go-ethereum/node/node.go:622 +0x3a6
github.com/ethereum/go-ethereum/node.(*Node).OpenAndMergeDatabase(0xc000dba4e0, 0x17037f4, 0x9, 0x1999, 0x80000, 0x0, 0x0, 0x0, 0x0, 0x170df29, ...)
    github.com/ethereum/go-ethereum/node/node.go:583 +0xb6
github.com/ethereum/go-ethereum/eth.New(0xc000dba4e0, 0xc00027ea00, 0xc0003eca98, 0xc000e7eae0, 0xfe6e10)
    github.com/ethereum/go-ethereum/eth/backend.go:132 +0x336
github.com/ethereum/go-ethereum/cmd/utils.RegisterEthService(0xc000dba4e0, 0xc00027ea00, 0xf, 0x0, 0x2)
    github.com/ethereum/go-ethereum/cmd/utils/flags.go:1803 +0x225
main.makeFullNode(0xc0001c2b00, 0xc000010020, 0x7ffc02d16734, 0xc00017ffe0)
    github.com/ethereum/go-ethereum/cmd/geth/config.go:147 +0x14e
main.geth(0xc0001c2b00, 0x0, 0x0)
    github.com/ethereum/go-ethereum/cmd/geth/main.go:326 +0xf4
gopkg.in/urfave/cli%2ev1.HandleAction(0x14ab600, 0x19d5818, 0xc0001c2b00, 0xc0001f8b40, 0x0)
    gopkg.in/urfave/[email protected]/app.go:490 +0x82
gopkg.in/urfave/cli%2ev1.(*App).Run(0xc0000f11e0, 0xc00003c2c0, 0x29, 0x2c, 0x0, 0x0)
    gopkg.in/urfave/[email protected]/app.go:264 +0x5f5
main.main()
    github.com/ethereum/go-ethereum/cmd/geth/main.go:266 +0x55

Europe link?

The average download speed for this snapshot is 6 Mb/s from my server in Germany.
Does anyone have a good link to share in Europe?
Thanks!

Snapshot not syncing?

Hi, I downloaded the latest snapshot, geth 1.1.4, and the config.toml for the 1.1.4 mainnet release, and following the docs it seems to get stuck without doing anything.

I renamed the original "server" folder to node, in the same folder as the geth executable, and ran the following:

./geth_1.1.4 --config mainnet_1.1.3/config.toml --syncmode snap --datadir ./node --cache 8000 --rpc.allow-unprotected-txs --txlookuplimit 0
INFO [11-08|09:42:42.660] Starting Geth on Ethereum mainnet...                                                                                                       
INFO [11-08|09:42:42.663] Maximum peer count                       ETH=30 LES=0 total=30                                                                             
INFO [11-08|09:42:42.663] Smartcard socket not found, disabling    err="stat /run/pcscd/pcscd.comm: no such file or directory"                                       
WARN [11-08|09:42:42.663] Option nousb is deprecated and USB is deactivated by default. Use --usb to enable 

(And then there is silence)

Can someone help me understand what is wrong with this setup?

Thank you!

Configure S3 Cross-Region Replication

As we're all painfully aware, BSC snapshots are currently only hosted in AWS' Asia-Pacific region.

AWS has a Cross-Region Replication (CRR) feature that would make this data available in more local zones.

This should be done in addition to distribution via torrent, as requested in #26, and would likely make torrent availability more feasible, as volunteers would have much easier access to the source data.

BSC snapshot is compressed, that makes for a long decompression time.

@iakisme Snapshots are compressed (compression method: deflated). Please consider using no compression (store) instead, so CPU cycles aren't wasted; LevelDB already has built-in compression (Snappy).
With tar:
tar -cvf geth.tar /path/to/your/node
With zip:
zip -r -0 geth.zip /path/to/your/node

unzip -Z -v geth-20210902.zip

---------------------------

  server/data-seed/geth/chaindata/

  offset of local header from start of archive:   80
                                                  (0000000000000050h) bytes
  file system or operating system of origin:      Unix
  version of encoding software:                   3.0
  minimum file system compatibility required:     MS-DOS, OS/2 or NT FAT
  minimum software version required to extract:   1.0
  compression method:                             none (stored)
  file security status:                           not encrypted
  extended local header:                          no
  file last modified on (DOS date/time):          2021 Sep 2 17:01:08
  file last modified on (UT extra field modtime): 2021 Sep 2 13:01:08 local
  file last modified on (UT extra field modtime): 2021 Sep 2 17:01:08 UTC
  32-bit CRC value (hex):                         00000000
  compressed size:                                0 bytes
  uncompressed size:                              0 bytes
  length of filename:                             32 characters
  length of extra field:                          24 bytes
  length of file comment:                         0 characters
  disk number on which file begins:               disk 1
  apparent file type:                             binary
  Unix file attributes (040755 octal):            drwxr-xr-x
  MS-DOS file attributes (10 hex):                dir

  The central-directory extra field contains:
  - A subfield with ID 0x5455 (universal time) and 5 data bytes.
    The local extra field has UTC/GMT modification/access times.
  - A subfield with ID 0x7875 (Unix UID/GID (any size)) and 11 data bytes:
    01 04 00 00 00 00 04 00 00 00 00.

  There is no file comment.

Central directory entry #3:
---------------------------

  server/data-seed/geth/chaindata/14652061.ldb

  offset of local header from start of archive:   170
                                                  (00000000000000AAh) bytes
  file system or operating system of origin:      Unix
  version of encoding software:                   3.0
  minimum file system compatibility required:     MS-DOS, OS/2 or NT FAT
  minimum software version required to extract:   2.0
  compression method:                             deflated
  compression sub-type (deflation):               normal
  file security status:                           not encrypted
  extended local header:                          no
  file last modified on (DOS date/time):          2021 Sep 2 03:09:42
  file last modified on (UT extra field modtime): 2021 Sep 1 23:09:42 local
  file last modified on (UT extra field modtime): 2021 Sep 2 03:09:42 UTC
  32-bit CRC value (hex):                         327b26b5
  compressed size:                                1960350 bytes
  uncompressed size:                              2137977 bytes
  length of filename:                             44 characters
  length of extra field:                          24 bytes
  length of file comment:                         0 characters
  disk number on which file begins:               disk 1
  apparent file type:                             binary
  Unix file attributes (100644 octal):            -rw-r--r--
  MS-DOS file attributes (00 hex):                none

  The central-directory extra field contains:
  - A subfield with ID 0x5455 (universal time) and 5 data bytes.
    The local extra field has UTC/GMT modification/access times.
  - A subfield with ID 0x7875 (Unix UID/GID (any size)) and 11 data bytes:
    01 04 00 00 00 00 04 00 00 00 00.

  There is no file comment.

err="the method miner_start does not exist/is not available"

Hello,
I am syncing with 1.1.3 and the snapshot from here.
I'm not a miner or validator, just a full node.
I am still in the sync process, and get only 3 blocks per second at most (AX61, NVMe Gen4).
I get lots of errors like this:

err="the method miner_start does not exist/is not available"
and also:
lvl=eror msg="diff layers too new from current"

Any idea why?
Thanks!

Latest snapshot changes the tar path

From the README: server/data-seed/geth/...
From the archive: server/validator/geth/...

This breaks expectations for folks who may have this scripted. Could we request that this path either stays consistent, or that some other mechanism exists to know when it changes?

Create torrent of snapshot

Hello, I've been trying to download the snapshot for 3 days, but every time the download speed gets so slow that the ETA skyrockets.

I'm currently downloading the snapshot at 120 KB/s, which is unbelievable.

Can you create a torrent of the next snapshot so that we can solve the bandwidth problem?

Corrupted zip or corrupted upload?

After a few downloads, I have this checksum: 1998c48f54a6bf477a068973ae800c8b

and a few corrupted files in the zip.

Am I alone with this issue?

The server cannot download the snapshot

@wangkai1994

[root@bsc nodebsc]# wget https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20210628.zip?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Expires=1627549648&Signature=3mXkds54XRh2pTNi0cV5ex9UlTg%3D
[1] 5601
[2] 5602
[root@bsc nodebsc]# --2021-06-30 22:05:54-- https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20210628.zip?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO
Resolving s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)... 52.219.1.6
Connecting to s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)|52.219.1.6|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2021-06-30 22:05:54 ERROR 403: Forbidden.

[1]- Exit 8 wget https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20210628.zip?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO
[2]+ Done Expires=1627549648

How to reduce db size?

Hi.

I started a full sync from scratch, downloaded almost the entire blockchain (everything except the last 3 months so far), and the folder is over 2 TB in size.

But your snapshot of the full blockchain is less than 1 TB.

How is this possible? Do you have a secret for reducing the database size by almost half? Can you share it?

Snapshot google drive

Is it possible for you to upload the snapshot to Google Drive, in addition to or instead of AWS?
I'm located in Europe and have a very slow download speed from US AWS, even though my bandwidth is 500 Mbps.
I'm sure I'm not the only one with this problem; by the time I finish downloading an archive, you have usually updated it once or twice already.

Sync ????????????

The node is really garbage. There are always errors during snap synchronization; when an error occurs, it restarts, rolls back, and resynchronizes. It hasn't finished synchronizing in a month.

What does verified mean?

The latest snapshot is marked as "verified". Does that mean someone used it to sync successfully on a new machine?

Or could you test it on a new machine and provide your system info/hardware details, settings, config.toml, start command, and other details? @iakisme

20211003 Snapshot is incomplete

# wget -O geth.tar.gz "https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211003.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Signature=CAR0%2FNhw8JVq6JPvtwfYh%2BTY084%3D&Expires=1635872506"
--2021-10-04 14:21:16--  https://s3.ap-northeast-1.amazonaws.com/dex-bin.bnbstatic.com/geth-20211003.tar.gz?AWSAccessKeyId=AKIAYINE6SBQPUZDDRRO&Signature=CAR0%2FNhw8JVq6JPvtwfYh%2BTY084%3D&Expires=1635872506
Resolving s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)... 52.219.68.88
Connecting to s3.ap-northeast-1.amazonaws.com (s3.ap-northeast-1.amazonaws.com)|52.219.68.88|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4096 (4.0K) [application/x-tar]
Saving to: ‘geth.tar.gz’

geth.tar.gz         100%[===================>]   4.00K  --.-KB/s    in 0s      

2021-10-04 14:21:16 (170 MB/s) - ‘geth.tar.gz’ saved [4096/4096]

# tar xzvf geth.tar.gz 

gzip: stdin: unexpected end of file
server/data-seed/geth/
server/data-seed/geth/chaindata/
server/data-seed/geth/chaindata/16234653.ldb
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
