
go-opera's Introduction

Opera

EVM-compatible chain secured by the Lachesis consensus algorithm.

Building the source

Building opera requires both Go (version 1.14 or later) and a C compiler. You can install them using your favourite package manager. Once the dependencies are installed, run

make opera

The build output is the build/opera executable.

Running opera

Going through all the possible command line flags is out of scope here, but we've enumerated a few common parameter combos to get you up to speed quickly on how you can run your own opera instance.

Launching a network

You will need a genesis file to join a network, which may be found at https://github.com/Fantom-foundation/lachesis_launch

To launch an opera read-only (non-validator) node for the network specified by the genesis file:

$ opera --genesis file.g

Configuration

As an alternative to passing the numerous flags to the opera binary, you can also pass a configuration file via:

$ opera --config /path/to/your_config.toml

To get an idea of how the file should look, you can use the dumpconfig subcommand to export your existing configuration:

$ opera --your-favourite-flags dumpconfig
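If dumpconfig writes the configuration to stdout (as geth's equivalent subcommand does), the output can be redirected into a file and passed back via --config; the path below is only an example:

$ opera --your-favourite-flags dumpconfig > /path/to/your_config.toml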

Validator

A new validator private key may be created with the opera validator new command:
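$ opera validator new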

To launch a validator, you have to use the --validator.id and --validator.pubkey flags to enable the events emitter:

$ opera --nousb --validator.id YOUR_ID --validator.pubkey 0xYOUR_PUBKEY

opera will prompt you for a password to decrypt your validator private key. Optionally, you can specify the password via a file using the --validator.password flag.
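For example (a sketch; YOUR_ID, the pubkey, and the password file path are placeholders):

$ opera --validator.id YOUR_ID --validator.pubkey 0xYOUR_PUBKEY --validator.password /path/to/password_file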

Participation in discovery

Optionally, you can specify your public IP to strengthen the connectivity of the network. Ensure your TCP/UDP p2p port (5050 by default) isn't blocked by your firewall.

$ opera --nat extip:1.2.3.4

Dev

Running testnet

The network is specified only by its genesis file, so running a testnet node is equivalent to using a testnet genesis file instead of a mainnet genesis file:

$ opera --genesis /path/to/testnet.g # launch node

It may be convenient to use a separate datadir for your testnet node to avoid collisions with other networks:

$ opera --genesis /path/to/testnet.g --datadir /path/to/datadir # launch node
$ opera --datadir /path/to/datadir account new # create new account
$ opera --datadir /path/to/datadir attach # attach to IPC

Testing

Lachesis has extensive unit tests. Use the Go tool to run them:

go test ./...

If everything goes well, it should output something along these lines:

ok  	github.com/Fantom-foundation/go-opera/app	0.033s
?   	github.com/Fantom-foundation/go-opera/cmd/cmdtest	[no test files]
ok  	github.com/Fantom-foundation/go-opera/cmd/opera	13.890s
?   	github.com/Fantom-foundation/go-opera/cmd/opera/metrics	[no test files]
?   	github.com/Fantom-foundation/go-opera/cmd/opera/tracing	[no test files]
?   	github.com/Fantom-foundation/go-opera/crypto	[no test files]
?   	github.com/Fantom-foundation/go-opera/debug	[no test files]
?   	github.com/Fantom-foundation/go-opera/ethapi	[no test files]
?   	github.com/Fantom-foundation/go-opera/eventcheck	[no test files]
?   	github.com/Fantom-foundation/go-opera/eventcheck/basiccheck	[no test files]
?   	github.com/Fantom-foundation/go-opera/eventcheck/gaspowercheck	[no test files]
?   	github.com/Fantom-foundation/go-opera/eventcheck/heavycheck	[no test files]
?   	github.com/Fantom-foundation/go-opera/eventcheck/parentscheck	[no test files]
ok  	github.com/Fantom-foundation/go-opera/evmcore	6.322s
?   	github.com/Fantom-foundation/go-opera/gossip	[no test files]
?   	github.com/Fantom-foundation/go-opera/gossip/emitter	[no test files]
ok  	github.com/Fantom-foundation/go-opera/gossip/filters	1.250s
?   	github.com/Fantom-foundation/go-opera/gossip/gasprice	[no test files]
?   	github.com/Fantom-foundation/go-opera/gossip/occuredtxs	[no test files]
?   	github.com/Fantom-foundation/go-opera/gossip/piecefunc	[no test files]
ok  	github.com/Fantom-foundation/go-opera/integration	21.640s
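To run the tests of a single package, standard Go tooling applies, e.g.:

go test ./evmcore/...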

It is also tested with fuzzing.

Operating a private network (fakenet)

Fakenet is a private network optimized for private testing. It generates a genesis containing N validators with equal stakes. To launch a validator in this network, all you need to do is specify the ID of the validator you wish to launch.

Note that validators' private keys are deterministically generated in this network, so you must use it only for private testing.

Maintaining your own private network is more involved, as a lot of configuration taken for granted in the official networks needs to be set up manually.

To run the fakenet with just one validator (which will work practically as a PoA blockchain), use:

$ opera --fakenet 1/1

To run the fakenet with 5 validators, run the command for each validator:

$ opera --fakenet 1/5 # first node, use 2/5 for second node

To launch a non-validator node in a fakenet, use 0 as the ID:

$ opera --fakenet 0/5

After that, you have to connect your nodes: either connect them statically (see the console sketch below) or specify a bootnode:

$ opera --fakenet 1/5 --bootnodes "enode://<bootnode-pubkey>@<bootnode-ip>:5050"
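To connect nodes statically instead, one option is the geth-style admin API from an attached console (a sketch, assuming the admin API is exposed in your build; the enode URL is a placeholder):

$ opera --datadir /path/to/datadir attach
> admin.addPeer("enode://<peer-pubkey>@<peer-ip>:5050")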

Running the demo

For testing purposes, the full demo may be launched using:

cd demo/
./start.sh # start the Opera processes
./stop.sh # stop the demo
./clean.sh # erase the chain data

Check README.md in the demo directory for more information.

go-opera's People

Contributors

andrecronje, arrivets, calvinchengx, champii, cyberbono3, dev10, devintegral, devintegral2, devintegral7, dzeckelev, hadv, josephg, mauleyzaola, maxime2, mkong, mpitid, okislitsin, orangetest1, peak3d, quan8, rishflab, roanbrand, rus-alex, s4kibs4mi, samuelmarks, sfxdxdev, shdown, stanislavstoyanov, thaarok, uprendis


go-opera's Issues

Proposal to decrease message overheads.

Proposal to decrease message overheads (Bounty/Challenge by Fantom Foundation)

Bounty
$250,000 in DAI

Challenge Description
''The scalability of BFT-based protocols is a major concern. One of the main factors negatively affecting the scalability and performance of BFT-based protocols is that they require n×n broadcast for n replicas (quadratic message complexity). This high communication overhead is to guarantee that consensus will always be reached even under Byzantine failures. Typically the protocols guarantee that agreement will be reached if the total number of Byzantine nodes is less than a third of the total number of nodes. However, the BFT protocols usually do not distinguish the cases where there are failures.'' [1]

Lachesis aBFT uses gossip to propagate messages across Byzantine nodes, but it also suffers from increased message overhead under failures. We propose a challenge to the developer community to help us reduce the message overhead of Lachesis aBFT.

Challenge Requirements
The functional requirements of this challenge are:

  1. Introduce a new implementation of Lachesis aBFT wherein message overheads are smaller.

Judging Criteria
If multiple valid submissions are received and if scored equally, the winning team will be the one that shipped first.

References:

[1] Mohammad M. Jalalzai, Costas Busch, Golden G. Richard III in ''Proteus: A Scalable BFT Consensus Protocol for Blockchains''

Why does the updateStakeTokenizerAddress method of sfc fail to execute?

Hello.
I built the v01.01 version of the opera network. Next, I deployed the StakeTokenizer contract. When I tried to execute the updateStakeTokenizerAddress method of the SFC contract (0xFC00FACE00000000000000000000000000000000), the execution failed. The parameter address is the StakeTokenizer contract address that I deployed.

I called the isOwner method of the SFC contract, and it returned true, so the address from which I execute the transaction should have owner permissions on the SFC.

The StakeTokenizer contract used is
https://github.com/Fantom-foundation/opera-sfc/blob/69d631bd7ca25f81ce943648c30ff09720e5b25b/contracts/sfc/StakeTokenizer.sol
The sfc contract used is
https://github.com/Fantom-foundation/opera-sfc/blob/69d631bd7ca25f81ce943648c30ff09720e5b25b/contracts/sfc/SFC.sol

Segmentation violation error on macOS

Describe the bug
Running a read-only node on macOS Big Sur results in a segmentation violation.

To Reproduce
Steps to reproduce the behavior:

  1. Compile 10ca669 (current HEAD in master).
  2. Run build/opera --nodiscover --genesis mainnet.g

Expected behavior
Opera should start syncing.

Actual behavior
Opera crashes with a segmentation violation:

INFO [09-19|16:46:28.872] Maximum peer count                       total=50
INFO [09-19|16:46:29.841] Genesis is already written               hash=0x4a53c5445584b3bfc20dbfb2ec18ae20037c716f3ba2d9e1da768a9deca17cb4
WARN [09-19|16:46:29.841] Sanitizing invalid parameter MinPrice of gasprice oracle provided=0 updated=0
INFO [09-19|16:46:30.231] Loaded local transaction journal         transactions=0 dropped=0
INFO [09-19|16:46:30.232] Regenerated local transaction journal    transactions=0 accounts=0
INFO [09-19|16:46:30.232] Starting peer-to-peer node               instance=go-opera/v1.0.2-rc.5-10ca6692-1630082373/darwin-amd64/go1.17.1
INFO [09-19|16:46:30.232] Transaction pool price threshold updated price=15,000,000,000
INFO [09-19|16:46:30.378] New local node record                    seq=7 id=c45bf06507c806b6 ip=127.0.0.1 udp=5050 tcp=5050
INFO [09-19|16:46:30.378] Started P2P networking                   self=enode://99e23f783bb135cd9f31332dbce9d9a8f5586df0305fc4a98dc1bf6d3b6e60fbbeca9571fd0b2f301e4d366ccb98615bcfb9fe949a9c7e20ae4a185f6313d482@127.0.0.1:5050
INFO [09-19|16:46:30.379] IPC endpoint opened                      url=/Users/abiro/Library/Lachesis/opera.ipc
WARN [09-19|16:46:30.380] Loaded snapshot journal                  diskroot=c48257..332820 diffs=missing
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0xb01dfacedebac1e pc=0x7fff2033ac9e]

runtime stack:
runtime: unexpected return pc for runtime.sigpanic called from 0x7fff2033ac9e
stack: frame={sp:0x70000f22ce58, fp:0x70000f22cea8} stack=[0x70000f1ad338,0x70000f22cf38)

Full logs: https://pastebin.com/WBaLnMaM

Software version:

  • macOS Big Sur 11.6 (20G165), Intel
  • go1.17.1 darwin/amd64

web3 getPastLogs with topic filter returns wrong transaction index

Describe the bug
When using the web3 method getPastLogs (in my case on the latest block), the transactionIndex in the topic-filtered result differs from the one in the un-topic-filtered result. Not sure if this is web3- or opera-related!

To Reproduce

  • Connect to node via web3 and ipc
  • Subscribe to newBlockHeader event via web3
  • At new block arrival, call web3 getPastLogs twice to compare the results:
      • first call with options fromBlock and toBlock set to newBlockHeader.number
      • second call with the same options plus a topic filter (e.g. the Transfer signature hash)
  • Compare the two results: the transactionIndex of the topic-filtered result (in my case 0) differs from the transactionIndex (1) of the same transaction in the unfiltered result.

My tests were on block number 22877323 and tx 0x98ff13b2f29d381490f3dacc80cbb1f3fd7c35c508151b1eaa6c80907f6a492d
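A minimal sketch of the comparison described above, assuming a web3 v1.x instance connected over IPC (the IPC path and the Transfer signature hash are examples):

const Web3 = require('web3');
const net = require('net');
const web3 = new Web3('/path/to/opera.ipc', net); // hypothetical IPC path

// ERC-20 Transfer event signature hash, used here as the example topic filter
const TRANSFER_SIG = '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef';

web3.eth.subscribe('newBlockHeaders', async (err, header) => {
  if (err) return console.error(err);
  const range = { fromBlock: header.number, toBlock: header.number };
  const unfiltered = await web3.eth.getPastLogs(range);
  const filtered = await web3.eth.getPastLogs({ ...range, topics: [TRANSFER_SIG] });
  // For the same transactionHash, transactionIndex should agree between the two results
  console.log(unfiltered.map(l => [l.transactionHash, l.transactionIndex]));
  console.log(filtered.map(l => [l.transactionHash, l.transactionIndex]));
});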

Expected behavior
The transactionIndex in both results should be equal.


Desktop (please complete the following information):

  • OS: Ubuntu 20 LTS


Cannot open database

My system crashed. When I try to start opera again, I receive this message:

Fatal: Failed to make engine: failed to open existing databases: dirty state: gossip-23111: DE001695004CFEBA6D8516950050E46F8622

Is it possible to fix it? Or must I start from scratch again?

Thanks in advance

getPastLogs returns validated logs when "pending" is provided

Good day! I noticed the following during testing with a remote node.

Problem
When using getPastLogs (https://web3js.readthedocs.io/en/v1.5.2/web3-eth.html?highlight=logs#getpastlogs) and providing pending to both fromBlock and toBlock, the RPC node returns validated transaction logs instead of pending logs.

Reproduction steps
Although pending logs are requested, the following request returns logs from validated transactions:

curl https://rpc.ftm.tools -X POST --data '{"method":"eth_getLogs","params":[{"fromBlock": "pending", "toBlock": "pending"}],"id":1,"jsonrpc":"2.0"}'

make opera error 128

I downloaded the go-opera master ZIP and unzipped it on Ubuntu 18.04. Running make opera produces the following:

GIT_COMMIT=`git rev-list -1 HEAD` && \
GIT_DATE=`git log -1 --date=short --pretty=format:%ct` && \
go build \
    -ldflags "-s -w -X github.com/Fantom-foundation/go-opera/cmd/opera/launcher.gitCommit=${GIT_COMMIT} -X github.com/Fantom-foundation/go-opera/cmd/opera/launcher.gitDate=${GIT_DATE}" \
    -o build/opera \
    ./cmd/opera
fatal: not a git repository (or any of the parent directories): .git
make: *** [Makefile:6: opera] Error 128
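For reference, the Makefile shown above embeds the commit hash via git rev-list -1 HEAD, so the source tree must be a git checkout; a ZIP archive has no .git directory, which is exactly what the fatal message reports. Cloning the repository instead should avoid the error:

$ git clone https://github.com/Fantom-foundation/go-opera.git
$ cd go-opera && make opera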

Transaction processing is too slow

Server: Ubuntu 18.04 LTS
CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
RAM: 16 GB

[Opera.Emitter]
VersionToPublish = "1.0.0-rc.0"

I used fakenet to build 3 nodes and execute transactions. Transactions in the txpool are processed very slowly. How can I improve the processing speed of pending transactions in the txpool?

Proposal to optimize gossip propagation and routing.

Proposal to optimize gossip propagation and routing (Bounty/Challenge by Fantom Foundation)

Bounty
$500,000 in DAI

Challenge Description
Lachesis aBFT message overhead can cause nodes with lesser hardware specifications to fall behind and bottleneck the network. The core development team proposes that the developer community optimize gossip propagation and routing in Lachesis aBFT to reduce the load on Byzantine nodes and thereby increase network robustness.

Challenge Requirements
The functional requirements of this challenge are:

  • Optimize Lachesis aBFT to propagate gossip more efficiently, whether through a different topological ordering, different message routing, etc.

Judging Criteria
If multiple valid submissions are received and if scored equally, the winning team will be the one that shipped first.

Reliable way to time emission

Describe the bug
My team struggles to time token emission on Fantom.
Our emission policy is to issue tokens steadily every day so that the total supply reaches N after a year. The formula takes in the current time to calculate emission.
We tried to rely on the average block time, but it has been very volatile and the emission quickly got off schedule.
This is not an issue on other blockchains. We're committed to using Fantom.

How do we best reference time on FTM? Is there, or will there be, a constant block time? What guarantees are provided for timestamps? Perhaps they are more reliable than on other chains?

To Reproduce
https://ftmscan.com/chart/blocktime
https://ftmscan.com/chart/tx

Expected behavior
Other chains provide steady blocktime.
https://etherscan.io/chart/blocktime
https://bscscan.com/chart/blocktime

Block hash in NewHeads websocket differs from eth_block json rpc

Describe the bug
The block hash returned by the newHeads websocket subscription differs from the one returned by the eth_block JSON-RPC call.

The response from subscribing to NewHeads on the eth_subscribe endpoint returns a block such as:

{'parentHash': '0x0000da2a000008caf2a4ba43e98a1daf2983047e0a462afc1cfebe3ecac49131', 'sha3Uncles': '0x0000000000000000000000000000000000000000000000000000000000000000', 'miner': '0x0000000000000000000000000000000000000000', 'stateRoot': '0x4dd64b26736479be6fbc7fc84a1724cf8bfa0114bc1edebcca4272894ae6e19f', 'transactionsRoot': '0x96b2510a1d26e1188f40c71d741cd504913e5ec08b5a7912020cf8e4ddae01db', 'receiptsRoot': '0x0000000000000000000000000000000000000000000000000000000000000000', 'logsBloom': '0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000', 'difficulty': '0x0', 'number': '0x15eed83', 'gasLimit': '0xffffffffffff', 'gasUsed': '0x30b6e5', 'timestamp': '0x619e448e', 'extraData': '0x0000da2a000008d49b448b0d46eb0088dd8b8a098234cda07ed59590623a2902', 'mixHash': '0x0000000000000000000000000000000000000000000000000000000000000000', 'nonce': '0x0000000000000000', 'hash': '0x76f6d690164cd4704d241bbb20e204bf8fc5d090dcc4988ec609065037bbbbeb'}

Here the block hash is in the extraData field, while it is unclear what the value in the hash field represents.

The response of eth_block json rpc returns the same block in this structure (txs omitted for brevity):

{'difficulty': 0, 'extraData': HexBytes('0x'), 'gasLimit': 281474976710655, 'gasUsed': 3192549, 'hash': HexBytes('0x0000da2a000008d49b448b0d46eb0088dd8b8a098234cda07ed59590623a2902'), 'logsBloom': HexBytes('0x0020008000040000000000008000820202800000080000000200000000020100008020601000020004800000000000000000000800108000100000000020000140200080000400000080000800400120000200801800000080000008800000004000000002000000405800200000080000000000000802010000001000001000000000000000000000020000000000000000000101000008000020400000000016000000008000020080041080010000000001800000000a08000008000081000000000200080000008010200082000000000000200000180000000000402000009000000000000800000000084001000100000004400252000040000000000a'), 'miner': '0x0000000000000000000000000000000000000000', 'mixHash': HexBytes('0x0000000000000000000000000000000000000000000000000000000000000000'), 'nonce': HexBytes('0x0000000000000000'), 'number': 22998403, 'parentHash': HexBytes('0x0000da2a000008caf2a4ba43e98a1daf2983047e0a462afc1cfebe3ecac49131'), 'receiptsRoot': HexBytes('0x0000000000000000000000000000000000000000000000000000000000000000'), 'sha3Uncles': HexBytes('0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347'), 'size': 5394, 'stateRoot': HexBytes('0x4dd64b26736479be6fbc7fc84a1724cf8bfa0114bc1edebcca4272894ae6e19f'), 'timestamp': 1637762190, 'timestampNano': '0x16ba8008f69e1c37', 'totalDifficulty': 0, 'transactions': [...], 'transactionsRoot': HexBytes('0x96b2510a1d26e1188f40c71d741cd504913e5ec08b5a7912020cf8e4ddae01db'), 'uncles': []}

To Reproduce
compare output of new heads websocket with eth_block json rpc

Expected behavior

  • the hash field in both responses should be the same
  • the extraData field in both responses should be the same

Running archive node with --syncmode full --gcmode archive throws error 'flag provided but not defined: -syncmode'

Describe the bug
Trying to run go-opera as a full archive node. Assuming this is similar to starting a regular Geth (Ethereum) archive node, which seems right judging from the available command-line options, I should run go-opera with (among other things) --syncmode full --gcmode archive.

Doing this, however, immediately fails with the error: flag provided but not defined: -syncmode

To Reproduce
Steps to reproduce the behavior:
Run: /root/go-opera/build/opera --syncmode full --gcmode archive --nousb

Expected behavior
Go-opera should run as a full archive node.

Desktop (please complete the following information):

  • OS: Ubuntu 20.04
  • Go Opera: 1.0.1-rc.1

trace_transaction for txs creating contracts via proxy shows the implementation as the context/creator

In a tx where a proxy delegateCalls a factory contract which creates a child contract, FTM's trace_transaction shows the implementation contract as the creator, instead of the proxy contract itself.

This conceptually differs from OE's trace_transaction (and Geth's trace), where the trace result points back to the proxy contract as the originating context, hence the proxy contract is the creator of the child contract.

E.g. the below two txs:

https://kovan.etherscan.io/vmtrace?txhash=0xc4aa87dfabd34ef4a0ef677b7c3e272b5b67bc4d7e6d97c76d0d7c225f9a08ea&type=parity#raw

https://ftmscan.com/vmtrace?txhash=0xf0fe376bf2bc801d5d7c571f823e0b3681f201ae5f4dbc22d24667d72a6d5eb8&type=parity#raw

More specifically, note the "from" and "to" fields in the raw traces linked above.

Since delegateCall refers to the proxy contract as the context, should the proxy contract not be the contract creator?

Much thanks. 🙏

Datadir snapshot / backup and state flush

I'm trying to back up the datadir of a read-only node, and I run into data consistency issues:

Fatal: Failed to make engine: failed to open existing databases: dirty state: genesis: 0016C495DD4EEDF6A9,
gossip: DE0016C495E2BCC3CF4A16C495E2E9B1D823

My understanding is that this can happen if the backup of the underlying storage takes place during a state flush. Is that correct?

Is there a way to take a consistent backup of the datadir? Would it be possible to expose an /admin/state/flush endpoint to be able to (externally) trigger a flush prior to the backup?

The pendingTransactions event always fires after the transaction is over

I subscribed to pendingTransactions and to a contract log at the same time, but the pendingTransactions notification always arrives after the transaction is over.
For example, I subscribed to the Sushi DEX Sync topic:
["logs", { "address": "0x3Ac28d350C59ef9054B305DFe9078fADc3cecABe","topics": ["0x1c411e9a96e071241c2f21f7726b17ae89e3cab4c78be50e062b03a9fffbbad1"] }]
When a swap transaction is sent, the Sushi Sync log always arrives before the pendingTransactions notification, but the Sync log means the transaction is already over.
I have tested both with my own read-only node and with the public RPC wss://wsapi.fantom.network; the result is the same in each case. On the Ethereum network it works as expected: the pendingTransactions log comes first.

Missing parent trace

Describe the bug
There are transactions in which traces are not consecutive, i.e. some traces don't have a parent trace.

To Reproduce
Example of such a tx: 0xc2e019c1ea85b92ef0a34b1f1b16d6eced3046846eb5bce1b5fb1bf0ef62968a

Below is a script that extracts the trace addresses from this tx:

import requests

res = requests.post(
    'https://rpcapi-tracing.fantom.network/',
    json={
        'jsonrpc': '2.0',
        'method': 'trace_transaction',
        'params': ['0xc2e019c1ea85b92ef0a34b1f1b16d6eced3046846eb5bce1b5fb1bf0ef62968a'],
        'id': 1
    }
).json()['result']

trace_addresses = [trace['traceAddress'] for trace in res]

for address in trace_addresses:
    print(address)

The output of the script is the following:

[]
[0]
[1]
[2]
[3]
[4]
[4, 0]
[4, 1]
[4, 1, 0]
[4, 1, 1]
[4, 1, 2]
[4, 1, 3]
[4, 1, 3, 0]
[4, 1, 3, 1]
[4, 1, 3, 2]
[4, 1, 4]
[4, 1, 5]
[4, 1, 5, 0]
[4, 1, 5, 1]
[4, 1, 5, 2]
[4, 1, 6]
[4, 2]
[4, 3]
[5, 3]

Expected behavior
The trace with traceAddress [5, 3] should have trace address [5], or there should be traces with trace addresses [5], [5, 0], [5, 1], [5, 2].

support web3.eth.subscribe('logs'..) with multiple topics (OR)

Is your feature request related to a problem? Please describe.

Go-opera doesn't seem to support web3.eth.subscribe('logs', ...) with multiple topics (OR); instead, only the first passed topic is filtered on. The ability to filter on multiple topics (OR) is described here (https://web3js.readthedocs.io/en/v1.2.11/web3-eth-subscribe.html#subscribe-logs), which is something that Fantom probably wants to follow.
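For reference, the OR semantics from the web3 docs use a nested array in a topic position, meaning "match any of these". A sketch, assuming a connected web3 v1.x instance (the topic hashes are placeholders):

web3.eth.subscribe('logs', {
  topics: [
    [
      '0x<topicA>', // OR-list for topics[0]: a log matching either hash should be emitted
      '0x<topicB>'
    ]
  ]
}, (err, log) => {
  if (!err) console.log(log.topics[0]);
});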

Describe the solution you'd like
As per above

Describe alternatives you've considered
No other alternatives. Fantom should strive to fully implement the web3 API, imho.

Additional context
First brought up in Discord, but to be sure this reaches the team, I created this issue

trace_transaction reverting on subtrace even when the subtrace successfully created a contract

Describe the bug
And I'm back! 😁 (sorry)

In summary: when we run trace_transaction on a specific tx (https://ftmscan.com/tx/0x9d8c6117d0671041df7d76ea44dfe406dc1455e8d204fc2bee82908d6955a58b), subtraces creating contracts show up as "reverted"; however, the contracts were indeed created, which can be confirmed by the presence of on-chain bytecode.

Trace from FTMscan: https://ftmscan.com/vmtrace?txhash=0x9d8c6117d0671041df7d76ea44dfe406dc1455e8d204fc2bee82908d6955a58b&type=parity#raw

The subtraces in question are the two "create" subtraces, at index [3, 6, 9] and [3, 7] in the raw trace linked above.


These two traces create contracts at https://ftmscan.com/address/0xcbba8c0645ffb8aa6ec868f6f5858f2b0eae34da and https://ftmscan.com/address/0x18f1de40c5056ffe75df809f1f9441f539376e1b (which can be confirmed with eth_getCode). Unfortunately, they don't show up as contracts on FTMscan, since the revert means the result.address attribute isn't available for consumption.

Both contracts have on-chain bytecode.

OTOH, this seems to be traced correctly on Tenderly: https://dashboard.tenderly.co/tx/fantom/0x9d8c6117d0671041df7d76ea44dfe406dc1455e8d204fc2bee82908d6955a58b

Feel free to let me know if there might be any additional information required, much thanks!

Fast node syncing or pruning

Hi,

Is there any plan to add pruning or fast node syncing like the original geth? Right now we can only do a full node sync, which is very slow and uses a lot of SSD space.

sfc.getDelegation returns 0 for all timestamps

Describe the bug
The SFC method getDelegation has not returned correct timestamps since the update to opera.

To Reproduce

  1. opera attach, then execute:

sfc.getDelegation('0x467af86ba64afc01ea769065cb4763c01d810848', 30)
returns:
{
  address: "0x467af86ba64afc01ea769065cb4763c01d810848",
  amount: 4690000000000000000,
  createdEpoch: 0,
  createdTime: 0,
  deactivatedEpoch: 0,
  deactivatedTime: 0,
  toStakerID: 30
}

Expected behavior
createdTime should have the correct timestamp

web3 send tx always times out

Since about 30 hours ago, when we use web3 to send a tx, the only response is a timeout, but using MetaMask works fine. I don't know why; many people have the same question. We discussed it in the Discord developers-chat, but no one has addressed this delay. I think there is some problem in the Fantom network. Please check it and reply to us in the Discord developers-chat, thank you!

By the way, somebody is spamming the Fantom network. You can see that this address, https://ftmscan.com/address/0x3a3871182c1e38f30870e2bf4711b519c590fe49, receives a little FTM every time; in only two days it has produced 578,900 transactions. Maybe that caused the delay.

Running in light mode

Is your feature request related to a problem? Please describe.
Trying to run in light mode, but the flag is not available.

Describe the solution you'd like
A way of running a go-opera light node

Describe alternatives you've considered
Just running a normal node

Additional context
It would be nicer not to have to sync the entire chain.

Proposal to optimize stake-based emission speeds.

Proposal to optimize stake-based emission speeds (Bounty/Challenge by Fantom Foundation)

Bounty
$250,000 in DAI

Challenge Description
During the recent network incident, one of the biggest validators slowed down the block emission, causing a second big validator to slow down emission as well. The other validators kept producing blocks, but the two lagging ones were not able to catch up. These two validators are big enough to represent more than 1/3W of stake, and they caused a domino effect that halted new block confirmations.

We want to propose a challenge to the developer community to help us optimize and implement an optimization for emission speeds per validator node, based on stake and validating power, to increase the reliability of the network.

Challenge Requirements
The functional requirements of this challenge are:

  1. Develop an optimized version of go-opera, with a new implementation of stake-based emission that is more resistant to network throttling.

Judging Criteria
If multiple valid submissions are received and if scored equally, the winning team will be the one that shipped first.

Add --datadir.ancient like geth

Geth 1.9.0 added this option to specify the directory for ancient data. It is great for moving that data to cheaper storage, so it would be good to know whether Fantom plans to add it.

Genesis file opera

Hi,
I'm setting up a local network in opera, but how do I customize the genesis file?
Where is the genesis file stored?

Proposal to implement state snapshotting and booting.

Proposal to implement state snapshotting and booting (Bounty/Challenge by Fantom Foundation)

Bounty
$500,000 in DAI

Challenge Description
State snapshotting and booting allow us to generate a snapshot of the current state of the chain, letting Fantom restart the chain from a fresh genesis during network upgrades, which reduces state bloat.

We want to propose a challenge to the developer community to help us implement state snapshotting and booting.

Challenge Requirements
The functional requirements of this challenge are:

  • Implement state snapshotting and booting for Fantom Opera Mainnet.

Judging Criteria
If multiple valid submissions are received and if scored equally, the winning team will be the one that shipped first.

Default ports and help output confusing

Running release/1.0.0-rc.2 from the instructions:
https://github.com/Fantom-foundation/lachesis_launch/blob/master/docs/setup-readonly-node.sh

Running opera --help outputs:

API AND CONSOLE OPTIONS:
  --http.port value                   HTTP-RPC server listening port (default: 8545)
  --ws.port value                     WS-RPC server listening port (default: 8546)

NETWORKING OPTIONS:
  --port value                        Network listening port (default: 30303)

ALIASED (deprecated) OPTIONS:
  --rpcport value                     HTTP-RPC server listening port (deprecated, use --http.port) (default: 8545)
  --wsport value                      WS-RPC server listening port (deprecated, use --ws.port) (default: 8546)

MISC OPTIONS:
  --port value                         Network listening port (default: 5050)
  --http.port value                    HTTP-RPC server listening port (default: 18545)
  --rpcport value                      HTTP-RPC server listening port (deprecated, use --http.port) (default: 18545)
  --ws.port value                      WS-RPC server listening port (default: 18546)
  --wsport value                       WS-RPC server listening port (deprecated, use --ws.port) (default: 18546)

These defaults are all over the place 😄

It looks like most of the "real" values and defaults are expressed under the MISC OPTIONS output (according to the documentation), but the geth defaults are also printed, which is confusing.

web3.eth.getPastLogs doesn't support multiple topics (OR)

Describe the bug
Fantom doesn't seem to support web3.eth.getPastLogs() calls with multiple topics (OR), as described here https://web3js.readthedocs.io/en/v1.2.11/web3-eth.html?highlight=getPastLogs#getpastlogs

To Reproduce
Test code below (excluding the web3 construction):

web3.eth.getPastLogs({ 
  fromBlock: 20230000, 
  toBlock: 20235000, 
  topics: [
    [
      "0xdccd412f0b1252819cb1fd330b93224ca42612892bb3f4f789976e6d81936496",
      "0x4c209b5fc8ad50758f13e2e1088ba56a560dff690a1c6fef26394f4c03821c4f",
      "0xd78ad95fa46c994b6551d0da85fc275fe613ce37657fb8d5e3d130840159d822",
    ]
  ]  
}).then(events => {
  events.forEach(e => console.log(e.topics[0]));
})

Expected behavior
log events which match any of the topics for topics[0]

Actual behavior
only logs events where topics[0] === '0xdccd412f0b1252819cb1fd330b93224ca42612892bb3f4f789976e6d81936496'

Context

  • OS: Ubuntu 20.04
  • Go Opera version: 1.0.1-rc.1-e544db76-1620199110

newPendingTransactions subscription receiving already validated transactions

Describe the bug
The newPendingTransactions filter/subscription should emit tx hashes that are still to be validated, but instead it emits transactions that are already validated and have a block number assigned.

To Reproduce
Subscribe to newPendingTransactions and check received tx hashes.
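A minimal repro sketch, assuming a web3 v1.x instance connected to a go-opera websocket endpoint:

web3.eth.subscribe('pendingTransactions', async (err, txHash) => {
  if (err) return console.error(err);
  const tx = await web3.eth.getTransaction(txHash);
  // For a genuinely pending tx, blockNumber should be null;
  // here it is typically already set, i.e. the tx was validated
  console.log(txHash, tx && tx.blockNumber);
});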

It looks like the code emitting the tx hashes is in the EndBlock function, where transactions have already been executed.

Logs 'new event' and 'new block' should be on different log levels

Is your feature request related to a problem? Please describe.
Currently, logging progress (verbosity = 3 / normal) logs each and every event. Logging to a file consumes a lot of disk space when set up this way. However, moving to verbosity = 2 will not give me any feedback anymore.

Describe the solution you'd like
I'd like 'new block' to be logged at the current level (verbosity = 3). This allows me as a user to quickly check progress.
I'd like 'new event' to be moved down a level (verbosity = 4): it's there when you need it, but under normal conditions you're probably not interested in seeing it

Describe alternatives you've considered

  1. setting up log rotation
  2. pruning the log after the fact with some Linux command-line magic; might work
  3. logging to stdout in Docker, which effectively means: don't save logs

All are a hassle


Proposal to completely rebuild client.

Proposal to completely rebuild the client (Bounty/Challenge by Fantom Foundation)

Bounty
$1,000,000 in DAI

Challenge Description
We propose this challenge to developers who would like to complete all three challenges laid out in issues #50, #51, and #52. This involves a full client rebuild, with all the optimizations laid out in the other challenges.

Challenge Requirements
The functional requirements of this challenge are:

  • Completely rebuild the go-opera client, completing the challenges laid out in issues #50, #51, and #52.

Judging Criteria
If multiple valid submissions are received and if scored equally, the winning team will be the one that shipped first.

Block hash received via `newHeads` subscription is not correct

Describe the bug

When subscribing to newHeads via eth_subscribe, the node sends blocks with an incorrect hash.

For example, here is a head received via the websocket subscription to newHeads:

{
   "jsonrpc":"2.0",
   "method":"eth_subscription",
   "params":{
      "subscription":"0x308c8699caefc45e5050fcdb4289afae",
      "result":{
         "parentHash":"0x0000a7fa00000062f3310a6c54aad9f86f820a83546a57a73637d5131d5fc769",
         "sha3Uncles":"0x0000000000000000000000000000000000000000000000000000000000000000",
         "miner":"0x0000000000000000000000000000000000000000",
         "stateRoot":"0x1f9f0341b25c9dcba815a57fbb66dde16682a6f5fa36d26fa530f05f2207b172",
         "transactionsRoot":"0xe021374eef3f78756b610480a17174117ac5ceb15f5beb8035a01c6abd4dc3f5",
         "receiptsRoot":"0x0000000000000000000000000000000000000000000000000000000000000000",
         "logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
         "difficulty":"0x0",
         "number":"0x11f54b5",
         "gasLimit":"0xffffffffffff",
         "gasUsed":"0x11c07c",
         "timestamp":"0x61642db2",
         "extraData":"0x0000a7fa00000069e4cb85c4e50b9e4c2da2f56caa1d2344717ce01e682d676f",
         "mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000",
         "nonce":"0x0000000000000000",
         "hash":"0xfffe9bc432fbfb662e0682a4dc54c3520cc3c4b00425543222eda9ae8a2fc3d4"
      }
   }
}

This is block number 18830517. Notice that it lists the block hash as 0xfffe9bc432fbfb662e0682a4dc54c3520cc3c4b00425543222eda9ae8a2fc3d4

On the explorer however, this block is listed with the following hash:
0x0000a7fa00000069e4cb85c4e50b9e4c2da2f56caa1d2344717ce01e682d676f

See: https://ftmscan.com/block/18830517

To Reproduce

  1. Open a websocket connection
  2. Subscribe to new heads with {"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
  3. Receive at least one head
  4. Check the hash of the received head and compare to the block hash on https://ftmscan.com/, you will see that it is different
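For example, with a generic websocket client such as wscat (the endpoint is a placeholder; 18546 is the default WS-RPC port according to opera --help):

$ wscat -c ws://localhost:18546
> {"id": 1, "method": "eth_subscribe", "params": ["newHeads"]}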

Expected behavior

I would expect the received block hash to be the same as the block hash listed on ftmscan.com, and for the subsequent block's parent hash to point to this one.

Incoming event rejected

Describe the bug
Since today, I have been receiving this message on my node:

WARN [09-20|07:38:10.095] Incoming event rejected event=36378:1:613a6c creator=57 err="wrong event epoch hash"
WARN [09-20|07:38:10.096] Incoming event rejected event=36378:1:a29ace creator=54 err="wrong event epoch hash"
WARN [09-20|07:38:26.017] Incoming event rejected event=36378:1:01c642 creator=50 err="wrong event epoch hash"
WARN [09-20|07:38:26.018] Incoming event rejected event=36378:1:0f51f8 creator=56 err="wrong event epoch hash"
WARN [09-20|07:38:26.018] Incoming event rejected event=36378:1:1bf11d creator=55 err="wrong event epoch hash"
WARN [09-20|07:38:26.018] Incoming event rejected event=36378:1:55ba5d creator=59 err="wrong event epoch hash"
...... etc, all the time

And the node is doing nothing. What is happening?

Docker image fails to build

When building the docker image from the docker directory, the image fails to build on step 6:
RUN go mod download
Error:
go: github.com/dvyukov/go-fuzz@<version> (replaced by github.com/guzenok/go-fuzz@v0.0.0-20210103140116-f9104dfb626f): version "v0.0.0-20210103140116-f9104dfb626f" invalid: unknown revision f9104dfb626f

Quick/fast sync

Is there any way to do a quick/fast sync similar to "fast sync" with Geth? I don't see anything in the command-line help. My "normal" sync created a full node, which is not what I'm looking for.

codahale/hdrhistogram repo url has been transferred under the github HdrHistogram umbrella

Problem

The codahale/hdrhistogram repo has been transferred under the github HdrHistogram umbrella with help from the original author in Sept 2020 (new repo url: https://github.com/HdrHistogram/hdrhistogram-go). The main reasons are to group all implementations under the same roof and to enable more active contribution from the community, as the original repository was archived several years ago.

The dependency URL should be modified to point to the new repository URL. The tag "v0.9.0" was applied at the point of transfer and will reflect the exact code that was frozen in the original repository.

If you are using Go modules, you can update to the exact point of transfer using the @v0.9.0 tag in your go get command.

go mod edit -replace github.com/codahale/hdrhistogram=github.com/HdrHistogram/hdrhistogram-go@v0.9.0
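Equivalently, the replacement can be written directly in go.mod, which is exactly what the command above produces:

replace github.com/codahale/hdrhistogram => github.com/HdrHistogram/hdrhistogram-go v0.9.0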

Performance Improvements

From the point of transfer up until now (Mon 16 Aug 2021), we've released 3 versions that aim to support the standard HdrHistogram serialization/exposition formats and to deeply improve read performance.
We recommend updating to the latest version.

Opera local network

Hi,
How do I create a custom genesis file for an opera local network?
Can anyone explain this?
