celestiaorg / celestia-core

Celestia node software based on Tendermint.
Home Page: https://celestia.org/
License: Apache License 2.0
As we are building the lazyledger app using the latest version of the SDK (celestiaorg/cosmos-sdk#11), we should update ll-core to use the equivalent of 0.34.1 of tendermint.
The ipfs node (see #118) needs to load an IPLD plugin that can read and write Namespaced Merkle trees (incl. tree-nodes).
This depends on celestiaorg/nmt#9.
A very simple example of such a plugin for a regular binary Merkle tree can be found here.
Note: depending on whether we use DagPut or directly use a NodeAdder to add to the DAG, the parser part of the plugin might be optional (if we pass in the nodes directly, there is no need to decode them via the plugin).
We need to implement block propagation during consensus differently: validators need all block data, while full nodes only need a part of the data.
Previously, we thought of replacing the tendermint block propagation with a variant that uses erasure-coding instead. While we still want that in some way, the design shifted slightly with the decision that validators need all data. The concrete design will be captured in #389. The original thinking is kept below for historical context:
Currently, block propagation works roughly as follows: the block is split into equal-sized chunks and then gossiped to peers:
https://github.com/lazyledger/lazyledger-core/blob/541b6e7f4e1c2941c7467ca7e0f89a1d35d18854/consensus/reactor.go#L193
https://github.com/lazyledger/lazyledger-core/blob/541b6e7f4e1c2941c7467ca7e0f89a1d35d18854/consensus/reactor.go#L504-L513
Also described here: tendermint/spec#78 (comment)
As we tackle supporting validating blocks via data availability proofs (#65), we could as well consider changing block propagation to erasure-coded chunks.
ref: tendermint/spec#78 (comment)
This could be interesting for the tendermint core team as well.
In Tendermint, the BlockID is comprised of the Header hash and the PartSetHeader.
In LazyLedger, we do not need the PartSetHeader as we will remove the concept of "parts" with an erasure-coded variant (see tendermint/spec#163). Hence, the BlockID needs to be changed too.
The BlockID needs to be simplified into the header's hash, as the PartSetHeader will be replaced with the DA header.
This seemingly simple change will result in a relatively large PR, as the BlockID is used a lot, everywhere. Preferably, we should split this up into at least two PRs / self-contained packages. Changing the header hash into a simple hash (instead of merkleizing it, as currently) can be done separately. Then, after using the DA header everywhere, we can remove the PartSetHeader together with the PartSet struct itself. From the perspective of getting LazyLedger ready, switching from merkleizing to a simple hash is mostly a cosmetic change (with low priority), while using the DA header instead of the PartSetHeader (as a single commitment to data) is absolutely required. Update: We could actually continue using the PartSetHeader as an implementation detail during consensus, but we should still remove it from the BlockID (as it would be a second commitment to the block data in addition to the DAHeader).
The work ideally should be split up like this: #205 (comment)
There are plans to simplify it in tendermint, too. See for instance:
Also related:
After specifying all public APIs for the different kinds of nodes (ref: celestiaorg/celestia-specs#22), we should start thinking about how to implement the different kinds of networking / p2p related aspects of the system.
This issue is for discussing different approaches, e.g. which libraries and existing protocols to re-use and how to integrate them into lazyledger-core.
Currently, this issue just aims to give a complete overview on what is missing from a high-level perspective. Where it makes sense, the discussion will be broken up into dedicated issues (and further down, PRs).
We need to add (at least) two p2p related features to vanilla tendermint/lazyledger-core:
One is the random sampling LazyLedger light clients do; the other is a p2p file-sharing network from which full nodes can also recover the columns and rows of the extended erasure-coded matrix M.
Copy and pasting the relevant pieces from the two academic papers here for now (TODO add refs):
tl;dr: LL light clients randomly sample parts of the erasure coded block.
The protocol between a light client and the full nodes that it is connected to works as follows:
1. The light client receives a new block header h_i from one of the full nodes it is connected to, and a set of row and column roots R = (rowRoot_i^1, rowRoot_i^2, ..., rowRoot_i^{2k}, columnRoot_i^1, columnRoot_i^2, ..., columnRoot_i^{2k}). If the check root(R) = dataRoot_i is false, then the light client rejects the header.
2. The light client randomly chooses a set of unique (x, y) coordinates S = {(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)} where 0 < x <= matrixWidth_i and 0 < y <= matrixWidth_i, corresponding to points on the extended matrix, and sends them to one or more of the full nodes it is connected to.
3. If a full node has all of the shares corresponding to the coordinates in S and their associated Merkle proofs, then for each coordinate (x_a, y_b) the full node responds with M_i^{x_a,y_b}, {M_i^{x_a,y_b} → rowRoot_i^a} or M_i^{x_a,y_b}, {M_i^{x_a,y_b} → columnRoot_i^b}. Note that there are two possible Merkle proofs for each share: one from the row roots and one from the column roots; thus the full node must also specify for each Merkle proof whether it is associated with a row or a column root.
4. For each share M_i^{x_a,y_b} that the light client has received, the light client checks that VerifyMerkleProof(M_i^{x_a,y_b}, {M_i^{x_a,y_b} → rowRoot_i^a}, rowRoot_i^a) is true if the proof is from a row root, otherwise (if the proof is from a column root) that VerifyMerkleProof(M_i^{x_a,y_b}, {M_i^{x_a,y_b} → columnRoot_i^b}, columnRoot_i^b) is true.
5. The light client gossips the shares it received to full nodes that do not have them.
6. The light client accepts the block as available if, within 2 * δ, no fraud proofs for the block's erasure code are received.

tl;dr: nodes need to recover erasure coded blocks, and we need a DHT-like structure/protocol to request shares from the network in a decentralized way.
Given a full node that wants to recover a matrix M_i associated with block i, the extraction process proceeds as follows:

1. If any row or column of M_i has at least k+1 recovered shares, then recover the whole row and/or column with recover (see spec / todo).
2. Repeat this step until M_i does not change and no new rows or columns are recoverable.

Note that in step 5. (above), it is also mentioned that light clients gossip shares to full nodes that do not have them.
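The fixed-point recovery loop described above can be sketched in Go. Here `recoverLine` stands in for the real Reed-Solomon decoder (which is not implemented in this sketch), and all names are hypothetical:

```go
package main

import "fmt"

// A share is one cell of the extended matrix M_i; nil means "not yet recovered".
type share []byte

// recoverLine stands in for the actual Reed-Solomon decoder: given a line
// (row or column) with enough known shares, it returns the fully recovered line.
type recoverLine func(line []share) []share

// known counts the recovered shares in a line.
func known(line []share) int {
	n := 0
	for _, s := range line {
		if s != nil {
			n++
		}
	}
	return n
}

// extract repeatedly recovers every row and column that has at least k+1
// known shares, until the matrix reaches a fixed point. It returns true
// iff the matrix is fully recovered.
func extract(m [][]share, k int, rec recoverLine) bool {
	n := len(m)
	for changed := true; changed; {
		changed = false
		for r := 0; r < n; r++ { // rows
			if c := known(m[r]); c >= k+1 && c < n {
				m[r] = rec(m[r])
				changed = true
			}
		}
		for c := 0; c < n; c++ { // columns
			col := make([]share, n)
			for r := 0; r < n; r++ {
				col[r] = m[r][c]
			}
			if kn := known(col); kn >= k+1 && kn < n {
				col = rec(col)
				for r := 0; r < n; r++ {
					m[r][c] = col[r]
				}
				changed = true
			}
		}
	}
	for _, row := range m {
		if known(row) < n {
			return false
		}
	}
	return true
}

func main() {
	// fake decoder: fills every missing share with a placeholder value
	fill := func(line []share) []share {
		out := make([]share, len(line))
		for i, s := range line {
			if s == nil {
				out[i] = share{0xff}
			} else {
				out[i] = s
			}
		}
		return out
	}
	// 4x4 extended matrix (k = 2): each row is missing exactly one share.
	m := make([][]share, 4)
	for r := range m {
		m[r] = make([]share, 4)
		for c := range m[r] {
			if c != r { // leave the diagonal missing
				m[r][c] = share{byte(r), byte(c)}
			}
		}
	}
	fmt.Println("fully recovered:", extract(m, 2, fill))
}
```

The loop terminates because each pass either recovers at least one new share or leaves the matrix unchanged.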
We want to generalize the share gossiping mechanism among full nodes as a peer-to-peer file-sharing network (such as BitTorrent or IPFS) to enable full nodes to download shares that they do not have, as an alternative to step 1.
The current mempool.ReapMaxDataBytes logic doesn't seem like the right approach for LL core, as we need to reap as many Tx as would fit (as shares) into the max square size.
The current mechanism needs to be based on the max square size instead of gas only:
https://github.com/lazyledger/lazyledger-core/blob/6d99bdda1b59d1b1e5b6c83635ad879bfdffb19e/state/execution.go#L103-L110
One rough idea @adlerjohn and I talked about is the following approach:
This probably terminates after one or two iterations, but it would be good if we can formalize this a bit further and have a small proof for the upper bound of iterations.
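A minimal sketch of reaping against a share budget, under the assumption that each Tx is padded to a whole number of shares (the real share layout is defined in the spec; sizes and names here are illustrative only):

```go
package main

import "fmt"

// sharesNeeded returns how many fixed-size shares the given txs occupy,
// conservatively padding each tx to a whole number of shares (a real
// implementation would follow the spec's share layout).
func sharesNeeded(txSizes []int, shareSize int) int {
	total := 0
	for _, s := range txSizes {
		total += (s + shareSize - 1) / shareSize
	}
	return total
}

// reapForSquare returns the largest prefix of txSizes whose shares fit into
// a maxSquareSize x maxSquareSize matrix, dropping txs until the data fits.
func reapForSquare(txSizes []int, shareSize, maxSquareSize int) []int {
	budget := maxSquareSize * maxSquareSize
	n := len(txSizes)
	for n > 0 && sharesNeeded(txSizes[:n], shareSize) > budget {
		n--
	}
	return txSizes[:n]
}

func main() {
	txs := []int{300, 600, 256, 1024, 50} // tx sizes in bytes
	reaped := reapForSquare(txs, 256, 2)  // 2x2 square -> budget of 4 shares
	fmt.Println("reaped txs:", len(reaped))
}
```

Since dropping a tx can only reduce the number of shares needed, the loop terminates after at most len(txSizes) iterations, which is one way to frame the requested upper-bound argument.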
In the spec though: https://github.com/lazyledger/lazyledger-specs/blob/master/specs/data_structures.md#hashing
Note that the spec was preferring sha3 before.
Tendermint currently stores the block including all data in its own block store (interface).
For LazyLedger only storing the header + DA header is sufficient. The block data, or rather more often only a portion of the data, will be stored on ipfs. See #163, #178
Storing the data additionally in tendermint's store will be redundant as it is already stored (and pinned) on ipfs. Also, some nodes won't even have the whole block data.
Update the block store accordingly. The design should be fleshed out upfront.
This depends on #178 and a corresponding "read equivalent" where the block data can be sampled or be fully reconstructed from the network using ipfs (or, more generally speaking, a DHT + some IPLD block-exchange protocol, e.g. graph-sync later down the road).
Note that fleshing out the design and starting the implementation can still be done independently. Only full integration of the feature and replacing the existing storage mechanism depends on adding the possibility to read/write to ipld.
Catch up with https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-055-protobuf-design.md (drops amino and full proto support)
as soon as this lands in master 🎉
Can we avoid merge conflicts and decrease maintenance costs for updating to upstream (tendermint) master by using replace?
Update codeowners accordingly, or, decide if we don't want to use it:
The plugin name currently is module github.com/lazyledger-core/p2p/ipld/plugin but should rather be module github.com/lazyledger/lazyledger-core/p2p/ipld/plugin.
see: #144 (comment)
Currently, we're trying to replace tendermint with lazyledger-core in the lazyledger-app #4 and in the lazyledger fork of the cosmos-sdk #2. We run into the same error if we attempt to change the import by using either the go modules' replace directive, or by using a script to manually replace every import of "github.com/tendermint/tendermint" with "github.com/lazyledger/lazyledger-core".
error:

```
go: github.com/cosmos/cosmos-sdk/baseapp imports
	github.com/tendermint/tendermint/abci/types imports
	github.com/lazyledger/lazyledger-core/crypto/ed25519: github.com/lazyledger/[email protected]: parsing go.mod:
	module declares its path as: github.com/tendermint/tendermint
	        but was required as: github.com/lazyledger/lazyledger-core
```
After digging through the dependency trees of the libraries, I found that lazyledger-core and the cosmos-sdk are both importing different versions of cosmos/iavl, and those two versions of iavl each import their own version of tendermint. So, connecting the cosmos-sdk to lazyledger-core brings us to a special level of dependency hell, go.mod dependency hell, which stops go modules from reconciling the different module declarations of tendermint and its fork, lazyledger-core. This is better explained in this post.
I think we're going to have to simultaneously update our fork of the cosmos-sdk, lazyledger-core, and a new fork of cosmos/iavl to new (not yet existing) versions in order to break the import cycle, as described here.
This issue is to track the changes needed (and tradeoffs) to enable updating the AppHash (state root) before consensus on a block.
Roughly:
Related: tendermint/tendermint#2483
Also:
Before implementing the erasure coding of the data itself (see #23), we need to arrange the data into a square matrix. A naive approach would be to split the data into same-sized shares and order them into a matrix. It would be better to implement this directly by following the message layout of the spec, s.t. the implementation evolves closer to the spec and potentially gives feedback for the spec itself.
Specification: https://github.com/lazyledger/lazyledger-specs/blob/master/specs/data_structures.md#arranging-available-data-into-shares
Rationale document: https://github.com/lazyledger/lazyledger-specs/blob/master/rationale/message_block_layout.md
As far as I understand, go plugins need to be defined in a main package to work properly (-buildmode=plugin requires exactly one main package).
That's the approach we took in #144:
https://github.com/lazyledger/lazyledger-core/blob/dbd2daf493669489abd16e94c315596d6504c667/p2p/ipld/plugin/plugin.go#L1
It seems possible to load the plugin using loader.Preload though:
https://github.com/ipfs/go-ipfs/blob/7588a6a52a789fa951e1c4916cee5c7a304912c2/plugin/loader/loader.go#L28
That would only work if the Plugin was in an importable lib/package (not main). Ideally, both would be possible.
After we have migrated the current merged tendermint version to 0.34.x line I would suggest we use https://github.com/marketplace/actions/github-repo-sync to automatically keep the repo in sync.
For the merging of 0.34.x into LL there are two options.
Is there a preference?
Intermediate state roots need to be written into Block.Data. This is achievable with the preprocess block mechanism (see #59). Intermediate state roots are slightly different as they play into the execution model (see #3). As mentioned below, we should first implement keeping track of intermediate state roots without touching the current (deferred) execution model.
related: #77
Specify a way to distinguish between Tx and Messages in ll-core / abci.
From the point of view of lazyledger-core: Tx = opaque blobs that get executed by the abci app (like Tx in tendermint), and Messages = blobs with no impact on the state.
While both (Messages and Tx) have to be part of the Block.Data, Tx do not explicitly carry a namespace ID (the reserved ID for Tx is only added when computing the shares, to not add redundant data), but Messages essentially include their own namespace ID (e.g. by prefixing).
To be able to compute and commit the namespaced shares properly to the namespaced merkle tree, lazyledger-core needs a way to understand the difference between blobs that will have a reserved namespace and blobs that provide their own namespace.
Although messages do not change state, they also need to be present at the abci-app level: a tx is only valid if the message that it is pointing to is present (and the commitment in the Tx matches the message).
Several ideas and questions arose that need clarification. Here is an incomplete list:
Another thing we need to think about is if we want to keep the messages in the same (mem)pool as Tx (that get executed).
For some context of the mempool and its connection to the app:
The mempool reactor gets a blob, it decodes it here and then asks the app (via CheckTx) to include the tx or not here.
Required reading:
This tendermint fork was renamed and lives under a different organization. Some trivial changes are necessary to make this repository useable (and not still use the orig tendermint import paths): rename imports to github.com/lazyledger/lazyledger-core/* and deal with go.mod.
@marbar3778 has a branch with an (almost) working version of CI using github actions (see https://github.com/marbar3778/tendermint/pull/5/files). We should help fixing this as we are currently not using any CI.
Current configuration is using circleci, here:
https://github.com/LazyLedger/lazyledger-core/blob/da745371227f54aa90c609845cd4cc2f36a152f1/.circleci/config.yml#L1-L450
I can confirm that the current implementation of the ipld plugin does not work properly with bitswap as mentioned here:
#152 (comment)
I've tested this by spinning up two droplets on DO that otherwise can retrieve data from each other (e.g. via dag get) if the Cid is a regular sha256 one but not if the namespaces are included in the cid.
It seems like defining one encoding for the nodes of the NMT:
https://github.com/lazyledger/lazyledger-core/blob/b83e6766973c314991ec8ba9f6f246fc422fb605/p2p/ipld/plugin/plugin.go#L26
and then simply squeezing the namespaces into the CIDs directly:
https://github.com/lazyledger/lazyledger-core/blob/b83e6766973c314991ec8ba9f6f246fc422fb605/p2p/ipld/plugin/plugin.go#L341-L345
does not work with bitswap. @Wondertan mentioned this as a source of the problem (when receiving a block, the cid gets recomputed locally):
https://github.com/ipfs/go-bitswap/blob/47b99b1ce34a8add8e5f38cf2eec6bea1559b035/message/message.go#L217-L222
Notice that pref.Sum seems to only accept 32 bytes:
https://github.com/ipfs/go-cid/blob/e530276a7008f5973e7da6640ed305ecc5825d27/cid.go#L563-L577
Remove @musalbas as otherwise he'll automatically be added as a reviewer for each PR here.
Similarly dependabot should be updated to pick reviewers: https://github.com/lazyledger/lazyledger-core/blob/908fe8ac2a43ce0bd372be4938ad25ebd79384fe/.github/dependabot.yml#L23
This issue drafts changes of a first iteration of modifying the block header to support data availability proofs.
Basically a variant of what we spec'ed out in https://github.com/LazyLedger/lazyledger-specs:
with as little changes as possible to tendermint core.
As a side note, we should track and document these decisions and changes in the repository itself. We can track them in the code directly via so-called ADRs (we used these at tendermint: https://github.com/tendermint/tendermint/tree/master/docs/architecture).
Note that the spec in its current form expects immediate execution (ref: #3), which doesn't really affect this issue besides that the mentioned intermediate state roots included in a block of height h+1 reflect the state induced by transactions included in the block of height h (as described in #3 (comment)).
We will have to experiment with block times and tendermint consensus related timeout to find a reasonable middle ground.
Update: Increasing the timeouts is no longer relevant; what we really want instead is: tendermint/tendermint#5911
We should help the tendermint team to get that feature shipped.
The current default timeouts can be found here:
https://github.com/lazyledger/lazyledger-core/blob/b68e2dc394c0fc1840d57b1a763c77c1ef7a13cd/config/config.go#L869-L874
For immediate execution, we might need to increase TimeoutPropose as a proposer would need to execute Tx against the state before proposing. Also, we'd like the block data to propagate through the p2p network, reaching at least some storage nodes (ref: celestiaorg/celestia-specs#3).
Also, we likely would need to increase TimeoutPrevote (this is no longer planned) to allow validators to validate blocks via DA proofs (see #65).
We could start with the current defaults and experiment our way up to defaults where consensus nodes don't end up timing out and starting new rounds without making blocks.
Modify the below lines to post in slack when the e2e tests are failing
https://github.com/lazyledger/lazyledger-core/blob/master/.github/workflows/e2e-nightly.yml#L38-L45
A current limitation of Tendermint is that before a block has consensus from the validator set, it is only distributed among the validator set. This makes it not easily possible for these validators to validate the block via a data availability proof, because the minimum-number-of-light-clients assumption is less likely to hold. Consider publishing blocks to the wider network before they have consensus.
Tendermint light-clients use a list of RPC nodes to talk to and to ask for "light-blocks".
LazyLedger light clients will operate similarly to tendermint light clients but will do DA checks (#65). For this, they need to be able to discover and add peers, which requires a light-client related p2p reactor.
Note that the tendermint team experimented with a similar reactor (without DA checks) but abandoned the idea for now:
tendermint/tendermint#4508
In this context also, we should keep an eye on the progress of the p2p-refactor: https://github.com/tendermint/tendermint/blob/master/docs/architecture/adr-061-p2p-refactor-scope.md
An alternative to using tendermint's p2p layer could be a libp2p-based solution. This would be more work-intensive, but is worth considering in light of the optimint work, where we will also use libp2p (cc @Wondertan @tzdybal).
lazyledger-core still uses the same package declaration in its proto files as tendermint, which causes some types to be registered more than once when it is imported along with tendermint. This is the case with the cosmos-sdk, which imports tendermint/iavl, and iavl depends on old versions of tendermint.
```
2021/01/12 11:08:17 proto: duplicate proto type registered: tendermint.crypto.Proof
2021/01/12 11:08:17 proto: duplicate proto type registered: tendermint.crypto.ValueOp
2021/01/12 11:08:17 proto: duplicate proto type registered: tendermint.crypto.DominoOp
2021/01/12 11:08:17 proto: duplicate proto type registered: tendermint.crypto.ProofOp
2021/01/12 11:08:17 proto: duplicate proto type registered: tendermint.crypto.ProofOps
```
We could use different package declarations in lazyledger-core's proto files or, at least for now, could ignore these warnings. Not only will we replace iavl soon, but the types that are registered twice are identical in ll-core and tendermint, so it doesn't matter which we use.
We've decided to merge the ADR 002 (#170) and update it to keep in sync with the implementation in various PRs.
There are few things that I'd like to capture here so they don't get lost:
Integrate erasure coding. A first version should simply use Mustafa's rsmt2d library.
TODO: add more details before starting the implementation
Catch up with: https://forum.cosmos.network/t/cosmos-mainnet-security-advisory-lavender/3511
when released (tomorrow).
In building LazyLedger, we would like to use Tendermint for two purposes that are somewhat independent from each other and that require different sets of modifications:
The Tendermint features we need in (1) are a subset of the ones we need in (2). We need to think about what would be the best way to structure these repositories, and if we want to have one or two forks of Tendermint.
The two options are:
Add in the missing constants from the spec into lazyledger-core.
https://github.com/LazyLedger/lazyledger-specs/blob/master/specs/consensus.md#constants
Not all belong here (e.g. GENESIS_COIN_COUNT would be defined outside of LL-core in the app), but others (NAMESPACE_ID_BYTES, SHARE_SIZE, SHARE_RESERVED_BYTES) need to be constants for a given chain as they will be used in the erasure coding scheme.
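For illustration, such a constants block could look like this (the values mirror the linked spec at the time of writing and should be treated as illustrative; the spec remains authoritative):

```go
package main

import "fmt"

// Constants from the consensus spec (values as of the linked revision;
// treat them as illustrative, the spec is authoritative).
const (
	NamespaceIDBytes   = 8   // NAMESPACE_ID_BYTES
	ShareSize          = 256 // SHARE_SIZE
	ShareReservedBytes = 1   // SHARE_RESERVED_BYTES
)

func main() {
	// Payload bytes available per share in a reserved namespace.
	fmt.Println(ShareSize - NamespaceIDBytes - ShareReservedBytes)
}
```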
It would allow us to more cleanly write the intermediate state roots from the (abci) app into the Block.Data.
Also, clarify if we can also use this mechanism to write the (LL applications') messages into the block data (otherwise we'd have potentially large blobs in the mem pool for no reason).
ref: https://github.com/tendermint/tendermint/issues/2639 (tendermint issue)
ref: https://github.com/marbar3778/tendermint/pull/17 (a first stab on an implementation by @marbar3778)
The original blockchain reactor (v0) is a mess and hard to reason about (e.g. look at this: https://github.com/tendermint/tendermint/blob/ef56e6661121a7f8d054868689707cd817f27b24/blockchain/v0/reactor.go#L214-L366).
There has been quite some effort to refactor the current reactor and there are plans to switch to v2
:
tendermint/tendermint#4595
The main blocker seems to be that the latest version isn't well tested. IMO we should start using the latest version as soon as the integration tests work at least (see: tendermint/tendermint#4640). It will also make changes for immediate state execution simpler (ref #3).
Currently, tendermint uses a simple Merkle tree to compute the "data hash" (what we currently call availableDataRoot in the spec).
LazyLedger uses a Namespaced Merkle Tree instead. Also, the leaves of that tree aren't just the Tx (as currently in tendermint) but everything under Block.Data, extended through erasure coding and arranged in a 2-dimensional matrix. There are 2 such trees used (one for the row- and one for the column-data).
This issue tracks the NMT implementation only.
Copy and pasting a simple example of a namespaced merkle tree from https://arxiv.org/abs/1905.09274
EDIT:
Integrate a tree that implements the following interface:

```go
type NameSpacedMerkleTree interface {
	// Push adds (raw) leaf data with the associated namespaceID.
	Push(namespaceID []byte, data []byte)
	// Root computes and returns the merkle root together with the overall
	// (root-node associated) min/max namespaceID of the tree.
	Root() (minNamespaceID, maxNamespaceID, root []byte)
}
```
Different than the example (Fig. 2) above, we hash leaves as
(i) nsid||nsid||hash(leafPrefix||leaf), where leaf = rawData
as opposed to (in Figure 2):
(ii) nsid||nsid||hash(leafPrefix||leaf), where leaf = nid || rawData
Marked this as a "good first issue", because it should not be too difficult to integrate this into tendermint (replace the current use of SimpleHashFromByteSlices with the NMT for Block.Data). Also, the main logic is just a slightly modified hasher. An implementation that could easily be changed to implement the above interface, and only needs updating what goes into the leaf (to adhere to (i)), can be found here.
Store block Data using IPLD API.
The proposer needs to store data into its local Merkle dag s.t. it can be sampled or fully downloaded by other peers.
Implement the "read portion" of adr 002 and keep the ADR in sync with the actual implementation.
@evan-forbes already started working on this in #178.
We should update the Readme with a paragraph that explains that this will become what is essentially "lazyledger-core" (with links to tendermint and lazyledger).
Even if this is currently on par with the original tendermint repo (with only 2 additional unmerged branches), we should do that early to avoid confusion. We'll probably have to make quite a few essential changes to vanilla tendermint anyway.
Update: How we move on with this depends on celestiaorg/celestia-specs#178.
In order to sample for Data Availability, we need to change some of the core data structures that are exchanged in between nodes during consensus steps.
This has been discussed in celestiaorg/celestia-specs#126 and celestiaorg/celestia-specs#127 and is also touched upon in #163.
The spec sufficiently describes the necessary changes to the consensus messages.
A major portion of this work can start before we wrap up #178.
We need to be able to evict the mempool depending on the data that was written into the block.
Currently, tendermint evicts the mempool using the Mempool.Update method:
https://github.com/lazyledger/lazyledger-core/blob/6d99bdda1b59d1b1e5b6c83635ad879bfdffb19e/mempool/mempool.go#L38-L47
This works because in tendermint the Tx that enter the mempool are the same as the data that gets written into the block.
In LL, this is likely going to be different: the tx that enter the mempool aren't those that end up in the block. Hence, identifying Tx by their hash to see if they were included in the block wouldn't work.
Note that this is a general problem that needs to be solved if tendermint allows for pre-processing / mutating Tx before proposing a block. See: https://github.com/tendermint/tendermint/issues/2639#issuecomment-713026731
Let the abci-app return IDs for Tx, potentially on CheckTx and/or on DeliverTx. These IDs can then be used to evict Tx in the mempool.
TODO: add in more details.
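The proposed ID-based eviction could look roughly like this (all names are hypothetical; the point is only that eviction keys are app-assigned IDs rather than tx hashes):

```go
package main

import "fmt"

// mempool keyed by app-assigned IDs instead of tx hashes (sketch of the
// proposed mechanism; names are hypothetical).
type mempool struct {
	txs map[string][]byte // app-assigned ID -> raw tx
}

// addTx stores a tx under the ID the abci app returned from CheckTx.
func (mp *mempool) addTx(appID string, tx []byte) {
	mp.txs[appID] = tx
}

// update evicts every tx whose app-assigned ID was included in the block,
// even if the block data no longer bit-for-bit matches the original tx.
func (mp *mempool) update(includedIDs []string) {
	for _, id := range includedIDs {
		delete(mp.txs, id)
	}
}

func main() {
	mp := &mempool{txs: map[string][]byte{}}
	mp.addTx("id-1", []byte("tx-a"))
	mp.addTx("id-2", []byte("tx-b"))
	mp.update([]string{"id-1"})
	fmt.Println("remaining:", len(mp.txs))
}
```

This sidesteps the hash-mismatch problem because the ID survives any pre-processing or mutation of the tx bytes between mempool admission and block inclusion.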
ref: tendermint/spec#154
Fold the validator roots (Header.ValidatorsHash and Header.NextValidatorsHash) into the single state root.
Tendermint is designed in a way that it does not need to understand how the app that is using it computes the state root (Header.AppHash). Only a minimal subset of the state is needed by tendermint to function properly. It includes the validator set changes (computed by the app). The commitment to that part of the state is computed by tendermint (and not by the app). This makes sense as it works independently of how the app tracks state and leaves apps free to define whatever business logic they want.
For LL we want to avoid several roots as every additional root means a new kind of fraud proof that needs to be defined and implemented. Hence, we want to "fold" the commitment to the validator sets into the state root (which is usually computed by the app and an opaque blob to tendermint).
There are basically two ways we can go about this (both blur the line between the app and tendermint):
Both approaches have some pros and cons that we need to understand and discuss here first. One observation is that while (2.) even further blurs the line between app and tendermint, it is closer to what is already there and might have less implications on the cosmos-sdk app we will build (i.e. we can probably reuse more of the existing SDK modules without big changes).
ref: https://github.com/lazyledger/lazyledger-core/compare/ismail/unsafe_removal_ofvalhashes?expand=1
ref: celestiaorg/celestia-specs#78
The purpose of this issue is to start a discussion about adding support for optimistic rollups to the Cosmos SDK, and what exactly this means and entails.
There are two key questions: a) what does it mean to add optimistic rollup support for Cosmos and b) what components or modifications to Tendermint and Cosmos would this require?
We want to make it possible for people to create blockchains using the Cosmos SDK, and deploy these chains as an optimistic rollup that uses another chain (such as LazyLedger) as a consensus and data availability layer.
Concretely, this means that instead of Cosmos chains using Tendermint BFT for consensus, they would have a single aggregator or multiple aggregators that create blocks and post them to the data availability layer: no consensus is required from the Cosmos app side. However, these chains would still require their own peer-to-peer network with their own mempools to propagate transactions to other nodes and aggregators, which may generate fraud proofs.
Sub-question: Do we want to implement optimistic rollup support for Tendermint more generally, or just Cosmos SDK? Would it even make sense to add optimistic rollup support for Tendermint? It seems that it might not, because Tendermint itself doesn't define an execution environment, it uses ABCI to communicate with the execution environment. An optimistic rollup is an execution environment in itself, which is what we need to define. Specifically, an optimistic rollup execution environment needs to have a standardised fraud proof system.
It seems that there are three main components to think about to implement Cosmos SDK chains as optimistic rollups. Other components may be missing that are not listed here, but this is what comes to mind immediately.
At minimum this would require modifying Cosmos block output in some way to include intermediate state roots, which can be used for fraud proofs. We need to investigate if this would require any modifications to Tendermint (or ABCI client). It seems like this can be done purely on the Cosmos SDK side, by appending intermediate state roots to transactions.
Furthermore, we would need to ensure that there is only one commitment in the block header that commits to state. This may require removing other state-related commitments such as validator set etc (which we wouldn't need for a chain that doesn't have its own consensus anyway). This may require Tendermint/ABCI client modification.
Question: is ABCI compatible with the use case of requiring intermediate state roots to be added to transactions after the user has submitted them: presumably after CheckTx passes, the intermediate state root can be appended to the transaction before DeliverTx?
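A sketch of appending intermediate state roots on the SDK side, with a simple hash chain standing in for real state execution (applyTx and the pairing type are illustrative, not an actual SDK API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// txWithISR pairs a transaction with the intermediate state root (ISR)
// resulting from executing it. In the discussed design, the ISRs would be
// appended on the Cosmos SDK side after CheckTx and before DeliverTx.
type txWithISR struct {
	Tx  []byte
	ISR [32]byte
}

// applyTx stands in for state execution: here the "state root" is just a
// hash chain over the previous root and the tx, which is enough to show the
// mechanics (a real app would commit its store and return its root).
func applyTx(prev [32]byte, tx []byte) [32]byte {
	return sha256.Sum256(append(prev[:], tx...))
}

// interleaveISRs executes the txs in order and records the intermediate
// state root after each one, yielding the material needed for fraud proofs.
func interleaveISRs(start [32]byte, txs [][]byte) []txWithISR {
	out := make([]txWithISR, 0, len(txs))
	root := start
	for _, tx := range txs {
		root = applyTx(root, tx)
		out = append(out, txWithISR{Tx: tx, ISR: root})
	}
	return out
}

func main() {
	pairs := interleaveISRs([32]byte{}, [][]byte{[]byte("tx1"), []byte("tx2")})
	fmt.Println(len(pairs), pairs[0].ISR != pairs[1].ISR)
}
```

A fraud prover then only needs one tx, the ISR before it, and the ISR after it to check a single step of execution, which is the motivation for interleaving roots with transactions.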
Optimistic rollup chains do not require their own consensus, as they use the data availability layer for ordering. Thus, the ABCI client that an optimistic rollup-based Cosmos chain uses should allow aggregators to create blocks, rather than pass blocks through Tendermint BFT. There seem to be two ways to approach this:
When Tendermint or the ABCI client receives blocks, it needs to check that these blocks have been made available on an external data availability layer such as LazyLedger, using e.g. data availability proofs. This would require integrating a LazyLedger light client into the ABCI client.
Use the nmt and the rsmt2d libraries to implement: https://github.com/lazyledger/lazyledger-specs/blob/master/specs/data_structures.md#2d-reed-solomon-encoding-scheme
See also:
depends on #38 (at least going from Block.Data -> list of fixed sized, namespace prefixed shares is a requirement)
ref: #24
ref: #22
ref: #23
ref: #38
ref: celestiaorg/celestia-specs#69
For #85 (comment) and #35 (comment), we need to run an ipfs node that understands how to read and write Namespaced Merkle Tree (tree-)nodes. The latter will be achieved by an IPLD plugin for the NMT.
To communicate with the node, we can either use go-ipfs-api or try to directly get the API object from a config and environment (basically what the ipfs commands do, e.g. see DagGetCmd). The latter might have the advantage of slightly more control over how to insert the nodes into the ipld dag.
Either way, it would be ideal if the user can configure which node to use.
Archival storage nodes should pin the data they store: https://docs.ipfs.io/concepts/persistence/#persistence-permanence-and-pinning.
In the long-run, and as an optimization, these nodes could only store the orig. data square and still advertise that they have the other parts of the square (as they can be recomputed on demand).