anoma / anoma-archive

Reference implementation of the Anoma protocols in Rust.
Home Page: https://anoma.net
License: GNU General Public License v3.0
Cross-chain settlement will be essential for dealing with assets originating on other chains and interfacing between local instances of the Anoma protocol.
There are several methods, which are not mutually exclusive, but a single method will likely make the most sense for a given class of use-cases.
┆Issue is synchronized with this Asana task by Unito
The initial page is at tech-specs/src/explore/design/crypto-primitives.md
To describe:
depends on #5
We need a way to measure storage access (reads/writes, bytes difference before and after storage update).
To start with, maybe we can have a version of the storage API in which each function additionally returns its gas cost (the precise values are not important for now) and the bytes difference, if any.
This is to be used when accessing storage from transactions and validity predicates.
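A minimal sketch of what such a gas-returning storage API could look like (the per-byte gas constants, the string key type and the miss charge are placeholders, not the real values):

```rust
use std::collections::HashMap;

// Hypothetical gas costs per storage operation; the precise values
// are not important yet, only the shape of the API.
const GAS_PER_BYTE_READ: u64 = 1;
const GAS_PER_BYTE_WRITE: u64 = 2;

#[derive(Default)]
pub struct Storage {
    data: HashMap<String, Vec<u8>>,
}

impl Storage {
    /// Read a value; returns the value together with the gas charged
    /// (a minimal flat charge when the key is missing).
    pub fn read(&self, key: &str) -> (Option<Vec<u8>>, u64) {
        match self.data.get(key) {
            Some(v) => (Some(v.clone()), v.len() as u64 * GAS_PER_BYTE_READ),
            None => (None, GAS_PER_BYTE_READ),
        }
    }

    /// Write a value; returns the gas charged and the bytes difference
    /// (positive when the sub-space grew, negative when it shrank).
    pub fn write(&mut self, key: &str, value: Vec<u8>) -> (u64, i64) {
        let old_len = self.data.get(key).map(|v| v.len()).unwrap_or(0) as i64;
        let gas = value.len() as u64 * GAS_PER_BYTE_WRITE;
        let diff = value.len() as i64 - old_len;
        self.data.insert(key.to_owned(), value);
        (gas, diff)
    }
}

fn main() {
    let mut storage = Storage::default();
    let (gas, diff) = storage.write("balance/alice", vec![0u8; 8]);
    println!("write gas: {}, bytes diff: {}", gas, diff);
    let (value, gas) = storage.read("balance/alice");
    println!("read gas: {}, found: {}", gas, value.is_some());
}
```

Both transactions and validity predicates could then accumulate the returned gas into a shared counter.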
Some parts of this depend on https://github.com/heliaxdev/rd-pm/pull/37 (what data will be passed from transactions to VPs and the wasm environment APIs).
This guide is a good intro to wasm memory: https://radu-matei.com/blog/practical-guide-to-wasm-memory/#exchanging-strings-between-modules-and-runtimes
Steps:
- pass tx.data to the tx code call (ledger/vm/src/lib.rs:140)
- pass tx.data, prior and posterior account storage sub-space state and/or storage modifications for the account to the VP calls (ledger/vm/src/lib.rs:154, ledger/vm/src/lib.rs:35)
switch from the deprecated Rust ABCI implementation https://github.com/tendermint/rust-abci to https://github.com/informalsystems/tendermint-rs/tree/master/abci
The transaction types are currently defined in Rust with prost; instead, they should be defined in the protobuf file src/proto/types.proto, so that all encoded types are defined in one place.
The current Basic and Validator addresses can be removed. Instead, we should have only a single address type.
An address could be generated on chain, e.g. from a hash of some counter and the block hash, or from a nonce, the first time it's written from a transaction.
A nice address scheme could, e.g., use base58 encoding with some constant prefix.
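As an illustration of the scheme above, a sketch of address derivation — std's DefaultHasher and a hex encoding stand in for a real cryptographic hash and base58, and the "a" prefix is an arbitrary placeholder for the constant prefix:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical on-chain address derivation: hash a counter together
/// with the current block hash the first time the address is written.
/// DefaultHasher + hex are placeholders for a real hash + base58.
pub fn derive_address(counter: u64, block_hash: &[u8]) -> String {
    let mut hasher = DefaultHasher::new();
    counter.hash(&mut hasher);
    block_hash.hash(&mut hasher);
    // Constant prefix "a" followed by the hex-encoded digest.
    format!("a{:016x}", hasher.finish())
}

fn main() {
    let addr = derive_address(0, b"block-hash");
    println!("derived address: {}", addr);
}
```

Derivation is deterministic for a given (counter, block hash) pair, so the same write always yields the same address.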
depends on #16
storage read-only API (chain and block metadata such as block height)
This is the tracking issue for the gossip layer prototype. The goals and deliverables, along with any notes, will live in this repo under https://github.com/heliaxdev/rd-pm/blob/master/tech-specs/src/explore/prototypes/gossip-layer.md.
reopened from https://github.com/heliaxdev/rd-pm/issues/6
Steps:
(HashMap<IntentId, Intent>), where the IntentId is the hash of the intent

The initial page is at tech-specs/src/explore/design/upgrade-system.md
To describe:
The initial database is using a simple tree schema: https://github.com/heliaxdev/anoma-prototype/blob/6c59f0dc2c8859ce564a37e5923d1a87e7867ffa/ledger/src/bin/anoma-node/shell/storage/db.rs#L3
The h/balance/address key is temporary. Instead of a balance, accounts will have validity predicates, some generic storage sub-space, an address and a counter (pending https://github.com/heliaxdev/rd-pm/issues/25).
The keys are built from key segments (the KeySeg trait: https://github.com/heliaxdev/anoma-prototype/blob/6c59f0dc2c8859ce564a37e5923d1a87e7867ffa/ledger/src/bin/anoma-node/shell/storage/types.rs#L61).
Better key schema options should be explored. Ideally, the keys should be std::cmp::Ord, and key segments for custom types should have the same order as their raw data (e.g. Address).
If the keys can be made of only integers (e.g. by having a 1-to-1 mapping of key segments to integers of a fixed size), then the default key comparator can be used (https://github.com/facebook/rocksdb/wiki/Basic-Operations#comparators). Otherwise, a custom key comparator should be provided to the RocksDB options, to fix https://github.com/heliaxdev/anoma-prototype/blob/6c59f0dc2c8859ce564a37e5923d1a87e7867ffa/ledger/src/bin/anoma-node/shell/storage/db.rs#L212
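The integer-segment idea can be illustrated with fixed-size big-endian encoding, where byte-lexicographic order (what the default comparator uses) coincides with numeric order — unlike, e.g., decimal strings:

```rust
/// Map a key segment to fixed-size big-endian bytes. With big-endian
/// encoding, lexicographic byte order equals numeric order, so the
/// default comparator sorts keys correctly without a custom one.
pub fn encode_segment(seg: u64) -> [u8; 8] {
    seg.to_be_bytes()
}

/// Concatenate fixed-size segments into a full storage key.
pub fn encode_key(segments: &[u64]) -> Vec<u8> {
    segments.iter().flat_map(|s| encode_segment(*s)).collect()
}

fn main() {
    // 2 < 10 numerically, and the encodings sort the same way,
    // unlike the string forms where "10" < "2".
    assert!(encode_key(&[1, 2]) < encode_key(&[1, 10]));
    assert!("10" < "2");
    println!("big-endian keys preserve numeric order");
}
```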
The initial page is at tech-specs/src/explore/design/ledger/tx-execution.md
To describe:
We're currently using this library: https://github.com/heliaxdev/sparse-merkle-tree/tree/tomas/encoding. We only store the hashes (H256) of the keys and values in the tree (more details in https://github.com/heliaxdev/rd-pm/blob/master/tech-specs/src/explore/design/db.md). The fork has added borsh encoding.
TODOs:
In order to implement the incentive logic, it's necessary to modify the intent in a certain way before gossiping it to other nodes.
This part should allow us to implement such a thing:
https://docs.rs/libp2p-gossipsub/0.28.0/libp2p_gossipsub/trait.DataTransform.html
The gossipsub behaviour needs a message_id for each propagated message, so the message_id should stay the same after the modification of the intent. This allows us to fully use the gossipsub network, where each node can request a specific message by that id.
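One way to keep the message_id stable, sketched under the assumption that the mutable incentive data travels alongside the immutable intent bytes (the GossipIntent shape is hypothetical, and DefaultHasher stands in for the real hash function):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// A hypothetical gossiped intent: the original intent bytes plus
/// mutable incentive data appended by relaying nodes.
pub struct GossipIntent {
    pub intent: Vec<u8>,
    pub incentive: Vec<u8>,
}

/// Compute the message id from the immutable intent bytes only, so
/// the id stays the same even after a node modifies the incentive
/// part before re-gossiping.
pub fn message_id(msg: &GossipIntent) -> u64 {
    let mut hasher = DefaultHasher::new();
    msg.intent.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let original = GossipIntent { intent: b"intent".to_vec(), incentive: vec![] };
    let modified = GossipIntent { intent: b"intent".to_vec(), incentive: b"fee: 1".to_vec() };
    // The incentive changed, but the id did not.
    assert_eq!(message_id(&original), message_id(&modified));
    println!("id unchanged: {}", message_id(&original));
}
```

Because the id is derived from the immutable part only, any node can still request a specific message by that id after the incentive data has been rewritten along the way.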
ledger/src/lib/config.rs
using https://crates.io/crates/config (ledger/src/bin/anoma-node/shell/mod.rs:34 and src/bin/anoma-client/cli.rs)

If we want to, we can write a very flexible combinatorial auction proof-of-stake system using our account paradigm:
If implemented this way, this is very flexible because the details (e.g. unbonding period) are no longer "hard-coded", so e.g.
This may be too complex, and certainly some fixed assumptions (e.g. a minimum unbonding period) are helpful for other chains interacting with Anoma (running light clients); we don't necessarily want everything to be dynamically settled. Merits discussion.
It would be nice to have a CI!
Let's use this issue to report all the things we want the CI to check.
The initial page is at tech-specs/src/explore/design/gossip/intent.md
To describe:
The initial page is at tech-specs/src/explore/design/ledger/front-running.md
To describe:
Let's plan to discuss this next Thursday, March 4th. Beforehand, I'll write up a basic proposal, and we should all read through:
from @cwgoes
We'll almost certainly want a standardised DB interface that we can insert cache layers between, e.g.:

read : key -> maybe value
write : key -> value -> ()
delete : key -> ()

Then e.g. LRU cache layers interspersed can keep internal state (the cache hashmap) and have additional functions to flush the cache to the lower layer, clear it, etc.
For some inspiration see https://github.com/cosmos/cosmos-sdk/blob/master/store/types/store.go#L190
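A sketch of such a layered interface — the trait follows the read/write/delete signatures above, while the write-back buffering is a simplification of a real LRU cache (no eviction, just a flush to the lower layer):

```rust
use std::collections::HashMap;

/// The standardised DB interface from the note above.
pub trait Db {
    fn read(&self, key: &str) -> Option<Vec<u8>>;
    fn write(&mut self, key: &str, value: Vec<u8>);
    fn delete(&mut self, key: &str);
}

/// A trivial in-memory backing store.
#[derive(Default)]
pub struct MemDb {
    data: HashMap<String, Vec<u8>>,
}

impl Db for MemDb {
    fn read(&self, key: &str) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }
    fn write(&mut self, key: &str, value: Vec<u8>) {
        self.data.insert(key.to_owned(), value);
    }
    fn delete(&mut self, key: &str) {
        self.data.remove(key);
    }
}

/// A write-back cache layer over any lower `Db`.
pub struct CacheLayer<D: Db> {
    cache: HashMap<String, Option<Vec<u8>>>, // None = pending delete
    lower: D,
}

impl<D: Db> CacheLayer<D> {
    pub fn new(lower: D) -> Self {
        Self { cache: HashMap::new(), lower }
    }
    /// Push all buffered writes/deletes down to the lower layer.
    pub fn flush(&mut self) {
        for (key, value) in self.cache.drain() {
            match value {
                Some(v) => self.lower.write(&key, v),
                None => self.lower.delete(&key),
            }
        }
    }
}

impl<D: Db> Db for CacheLayer<D> {
    fn read(&self, key: &str) -> Option<Vec<u8>> {
        match self.cache.get(key) {
            Some(cached) => cached.clone(), // cache hit (maybe a delete)
            None => self.lower.read(key),   // fall through to lower layer
        }
    }
    fn write(&mut self, key: &str, value: Vec<u8>) {
        self.cache.insert(key.to_owned(), Some(value));
    }
    fn delete(&mut self, key: &str) {
        self.cache.insert(key.to_owned(), None);
    }
}

fn main() {
    let mut db = CacheLayer::new(MemDb::default());
    db.write("k", b"v".to_vec());
    assert_eq!(db.read("k"), Some(b"v".to_vec()));
    db.flush();
    println!("flushed to lower layer");
}
```

Since CacheLayer itself implements Db, layers compose: a CacheLayer can wrap another CacheLayer, matching the "cache layers interspersed" idea.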
The initial page is at tech-specs/src/explore/design/fractal-scaling.md
To describe:
The current implementation flushes the memtable on every put, like below.
https://github.com/heliaxdev/anoma-prototype/blob/b6086d8f67a1acbc771c009e1f9a7cbfca8f85fa/ledger/src/bin/anoma-node/shell/storage/db.rs#L108
We can remove the db.flush() and give WriteOptions, by using db.put_opt() or db.write_opt(), to sync the WAL to the disk.
https://github.com/facebook/rocksdb/wiki/Basic-Operations#synchronous-writes
https://docs.rs/rocksdb/0.15.0/rocksdb/struct.WriteOptions.html#method.set_sync
RocksDB (and LSM-tree based DBs in general) can buffer data in the memtable for efficient large writes to the disk. A flush is what writes the data in the memtable to an SSTable on disk.
The memtable lives in memory, but the DB writes the WAL to the disk at the same time.
The WAL can be used to recover data after the DB crashes.
That's why, for our requirements, we only have to wait for the WAL to be stored to disk.
https://github.com/facebook/rocksdb/wiki/RocksDB-Overview#3-high-level-architecture
as described in design spec http://localhost:3000/explore/design/ledger/tx-execution.html#tx-life-cycle
some sub-commands (e.g. anoma client ... and anoma node ...) don't work properly
For prototyping, using unwraps is fine, but eventually we should choose a standard way to handle errors, possibly using these 2 crates that make it simple:
Any other suggestions are welcome!
Not urgent, but at some point we should publish the specs, maybe via a GitHub action, to specs.anoma.network or something.
This is the tracking issue for the base ledger prototype version 2 (a follow-up to base ledger prototype version 1: https://github.com/heliaxdev/rd-pm/issues/5).
I think if we're happy with prototype version 2, we could start the next phase described in https://github.com/heliaxdev/anoma-prototype/tree/master/tech-specs/src/explore/prototypes#advancing-a-successful-prototype
Let's agree on prototype plan anoma/anoma#63 first.
Steps:
cargo audit reports some warnings. None of them seem of high importance to me. Nevertheless, leaving it here for more qualified people to triage:
Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
Loaded 263 security advisories (from /home/george/.cargo/advisory-db)
Updating crates.io index
Scanning Cargo.lock for vulnerabilities (267 crate dependencies)
Crate: dirs
Version: 1.0.5
Warning: unmaintained
Title: dirs is unmaintained, use dirs-next instead
Date: 2020-10-16
ID: RUSTSEC-2020-0053
URL: https://rustsec.org/advisories/RUSTSEC-2020-0053
Dependency tree:
dirs 1.0.5
└── term 0.5.2
Crate: net2
Version: 0.2.37
Warning: unmaintained
Title: `net2` crate has been deprecated; use `socket2` instead
Date: 2020-05-01
ID: RUSTSEC-2020-0016
URL: https://rustsec.org/advisories/RUSTSEC-2020-0016
Dependency tree:
net2 0.2.37
├── miow 0.2.2
└── mio 0.6.23
Crate: term
Version: 0.4.6
Warning: unmaintained
Title: term is looking for a new maintainer
Date: 2018-11-19
ID: RUSTSEC-2018-0015
URL: https://rustsec.org/advisories/RUSTSEC-2018-0015
Dependency tree:
term 0.4.6
Crate: term
Version: 0.5.2
Warning: unmaintained
Title: term is looking for a new maintainer
Date: 2018-11-19
ID: RUSTSEC-2018-0015
URL: https://rustsec.org/advisories/RUSTSEC-2018-0015
Dependency tree:
term 0.5.2
Crate: pin-project-lite
Version: 0.2.4
Warning: yanked
Dependency tree:
pin-project-lite 0.2.4
├── tracing 0.1.23
│ ├── tendermint-rpc 0.18.1
│ │ └── Anoma 0.1.0
│ ├── tendermint-abci 0.18.1
│ │ └── Anoma 0.1.0
│ └── hyper 0.14.4
│ └── tendermint-rpc 0.18.1
├── tokio 1.2.0
│ ├── tendermint-rpc 0.18.1
│ ├── hyper 0.14.4
│ └── Anoma 0.1.0
└── futures-util 0.3.12
├── hyper 0.14.4
├── futures-executor 0.3.12
│ └── futures 0.3.12
│ ├── tendermint-rpc 0.18.1
│ ├── tendermint 0.18.1
│ │ └── tendermint-rpc 0.18.1
│ └── Anoma 0.1.0
└── futures 0.3.12
warning: 5 allowed warnings found
A few reading sources for inspiration:
depends on #16
Make it possible to update VPs on chain via a transaction, which should be validated by the current VP version (before the update)
This is half research, half coding task.
The relevant spec page is at https://github.com/heliaxdev/rd-pm/blob/master/tech-specs/src/explore/libraries/logging.md
We currently have env_logger in place (albeit used sporadically), which is simple and works, but we'll probably want to switch to either slog or tracing.
Steps:
- make the log filtering configurable via an env variable (ANOMA_LOG), with the default level set to info
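A sketch of the env-var lookup with the info default — the ANOMA_LOG name comes from the steps above, and the returned filter string would be handed to whichever logging crate we settle on:

```rust
/// Resolve the log filter from the (hypothetical) ANOMA_LOG env
/// variable's value, falling back to "info" when it is unset.
pub fn log_filter(env_value: Option<String>) -> String {
    env_value.unwrap_or_else(|| "info".to_owned())
}

fn main() {
    // Read the actual environment at startup.
    let filter = log_filter(std::env::var("ANOMA_LOG").ok());
    println!("log filter: {}", filter);
}
```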
For the compiled wasm runtime (using wasmer), we need to be able to measure the cost of compiling the wasm code (transactions and VPs).
An example of how this is done in NEAR: https://github.com/near/nearcore/blob/a74eea626bf896733f88dc7dec8d7ecd0a5d0291/runtime/near-vm-logic/src/logic.rs#L2417
depends on anoma/anoma#16
Steps:
- ed25519 is being added in anoma/anoma#160
- add [profile.release] panic = "abort" to Cargo.toml, which tells the compiler to simply abort on panics
- handle panics for explicit panics from wasm, so that the panic! macro works
The initial page is at tech-specs/src/explore/design/gossip/incentive.md
Inspiration reading:
To describe:
Allow sanitizing the wasm code for non-deterministic behaviour (e.g. floats and NaN).
More details in https://github.com/heliaxdev/anoma-prototype/blob/master/tech-specs/src/explore/libraries/wasm.md#wasm-runtime.
Consider also running wasmparser::validate before attempting to decode the wasm code.
depends on #16
add a logging facility for the wasm common environment
In order to benefit from the n-party settlement system, we should plug certain special accounts into the state machine, namely:
The TODO marker at ledger/src/bin/anoma/cli.rs:20:
- CARGO is set (https://doc.rust-lang.org/cargo/reference/environment-variables.html#environment-variables-cargo-sets-for-3rd-party-subcommands)
- cargo run too (e.g. replacing Command::new("anomad") with Command::new("cargo run --bin anomad"))
- the "dev" feature is set (conditional compilation)

A further complication for gas metering is that transactions will start out encrypted if we use a DKG/TPKE, and validators will have to commit to an execution order prior to decryption (this is what prevents front-running). Thus there will need to be a separate (not-encrypted) "gas payer" account which can be checked prior to committing to executing the transaction (to prevent DoS).
One way to simplify this logic, and to provide more predictability for transaction execution times / guaranteed throughput for particular applications, would be to use the n-party settlement system in conjunction with validator accounts to run combinatorial auctions for future block space, where users or relayers can bid for some amount of block space (storage & compute) across time, and validators can allocate that space in advance (for incentive compatibility, so that validators actually execute the transactions, the bid amounts would be put in escrow & only half paid out if no transactions are actually included in the allocated space).
I haven't thought through any secondary consequences, so we should think about this carefully.
Related to anoma/namada#3
The initial page is at tech-specs/src/explore/design/ledger/vp.md
To describe:
The initial page is at tech-specs/src/explore/design/ledger/accounts.md
To describe:
Setup DB benchmarks as described in https://github.com/anoma/anoma/blob/master/docs/src/explore/design/ledger/storage.md#benchmarks, maybe using criterion crate.
Some similar benches:
We can use https://github.com/wasmerio/wasmer/tree/master/lib/middlewares, an example:
Alternatively, there's also https://crates.io/crates/pwasm-utils, with an example of how it's used in:
Steps:
To prevent spam and appropriately price execution costs, the base ledger's trade settlement system must charge fees proportional to the database read/write and compute costs incurred by transaction execution (all phases). Fees merely proportional to transaction execution costs, however, misalign long-term interests of system users and stakeholders, since proof-of-stake consensus security must be greater than (and thus proportional to) the amount of value transacted, since this is what could be gained by subverting intended operations (e.g. by bribing validators). For this reason, we should aim to architect a fee model which is at least in part proportional to the value of a settled trade (more similar to existing custody institutions, in a sense). That way expected long-term value accrual of the staking token will end up being proportional to trade volume (in an approximate sense) instead of execution volume, which is more likely to fund the requisite security.
Note: One could object to this argument on the basis that transaction fees will converge to trade value as block space becomes scarce - as one sees happening on Ethereum at the moment, to some extent - however we do not wish to rely on this, as it requires constraining system throughput (artificially, assuming we could do otherwise) and thus also prevents many otherwise-possible (and otherwise fee-paying) transactions from being settled.
What exactly the proportion is we can determine later - likely it will be quite small - but the tricky part from the design perspective is to measure the value of a trade in the first place. This problem certainly cannot be solved in general for a ledger which supports arbitrary state transitions and places no constraints on data semantics (i.e. value), and privacy features exacerbate the difficulty, e.g. private MASP transactions provide no information about the value transacted, and any attempt at a proportional charge by circuit change will result in data leakage if the fraction is known and the fee is sent to a public address. That said, it is much less critical to charge value-proportionally for transfers (though we should keep thinking about this), and much more critical to charge value-proportionally for trades, which may often have partially public data (e.g. price, amount, tokens) by necessity of counterparty negotiation at the intent discovery layer. If there is some roughly proportional relation between trade volume and value of custodied assets, charging proportionally for most trades should be sufficient to achieve the value-capture / security proportionality which we need (this is a hand-wavy argument that should be formalised).
To that end, keeping in mind the additional design constraints of avoiding hardcoded order semantics on the base ledger and assuming that market conditions will change rapidly, one idea I have is to enact a kind of collective negotiation between the validator set and particular contracts (with particular validity predicates) on a sort of "fee-sharing" based on the particular semantics of that contract. The basic setup is as follows:
This process can happen quite rapidly and requires no ledger upgrades, although we should keep UX in mind. Another point to consider is the particulars of how contracts ought to "fee share" and whether they need to build in some flexibility in advance with this system in mind.
This is just brainstorming, thoughts welcome.
We need a way to measure gas cost of transactions and accumulate the total gas cost of a transaction. We should build an interface for a gas counter.
Everything that happens in a transaction application should be accounted for, e.g.:
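A sketch of what the gas counter interface could look like (the limit is an arbitrary placeholder, and the example charges are illustrative):

```rust
/// Arbitrary placeholder for the per-transaction gas limit.
pub const TX_GAS_LIMIT: u64 = 1_000_000;

#[derive(Debug, PartialEq)]
pub struct OutOfGas;

/// Accumulates the gas charged by every operation in a transaction.
#[derive(Default)]
pub struct GasCounter {
    total: u64,
}

impl GasCounter {
    /// Charge `gas`, failing if the accumulated total would exceed
    /// the limit (checked_add also guards against overflow).
    pub fn add(&mut self, gas: u64) -> Result<(), OutOfGas> {
        match self.total.checked_add(gas) {
            Some(total) if total <= TX_GAS_LIMIT => {
                self.total = total;
                Ok(())
            }
            _ => Err(OutOfGas),
        }
    }

    pub fn total(&self) -> u64 {
        self.total
    }
}

fn main() {
    let mut counter = GasCounter::default();
    counter.add(100).unwrap(); // e.g. a storage read
    counter.add(2_000).unwrap(); // e.g. wasm compilation
    println!("total gas: {}", counter.total());
}
```

Storage access, wasm compilation and execution would all report their costs into the same counter, so exceeding the limit anywhere aborts the transaction.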
We want to shut down an Anoma node gracefully.
Currently, I get some errors when shutting down a node with Ctrl-C:
I[2021-03-11|18:36:48.047] captured interrupt, exiting... module=main
I[2021-03-11|18:36:48.047] Stopping Node service module=main impl=Node
I[2021-03-11|18:36:48.048] Stopping Node module=main
E[2021-03-11|18:36:48.049] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=consensus
I[2021-03-11|18:36:48.049] Stopping socketClient service module=abci-client connection=consensus impl=socketClient
I[2021-03-11|18:36:48.048] Stopping EventBus service module=events impl=EventBus
I[2021-03-11|18:36:48.049] Stopping PubSub service module=pubsub impl=PubSub
I[2021-03-11|18:36:48.049] Stopping IndexerService service module=txindex impl=IndexerService
I[2021-03-11|18:36:48.049] Stopping P2P Switch service module=p2p impl="P2P Switch"
I[2021-03-11|18:36:48.049] Stopping BlockchainReactor service module=blockchain impl=BlockchainReactor
I[2021-03-11|18:36:48.049] Stopping Consensus service module=consensus impl=ConsensusReactor
E[2021-03-11|18:36:48.049] consensus connection terminated. Did the application crash? Please restart tendermint module=proxy err="read message: EOF"
I[2021-03-11|18:36:48.049] Stopping State service module=consensus impl=ConsensusState
I[2021-03-11|18:36:48.049] Stopping TimeoutTicker service module=consensus impl=TimeoutTicker
E[2021-03-11|18:36:48.049] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=query
I[2021-03-11|18:36:48.050] Stopping socketClient service module=abci-client connection=query impl=socketClient
I[2021-03-11|18:36:48.049] Stopping baseWAL service module=consensus wal=.anoma/tendermint/data/cs.wal/wal impl=baseWAL
E[2021-03-11|18:36:48.050] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=snapshot
E[2021-03-11|18:36:48.049] Stopping abci.socketClient for error: read message: EOF module=abci-client connection=mempool
I[2021-03-11|18:36:48.050] Stopping socketClient service module=abci-client connection=snapshot impl=socketClient
I[2021-03-11|18:36:48.050] Stopping socketClient service module=abci-client connection=mempool impl=socketClient
I[2021-03-11|18:36:48.065] Stopping Group service module=consensus wal=.anoma/tendermint/data/cs.wal/wal impl=Group
I[2021-03-11|18:36:48.066] Stopping Evidence service module=evidence impl=Evidence
I[2021-03-11|18:36:48.066] Stopping StateSync service module=statesync impl=StateSync
I[2021-03-11|18:36:48.066] Stopping PEX service module=pex impl=PEX
I[2021-03-11|18:36:48.066] Stopping AddrBook service module=p2p book=.anoma/tendermint/config/addrbook.json impl=AddrBook
I[2021-03-11|18:36:48.066] Stopping Mempool service module=mempool impl=Mempool
I[2021-03-11|18:36:48.066] Saving AddrBook to file module=p2p book=.anoma/tendermint/config/addrbook.json size=0
E[2021-03-11|18:36:48.066] Stopped accept routine, as transport is closed module=p2p numPeers=0
I[2021-03-11|18:36:48.066] Closing rpc listener module=main listener="&{Listener:0xc0000b2678 sem:0xc00010aae0 closeOnce:{done:0 m:{state:0 sema:0}} done:0xc00010ab40}"
I[2021-03-11|18:36:48.066] RPC HTTP server stopped module=rpc-server err="accept tcp 127.0.0.1:26657: use of closed network connection"
A basic account can have some balance of a token (#67) and a public key to authorize transfers from its balance.
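A sketch of such an account — the PublicKey type and the authorization check are placeholders; a real implementation would verify, e.g., an ed25519 signature over the transaction data:

```rust
/// Placeholder key type; stands in for a real ed25519 public key.
#[derive(Clone, PartialEq, Debug)]
pub struct PublicKey(pub Vec<u8>);

/// A basic account: a token balance and a public key that
/// authorizes transfers out of the balance.
pub struct Account {
    pub balance: u64,
    pub public_key: PublicKey,
}

pub struct Transfer {
    pub amount: u64,
    /// Placeholder for a real cryptographic signature.
    pub signed_by: PublicKey,
}

impl Account {
    /// Validity-predicate-style check: the transfer must be authorized
    /// by the account's key and covered by its balance.
    pub fn validate_transfer(&self, transfer: &Transfer) -> bool {
        transfer.signed_by == self.public_key && transfer.amount <= self.balance
    }
}

fn main() {
    let key = PublicKey(b"alice".to_vec());
    let account = Account { balance: 100, public_key: key.clone() };
    let transfer = Transfer { amount: 40, signed_by: key };
    println!("authorized: {}", account.validate_transfer(&transfer));
}
```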
also related to anoma/namada#3
related issue #14
Similarly to #14, we can use pwasm-utils as in https://github.com/near/nearcore/blob/b973883a7fcf5a4534248c2bad89c9825bac019f/runtime/near-vm-runner/src/prepare.rs#L68 and use it to limit the stack height with some configurable variable.
steps: