
miden-base's Introduction

Miden Rollup protocol


Description and core structures for the Miden Rollup protocol.

WARNING: This project is in an alpha stage. It has not been audited and may contain bugs and security flaws. This implementation is NOT ready for production use.

Overview

Miden is a zero-knowledge rollup for high-throughput and private applications. Miden allows users to execute and prove transactions locally (i.e., on their devices) and commit only the proofs of the executed transactions to the network.

If you want to join the technical discussion or learn more about the project, please check out

Status and features

Polygon Miden is currently on release v0.4. This is an early version of the protocol and its components. We expect to keep making changes (including breaking changes) to all components.

Feature highlights

  • Private accounts. The Miden Operator tracks only commitments to account data in the public database. The users are responsible for keeping track of the state of their accounts.
  • Public accounts. With public accounts, users are able to store the entire state of their accounts on-chain, thus eliminating the need to keep track of account states locally (albeit by sacrificing privacy and at a higher cost).
  • Private notes. Like with private accounts, the Miden Operator tracks only commitments to notes in the public database. Users need to communicate note details to each other via side channels.
  • Public notes. With public notes, users are able to store all note details on-chain, thus eliminating the need to communicate note details via side channels.
  • Local transactions. Users can execute and prove transactions locally on their devices. The Miden Operator verifies the proofs and if the proofs are valid, updates the state of the rollup accordingly.
  • Standard account. Users can create accounts using a small number of standard account interfaces (e.g., basic wallet). In the future, the set of standard smart contracts will be expanded.
  • Standard notes. Users can create notes using standardized note scripts such as Pay-to-ID (P2ID) and atomic swap (SWAP). In the future, the set of standardized notes will be expanded.
  • Delegated note inclusion proofs. By delegating note inclusion proofs, users can create chains of dependent notes which are included into a block as a single batch.

Planned features

  • More storage types. In addition to simple storage slots and storage maps, the accounts will be able to store data in storage arrays.
  • Transaction recency conditions. Users will be able to specify how close to the chain tip their transactions are to be executed. This will enable things like rate limiting and oracles.
  • Network transactions. Users will be able to create notes intended for network execution. Such notes will be included into transactions executed and proven by the Miden operator.
  • Encrypted notes. With encrypted notes, users will be able to put all note details on-chain, but the data contained within the notes would be encrypted with the recipient's key.

Project structure

| Crate | Description |
| --- | --- |
| objects | Contains core components defining the Miden rollup protocol. |
| miden-lib | Contains the code of the Miden rollup kernels and standardized smart contracts. |
| miden-tx | Contains tools for creating, executing, and proving Miden rollup transactions. |
| bench-tx | Contains transaction execution and proving benchmarks. |

Make commands

We use make to automate building, testing, and other processes. In most cases, make commands are just wrappers around cargo commands with specific arguments. You can view the list of available commands in the Makefile, or just run the following command:

make

Testing

To test the crates contained in this repo, you can use Make to run the following command from our Makefile:

make test

Some of the functions in this project are computationally intensive and may take a significant amount of time to compile and run during testing. To get reliable results, we use the make test command: it runs tests in release mode, but with specific configurations that replicate the conditions of development mode and keep all debug assertions enabled.

License

This project is MIT licensed.

miden-base's People

Contributors

al-kindi-0, ashucoder9, bitwalker, bobbinth, cuiyourong, dagarcia7, diegomrsantos, dominik1999, empieicho, frisitano, fumuran, grjte, gubloon, h3lio5, hackaugusto, hsutaiyu, igamigo, johnsoncarl, kmurphypolygon, mfragaba, mountcount, nlok5923, partylikeits1983, phklive, plafer, polydez, techcrafter, tomyrd, vuittont60

miden-base's Issues

Create multi-threaded seed generator

A valid account seed must contain a certain number of trailing zeros. These trailing zeros serve as proof of work for account creation. To improve the efficiency of seed generation, we should implement a multi-threaded seed generator.

related: #33
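A minimal multi-threaded search might look like the sketch below. This is only an illustration, not the actual miden-base implementation: `DefaultHasher` stands in for the protocol's algebraic hash, the digest is a single `u64`, and the proof of work is counted in trailing zero *bits*; all names are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// Stand-in for the real account-seed digest (the protocol uses an
// algebraic hash such as RPO); here we just need some 64-bit digest.
fn digest(seed: u64) -> u64 {
    let mut h = DefaultHasher::new();
    seed.hash(&mut h);
    h.finish()
}

/// Search for a seed whose digest has at least `pow_bits` trailing zero
/// bits, splitting the search space across `n_threads` threads.
fn find_seed(pow_bits: u32, n_threads: u64) -> u64 {
    let found = Arc::new(AtomicBool::new(false));
    let handles: Vec<_> = (0..n_threads)
        .map(|t| {
            let found = Arc::clone(&found);
            thread::spawn(move || {
                // Each thread strides through a disjoint residue class.
                let mut seed = t;
                loop {
                    if found.load(Ordering::Relaxed) {
                        return None; // another thread already won
                    }
                    if digest(seed).trailing_zeros() >= pow_bits {
                        found.store(true, Ordering::Relaxed);
                        return Some(seed);
                    }
                    seed += n_threads;
                }
            })
        })
        .collect();
    handles
        .into_iter()
        .filter_map(|h| h.join().unwrap())
        .next()
        .expect("at least one thread should find a seed")
}

fn main() {
    let seed = find_seed(8, 4); // ~2^8 expected attempts in total
    assert!(digest(seed).trailing_zeros() >= 8);
}
```

The atomic flag lets losing threads stop as soon as any thread succeeds, so the wall-clock cost is roughly the single-threaded cost divided by the thread count.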

Block Producer [WIP]

This is a placeholder issue for the Miden Block Producer

The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.

The Block Producer

The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. In Miden, there is a proof for every transaction. Transaction proofs will be aggregated into batches by the Transaction Aggregator. Batches will be aggregated into blocks by the Block Producer module.

Specification

[WIP]

Where is it needed?

Block Producer

Write Miden assembly for `P2ID` and `P2IDR` scripts

We should define some set of "well-known" scripts which users could use to transfer assets between their accounts. Two of the simplest such scripts could be:

  • Pay to account ID (P2ID).
  • Pay to account ID with recall (P2IDR).

These scripts are described in detail below.

Pay to account ID

This script can be used when a note is intended to deposit all of its assets into a specific account. The pseudocode for this script looks as follows:

begin
  assert(account.get_id() == note.inputs[0])
  for a in note.assets
    account.receive_asset(a)
  end 
end

The script assumes that a note comes with a single input which contains the ID of the recipient's account.

Pay to account ID with recall

This script can be used when a note is intended to deposit all of its assets into a specific account within some time period. If this time expires, another user (presumably the sender of the note) will be able to claim all of the note's assets. The pseudocode for this script looks as follows:

begin
  if chain.get_block_height() > note.inputs[0]
    assert(account.get_id() == note.inputs[1] || account.get_id() == note.inputs[2])
  else
    assert(account.get_id() == note.inputs[1])
  end
  
  for a in note.assets
    account.receive_asset(a)
  end
end

The script assumes that a note comes with the following inputs:

  • input[0] contains the block height after which the note can be reclaimed by the sender.
  • input[1] contains the recipient's account ID.
  • input[2] contains the sender's account ID.
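To make the claim conditions concrete, here is a hedged Rust mirror of the P2IDR pseudocode above. This is purely illustrative (the real script would be written in Miden assembly); the function name and plain `u64` types are hypothetical, and the inputs follow the layout just described.

```rust
/// Who may consume a P2IDR note at a given block height?
/// inputs[0] = recall height, inputs[1] = recipient ID, inputs[2] = sender ID.
fn p2idr_can_consume(account_id: u64, block_height: u64, inputs: [u64; 3]) -> bool {
    let (recall_height, recipient, sender) = (inputs[0], inputs[1], inputs[2]);
    if block_height > recall_height {
        // After the recall height, either party may claim the assets.
        account_id == recipient || account_id == sender
    } else {
        // Before it, only the recipient may.
        account_id == recipient
    }
}

fn main() {
    // Before the recall height (4), only the recipient (ID 1) can consume.
    assert!(p2idr_can_consume(1, 3, [4, 1, 2]));
    assert!(!p2idr_can_consume(2, 3, [4, 1, 2]));
    // After it, the sender (ID 2) can reclaim the assets too.
    assert!(p2idr_can_consume(2, 5, [4, 1, 2]));
}
```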

Required procedures

These scripts require the following kernel procedures to exist:

| Procedure | Description | Context |
| --- | --- | --- |
| get account ID | Returns the ID of the account in the current transaction. | account, note |
| get note assets | Returns all assets in the current note. | note |
| get note inputs | Returns all inputs for the current note. | note |
| get block height | Returns the current block height for the transaction. | note, account |

They also assume that the recipient's and sender's accounts expose a receive_asset procedure which can be used to deposit a single asset into the account's vault.

Transaction Pool for the Block Producer [WIP]

This is a placeholder issue for the Miden Transaction Pool for the Block Producer

The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.

The Miden Transaction Pool for the Block Producer

The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. The Block Producer has an endpoint (RPC) for Miden Clients and others making RPC requests. Clients will send transactions and their proofs to the Block Producer. Transactions need to be collected and queued in the Transaction Pool before being processed.

Specification

[WIP]

Where is it needed?

Add script inputs to note object

As mentioned in #7 (reply in thread), it would be beneficial to add one more component to the Note object. This component would define a set of values which would be put on the top of the stack before note scripts start executing. Internally, this would be just a vector of field elements (similar to how we have a vector of assets).

Adding this component would affect how we compute note hash and note nullifier. In both cases, I think we should compute the hash of inputs first, and then include this hash in the note hash and nullifier computations.

Wallet Database [WIP]

The Miden Client must be able to store state data itself. Miden accounts can live either on-chain or off-chain. For on-chain accounts, the full account state is always recorded on-chain (that is, on Miden). For off-chain accounts, only a commitment to the account state (i.e., the state hash) is recorded on-chain.

So the Miden Client for the testnet must be able to store all account data itself in a database.

Implement `CompiledTransaction` object

A CompiledTransaction object is the result of compiling a set of note scripts in the context of an account. This process would be performed by a TransactionCompiler and would look as follows:

image

Compiled transaction consists of the following components:

| Component | Type | Description |
| --- | --- | --- |
| Account ID | 1 element | Identifier of the account involved in the transaction. |
| Consumed notes | `Vec<Note>` | A list of objects for all notes consumed in the transaction (if any). |
| Tx script root | `Digest` | MAST root of the tx script for the transaction (if any). |
| Tx program | `Program` | An executable program describing the transaction. |

Notice that the CompiledTransaction object does not actually contain any of the account data (except for the Account ID). Thus, we assume that it will be passed to a component which can look up all required account data before processing the transaction further.
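As a rough sketch, the components listed above might map onto a Rust struct like the one below. All types here are placeholders (the real `Felt`, `Digest`, `Note`, and `Program` come from the miden crates), so this is only an illustration of the shape of the object.

```rust
// Hypothetical stand-ins for the actual miden types.
type Felt = u64;
type Digest = [Felt; 4];

struct Note;    // placeholder for the real Note object
struct Program; // placeholder for the real MAST Program

/// Sketch of the CompiledTransaction components from the table above.
struct CompiledTransaction {
    account_id: Felt,               // 1 element
    consumed_notes: Vec<Note>,      // notes consumed in the tx (if any)
    tx_script_root: Option<Digest>, // MAST root of the tx script (if any)
    tx_program: Program,            // executable transaction program
}

fn main() {
    let tx = CompiledTransaction {
        account_id: 42,
        consumed_notes: vec![Note],
        tx_script_root: None,
        tx_program: Program,
    };
    assert_eq!(tx.account_id, 42);
    assert_eq!(tx.consumed_notes.len(), 1);
}
```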

Transaction Aggregator [WIP]

This is a placeholder issue for the Miden Transaction Aggregator

The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.

The Transaction Aggregator

The Transaction Aggregator is a module of the Miden Node and will batch transaction proofs together. In Miden, there is a proof for every transaction. Transaction proofs will be aggregated into batches by the Transaction Aggregator.

Specification

Where is it needed?

  • The Miden Node has the Transaction Aggregator as one of its three modules
  • The Block Producer requires the Transaction Aggregator
  • The Transaction Aggregator requires the Batch Kernel (which facilitates Recursive Verification, see https://github.com/0xPolygonMiden/air-script/milestone/1)

image

Workspace inheritance

As it stands, all dependency versions are specified in the Cargo.toml of the respective crate. This results in some duplication of dependencies and awkward management. Instead, it may be more convenient to specify the versions at the workspace level so that all crates in the repo can source the versions from a single location. This eases dependency management. https://doc.rust-lang.org/nightly/cargo/reference/specifying-dependencies.html#inheriting-a-dependency-from-a-workspace

block header refactor - bookkeeping / state_root + account_root

We should consider refactoring the implementation of the block header. This was initially proposed by @bobbinth here.

A couple of thoughts for the future:

  • Should we dedicate one element to bookkeeping info? For example: version, timestamp?
  • Should we split state_root into account_root and nullifier_root?

One potential argument for splitting state_root into account_root and nullifier_root is that we could add a couple more procedures to the kernel - e.g., miden::sat::tx::get_account_root and miden::sat::tx::get_nullifier_root.

I agree and think it makes sense to implement these changes.

Metadata

  • version
  • timestamp

Data format for non-fungible asset data

Currently, data for non-fungible assets is stored as a simple vector of bytes. It could be good to add some structure to this (see #2 (comment)). The key properties that we'd like to have for NFA data are:

  • The data should be serializable into bytes.
  • Hash of the data should depend on semantic properties rather than formatting. For example, whitespace or field order changes should not affect data hash.
  • The data should be reducible to a display string which reflects internal content. For example, we could iterate over internal fields and show a value for each field.
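One way to make the hash depend on semantics rather than formatting is to hash the (key, value) pairs in a canonical order. The sketch below illustrates the idea only; `DefaultHasher` stands in for the protocol hash, and the string-pair representation of NFA data is an assumption, not the actual format.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash NFA data as (key, value) pairs sorted by key, so that field
/// order (and any surrounding formatting) does not affect the digest.
fn semantic_hash(fields: &[(&str, &str)]) -> u64 {
    let mut sorted: Vec<_> = fields.to_vec();
    sorted.sort_by_key(|&(k, _)| k); // canonical field order
    let mut h = DefaultHasher::new();
    for (k, v) in sorted {
        k.hash(&mut h);
        v.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let h1 = semantic_hash(&[("name", "Token"), ("color", "red")]);
    let h2 = semantic_hash(&[("color", "red"), ("name", "Token")]);
    assert_eq!(h1, h2); // field order does not matter
    let h3 = semantic_hash(&[("color", "blue"), ("name", "Token")]);
    assert_ne!(h1, h3); // but values do
}
```

The same sorted iteration can also drive the display-string requirement: walk the fields in canonical order and print each key/value pair.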

Batch Kernel [WIP]

This is a placeholder issue for the Miden Batch Kernel.

The Miden Batch Kernel

The Batch Kernel is responsible for batching transaction proofs into single proofs using recursive verification. Basically, it proves that the transaction proofs were verified correctly using the Miden VM.

... [WIP]

Specification

WIP

Where is this needed

image

`miden-lib`: Create `memory` module

I propose we introduce a memory module which holds procedures for computing memory addresses relevant to the library. As all of the constants we use in the library are related to the memory layout, we should be able to centralize all constants in this single memory module.

`prologue`: Note inclusion proof in note db

Overview

As part of the transaction prologue we need to authenticate that the notes being consumed in the transaction exist in the note db. This involves:

  • An MMR inclusion proof to authenticate that the block in which the note was created is included in the known chain history. This should be authenticated against the most recent known block, which is one of the global inputs to the transaction / prologue.
  • A Merkle inclusion proof to authenticate that the note was produced in the block identified above from the chain history.

We currently have a TODO placeholder in the prologue here.

Below we see a diagram for how this data is structured:
image

Dependencies

This is dependent on having:

  • Mmr proof in masm
  • Merkle proof in masm

Consider reducing account ID size to 16 bytes or smaller

Currently, account IDs are 21 bytes long (technically 24 bytes, but the last 3 bytes are guaranteed to be all zeros). There is really no good reason for them to be this long, because we can prevent account ID collisions at account creation time.

Specifically, when a new account is being created, we must check that an account with such an ID does not already exist. This check is needed regardless of whether the IDs are 32, 21, or 16 bytes long. Thus, in theory, we could make account IDs pretty small. But we probably don't want to make them too small for two reasons:

  1. If account IDs are too small, then once there are enough IDs in the system it may become difficult to find a new "unoccupied" ID.
  2. When computing non-fungible asset hash, we use account IDs as one of the inputs, and if the IDs are too small, it may be possible to come up with hash collisions.

Given the above, my thinking is that we could safely reduce account ID size to 16 bytes (2 field elements). We would not be unique in this - Diem account addresses are 16 bytes long.

With grinding, we might even be able to go further: account IDs would be 16 bytes, but we could force 3 bytes to be zeros - thus, giving us ID size of 13 bytes. Though, we should probably think this through a bit more.

Creating new accounts

Our current transaction kernel assumes that the account against which a transaction is being executed already exists. Originally, I was thinking that we'd need a separate kernel to handle account creation, but now I wonder if we could use the same kernel with a few minor modifications. These modifications would be:

Stack inputs for the transaction would be changed from:

[BH, acct_id, IAH, NC, ...]

To something like:

[BH, acct_id, is_new_acct, IAH, NC, ...]

This new is_new_acct input would be set to 0 if the account already exists and to 1 if it doesn't. Based on this input, the prologue would do the following:

  • 0: verify that the account already exists (by checking against account_root).
  • 1: verify that the account doesn't yet exist (again, by checking against account_root), and:
    • verify that the user has the required seed to create an account with this ID (the seed would be provided via the advice provider).
    • enforce that certain account fields are initialized properly. For example, the nonce is set to 0 and the vault is empty (the code and the storage could be initialized to arbitrary values by the user).

Another thing we'd need to change is how we derive account ID from a seed. The reason for this is that if an account is created via a public transaction (i.e., not proven locally), a malicious operator could steal the seed and create their own account with the same ID. So, what we want to do is bind the account ID to the code and storage with which the account is initialized. So, the procedure could look like this:

  1. Compute digest = hash(code_root, storage_root, nonce).
  2. Set account_id = digest[0] and make sure it complies with various account ID rules (same as now).
  3. Enforce the correct number of trailing zeros in digest[3] based on the account type: 24 for regular accounts and 32 for faucet accounts (same as now).

Basically, the user would try different nonce values (the nonce could be 1 field element) until all the rules are satisfied. If someone else wanted to get the same account ID for a different code_root and storage_root combination, they'd need to find a partial pre-image for the digest. I believe the work required would be:

  • For regular accounts: 88 bits (64 + 24).
  • For faucet accounts: 96 bits (64 + 32).

I think the above is probably more than sufficient because the attack window is very specific and short (i.e., once an account has been created, finding a different seed which results in the same account ID is meaningless).
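The grinding procedure in steps 1-3 above can be sketched as follows. This is only an illustration under stated assumptions: `DefaultHasher` stands in for the algebraic hash over field elements, the digest is modeled as four `u64` limbs, the proof-of-work requirement is parameterized in trailing zero bits, and all function names are hypothetical.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest over (code_root, storage_root, nonce); the real
// protocol would use an algebraic hash over field elements.
fn derive_digest(code_root: u64, storage_root: u64, nonce: u64) -> [u64; 4] {
    let mut out = [0u64; 4];
    for (i, slot) in out.iter_mut().enumerate() {
        let mut h = DefaultHasher::new();
        (code_root, storage_root, nonce, i as u64).hash(&mut h);
        *slot = h.finish();
    }
    out
}

/// Grind nonces until digest[3] has the required trailing zero bits.
/// The account ID is then digest[0], bound to the code and storage roots.
fn derive_account_id(code_root: u64, storage_root: u64, pow_bits: u32) -> (u64, u64) {
    for nonce in 0u64.. {
        let d = derive_digest(code_root, storage_root, nonce);
        if d[3].trailing_zeros() >= pow_bits {
            return (d[0], nonce); // (account_id, nonce that satisfied the rules)
        }
    }
    unreachable!()
}

fn main() {
    let (id, nonce) = derive_account_id(1, 2, 8);
    let d = derive_digest(1, 2, nonce);
    assert!(d[3].trailing_zeros() >= 8);
    assert_eq!(id, d[0]);
}
```

Because the ID is a function of code_root and storage_root, an attacker who wants the same ID for different initial code or storage must find a partial pre-image, which is the work estimate given above.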

Implement note serialization

In #5 we introduced a Note object. This object can be relatively big and so we cannot reduce it to something as simple as a Word which worked for AccountId and Asset. Thus, we need to implement serialization of notes similar to what we've done for Miden assembly.

Specifically, I'm thinking we should use the same approach with Serializable/Deserializable traits. The only thing is that it is a bit weird that these traits currently live in miden-assembly. My thinking is that they should be in miden-core. So, maybe the first thing here is to move these traits into miden-core and then implement serialization here.

Tracking Issue: Miden Client [WIP]

Miden Client

Users use Miden Clients to interact with the network. The backend of any wallet used on Miden will be a Miden Client. Miden Clients consist of several components.

  • Transaction Prover (shared with the Miden Node)
  • Signature Module
  • Wallet Interface
  • Wallet Database

Prerequisites #29 (When the transaction kernel is done, we can start with the transaction prover module).

Transaction Prover

The Transaction Prover will be able to prove the correctness of a transaction. This is needed for client-side proving, our USP. See #50

Signature Module

The Client must be able to sign transactions and messages using the optimized Falcon verification. #56

Wallet Interface (UI / Smart Contract)

The Miden Client needs an interface to an account on the Miden rollup. It must be able to read on-chain data. Additionally, the Miden Client should provide an interface to a user or an API to an existing wallet like MetaMask Snaps. Should be covered in #22

Wallet Database

The Miden Client needs to be able to store all necessary account data. See #57

Miden-Client

Tracking issue: Transaction kernel implementation

Goal(s)

  • Implement the transaction kernel we'll need for the Miden rollup.

Details

Most of the transaction kernel implementation was specified in #3. Next, we need to implement a minimal working tx kernel, including the prologue script, note setup script, and epilogue scripts, as well as any other required issues that come up along the way.

The dependencies for this are roughly the following:

  • Sparse Merkle tree (0xPolygonMiden/crypto#7):
    • implemented in Rust & Miden assembly
    • requires new decorators in the VM
    • may require some changes to the advice provider
    • requires a refactor of mtree_cmw to mtree_cset with slightly different semantics
  • Merkle Mountain Range (0xPolygonMiden/crypto#51):
    • implemented in Rust & Miden Assembly
    • may require new decorators in the VM
  • Prologue, epilogue, and note setup scripts written in miden assembly
    • Finalize object structures
  • Tx program builder (takes data about account code & script code and combines them into single program that can be executed)
    • May require modifications to Miden Assembler
  • Kernel procedures (nice to haves)

Tasks

  1. kernels
    frisitano

Working group:

@hackaugusto, @frisitano, @bobbinth, @grjte,

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @hackaugusto

Workflow

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Implement note setup script

The note setup script is a program which runs right before a note's script is executed. For example, for a transaction which consumes a single note (note0), the complete transaction MAST could look as follows:

image

And for a transaction consuming two notes (note0 and note1), the complete transaction MAST could look as follows:

image

The note setup script is exactly the same for all notes, and it needs to accomplish the following tasks:

  1. Update the pointer to the note which is currently being processed. This pointer could be located in the bookkeeping memory section of the root context.
  2. Put note inputs onto the stack.
  3. We could also put the actual "unhashing" of note details here instead of the prologue. It is not clear to me yet which approach is better.

Tracking Issue: Miden Node [WIP]

This is a placeholder issue for the Miden Node

The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer. The centralized operator of the Miden Rollup will use the Miden Node.

Miden Node

In the future, the Miden Node must also be able to put epoch proofs to Ethereum. But this is not needed for the first testnet.

Implement `ProvenTransaction` object

A ProvenTransaction object is the result of executing and proving a transaction. It should contain the minimal amount of data needed to verify that a transaction was executed correctly. The object should consist of the following:

| Component | Size | Description |
| --- | --- | --- |
| Account ID | 1 element | The identifier of the account involved in the transaction. |
| Initial account hash | 4 elements | Hash of the account state at the beginning of the transaction. |
| Final account hash | 4 elements | Hash of the account state at the end of the transaction. |
| Consumed note info | 8n elements | A list of (nullifier, script_root) tuples for all notes consumed by the transaction. |
| Created note info | 6n elements | A list of (note_hash, note_meta) tuples for all notes created during the transaction. |
| Tx script root | 4 elements | MAST root of the transaction script for the transaction (if any). |
| Block reference | 4 elements | Hash of the last known block at the time the transaction was created. |
| Proof | variable | STARK proof attesting to the correct execution of the transaction program. |

A verifier would use the above information as follows:

  1. Compute a MAST root of the transaction program using note script_roots, tx_script_root, and components of transaction kernel (i.e., prologue, epilogue, note setup script etc.).
  2. Verify that the above program was executed correctly against the following stack inputs/outputs.
Inputs:  [block_ref, acct_id, init_acct_hash, input_notes_hash, ...]
Outputs: [final_acct_hash, created_notes_hash, ...]

In the above, input_notes_hash is a sequential hash of all (nullifier, script_root) tuples of consumed notes, and created_notes_hash is a sequential hash of all (note_hash, note_meta) tuples of created notes.
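The sequential hashing of consumed-note tuples can be sketched like this. `DefaultHasher` is only a stand-in for the protocol's sequential hash over field elements, and the `Digest` alias is a placeholder; each (nullifier, script_root) tuple is 4 + 4 elements, matching the "8n elements" row above.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type Digest = [u64; 4]; // placeholder for the real digest type

/// Sequentially absorb (nullifier, script_root) tuples, mirroring how
/// input_notes_hash is described above.
fn input_notes_hash(consumed: &[(Digest, Digest)]) -> u64 {
    let mut h = DefaultHasher::new();
    for (nullifier, script_root) in consumed {
        nullifier.hash(&mut h);
        script_root.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let a: Digest = [1, 0, 0, 0];
    let b: Digest = [2, 0, 0, 0];
    // The hash is deterministic, and - being sequential - order-sensitive.
    assert_eq!(input_notes_hash(&[(a, b)]), input_notes_hash(&[(a, b)]));
    assert_ne!(input_notes_hash(&[(a, b)]), input_notes_hash(&[(b, a)]));
}
```

created_notes_hash would be built the same way over (note_hash, note_meta) tuples.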

Implement block header format

We should implement the block header format in the Rust codebase and also explore where we should unhash the block data in the kernel.

See the table below and the following comment for more insight on the format.

| Field | Description |
| --- | --- |
| prev_hash | Hash of the previous block's header (32 bytes). |
| block_num | Unique sequential number of the current block (4 bytes should be enough). |
| chain_root | A commitment to an MMR of the entire chain where each block is a leaf (32 bytes). |
| state_root | A combined commitment to the Account and Nullifier databases (32 bytes). |
| note_root | A commitment to all notes created in the current block (32 bytes). |
| batch_root | A commitment to the set of transaction batches executed as a part of this block (32 bytes). |
| proof_hash | Hash of a STARK proof attesting to the correct state transition (32 bytes). |
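A Rust sketch of the fields in the table might look like the struct below. This is an assumption about shape only: `[u8; 32]` stands in for the protocol's digest type, and none of this is the actual miden-base definition.

```rust
/// Sketch of the block header fields from the table above.
struct BlockHeader {
    prev_hash: [u8; 32],  // hash of the previous block's header
    block_num: u32,       // sequential block number (4 bytes)
    chain_root: [u8; 32], // MMR commitment to the entire chain
    state_root: [u8; 32], // combined account + nullifier commitment
    note_root: [u8; 32],  // commitment to notes created in this block
    batch_root: [u8; 32], // commitment to this block's tx batches
    proof_hash: [u8; 32], // hash of the block's STARK proof
}

fn main() {
    let header = BlockHeader {
        prev_hash: [0; 32],
        block_num: 7,
        chain_root: [0; 32],
        state_root: [0; 32],
        note_root: [0; 32],
        batch_root: [0; 32],
        proof_hash: [0; 32],
    };
    assert_eq!(header.block_num, 7);
}
```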

Tracking issue: finalize transaction kernel

Goal

Implement the transaction kernel we'll need for the Miden rollup.

Details

The transaction kernel is the foundational component for executing and proving transactions. This issue is specifically for the single-account transaction kernel - i.e., where each transaction touches only one account.

The tasks below involve finishing the kernel prologue, epilogue, and note setup script and implementing kernel user procedures.

Tasks

  1. enhancement kernels
  2. kernels

Working group:

@frisitano, @hackaugusto, @bobbinth, @grjte,

Coordinator: @frisitano

Miden State Databases [WIP]

This is a placeholder issue for the Miden State Databases

The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.

The Miden State Databases will be used by the Block Producer

The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. The Block Producer keeps track of the state in the Miden Rollup. The module maintains three databases to describe the state:

  1. A database of accounts.
  2. A database of notes.
  3. A database of nullifiers for already consumed notes.

State

These databases are represented by authenticated data structures (e.g., Merkle trees), such that we can easily prove that items were added to or removed from a database, and a commitment to the database would be very small.

Specification

[WIP]

Account database

Note database

Nullifier database

Where is it needed?

Consider increasing account nonce size

Currently, the account nonce is specified to be just a single field element (64 bits). However, as described in #22 (comment), it may be desirable to let users set account nonces to random values. At the same time, we want to minimize the chance of a user accidentally using the same nonce more than once, as that may lead to some security problems. Thus, it is probably a good idea to increase the size of the nonce.

Assuming account ID is reduced to a single field element as suggested in #8, we have the account consisting of the following components:

  • Account ID - 1 field element.
  • Account vault root - 4 field elements.
  • Account storage root - 4 field elements.
  • Account code root - 4 field elements.
  • Account nonce - ?

Thus, if the size of the nonce is 3 or fewer elements, we can compute the hash of the account state in 2 permutations of the hash function.

So, to summarize, we can increase nonce size to 2 or 3 field elements without any impact on efficiency of computing account state commitment. I think 2 elements is probably enough, but open to other suggestions too.
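The permutation arithmetic behind this claim can be checked with a tiny calculation. The rate of 8 elements per permutation is an assumption inferred from the 2-permutation claim above; the function name is illustrative.

```rust
/// Elements absorbed per permutation of the hash (rate width); this
/// value of 8 is an assumption based on the 2-permutation claim above.
const RATE: usize = 8;

/// Number of hash permutations needed to absorb `n` field elements.
fn permutations(n: usize) -> usize {
    (n + RATE - 1) / RATE // ceiling division
}

fn main() {
    // ID (1) + vault root (4) + storage root (4) + code root (4) = 13 elements.
    let base = 1 + 4 + 4 + 4;
    assert_eq!(permutations(base + 2), 2); // 2-element nonce: 15 -> 2 permutations
    assert_eq!(permutations(base + 3), 2); // 3-element nonce: 16 -> 2 permutations
    assert_eq!(permutations(base + 4), 3); // 4-element nonce: 17 -> 3 permutations
}
```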

`NoteMetadata` changes (Rust + masm)

Rust

We must implement a NoteMetadata struct that includes a sender (1 element), num_assets (1 element), and a tag (1 element). We also need a method to convert it to a Word. We should update ConsumedNotesInfo and Note to use this struct.

The struct should look as follows:

struct NoteMetadata {
    sender: Felt,
    tag: Felt,
    num_assets: Felt,
}

The word representation should be:

[sender, tag, num_assets, 0]
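A hedged sketch of the struct together with the Word conversion might look as follows. `Felt` and `Word` are placeholder aliases here (the real types come from the miden crates), and `to_word` is an illustrative method name, not the actual API.

```rust
type Felt = u64;        // placeholder for the field element type
type Word = [Felt; 4];  // placeholder for the 4-element word type

struct NoteMetadata {
    sender: Felt,
    tag: Felt,
    num_assets: Felt,
}

impl NoteMetadata {
    /// Lay the metadata out as [sender, tag, num_assets, 0], matching
    /// the word representation described above.
    fn to_word(&self) -> Word {
        [self.sender, self.tag, self.num_assets, 0]
    }
}

fn main() {
    let meta = NoteMetadata { sender: 1, tag: 2, num_assets: 3 };
    assert_eq!(meta.to_word(), [1, 2, 3, 0]);
}
```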

Miden

Currently, the number of assets associated with a consumed or created note has its own slot in memory. With the proposed change above, we will be including num_assets in the note metadata. We should modify the transaction kernel to reflect this change.

Prologue

Consumed notes

  1. Introduce a slot to store note metadata (it currently doesn't exist).
  2. Remove slot that currently stores number of assets.
  3. Update logic that has a dependency on the number of assets.
  4. Fix tests.

Note Setup

From an initial assessment, no changes are required. We should confirm.

Epilogue

Memory slot for

Created notes

Slot for metadata already exists.

  1. Remove slot that currently holds number of assets.
  2. Change logic that depends on the number of assets. I suspect it's only compute_created_note_vault_hash procedure. Should confirm.

Implement kernel procedure access checks

Ability to call user-accessible kernel procedures (listed in #67) should be restricted based on the following factors:

  • Account type: some procedures should be available only to certain accounts. For example, mint and burn procedures can be called only by faucets. This check can be performed simply by looking at the first two bits of the account's ID.
  • Calling context: some procedures can be called only from the account context (i.e., not from note context). This check can be performed by using a caller instruction to get the hash of the caller and then checking if the caller is in Merkle tree committed to by the account's code_root.
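The account-type part of these checks can be sketched as below. The bit positions and the concrete encoding (top two bits, and which pattern means which type) are hypothetical; the issue only says the type can be read from the first two bits of the ID.

```rust
/// Hypothetical encoding: the top two bits of the ID select the account
/// type. The exact bit assignment here is an assumption, not from the issue.
#[derive(Debug, PartialEq)]
enum AccountType {
    Regular,
    FungibleFaucet,
    NonFungibleFaucet,
    Reserved,
}

fn account_type(id: u64) -> AccountType {
    match id >> 62 {
        0b00 => AccountType::Regular,
        0b01 => AccountType::FungibleFaucet,
        0b10 => AccountType::NonFungibleFaucet,
        _ => AccountType::Reserved,
    }
}

/// Gate a faucet-only kernel procedure such as mint/burn on the account type.
fn can_mint(id: u64) -> bool {
    matches!(
        account_type(id),
        AccountType::FungibleFaucet | AccountType::NonFungibleFaucet
    )
}

fn main() {
    assert!(can_mint(1u64 << 62)); // fungible faucet under this encoding
    assert!(!can_mint(0));         // regular account may not mint
}
```

The calling-context check is different in nature: it uses the caller instruction and a Merkle membership check against the account's code_root rather than bits of the ID.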

Consumed note asset padding

When ingesting asset data associated with a consumed note from the advice provider as part of the prologue (#14), it will be helpful if the asset data is padded to a multiple of the hasher's rate width. This would involve padding with an additional word when the number of assets is odd.
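A sketch of the padding rule, assuming a rate width of 2 words (8 field elements) and using [u64; 4] as a stand-in for Word:

```rust
// Sketch of padding consumed-note asset data to a multiple of the
// hasher rate, assumed here to be 2 words. [u64; 4] stands in for Word.

const EMPTY_WORD: [u64; 4] = [0; 4];

fn pad_assets(assets: &[[u64; 4]]) -> Vec<[u64; 4]> {
    let mut padded = assets.to_vec();
    // If the number of assets is odd, append one empty word so the
    // total is a multiple of the 2-word rate width.
    if padded.len() % 2 != 0 {
        padded.push(EMPTY_WORD);
    }
    padded
}

fn main() {
    assert_eq!(pad_assets(&[[1, 0, 0, 0]]).len(), 2);
    assert_eq!(pad_assets(&[[1, 0, 0, 0], [2, 0, 0, 0]]).len(), 2);
    println!("ok");
}
```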

Encoding additional info in account IDs

Currently, account IDs encode information that helps us identify whether an account is a regular account, a fungible asset faucet, or a non-fungible asset faucet. But there is additional information about an account which might be useful to encode as well. Specifically, I'm thinking about the following properties:

  • Whether account state is stored on-chain or off-chain (this is similar to how we do it for notes where if the first bit is 0 then the note is assumed to be stored off chain, and if it is 1, then it is assumed to be stored on chain).
  • Whether code for the account can be updated (i.e., immutable vs. mutable code). My initial thinking is that this should be allowed only for regular accounts - i.e., asset faucets always have immutable code.

Refactor note data memory layout

We should consider refactoring both the consumed and created note data layouts so that the data is structured in a way that aligns with kernel access/usage patterns. See the comments below:

comment link

I wonder if we should arrange this a bit differently so that metadata and hash are next to each other. Specifically, it could go like this:

const.CONSUMED_NOTE_METADATA_OFFSET=0
const.CONSUMED_NOTE_HASH_OFFSET=1
const.CONSUMED_NOTE_CORE_DATA_OFFSET=2
const.CONSUMED_NOTE_SERIAL_NUM_OFFSET=2
const.CONSUMED_NOTE_SCRIPT_ROOT_OFFSET=3
const.CONSUMED_NOTE_INPUTS_HASH_OFFSET=4
const.CONSUMED_NOTE_VAULT_ROOT_OFFSET=5
const.CONSUMED_NOTE_ASSETS_OFFSET=6

It seems a bit cleaner and may be useful in the future for computing parent node of note metadata and hash.

comment link

To keep consistency with the previous comment, maybe the order should be:

const.CREATED_NOTE_METADATA_OFFSET=0
const.CREATED_NOTE_HASH_OFFSET=1
const.CREATED_NOTE_RECIPIENT_OFFSET=2
const.CREATED_NOTE_VAULT_HASH_OFFSET=3
const.CREATED_NOTE_ASSETS_OFFSET=4

Something else to consider is the way in which we compute the commitment for consumed notes. This is computed as a sequential hash over all (nullifier, script_root) tuples. I wonder if we could modify the layout / hashing patterns such that nullifier and script root are stored next to each other. This may allow us to use the mem_stream operation when computing the consumed notes commitment.

const.CONSUMED_NOTE_METADATA_OFFSET=0
const.CONSUMED_NOTE_HASH_OFFSET=1
const.CONSUMED_NOTE_NULLIFIER_OFFSET=2
const.CONSUMED_NOTE_CORE_DATA_OFFSET=2
const.CONSUMED_NOTE_SCRIPT_ROOT_OFFSET=3
const.CONSUMED_NOTE_SERIAL_NUM_OFFSET=4
const.CONSUMED_NOTE_INPUTS_HASH_OFFSET=5
const.CONSUMED_NOTE_VAULT_ROOT_OFFSET=6
const.CONSUMED_NOTE_ASSETS_OFFSET=7

Block Kernel [WIP]

This is a placeholder issue for the Miden Block Kernel.

Specification

WIP

Where is this needed

  • The Miden Block Kernel is used by the Miden Block Producer to produce blocks
  • The Miden Block Kernel requires the Miden Batch Kernel

image

`miden-lib`: cache expensive procedure results

There are certain properties that are relatively expensive to compute, e.g., account::get_current_hash and tx::get_output_notes_hash, as they require a large amount of hashing to be performed. We should implement a caching mechanism that allows us to recompute the value only when it has changed. A link to the original suggestion is here.

RPC endpoint for the Block Producer [WIP]

This is a placeholder issue for the Miden RPC endpoint

The Miden Node will orchestrate three modules - The Transaction Prover, the Transaction Aggregator, and the Block Producer.

The Miden RPC endpoint for the Block Producer

The Block Producer is a module of the Miden Node and will produce the Miden Rollup blocks. The Block Producer exposes an RPC endpoint for Miden Clients and other callers. We need to define this endpoint.

Specification

[WIP]

Where is it needed?

Update `process_consumed_notes_data` to use `advice_map`

We have recently introduced support for the advice_map in the advice provider. As such, we should update the miden::sat::prologue::process_consumed_notes_data procedure to leverage this; as it currently stands, it uses the advice_stack.

Stablecoins on Miden using Asset Accounts

Summary

On Miden we need to provide stablecoins. We cannot deploy ERC20 contracts on Miden, but we can mimic most of their features in Miden Token Accounts. The main open problem is how to handle the case where an address has already bridged stablecoins to Miden and is then blacklisted on Ethereum.


Miden needs stablecoins for DeFi

DeFi is one of the most widespread use case categories on blockchains, especially on Ethereum, and stablecoins drive DeFi. If Miden wants to become a relevant smart contract blockchain, it should support stablecoins.

In theory, Miden enables all existing stablecoin projects to launch their stablecoin on the platform. The projects have already gained users' trust and have ways to deal with regulators. But it is also possible for Polygon to launch its own stablecoin on Miden.

Existing stablecoins use ERC-20 (or similar) token contracts

The most widely used stablecoins are smart contracts following the ERC-20 token contract standard (or BEP-20 on Binance); see USDC, USDT, DAI, WETH, BUSD. Those token contracts can be ported to other EVM-like blockchains, but not as easily to systems like Miden.

Stablecoins differ in how they maintain a stable value. Fiat- or collateral-backed stablecoins like USDT or USDC might be easier to rebuild on Miden than stablecoins with more complex stability mechanisms, like DAI.

In Miden, there are no token contracts but token accounts

In Miden, there are Asset accounts that issue assets; see 0xPolygonMiden/miden-vm#339.

EXAMPLE: Ways to rebuild (fiat-backed, centralized stablecoins) USDC on Miden

Let's collect some ideas on how to provide USDC on Miden to Miden users.

There are other stablecoins and projects. I picked USDC as an example because it is the most used stablecoin by far - https://dune.com/hagaetc/stablecoins.

Some background on USDC:

  • USD Coin (USDC) is a cryptocurrency designed to maintain a constant value of $1 USD.
  • Each USDC is redeemable for one dollar and is backed by one dollar or a dollar-denominated asset with the equivalent value held in accounts at regulated U.S. financial institutions.
  • USDC first launched on the Ethereum blockchain as an ERC-20 token, but has since expanded to other blockchains, including Solana, Stellar, and Algorand.
  • USD Coin is managed by a consortium called Centre, which was founded by Circle and Coinbase.

Some ideas on how to rebuild the features

There are two ways for USDC to be deployed on Miden:

Mint and release

Other rollups, optimistic or zk, mint tokens on the rollup when those tokens are deposited on the Ethereum side of the token bridge. Those tokens get burnt whenever a user wants to bridge the tokens back.

In theory, there can be a USDC Asset account issuing as many USDC as are locked in the rollup token bridge on Ethereum. On Asset accounts, one can set and change the admin, who can have the ability to mint more. Querying the total supply should also be easy.

Native USDC

Circle wants to release native implementations of their tokens also on Rollups. The difference might be (it is not clear yet) that Circle has control of the token contract and can blacklist users and issue tokens.

Let's try to continue with the easier Mint and release approach. Easier because we don't need to align with Circle on how to implement that token.

Necessary features for USDC:

For reference, the USDC smart contract (https://etherscan.io/token/0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48#code)

  • Upgradeability - allows changing the implementation address to which the proxy delegates
  • Set / Change Admin - Contract combines an upgradeability proxy with an authorization mechanism for administrative tasks.
  • isMinter - checks if the address can issue new tokens
  • totalSupply - returns total supply on that chain
  • balanceOf - returns the balance of an address
  • isBlacklisted - lists addresses that cannot move tokens anymore; there is also a blacklister who can define addresses to be blacklisted

There might be more necessary features. From the listed features, the most problematic features seem to be to get the balance of some other user and to blacklist someone. Assets are stored in the accounts directly - there is no global list. And in Miden, it is possible to hold account data off-chain.

Getting the balance of a user balanceOf

  1. We could require that users of USDC cannot store the account information off-chain. Not a very elegant solution; e.g., it would make using USDC unnecessarily expensive compared to other assets.
  2. Users could get queried directly, and they could provide a proof that their USDC balance is above a certain amount. For most use cases, where smart contracts or other users need to check the balance of a certain account, that should work.
  3. ...

Blacklisting a user isBlacklisted

Since USDC is a centralized stablecoin - partly regulated under the SEC - there must be the ability to prevent users from using USDC (e.g. money laundering).

  1. All USDC payments could always be routed through the USDC Asset account, which automatically forwards the asset to the specified receiver. That way, the USDC Asset Account could still enforce a blacklist, but users on Miden would sacrifice some features to be able to use USDC.
  2. In theory, it might be enough to rely on the blacklisting feature of the USDC contract on Ethereum. There are two possible cases of blacklisting:
    2.1) An address is already blacklisted and tries to bridge to Miden: The bridging transaction on Ethereum will fail.
    2.2) An address bridges to Miden and gets blacklisted on Ethereum afterwards: The user can use the USDC tokens issued on Miden and potentially swap those. The user cannot get USDC out of Miden because that transaction would happen on Ethereum. However, the tokens can be swapped against other tokens on which the user is not blacklisted. It is unclear how other rollups / sidechains handle that case.
  3. ...

Note / Asset / Input Limits

Currently, we set the maximum number of assets in the note at 1000. This number is somewhat arbitrary, and I'm thinking that a different number would work better.

  • First, to encode the number of assets we'd need 10 bits, and thus most likely 2 bytes, where most of the second byte is "wasted".
  • Second, I think 1000 distinct assets is probably too many. For one, it would take 32KB to encode.

My thinking is that at the very least we should reduce the maximum number of assets to 256. This way, the number can be encoded using exactly 1 byte. We could go even further and set the maximum at 16 - but I wonder if that's too restrictive.

For note script inputs (see #9), we should probably set a reasonable maximum as well. I am thinking 16 would probably be a good number as it doesn't require messing around with the overflow table and could make recursive proof verification simpler.

Tracking issue: Transaction kernel specs

Goal(s)

  • Define the specification for the transaction kernel we'll need for the Miden rollup.

Details

There have been a lot of discussions, but we need to bring all this info into one place and hammer out the details so we can move forward with the rollup implementation.

Discussion references

Tasks


Working group:

@bobbinth, @grjte, @vlopes, @hackaugusto

Workflow
  • Discussion should happen here or in the related sub-issues.
  • PRs should only be merged by the coordinator, to ensure everyone is able to review.
  • Aim to complete reviews within 24 hours.
  • When a related sub-issue is opened:
    • add it to the list of sub-issues in this tracking issue
  • When opening a related PR:
    • request review from everyone in this working group
  • When a sub-issue is completed:
    • close the related issue with a comment that links to the PR where the work was completed

Coordinator: @bobbinth

Workflow

The working group coordinator ensures scope & progress tracking are transparent and accurate. They will:

  • Merge approved PRs after all working group members have completed their reviews.
    • add the PR # to the relevant section of the current tracking PR.
    • close any completed sub-issue(s) with a comment that links to the PR where the work was completed
  • Monitor workflow items and complete anything that slips through the cracks.
  • Monitor scope to see if anything is untracked or unclear. Create missing sub-issues or initiate discussion as required.
  • Monitor progress to see if there's anything which isn't moving forward. Initiate discussion as required.
  • Identify PRs with especially significant changes and add @grjte and @bobbinth for review.

Transaction Prover [WIP]

This is a placeholder issue for the Miden Transaction Prover. The Miden Client (user facing) and the Miden Node will use the Transaction Prover to prove the correctness of a transaction execution (using the tx kernel).

Specification

WIP

Where is it needed?

  • The Transaction Prover is a module of the Miden Client (Clients and Nodes can prove transactions)
  • The Transaction Prover is a module of the Miden Node (Clients and Nodes can prove transactions)
  • The Block Producer - another module of the Miden Node - requires the Transaction Prover
  • The Transaction Prover requires the Transaction Kernel

image

Implement basic wallet interface

This issue describes the basic wallet interface which we should implement for the testnet. It could evolve into a reference wallet implementation, though, at this stage it is rather simplistic.

The interface defines 4 methods:

  1. receive_asset
  2. send_asset
  3. auth_tx
  4. set_pub_key

The first two of the above methods should probably form an interface of their own, and we should recommend that most accounts implement them.

The goal is to provide a wallet with the following capabilities:

  • The wallet is controlled by a single key. The signature scheme is assumed to be Falcon.
    • However, sending assets to the wallet does not require knowing which signature scheme is used by the recipient.
  • The user can send, receive, and exchange assets stored in the wallet with other users. All operations (including receiving assets) must be authenticated by the account owner.
  • The user can update the public key associated with the account as long as they are still in possession of the current private key.

Not supported in this implementation:

  • Multi-sig wallets.
  • Ability to create notes with multiple assets.

The implementation could probably go into Miden lib, maybe under the miden::wallets::simple namespace - but there could be other options as well.

Interface method description

Below, we provide high-level details about each of the interface methods.

receive_asset method

The purpose of this method is to add a single asset to an account's vault. Pseudo-code for this method could look like so:

receive_asset(asset)
    self.add_asset(asset)
end

In the above, add_asset is a kernel procedure miden::sat::account::add_asset described in #3 (comment).

Note: this method does not increment account nonce. The nonce will be incremented in auth_tx method described below. Thus, receiving assets requires authentication.

send_asset method

The purpose of this method is to create a note which sends a single asset to the specified recipient. Pseudo-code for this method could look like so:

send_asset(asset, recipient)
    self.remove_asset(asset)
    tx.create_note(recipient, asset)
end

In the above, remove_asset is a kernel procedure miden::sat::account::remove_asset and create_note is a kernel procedure miden::sat::tx::create_note, both described in #3 (comment).

recipient is a partial hash of the created note computed outside the VM as hash(hash(hash(serial_num), script_hash), input_hash). This allows computing note hash as hash(recipient, vault_hash) where the vault_hash can be computed inside the VM based on the specified asset.
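A sketch of computing recipient outside the VM per the formula above, with a toy 2-to-1 compression function standing in for the real hash (function names are illustrative):

```rust
// Sketch of recipient = hash(hash(hash(serial_num), script_hash), input_hash)
// and note_hash = hash(recipient, vault_hash). The `merge` function is a toy
// stand-in for the real 2-to-1 hash used by the kernel.

type Digest = u64;

fn merge(a: Digest, b: Digest) -> Digest {
    // Toy compression function; NOT cryptographic.
    a.wrapping_mul(31).wrapping_add(b).wrapping_mul(0x9E3779B97F4A7C15)
}

fn compute_recipient(serial_num: Digest, script_hash: Digest, input_hash: Digest) -> Digest {
    let serial_num_hash = merge(serial_num, 0); // hash(serial_num)
    merge(merge(serial_num_hash, script_hash), input_hash)
}

fn compute_note_hash(recipient: Digest, vault_hash: Digest) -> Digest {
    // The VM only needs to compute vault_hash from the assets it sees;
    // recipient can be supplied as an input.
    merge(recipient, vault_hash)
}

fn main() {
    let recipient = compute_recipient(1, 2, 3);
    // Deterministic: same note data always yields the same recipient.
    assert_eq!(recipient, compute_recipient(1, 2, 3));
    let _note_hash = compute_note_hash(recipient, 4);
    println!("ok");
}
```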

Note: this method also does not increment account nonce. The nonce will be incremented in auth_tx method described below. Thus, sending assets requires authentication.

auth_tx method

The purpose of this method is to authenticate a transaction. For the purposes of this method we make the following assumptions:

  • Public key of the account is stored in account storage at index 0.
  • To authenticate a transaction we sign hash(account_id || account_nonce || input_note_hash || output_note_hash) using Falcon signature scheme.

Pseudo-code for this method could look like so:

auth_tx()
    # compute the message to sign
    let account_id = self.get_id()
    let account_nonce = self.get_nonce()
    let input_notes_hash = tx.get_input_notes_hash()
    let output_notes_hash = tx.get_output_notes_hash()
    let m = hash(account_id, account_nonce, input_notes_hash, output_notes_hash)

    # get public key from account storage and verify signature
    let pub_key = self.get_item(0)
    falcon::verify_sig(pub_key, m)

    # increment account nonce
    self.increment_nonce()
end

It is assumed that the signature for falcon::verify_sig procedure will be provided non-deterministically via the advice provider. Thus, the above procedure can succeed only if the prover has a valid Falcon signature over hash(account_id || account_nonce || input_note_hash || output_note_hash) for the public key stored in the account.
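A sketch of assembling the message to be signed, using the standard library's DefaultHasher purely as a stand-in for the protocol's hash function; auth_message is a hypothetical client-side helper:

```rust
// Sketch of building the auth_tx message:
// hash(account_id || account_nonce || input_notes_hash || output_notes_hash).
// DefaultHasher is a stand-in; the real protocol uses its own hash function.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn auth_message(
    account_id: u64,
    account_nonce: u64,
    input_notes_hash: [u64; 4],
    output_notes_hash: [u64; 4],
) -> u64 {
    let mut h = DefaultHasher::new();
    account_id.hash(&mut h);
    // Including the nonce makes each signature usable only once.
    account_nonce.hash(&mut h);
    input_notes_hash.hash(&mut h);
    output_notes_hash.hash(&mut h);
    h.finish()
}

fn main() {
    let m = auth_message(1, 0, [0; 4], [0; 4]);
    // Deterministic: the signer and the kernel derive the same message.
    assert_eq!(m, auth_message(1, 0, [0; 4], [0; 4]));
    println!("ok");
}
```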

All procedures invoked as a part of this method, except for falcon::verify_sig have equivalent kernel procedures described in #3 (comment). We assume that falcon::verify_sig is a part of Miden standard library.

Open question: should the signed message be different? For example, maybe we should include hash of the entire account state (initial and final) into the message hash as well?

set_pub_key method

The purpose of this method is to rotate an account's public key (i.e., replace the current key with a new value). For the purposes of this method we make the following assumptions:

  • Public key of the account is stored in account storage at index 0.
  • To authenticate the update we sign hash(account_id || account_nonce || old_key || new_key) using Falcon signature scheme.

Pseudo-code for this method could look like so:

set_pub_key(new_key)
    # compute message to sign
    let account_id = self.get_id()
    let account_nonce = self.get_nonce()
    let old_key = self.get_item(0)
    let m = hash(account_id, account_nonce, old_key, new_key)

    # verify signature
    falcon::verify_sig(old_key, m)

    # update the key at storage location 0 to a new value
    self.set_item(0, new_key)

    # increment account nonce
    self.increment_nonce()
end

It is assumed that the signature for falcon::verify_sig procedure will be provided non-deterministically via the advice provider. Thus, the above procedure can succeed only if the prover has a valid Falcon signature over hash(account_id || account_nonce || old_key || new_key) for the public key stored in the account.

All procedures invoked as a part of this method, except for falcon::verify_sig have equivalent kernel procedures described in #3 (comment). We assume that falcon::verify_sig is a part of Miden standard library.

Usage examples

Examples of using the above interface are described below.

Receiving funds

To receive funds into an account we'd need a note which invokes receive_asset method. Script for this note could look something like this (this is actually identical to P2ID script):

note_script()
  let target_account_id = self.get_input(0)
  assert(account.get_id() == target_account_id)
  for asset in self.get_assets()
    account.receive_asset(asset)
  end
end

The above script assumes that the recipient account ID is specified via note inputs.

In addition to the note, transaction consuming it would need to have a tx script which invokes auth_tx method like so:

tx_script()
  account.auth_tx()
end

To execute this transaction, the user will need to provide a signature over hash(account_id || account_nonce || input_note_hash || output_note_hash) against the public key stored in the account.

Sending funds

To send funds from an account we'd need to create a transaction which invokes send_asset method as a part of its tx script. Tx script for such a transaction could look like so:

tx_script()
  account.send_asset(<asset1>, <recipient1>)
  account.send_asset(<asset2>, <recipient2>)
  account.auth_tx()
end

To execute this transaction, the user will need to provide a signature over hash(account_id || account_nonce || input_note_hash || output_note_hash) against the public key stored in the account.

Swapping assets

We can also combine receive_asset and send_asset methods to execute an atomic swap. A script for a note involved in the swap could look like so:

note_script()
  let target_account_id = self.get_input(0)
  assert(account.get_id() == target_account_id)

  account.receive_asset(<asset1>)
  account.send_asset(<asset2>, <recipient>)
end

In the above case, asset1, asset2, and recipient are hardcoded into the note script. Anyone consuming this note will add asset1 to their account and will have to create a note carrying asset2 addressed to the specified recipient.

To consume this note in a transaction, the transaction will also need to include a tx script which looks something like this:

tx_script()
  account.auth_tx()
end

To execute this transaction, the user will need to provide a signature over hash(account_id || account_nonce || input_note_hash || output_note_hash) against the public key stored in the account.

Updating public key

To update public key of an account, we'd need to create a transaction which invokes set_pub_key method as a part of its tx script. Tx script for such a transaction could look like so:

tx_script()
  account.set_pub_key(<new_key>)
end

To execute this transaction, the user will need to provide a signature over hash(account_id || account_nonce || old_key || new_key) against the public key stored in the account prior to the update.

Implement transaction prologue script

Transaction prologue is a program which is executed at the beginning of a transaction (before note scripts and tx script are executed). The prologue needs to accomplish the following tasks:

  1. "Unhash" inputs and lay them out in root context's memory.
  2. Build a single vault ("tx vault") containing assets of all inputs (input notes and initial account state).
  3. Verify that all input notes are present in the Note DB.

Laying out inputs in memory

Before a transaction is executed, the stack is initialized with all inputs required to execute a transaction. I'm thinking these inputs could be arranged like so (from the top of the stack):

  1. Global inputs
    a. Last block number (1 element) - a unique sequential number of the last known block.
    b. Note DB commitment (4 elements) - commitment to the note database from the last known block.
  2. Account ID (either 2 or 3 elements) - ID of the account for this transaction.
  3. Account commitment (4 elements) - hash of the initial account state.
  4. Nullifier commitment (4 elements) - sequential hash of nullifiers of all notes consumed in this transaction.

The shape of global inputs still requires more thought - so, I'll skip it for now - but how the rest works is fairly clear. Specifically, we need to read the data for account and notes from the advice provider, write it to memory, hash it, and verify that the resulting hash matches the commitments provided via the stack.
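The stack inputs above could be grouped into structs along these lines (u64 stands in for Felt, [u64; 4] for Word; all names are illustrative):

```rust
// Sketch of the transaction stack inputs described above. Field names
// and widths are illustrative; the global-inputs shape is still open.

#[allow(dead_code)]
struct GlobalInputs {
    last_block_num: u64,          // 1 element
    note_db_commitment: [u64; 4], // 4 elements
}

#[allow(dead_code)]
struct TransactionInputs {
    global: GlobalInputs,
    account_id: [u64; 3],           // 2-3 elements; unused slots are zero
    account_commitment: [u64; 4],   // hash of the initial account state
    nullifier_commitment: [u64; 4], // sequential hash of consumed-note nullifiers
}

fn main() {
    let inputs = TransactionInputs {
        global: GlobalInputs { last_block_num: 5, note_db_commitment: [0; 4] },
        account_id: [1, 2, 0],
        account_commitment: [0; 4],
        nullifier_commitment: [0; 4],
    };
    assert_eq!(inputs.global.last_block_num, 5);
    println!("ok");
}
```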

Overall, the layout of root context's memory could look as follows:

image

Bookkeeping section

The bookkeeping section is needed to keep track of variables which are used internally by the transaction kernel. This section could look as follows:

Memory address Variable Description
$0$ tx_vault_root Root of the vault containing all assets in the transaction.
$1$ num_executed_notes Number of notes executed so far during transaction execution.
$2$ num_created_notes Number of notes created so far during transaction execution.

There will probably be other variables which we need to keep track of, but I'll leave them for the future.

Global inputs section

As mentioned above, I'm skipping this for now.

Account data section

This section will contain account details. Assuming $a$ is the memory offset of the account data section, these can be laid out as follows:

Memory address Variable Description
$a$ acct_hash Hash of the account's initial state.
$a + 1$ acct_id ID of the account. Only the first 2 - 3 elements of the word are relevant.
$a + 2$ acct_code_root Root of the account's code Merkle tree.
$a + 3$ acct_store_root Root of the account's storage Sparse Merkle tree.
$a + 4$ acct_vault_root Root of the account's asset vault.
$a + 5$ acct_nonce Account's nonce. Only the first element of the word is relevant.

Consumed notes data section

This section will contain details of all notes to be consumed. The layout of this section could look as follows:

image

Assuming $c$ is the memory offset of the consumed notes section, the meaning of the above is as follows:

Memory address Variable Description
$c$ num_notes Number of notes to be consumed in this transaction.
$c + 1$ nullifiers A list of nullifiers for all notes to be consumed in this transaction.
$c + 1024$ notes A list of all notes to be consumed in this transaction.

Here, the nullifier for note0 is at memory address $c+1$, the nullifier for note1 at $c + 2$, etc. The assumption is that a single transaction can consume at most $1023$ notes. The choice of this number is somewhat arbitrary.

At address $c + 1024$, the actual note detail section starts. In this section, we lay out the details of individual notes one after another in $1024$-address intervals. That is, details of note0 start at address $c + 1024$, details of note1 start at address $c + 2048$, etc.

Assuming $n$ is the memory offset of the $n$-th note, the layout of note details could look as follows:

Memory address Variable Description
$n$ note_hash Hash of the note.
$n + 1$ serial_num Serial number of this note.
$n + 2$ script_hash MAST root of this note's script.
$n + 3$ input_hash Sequential hash of this note's inputs.
$n + 4$ vault_hash Sequential hash of this note's assets.
$n + 5$ num_assets Number of assets contained in this note's vault.
$n + 6$ assets A list of all note assets laid out one after another.

Here, each asset occupies 1 word, and thus the first asset in the note will be at address $n + 6$, the second asset will be at address $n + 7$ etc.
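The address arithmetic implied by this layout can be sketched as follows (base offset c, 1024-address note intervals; function names are illustrative):

```rust
// Sketch of address arithmetic for the consumed-notes section, matching
// the layout above: nullifiers at c+1.., note details at c + 1024*(i+1),
// assets at note offset + 6.

const MAX_NOTES: u64 = 1023;

fn nullifier_addr(c: u64, i: u64) -> u64 {
    assert!(i < MAX_NOTES);
    c + 1 + i
}

fn note_addr(c: u64, i: u64) -> u64 {
    assert!(i < MAX_NOTES);
    c + 1024 * (i + 1)
}

fn asset_addr(c: u64, i: u64, j: u64) -> u64 {
    // Each asset occupies one word starting at note offset n + 6.
    note_addr(c, i) + 6 + j
}

fn main() {
    assert_eq!(nullifier_addr(0, 0), 1);
    assert_eq!(note_addr(0, 0), 1024);
    assert_eq!(note_addr(0, 1), 2048);
    assert_eq!(asset_addr(0, 0, 0), 1030);
    println!("ok");
}
```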

We do not "unhash" note inputs here because they are needed only when we start executing a given note - so we can unhash them then.

Created notes data section

This section will contain data of notes created during execution of a transaction. It is not affected by transaction prologue - so, I'll skip it for now.

Building tx vault

To build a unified transaction vault we can do the following:

  1. Make a copy of the account's vault.
  2. As we unhash note assets, add each asset to the copy of the account's vault.

To implement this, we need a compact Sparse Merkle tree implementation in Miden assembly.

Verify that notes are in Notes DB

One of the inputs into the transaction kernel will be a commitment to the Notes DB. To verify that notes are in the Notes DB, we'd need to do the following:

  • As we lay out note details in memory, we need to compute each note's hash.
  • Then, we need to prove that a note with this hash exists in the Notes DB.

To do this, we need a Merkle Mountain Range implementation in Miden assembly, and we also need to better define the shape of global inputs.

Implement asset-preservation logic in transaction kernel

We need to make sure that the set of input assets into a transaction is the same as the set of output assets. This can be achieved as follows:

  • During prologue, we can make a copy of the account vault and add assets of all input notes to it (we can call it input_vault).
  • As the transaction executes, account vault gets updated and new notes get created.
  • During the epilogue, we make a copy of the account vault and add assets of all output notes to it (we can call it output_vault).
  • Finally, we compare the roots of both vaults to make sure they are the same.
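The check above can be sketched with a multiset standing in for the SMT-backed vault, and root comparison reduced to multiset equality:

```rust
// Sketch of the asset-preservation check. A BTreeMap of balances stands
// in for the sparse-Merkle-tree vault; comparing vault roots degenerates
// here to comparing the balance maps directly.

use std::collections::BTreeMap;

// (faucet_id, amount) -- illustrative fungible-asset encoding
type Asset = (u64, u64);

fn vault_of(account_assets: &[Asset], note_assets: &[Asset]) -> BTreeMap<u64, u64> {
    let mut vault = BTreeMap::new();
    for &(id, amt) in account_assets.iter().chain(note_assets) {
        *vault.entry(id).or_insert(0) += amt;
    }
    vault
}

fn main() {
    // Input side: initial account vault + input note assets.
    let input_vault = vault_of(&[(1, 100)], &[(2, 50)]);
    // Output side: final account vault + created note assets.
    let output_vault = vault_of(&[(1, 60), (2, 50)], &[(1, 40)]);
    // The transaction preserves assets iff both sides match.
    assert_eq!(input_vault, output_vault);
    println!("ok");
}
```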

One open question is how to handle the mint and burn procedures of faucet accounts. One option is to keep track of minted/burned assets and process them accordingly during the epilogue. Another option is to modify the input_vault with each call to the mint/burn procedures - this way, no additional work is needed during the epilogue.

`miden-lib`: Asset Vault

Overview

An asset vault is an object used to manage assets. It is backed by a Sparse Merkle tree, which allows for authentication of assets via inclusion proofs. We need to implement the asset vault in Miden assembly. It should provide the following standard functionality:

Procedure
add_asset
remove_asset
get_balance
has_nfasset
get_commitment
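The functionality above could be expressed as a Rust trait along these lines, with a toy HashMap-backed implementation for fungible assets; u64 stands in for Felt and all signatures are illustrative, not final:

```rust
// Sketch of the asset-vault interface. The toy commitment below is a
// placeholder for the real sparse-Merkle-tree root.

use std::collections::HashMap;

trait AssetVault {
    /// Add a fungible asset to the vault (amounts are summed).
    fn add_asset(&mut self, faucet_id: u64, amount: u64);
    /// Remove a fungible asset; fails if the balance is insufficient.
    fn remove_asset(&mut self, faucet_id: u64, amount: u64) -> Result<(), String>;
    /// Balance of a fungible asset issued by the given faucet.
    fn get_balance(&self, faucet_id: u64) -> u64;
    /// Commitment to the vault (the real vault returns its SMT root).
    fn get_commitment(&self) -> u64;
}

struct ToyVault {
    balances: HashMap<u64, u64>,
}

impl AssetVault for ToyVault {
    fn add_asset(&mut self, faucet_id: u64, amount: u64) {
        *self.balances.entry(faucet_id).or_insert(0) += amount;
    }
    fn remove_asset(&mut self, faucet_id: u64, amount: u64) -> Result<(), String> {
        let bal = self.balances.entry(faucet_id).or_insert(0);
        if *bal < amount {
            return Err("insufficient balance".into());
        }
        *bal -= amount;
        Ok(())
    }
    fn get_balance(&self, faucet_id: u64) -> u64 {
        *self.balances.get(&faucet_id).unwrap_or(&0)
    }
    fn get_commitment(&self) -> u64 {
        // Toy commitment in place of the real SMT root.
        self.balances
            .iter()
            .fold(0u64, |acc, (k, v)| acc.wrapping_add(k.wrapping_mul(31) ^ v))
    }
}

fn main() {
    let mut vault = ToyVault { balances: HashMap::new() };
    vault.add_asset(1, 100);
    vault.remove_asset(1, 40).unwrap();
    assert_eq!(vault.get_balance(1), 60);
    assert!(vault.remove_asset(1, 1000).is_err());
    println!("ok");
}
```

(has_nfasset is omitted from the toy implementation; for non-fungible assets the vault would track asset words as set membership rather than summed balances.)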

Use cases

This will be used by an account to manage its assets. Furthermore, it will be used by the transaction prologue and epilogue to ensure that there is no net change in asset balances across a transaction.

`miden-lib`: Kernel Procedures

The full list of kernel procedures now is as follows:

Procedure Context Status
miden::sat::account::get_id account, note
miden::sat::account::get_nonce account, note
miden::sat::account::get_initial_hash account
miden::sat::account::get_current_hash account
miden::sat::account::incr_nonce account
miden::sat::account::add_asset account
miden::sat::account::remove_asset account
miden::sat::account::get_balance account
miden::sat::account::has_nfasset account
miden::sat::account::get_item account
miden::sat::account::set_item account
miden::sat::account::set_code updatable account
miden::sat::faucet::mint faucet account
miden::sat::faucet::burn faucet account
miden::sat::note::get_assets note
miden::sat::note::get_sender note
miden::sat::tx::get_block_number account, note
miden::sat::tx::get_block_hash tx
miden::sat::tx::get_input_notes_hash account, note
miden::sat::tx::get_output_notes_hash account, note
miden::sat::tx::create_note account
miden::sat::tx::add_asset_to_note account

Implement transaction epilogue script

Transaction epilogue is a program which is executed at the end of a transaction (after note scripts and tx script are executed). The epilogue needs to accomplish the following tasks:

  1. Compute hash of the final account state and put it on the stack.
    a. If the initial and final account states are different, we also need to make sure that account nonce has been incremented by one.
  2. Compute a sequential hash of script roots of all consumed notes and put the result onto the stack.
  3. Compute a sequential hash of all created notes and put the result onto the stack.
  4. Build a unified vault of all outputs (i.e., the final account state and created notes) and make sure it is equal to the unified input vault.
  5. Ensure that the stack depth has been truncated to 16 elements.

Thus, by the end of the epilogue, stack state of the VM would look like this:

image

Computing account hash

Computing the account hash should be fairly straightforward. The layout of account data in the root context's memory is described in #14, so I won't go into much detail here.

Computing consumed note script hash

Computing a sequential hash of the script roots of all consumed notes should also be pretty straightforward. Consumed notes are laid out as described in #14, and we just need to read the MAST root of each consumed note (located at memory offset $n + 2$) and absorb it into the hasher state.

The reason we need to do this is to bind the sequence of executed note scripts to the appropriate note inputs/assets; basically, to prevent the prover from executing the note script of one note against the inputs/assets of another note.

Computing created note hash

To compute the hash of all created notes, we first compute the hashes of all individual notes, and then sequentially hash the individual note hashes together.

As described in #14, the overall layout of the root context's memory would look something like this:

*(image: diagram of the root context's memory layout)*

Thus, the created note data section would start at memory offset $r$, and the data of individual notes would be laid out one after another in $1024$-address increments: data for note 0 starts at $r$, data for note 1 at $r + 1024$, etc. (the choice of $1024$ is somewhat arbitrary here). Note that we don't store the total number of created notes here because it is already stored in the bookkeeping section at address $2$, as described in #14.
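This layout arithmetic is trivial but worth pinning down. A minimal sketch, where the concrete value of $r$ is a hypothetical placeholder (the actual offset comes from the memory layout in #14):

```python
CREATED_NOTE_SECTION_OFFSET = 4096  # hypothetical value of r, for illustration
NOTE_SLOT_SIZE = 1024               # each created note occupies a 1024-address slot

def note_offset(i: int) -> int:
    # address where data for the i-th created note begins
    return CREATED_NOTE_SECTION_OFFSET + i * NOTE_SLOT_SIZE
```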

Assuming $n$ is the memory offset of a given note, the layout of its note detail section could look as follows:

| Memory address | Variable | Description |
| --- | --- | --- |
| $n$ | recipient | The note's recipient, computed by hashing the note's serial number, script hash, and input hash. |
| $n + 1$ | num_assets | Number of assets contained in this note's vault. |
| $n + 2$ | assets | A list of all note assets laid out one after another. |

Here, each asset occupies 1 word; thus, the first asset in the note will be at address $n + 2$, the second at $n + 3$, etc.

To compute a note's hash, we first compute the hash of the note's vault, and then compute hash(recipient, vault_hash). Once we have all individual note hashes, we hash them sequentially to get a single commitment to all created notes. It may also be possible to interleave the computation of note hashes and the overall commitment, but I'm not sure this would be more efficient than doing it in stages.
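The staged (non-interleaved) variant can be sketched as follows. SHA-256 stands in for the VM's native hash, and the function names are illustrative:

```python
import hashlib

def hash2(a: bytes, b: bytes) -> bytes:
    # 2-to-1 hash stand-in for the VM's native hash
    return hashlib.sha256(a + b).digest()

def note_hash(recipient: bytes, vault_hash: bytes) -> bytes:
    # per-note hash: hash(recipient, vault_hash)
    return hash2(recipient, vault_hash)

def created_notes_commitment(notes: list[tuple[bytes, bytes]]) -> bytes:
    # stage 2: sequentially hash all individual note hashes together
    acc = bytes(32)
    for recipient, vault_hash in notes:
        acc = hash2(acc, note_hash(recipient, vault_hash))
    return acc
```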

Building a unified vault

This should be pretty simple: we take the vault of the final account state and add all note assets to it one by one. This can be interleaved with computing vault hashes for the created notes, since computing those hashes requires placing the assets onto the stack anyway.
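A minimal sketch of the asset-accumulation step, modeling fungible assets as an `asset_id -> amount` mapping (a simplification of the actual asset encoding; all names are illustrative):

```python
from collections import Counter

def unified_vault(account_vault: dict[str, int],
                  created_notes: list[dict[str, int]]) -> Counter:
    # start from the final account vault, then add each note's assets one-by-one
    vault = Counter(account_vault)
    for note_assets in created_notes:
        for asset_id, amount in note_assets.items():
            vault[asset_id] += amount
    return vault
```

The epilogue would then check that this unified output vault equals the unified input vault.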

Consider adding `sender` field to notes

It may be convenient to add a sender field to notes. This field would contain the account ID of the account which created the note, and could be used by note consumers as a reliable way of determining a note's origin.

The sender field would need to be set by the transaction kernel at the time a note is created (i.e., via the create_note procedure).

To add this field we'll need to re-define note hash and nullifier computations as follows:

```
note hash: hash(hash(hash(hash(serial_num, [0; 4]), script_hash), sender), vault_hash)
nullifier: hash(serial_num, script_hash, vault_hash, sender)
```

The main drawback of this change is that it requires one extra hash when computing a note hash (however, nullifier computation complexity remains the same).
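The proposed redefinitions can be illustrated in Python. SHA-256 stands in for the VM's native 2-to-1 and sequential hashes, and `bytes(32)` stands in for the `[0; 4]` zero word; this is a structural sketch, not the actual field-element computation:

```python
import hashlib

def hash2(a: bytes, b: bytes) -> bytes:
    # 2-to-1 hash stand-in
    return hashlib.sha256(a + b).digest()

def note_hash(serial_num: bytes, script_hash: bytes,
              sender: bytes, vault_hash: bytes) -> bytes:
    # note the extra hash layer for `sender` (one more hash than before)
    return hash2(hash2(hash2(hash2(serial_num, bytes(32)), script_hash),
                       sender), vault_hash)

def nullifier(serial_num: bytes, script_hash: bytes,
              vault_hash: bytes, sender: bytes) -> bytes:
    # a single sequential hash over four inputs: same cost as before
    return hashlib.sha256(serial_num + script_hash + vault_hash + sender).digest()
```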
