rchain / rchip-proposals
Where RChain improvement proposals can be submitted
License: Apache License 2.0
Description
Strategic Category: Reduce Risk - 3
Background:
To reduce the risk of 'all' validators abruptly leaving the platform, it is advisable to throttle the rate at which unbonding requests can be processed. In order to specify a rate, it is necessary to specify what unit the rate will use. There are 2 options:
The amount of staked token / count of blocks OR
The number of validators / count of blocks
Subtasks are attached. Please vote on which one should be used.
https://rchain.atlassian.net/browse/RIP-1
Created October 9, 2018, 5:31 PM
Governance of the Blockchain - Draft
People can delegate their REV to a staking pool. Their REV does not need to leave their wallet.
Reference: https://medium.com/anonstake/staking-tezos-xtz-how-to-delegate-tezos-with-trustwallet-f3b97e542711
No
The RChip should outline a template or framework for RChain community members to follow who wish to do development work.
Example:
The community member rileyge#0339 would like to develop an .net SDK for RChain
What RChain needs?
Get a degree of scope
Make a requirement list
Based on work with Algorand?
In the future have other .net work
Bridge with Microsoft
Bridge to Azure etc.
Description
None
https://rchain.atlassian.net/browse/RIP-4?src=confmacro
Created October 9, 2018, 5:38 PM
Governance of the Blockchain - Draft
There is a need for unknown users, who have never interacted with RChain / the blockchain, to be able to deploy. Four propositions for how to do this have been discussed. Two of them are ready and can be done with the existing rnode.
The dapp developer knows the public keys:
Then just send REV to each public key
cons:
pros:
(1) The user
(2) centralized authentication (email, captcha)
(3) dapp developer fills the account with REV + deploys the deploy signed by the user
pros: impossible to DDOS, already possible
cons: more complicated than on-chain authentication
The dapp developer expresses his agreement to pay for a given rholang execution after having done the proper verification (rholang):
match (*x == "valid", *process) {
  (true, { "key": "value" }) => {
    if (*registeredUsers < 100) {
      registeredUsersCh!(*registeredUsers + 1) |
      // here the dapp developer can send REV to the deployer if he wants to
      payForCurrentDeploy!(
        *dappDeployerId,
        {
          "limit": 2000,        // phlo or dust?
          "allowVault": true    // or false
        })
    }
  }
  _ => {
    Nil
  }
}
cons:
pros:
A deployer starts with a negative balance; the deploy is recorded / the change is saved ONLY IF the balance becomes positive or 0 at the end of the deploy.
Rholang currently does not support string-level operations. It is desirable to include simple delimiter-based parsing of string data.
Rholang is better with string primitives.
Hierarchical naming cannot be supported using strings without them:
@TheoXD A string such as /my/file/goes/here cannot be understood using the delimiter /
@fabcotech A string such as this.is.my.domain cannot be understood using the delimiter .
@tgrospic Code available at https://github.com/r-publishing/rchain/tree/feature-explode
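As a node-side illustration only (plain Scala, not the proposed Rholang surface syntax), a delimiter-based split would compute something like:

// Delimiter-based parsing of a hierarchical name (illustrative sketch)
val parts = "/my/file/goes/here".split("/").filter(_.nonEmpty).toList
// parts == List("my", "file", "goes", "here")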
The purpose of this RCHIP is to specify the export process from on-chain REV vault contract to get the full list of REV balances. This export is needed for two planned hard forks: Hard Fork 1 and [Hard Fork 2] (not published yet).
The REV vault contract keeps the record of all REV addresses and their current balances. It uses an implementation of TreeHashMap with hashed REV addresses as keys and vaults as values.
Traversal of all records in the REV vault is implemented in Scala to achieve much better speed than doing it directly in Rholang. The main code is in RhoTrieTraverser and the executable entry point is in StateBalanceMain, with the hard-coded unforgeable name of the vault map (TreeHashMap) where REV vaults are stored in the REV vault contract.
Validation of the whole export process can be done with the prepared scripts and configurations in repository tgrospic/rnode-rev-export-hard-fork-1.
The process has two steps:
1. Run the state-balance-main command to export REV balances in CSV format.
2. Because the exported CSV file contains hashed REV addresses, data from the transaction server is used to read the mapping between each hash value and its REV address. Because this mapping file can be validated on its own, and producing it requires a replay of all blocks as input to the transaction server, it will be provided as part of the specification.
More information can be found in PR #3411, which contains the source code for the export process.
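A minimal sketch of step 2 in plain Scala, assuming the exported balances file has lines of the form "hashedAddress,balance" and the transaction-server mapping file has lines "revAddress,hashedAddress" (the file names and column order here are assumptions for illustration, not part of the spec):

import scala.io.Source

// read a two-column CSV into a list of (col1, col2)
def readPairs(path: String): List[(String, String)] =
  Source.fromFile(path).getLines().map { line =>
    val Array(a, b) = line.split(","); (a.trim, b.trim)
  }.toList

val balancesByHash = readPairs("stateBalances.csv").map { case (hash, bal) => hash -> BigInt(bal) }.toMap
val addressByHash  = readPairs("mergeBalances.csv").map { case (addr, hash) => hash -> addr }.toMap

// REV address -> raw balance, for every hash that can be resolved to an address
val balancesByAddress: Map[String, BigInt] =
  balancesByHash.flatMap { case (hash, bal) => addressByHash.get(hash).map(_ -> bal) }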
This proposal is written from the instructions and conversations with L.G. Meredith and Mike Stay.
When Rholang executes, it's changing the state of the tuple space. These changes are written as event logs in the block.
In order to run multiple blocks in parallel and merge the state, we need a way to detect conflicting changes, apply only non-conflicting changes and mark the ones that are denied (discarded).
From the Casper perspective, this means using multiple parent blocks for justification. The current solution is inefficient because it runs replay for merging deploys multiple times.
When the base parent (candidate for finalization) is selected, deploys from all other parents are played/replayed again on top of the base parent.
This proposal is to improve the process of block merging by detecting conflicting changes in parent blocks and, instead of replaying deploys, just applying the changes from the event logs.
This doesn't mean we don't need to replay the block. We must replay the block to check that the event logs are correct, but this needs to be done only once.
In order to detect conflicts between blocks, we have to find two descendants of a finalized block that used a common name in an incompatible way. This involves walking up the DAG of blocks and looking at the event logs. The proposal is to keep in memory, for every block, the conflict set. The conflict set includes all the events of the current block and all the events in the block's ancestors, back to the latest finalized blocks.
When proposing a block that's a descendant of multiple parents, we check to see if any element of one parent's conflict set conflicts with an element of any other parent's conflict set using MergeabilityRules.scala (rules in spreadsheet). If so, we have to figure out whether they can both be run or if we have to choose just one of the deploys.
The elements of the conflict set have references to the blocks in which the channels were used. As blocks get finalized, we compute new conflict sets from old ones by taking the union of the parent conflict sets minus the uses in the newly finalized blocks.
When conflicts are detected, we pick a winner deploy. That means tracking two more states for a deploy: conflicted (when there are multiple options and it is not yet finalized which one wins) and denied (when a different conflicting deploy won).
A deploy already has these states: waiting to be added to a block, in a block that has been sent, got some blocks in response to that block, finalized. Block data must be extended to include the status of denied deploys.
[?] Other deploy statuses can be resolved locally and do not have to be explicitly written in a block.
Changes from denied deploys will not be applied to the finalized state, as though the deploy never happened. The deployer should not be charged for such a deploy, nor will the validator get rewards for it.
LFB
/ | \
/ | \
/ | \
/ | \
B1 B2 B3
\ / /
\ / /
B4 /
\ /
\ /
B5
If we have this situation and we want to create block B5, the conflict set for B4 will already have been calculated and will contain the event logs from B1 and B2. So to calculate the conflict set for B5 we need the union of B3 and B4. We do this when a new block is created or replayed.
The second case is when the LFB (last finalized block) makes progress, in which case we remove from the conflict set the event logs of the LFB and all its ancestors that were in it.
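A minimal sketch of these two maintenance operations (plain Scala with assumed types, not the actual rnode data structures):

// Conflict set maintenance: union on block creation, pruning when the LFB advances
final case class BlockHash(value: String)
final case class LoggedEvent(channelHash: String, block: BlockHash)

// conflict set of a new block = its own events plus the union of its parents' conflict sets
def conflictSetForNewBlock(ownEvents: Set[LoggedEvent], parentSets: List[Set[LoggedEvent]]): Set[LoggedEvent] =
  parentSets.foldLeft(ownEvents)(_ union _)

// when the LFB makes progress, drop the events that belong to the newly finalized blocks
def pruneOnFinalization(conflictSet: Set[LoggedEvent], newlyFinalized: Set[BlockHash]): Set[LoggedEvent] =
  conflictSet.filterNot(e => newlyFinalized.contains(e.block))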
Running multiple deploys in parallel should be a Rholang superpower, but it is currently limited by the RuntimeManager, which has a global lock and a singleton instance of DebruijnInterpreter with global cost state.
To address these issues, RuntimeManager should be refactored.
https://rchain.atlassian.net/browse/RCHAIN-4025
At the RSpace level, it should be investigated whether anything prevents parallel execution. The HotStore must be isolated for each execution instance.
It can be helpful to think about block merge in terms of Git operations we are already familiar with. The next table maps RChain terms to their rough Git analogues.
RChain | Git |
---|---|
Block | Pull Request |
Deploy | Commit |
Event log (COMMs) | Changed text (lines) |
Block merge | Fast-forward merge / rebase |
Finalized blocks | Master branch |
Block finalization | Merging to master branch |
Each block is like a pull request trying to be merged to the master branch. The master branch represents unchangeable history, a chain of finalized blocks: blocks that have been selected by the majority of stake as decided by Casper consensus.
When multiple PRs are competing to be merged to the master branch and they don't have conflicting commits, they can be applied to the master branch in any order, the same way a fast-forward merge is done with Git.
The complication arises when the merging blocks contain event logs in which channels are used in a conflicting way. In Git terms that means commits from two different PRs are changing the same line.
How to resolve this conflict is the main point of block merging. With Git we have to provide a conflict resolution: which commit wins, or some mixture of the changes.
In block merge, when conflicting deploys are detected, one deploy is selected as the winner and the rest are marked as denied. Keeping track of changes (event logs) and which deploys are denied is what constitutes the conflict set.
Information about denied deploys does not have to be stored in the merged block, but it can be useful when browsing history to have this information without recreating the conflict set to find out the winner.
This is not the only situation where it is useful to persist data in the block, protected by the signature; e.g. invalid blocks are also stored in the DAG storage to avoid recalculating this information each time it is needed.
It should also be investigated whether signing only block metadata (deploy signatures without deploy data and event logs) could give additional benefits, like quicker download and validation of the block DAG when a node is catching up.
Mergeability rules spreadsheet
https://docs.google.com/spreadsheets/d/1pABqArF9e8HRTO9zSefp93mIVUm91avekeDgqSEw0R8
Epic ticket
https://rchain.atlassian.net/browse/RCHAIN-1517
Block merging - trie merge
https://rchain.atlassian.net/wiki/spaces/CORE/pages/739278849/Block+merging+-+trie+merge
Block merging - conflict resolution - peek
https://rchain.atlassian.net/wiki/spaces/CORE/pages/739803157/Block+merging+-+conflict+resolution+-+peek
Description
Strategic Category: Growth & Reduce Risk: 8, 8
Background:
The RChain software is new and under active development. In the event a software bug in the RChain software causes a validator to be slashed, it's reasonable that some recompense be available to the validator.
Proposed Process:
The validator submits logs in a bug report, including evidence of the slash and the core issue. The validator also has to provide evidence that they are running the Co-op node software. If, upon investigation, the outcome is that the issue is a software defect, the slashed amounts are refunded to the validator.
The slashed bond is routed to a Cooperative address, and an out-of-protocol process for vetting the slash for bugs is in place.
https://rchain.atlassian.net/browse/RIP-7
Created October 9, 2018, 6:21 PM
Governance of the Blockchain - Draft
No.
The RChain platform needs to develop a multi-token standard.
ERC 1155 is an example of an existing multi-token standard.
The naming convention for the RCHIP multi-token standard presents a problem. One thought is to use the 1155 number scheme so users can quickly identify and relate to the standard. This is helpful from a marketing point of view. However, the RChain standard, once created, will be vastly different from ERC 1155. One possibility is to simply call the RChain standard "assetStore".
The intention of this issue is to come up with a solution or a plan for a rholang developer to get meta information about a continuation (library) on the chain and to make sure they are executing continuations (libraries) that are safe.
To make the situation more concrete, I would like to imagine a case and invite everyone to provide some feedback.
Suppose that Alice writes a library to calculate Fibonacci numbers and wants to publish the library for every other developer to use. Alice deploys her rholang code to the chain, puts that continuation in the registry at rho:id:z3kotpcwux8ekqb85sdnmj9kj3mwyc1wpoqa1jwc1zcoqc9ptrn9ti, and then tells Bob that she has published a new library.
Now Bob wants to use this library, and the problems come out.
Looking forward to the discussion.
We can absorb some of the ways other languages do this (npm, PyPI, etc.), but aim for more of an on-chain solution.
centralized
rholang name to get the lib from chain
Contracts that return JSON-serialized terms are easier to parse in web apps. Currently web apps need to parse Rholang expressions to access data.
new ser(`rho:io:serialize`), out(`rho:io:stdout`), ack in {
  out!(("Not serialized", {"key":"data"})) | // stdout prints the map as a Rholang term
  ser!({"key":"data"}, *ack) |               // serializes the map to a JSON string
  for(@res <- ack) {
    out!(("Serialized", res))                // the string can be returned to web apps
  }
}
Add a fixed channel to Rholang and expose built in term serializer.
SystemProcess
.Definition[F]("rho:io:serialize", FixedChannels.SERIALIZE, 2, BodyRefs.SERIALIZE, {
ctx: SystemProcess.Context[F] =>
ctx.systemProcesses.serialize
}),
Serialize is already part of the interpreter. It is just not exposed as a channel.
Prototype by tgrospic is here:
https://github.com/tgrospic/rchain/blob/8458e467e95739538965782947fb8b51404ce4bb/rholang/src/main/scala/coop/rchain/rholang/interpreter/Runtime.scala#L215
1. To allow a seamless experience for a new node joining the shard.
The Casper protocol has variables that all nodes in a shard have to obey, otherwise peers can slash each other. At the moment some of them are defined on node startup with a CLI option/config file. The most obvious one is --max-number-of-parents.
These parameters should be on chain, so the node operator has no option aside from following the on-chain configuration.
2. To increase decentralisation by extracting shard-level variables from the scope of the core protocol.
The core protocol for the RChain network is a set of rules all nodes in the network have to obey and is governed by the RChain Cooperative. But there are variables that each shard is free to (or has to) decide on its own. Core protocol contracts (e.g. PoS) should only reference the shard configuration.
3. To give the ability for shard variables to be changed with a multisig.
Each shard can have its own governance process for shard-level variables, from social consensus to on-chain stake-weighted voting. A multisig covers all these cases.
4. To prepare for a future sharding implementation.
All shard-level logic has to be abstracted from the core protocol.
root on mainnet
Current definition of core protocol: PoS contract
The following variables have to be extracted into the shard configuration:
Name | Definition |
---|---|
shardName | Name of a shard |
faultToleranceThreshold | Number from [-10^8; 10^8]. Actual faultToleranceThreshold is this value divided by 10^8 |
maxNumberOfParents | Maximum number of parents allowed |
maxParentDepth | Maximum depth of parent allowed (see rchain/rchain#2816) |
finalizationRate | Finalization will be triggered when (block-height % finalizationRate) = 0 |
synchronyConstraintThreshold | Node is allowed to create a block only when the stake in its justification blocks is more than this percent of the total stake in the shard |
heightConstraintThreshold | (experimental) Node is allowed to create a block only if the last finalised block is behind the tip by no more than this value |
deployLifespan | Validators will try to put a deploy in a block only for the next deployLifespan blocks. Enables protection from re-submitting duplicate deploys |
casperVersion | Version of Casper that shard is running |
configVersion | Version of this config |
bondMaximum | Max bond amount allowed in the shard |
bondMinimum | Min bond amount allowed in the shard |
epochLength | Epoch length. On epoch change the following events happen: new validators bonding, validator rotation, rewards distribution. TBD, clarification is needed |
quarantineLength | Quarantine length. TBD, clarification is needed |
{
shardName: GString,
faultToleranceThreshold: GInt,
maxNumberOfParents: GInt,
maxParentDepth: GInt,
finalizationRate: GInt,
synchronyConstraintThreshold: GString,
heightConstraintThreshold: GInt,
deployLifespan: GInt,
casperVersion: GInt,
configVersion: GInt,
bondMinimum: GInt,
bondMaximum: GInt,
epochLength: GInt,
quarantineLength: GInt
}
/*
The table below describes the required computations and their dependencies
No. | Dependency | Computation method | Result
----+------------+--------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------
1. | | given | sk = a9585a0687761139ab3587a4938fb5ab9fcba675c79fefba889859674046d4a5
2. | | given | timestamp = ???
3. | | lastNonce | nonce = 9223372036854775807
4. | 1, | secp256k1 | pk = 047b43d6548b72813b89ac1b9f9ca67624a8b372feedd71d4e2da036384a3e1236812227e524e6f237cde5f80dbb921cac12e6500791e9a9ed1254a745a816fe1f
5. | 4, 2, | genIds | uname = ???
6. | 3, 5, | registry | value = ???
7. | 6, | protobuf | toSign = ???
8. | 7, 1, | secp256k1 | sig = ???
9. | 4, | registry | uri = ???
----+------------+--------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------
*/
new shardConf,
rs(`rho:registry:insertSigned:secp256k1`),
uriOut
in {
contract shardConf(@"get", returnCh) = {
returnCh!({
"casperVersion": 1,
"configVersion": 1,
"shardName": "root",
"parentShardId": "/",
"faultToleranceThreshold": -100000000,
"finalizationRate": 1,
"maxNumberOfParents": 1,
"maxParentDepth": 1,
"synchronyConstraintThreshold": 99,
"heightConstraintThreshold": 0,
"deployLifespan": 50,
"bondMinimum": 1,
"bondMaximum": 9223372036854775807,
"epochLength": 10000,
"quarantineLength": 50000
})
} |
// Registers signed write-only shardConf contract bundle.
rs!(
"???".hexToBytes(),
(9223372036854775807, bundle+{*shardConf}),
"???".hexToBytes(),
*uriOut)
}
$$shardConfigContractAddress$$ is the address of the config map deployed to the shard's registry.
new config, rl(`rho:registry:lookup`), shardConfCh in {
  rl!($$shardConfigContractAddress$$, *shardConfCh) |
  for (shardConf <- shardConfCh) {
    shardConf!("get", *config)
  }
}
The registry uses a TreeHashMap to store contract uris; we will need to make this uri map available for peeks.
Given two contracts, one with unforgeable name a and one with unforgeable name b, we make them contract unf(@"a", ...) = {...} and contract unf(@"b", ...) = {...}, respectively, for some fixed unforgeable name unf.
Each system (blessed) contract will exist as data on a fixed location channel corresponding to the contract. E.g. if C
is a blessed contract, then we add a level of indirection through dynamic dispatch by
contract C(arg1, ..., argN) = {
for (realC <<- cLocation) {
realC!(arg1, ..., argN)
}
}
The cLocation channel will be accessible only through a multisig contract in the registry which the coop will have keys to. This indirection buys us the flexibility to update a contract by simply extracting all state elements from the old contract, initializing the new contract's state elements with them, and updating the data stored on the location channel through a quorum of multisig public key agreements.
In the registry uri map, instead of directly mapping a blessed contract's shorthand to the contract's uri, we map the shorthand to a dispatcher contract which gets the data from the corresponding location channel and calls that contract with the supplied arguments. E.g. if C is a blessed contract, then it will have an accompanying dispatcher contract to dispatch calls:
contract C(arg1, ..., argN) = { ... } |
cLocation!(*C) |
contract cDispatcher(arg1, ..., argN) = {
for (realC <<- cLocation) {
realC!(arg1, ..., argN)
}
}
Previously, when we added this contract to the registry, we simply did an insertSigned with bundle+{*C}. Now, we will do an insertSigned with bundle+{*cDispatcher}. Hence, the registry uri map will contain the key rho:registry:c (for example) and value (max_int, bundle+{*cDispatcher}).
Requiring all methods in a blessed contract to be of the form
contract contractName(@"methodName", arg1, ..., argN) = {...}
for a fixed unforgeable name contractName will make it so that we only need to manage contractName. All method calls will be dispatched in the same way.
The channels which serve as a protected store for blessed contract data will be generated as unforgeable names in the original instance of the corresponding contract. In this original instance, the location channels will be passed to the multisig contract for further management via insertBlessed. Calling insertBlessed simply updates the blessedContractLocationMap which the multisig controls. There is one insertBlessed consume for each blessed contract to prevent any other contracts from being added.
The multisig contract is declared in the registry and gives privileged access to the propose, agree, and update methods. This contract is used to manage the data stored on the blessed contract location channels. The methods:
propose: allows any of the privileged public keys to propose new data (con, meth) to store on a blessed contract location channel (i.e. an update), where con is the unforgeable name of the new contract and meth is the unforgeable name for the new contract's methods
propose(@pubKey, @uri, @con, @meth, @sig, ret)
agree: allows the privileged public keys to "agree" with a proposal
agree(@pubKey, @uri, @con, @meth, @sig, ret)
update: once there is a quorum of privileged keys agreeing on a proposal, this method will update the data on the corresponding location channel
update(@uri, ret)
// -----------------------------------------
// --- Blessed Contract Update Mechanism ---
// -----------------------------------------
new
a, b, // a few blessed contracts
aDispatcher, // a's dispatcher contract
newA, // an update for contract a
newAMethod,
insertBlessed,
blessedContractLocMapCh,
MultiSig,
msMethodsRet,
msRet,
stdout(`rho:io:stdout`)
in {
match Set("A", "B", "C") {
pubKeys => {
// blessedContractLocMap: uri-shorthand -> location
blessedContractLocMapCh!({}) |
// initialize blessed contract `a` data
for (@uri, loc, @data, @sig, ack <- insertBlessed;
@blessedContractLocMap <- blessedContractLocMapCh) {
// link uri with location channel
blessedContractLocMapCh!(blessedContractLocMap.set(uri, *loc)) |
// store contract data on location channel
loc!(data) |
ack!()
} |
// initialize blessed contract `b` data
for (@uri, loc, @data, @sig, ack <- insertBlessed;
@blessedContractLocMap <- blessedContractLocMapCh) {
// link uri with location channel
blessedContractLocMapCh!(blessedContractLocMap.set(uri, *loc)) |
// store contract data on location channel
loc!(data) |
ack!()
} |
// -------------------------------------------------------------------------------------
// MultiSig enables similar functionality to a multisig vault.
// pubKeys = set of public keys which have the privilege to propose and approve upgrades
// quorumSize = number of pubKeys member approvals needed to upgrade a contract's data
// -------------------------------------------------------------------------------------
MultiSig!(pubKeys, 2, *msMethodsRet, *msRet) |
contract MultiSig(@pubKeys, @quorumSize, methodsRet, msRet) = {
new
multisig, // MultiSig contract's method entry point
agreementMapCh, // channel on which the agreement map is stored
proposeMapCh // channel on which the propose map is stored
in {
// Initialize agreement map
// (uri, contractData, methodData) -> agreement set
agreementMapCh!({}) |
// Initialize propose map
// uri -> (contractData, methodData)
proposeMapCh!({}) |
// -------
// Propose
// --------------------------------
// Privileged public keys, i.e. members of `pubKeys`, can propose contract updates.
// There is only one proposal per uri possible at a time.
contract multisig(@"propose", @pubKey, @uri, @con, @meth, @sig, ret) = {
// TODO verify sig of (uri, con, meth)
if (pubKeys.contains(pubKey)) {
// `pubKey` has the privilege to propose updates
for (@bcMap <<- blessedContractLocMapCh) {
if (bcMap.contains(uri)) {
// `uri` belongs to a blessed contract
match (uri, con, meth) {
key => {
for (@proposeMap <- proposeMapCh) {
if (not proposeMap.contains(uri)) {
// the update proposal is unique
proposeMapCh!(proposeMap.set(uri, (con, meth))) |
for (@agreeMap <- agreementMapCh) {
// the proposer is the first to agree with a proposal
agreementMapCh!(agreeMap.set(key, Set(pubKey))) |
ret!((true, uri, con, meth))
}
} else {
// an update has already been proposed for this uri
proposeMapCh!(proposeMap) |
ret!((false, "location already exists"))
}
}
}
}
} else {
// `uri` does not belong to a blessed contract
ret!((false, "uri does not exist"))
}
}
} else {
// `pubKey` does not have the privilege to propose updates
ret!((false, "invalid public key"))
}
} |
// -----
// Agree
// --------------------------------------------------------------------------------------
// Privileged public keys can agree with update proposals.
// Manages the `agreementMap`: (uri, contractData, methodData) -> set of agreeing pubKeys
// --------------------------------------------------------------------------------------
contract multisig(@"agree", @pubKey, @uri, @con, @meth, @sig, ret) = {
match (uri, con, meth) {
agreeTruple => {
if (pubKeys.contains(pubKey)) {
// TODO verify sig of (uri, con, meth)
for (@map <- agreementMapCh) {
match map.getOrElse(agreeTruple, Set()).add(pubKey) {
agreeing => {
agreementMapCh!(map.set(agreeTruple, agreeing)) |
ret!((true, uri, con, meth, agreeing))
}
}
}
} else {
// pubKey is not in pubKeys
ret!((false, "invalid public key"))
}
}
}
} |
// ------
// Update
// ------------------------------------------------------------------------------------
// if there is a quorum of privileged public keys agreeing on the update for `uri`
// then this method updates the contract data and manages the internal maps accordingly
// ------------------------------------------------------------------------------------
contract multisig(@"update", @uri, ret) = {
for (@proposeMap <- proposeMapCh) {
if (proposeMap.contains(uri)) {
for (@blessedContractLocMap <<- blessedContractLocMapCh) {
match (blessedContractLocMap.get(uri), proposeMap.get(uri)) {
(loc, (con, meth)) => {
for (@agreementMap <- agreementMapCh) {
if (agreementMap.getOrElse((uri, con, meth), Set()).size() >= quorumSize) {
// sufficiently many keys agree to update
// consume data on location channel in order to replace contract data
for (oldData <- @loc) {
new tmp, newARet in {
oldData!("extractState", *tmp) |
for (@oldState <- tmp) {
// launch new contract instance with initial state extracted from the old instance
@con!(oldState, *newARet) |
// manage agreement and propose maps
agreementMapCh!(agreementMap.delete((uri, con, meth))) |
proposeMapCh!(proposeMap.delete(uri)) |
// write new method entry point to location channel
@loc!(meth) |
ret!((true, *newARet))
}
}
}
} else {
agreementMapCh!(agreementMap) |
proposeMapCh!(proposeMap) |
ret!((false, "quorum does not exist"))
}
}
}
}
}
} else {
proposeMapCh!(proposeMap) |
ret!((false, "invalid proposal uri"))
}
}
} |
// Read
// --------------------------------------------
// Returns a map containing the current maps:
// "blessed" - blessed contract location map
// "agreement" - agreement map
// "propose" - proposals map
// --------------------------------------------
contract multisig(@"read", ret) = {
for (@agreementMap <<- agreementMapCh;
@blessedMap <<- blessedContractLocMapCh;
@proposeMap <<- proposeMapCh) {
ret!({ "blessed" : blessedMap, "agreement" : agreementMap, "proposals" : proposeMap })
}
} |
methodsRet!(bundle+{*multisig})
} |
msRet!((bundle+{*MultiSig}, { "pubKeys" : pubKeys, "quorumSize" : quorumSize }))
} |
// a blessed contract in the registry which will not be updated in this example
b!() |
contract b() = {
new bMethod, bLoc, ack in {
insertBlessed!(`rho:registry:b`, *bLoc, bundle+{*bMethod}, Nil, *ack)
// insert arbitrary contract code...
}
} |
// a blessed contract in the registry which we intend to update
contract a(@val1, @val2, ret) = {
new aMethod, aDispatcher, aLoc, state1, state2 in {
// original instantiation of contract data
state1!(val1) |
state2!(val2) |
contract aMethod(@"set1", @val, ack) = {
for (_ <- state1) {
state1!(val) |
ack!()
}
} |
contract aMethod(@"set2", @val, ack) = {
for (_ <- state2) {
state2!(val) |
ack!()
}
} |
contract aMethod(@"read", ret) = {
for (@val1 <<- state1; @val2 <<- state2) {
ret!((val1, val2))
}
} |
contract aMethod(@"extractState", ret) = {
for (@st1 <<- state1; @st2 <<- state2) {
ret!((st1, st2))
}
} |
// Dispatcher contract for `a`
contract aDispatcher(@arg1, @arg2) = {
for (realA <<- aLoc) {
realA!(arg1, arg2)
}
} |
contract aDispatcher(@arg1, @arg2, @arg3) = {
for (realA <<- aLoc) {
realA!(arg1, arg2, arg3)
}
} |
// initialize original contract data
new ack in {
insertBlessed!(`rho:registry:a`, *aLoc, bundle+{*aMethod}, Nil, *ack) |
for (<- ack) {
ret!(bundle+{*aDispatcher})
}
}
}
} |
// updated contract to replace the old one
// - contract updates do not get a location or dispatcher
// - location and dispatcher are created in the original contract instance
contract newA(@oldState, ret) = {
new state1, state2 in {
// initialize new state channel with old state
state1!(oldState.nth(0)) |
state2!(oldState.nth(1)) |
contract newAMethod(@"set1", @val, ack) = {
for (_ <- state1) {
state1!(val) |
ack!()
}
} |
contract newAMethod(@"set2", @val, ack) = {
for (_ <- state2) {
state2!(val) |
ack!()
}
} |
contract newAMethod(@"modify", ack) = {
for (_ <- state1; _ <- state2) {
state1!("new state 1") |
state2!("new state 2") |
ack!()
}
} |
contract newAMethod(@"read", ret) = {
for (@val1 <<- state1; @val2 <<- state2) {
ret!((val1, val2))
}
} |
contract newAMethod(@"extractState", ret) = {
for (@val1 <<- state1; @val2 <<- state2) {
ret!((val1, val2))
}
} |
// upon successful multisig operations, the data on `aLoc` is replaced with bundle+{*newAMethod}
ret!(bundle+{*newAMethod})
}
} |
// Scenario
// --------------------------------------------------------------
// 1. The pubKeys member "A" will propose an update: bundle+{*newA}, bundle+{*newAMethod}, to contract `a`.
// 2. Then the pubKeys member "B" will agree with the proposal.
// 3. Then some "Rando" proposer makes a proposal and it is rejected.
// 4. Then `a` is updated to the data originally proposed by "A".
// --------------------------------------------------------------
new
ack,
aCh,
randoContract,
randoMethod
in {
// instantiate original `a` contract
// this would happen during the creation of the genesis block
// or during an update
a!("old state 1", "old state 2", *aCh) |
for (_ <<- aCh) {
for (@oldBcMap <<- blessedContractLocMapCh) {
// get the MultiSig method entry point
for (ms <- msMethodsRet) {
new ret, ret1, tmp in {
match (`rho:registry:a`, bundle+{*newA}, bundle+{*newAMethod}) {
(uri, newContractData, newMethodData) => {
// "A" proposes an update for `a`
ms!("propose", "A", uri, newContractData, newMethodData, Nil, *ret) |
for (@(true, _, _, _) <- ret) {
ms!("read", *ret1) |
for (@m <- ret1) {
match m.get("agreement") {
am => {
// Only "A" has made a proposal and hence only "A" has agreed on any proposal
stdout!(("proposal implies agreement", am.get((uri, newContractData, newMethodData)) == Set("A")))
}
}
} |
// "B" agrees with the proposal
ms!("agree", "B", uri, newContractData, newMethodData, Nil, *ret) |
for (@(true, _, _, _, _) <- ret) {
new oldMapsCh, newMapsCh in {
ms!("read", *oldMapsCh) |
for (@oldMaps <- oldMapsCh) {
// just for fun: some rando tries to propose an update for `a`
ms!("propose", "Rando", uri, bundle+{*randoContract}, bundle+{*randoMethod}, Nil, *ret) |
for (@(false, _) <- ret) {
// "A" attempts to agree with their own proposal.
// This vote is not counted again; "A" voted for the proposal by proposing it.
ms!("agree", "A", uri, newContractData, newMethodData, Nil, *ret) |
for (_ <- ret) {
ms!("read", *newMapsCh) |
for (@newMaps <- newMapsCh) {
// check that invalid proposals and agreements do not change the corresponding maps
stdout!(("invalid proposer does not change the proposal map", oldMaps.getOrElse("proposals", 0) == newMaps.getOrElse("proposals", 1))) |
stdout!(("pubKeys cannot agree more than once with a proposal", oldMaps.getOrElse("agreement", 0) == newMaps.getOrElse("agreement", 1)))
} |
// both "A" and "B" have agreed to update `a`, quorumSize = 2
// so we can update `a`
ms!("update", `rho:registry:a`, *ret) |
for (_ <- ret) {
for (@bcMap <<- blessedContractLocMapCh) {
// get updated contract's method data
for (newMeth <<- @{bcMap.get(`rho:registry:a`)}) {
// contract data is correctly updated
stdout!(("after update new data is stored on location channel", *newMeth == {bundle+{*newAMethod}})) |
// no contract locations should be changed during the update
stdout!(("blessed location map is unchanged", oldBcMap == bcMap)) |
// the proposal and agreement maps should be empty after the update
ms!("read", *ret) |
for (@m <- ret) {
stdout!(("multisig maps should be empty", m.get("proposals") == {} and m.get("agreement") == {}))
} |
// check that the new contract's state is correctly initialized
// and apply a method which was not available in the old contract
newMeth!("read", *tmp) |
for (@v <- tmp) {
stdout!(("correct initial state", v == ("old state 1", "old state 2"))) |
// since the contract's data has been updated,
// we can call a method that's only available in the new contract
newMeth!("modify", *tmp) |
for (<- tmp) {
newMeth!("read", *tmp) |
for (@v <- tmp) {
stdout!(("correct new state", v == ("new state 1", "new state 2")))
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
Extra comm events are required to interact with the affected contracts.
This RCHIP documents the error we created when preparing data for the genesis block of Hard Fork 1.
During post-process verification of the hard fork we discovered that the total sum of REV in the wallets file is not the same as the initial sum in the first genesis.
The difference corresponds to the total amount bonded by validators, which is specified in the bonds.txt file and used to initialize the PoS contract in the genesis block.
The mistake was that we did not take into account the creation of the PoS address with the balance of the total validator stake, so this amount of REV was created extra.
The supporting repository for the export of REV balances for Hard Fork 1 also contains a validation/report for the wallets files of both genesis blocks, with the calculation of the error.
Links: Run report, Source
To correct the error we need to burn 6000000000000465 tinyREV from the Coop vault, because all of the staking REV is now collected in the Coop vault.
The proof of stake contract uses public keys to represent parties who stake REV. As far as I can tell, this excludes the use of multisig or other smart contracts as staking entities. Can you confirm? If so, I can evolve this into a concrete proposal...
If PoS used REV addresses, we could use addresses from RevAddress.fromUnforgeable connected to multisigs, staking pools, etc.
Improve the HTTP powerbox function demoed by @tgrospic, in order to incorporate external applications without an oracle, e.g.:
asset prices
time
process orchestration for legacy applications
Developers expect standard string functions, including converting other types with toString().
Please check the ones we already have. This is a good first issue for someone.
I struggled to concatenate a number and a string, ultimately discovering it is impossible in rholang. Nothing should be impossible in rholang.
The discussion was brought up during this week's Tech Governance session about ways to solidify RChips that are in the works (from draft to final) and how other open source projects do it (namely Ethereum). As of right now anyone can create an RChip and it is considered automatically approved if it meets some criteria. This RChip is about quantifying those criteria with some automation tools.
One solution would be to leave the submission process as is; once an issue receives a specific tag, it will be processed by a bot/tool that retrieves the issue body, parses it, and checks whether it meets certain criteria, such as:
If all the criteria are met, then the RChip can receive a unique identifier and be recorded in the repository (or maybe pushed to the RChain network itself, because why not). Storing them as files also allows us to generate static pages with something like Jekyll to make RChips viewable outside of GitHub.
The Ethereum Foundation also has a tool called eip_validator that goes through markdown files and makes sure they all meet certain criteria. If possible, rewrite it for Node.js rather than adding Ruby as a dependency.
The alternative would be to force everyone to submit pull requests with a draft, but that process is too manual and would discourage participation; or we can leave everything as is and check the criteria manually.
**Summary:**
SysOps runbook procedure for a breaking change release.
Definition:
A breaking change release is a release that is not backwards compatible. Should these procedures apply at the root shard level only, or to all shards? Protocol breaking changes (data structures and/or API call parameters) vs. implementation-specific breaking changes (Scala vs Java vs Rust)? Note that nodes in multiple languages can run in the same shard; it might improve security and resilience.
Validators:
All bonded node validators must participate in a breaking change release implementation to migrate from the old network to the new network at the block height announced by the Tech Governance team (with inputs from dev team).
We have a rholang 1.1 implementation in the form of a source-to-source translation to rholang 1.0. Getting this integrated into validator nodes on mainnet involves lots of coordination that will understandably take time, but meanwhile, much like JavaScript developers use babel to translate new features to syntax understood by deployed browsers, we should expose the rholang 1.1 to 1.0 translator as a tool so that developers can write rholang 1.1 and translate it to rholang 1.0 automatically for deployment.
We could, for example, integrate that with the Rholang Playground - RChain.
ref:
Done | Date (2021) | Event | Description |
---|---|---|---|
✔️ | 02 July | Specification | Specification of Hard Fork process |
✔️ | 07 July | Announcement | Announcement of preparatory snapshot |
✔️ | 12 July | Snapshot | Preparatory snapshot, block 896988 wallets_REV_BLOCK-896988.txt |
✔️ | 12 July | Announcement | Disseminate information about Preparatory Snapshot |
✔️ | 15 July | Announcement | Announcement of block height 908300 for final snapshot |
✔️ | 16 July 10:41 UTC | Shutdown | Network shutdown at Last Finalized Block 908300 |
✔️ | 16 July | Snapshot | Final snapshot, block 908300 wallets_REV_BLOCK-908300.txt |
✔️ | 18 July | Genesis | New network operational with genesis block 908400 |
Links in this table point to the text files where REV balances can be checked for the Preparatory and Final Snapshots.
The balance is in raw format (revlettes); add 8 decimal places to get the REV amount.
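For example, a trivial conversion (plain Scala, assuming only the 10^-8 REV raw unit stated above):

// raw (revlette) balance -> REV, e.g. 6000000000000465 raw = 60000000.00000465 REV
def rawToRev(raw: BigInt): BigDecimal = BigDecimal(raw) / BigDecimal(100000000L)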
Here are the raw CSV files from the export process documented in the Export of REV vault state RCHIP. mergeBalances.csv contains the mapping between each REV address and its hash, which is acquired from the transaction server.
Two upcoming hard forks are planned for the RChain mainnet network. They will restart the network from a fresh state, with only the REV vault state (balances) transferred from the old network.
The purpose of this RCHIP is to provide a specification of how the process of the first hard fork will be executed. Hard Fork 1 is more like a rehearsal for Hard Fork 2, which will contain many improvements and fixes. So Hard Fork 1 contains only fixes to support the future block-merge release, removal of inactive validators from the PoS contract, and a few configuration changes.
Terminology:
Change | Type | Description | Reference |
---|---|---|---|
TreeHashMap depth (rewards map) | PoS fix | Support for block-merge (prevent conflicts) | PoS rewards map init |
Reduce quarantine length | PoS config | Reduce period for holding unbonded/slashed validators in active bonds map | PoS quarantine length config |
Remove inactive validators | PoS config | Remove inactive validators from bonds map, start a new network with only active validators | PoS initial bonds map |
Increase max bonding amount | PoS config | Increase maximum amount of REV for validator to stake (1.5M REV => 5M REV) | PoS max bond config |
Config type | Value | Description |
---|---|---|
Shard name | root1 | Root shard for RChain blockchain network |
Network ID | mainnet1 | Message identifier of the RChain network |
Config type | Value | Description |
---|---|---|
Epoch length | 250,000 | Number of blocks between epochs |
Quarantine length | 0 | Number of blocks to keep a validator after withdrawal (it's zero because the epoch length is already included) |
Minimum bond | 12,000.00000000 | Minimum stake needed for bonding (REV amount) |
Maximum bond | 5,000,000.00000000 | Maximum stake allowed for bonding (REV amount) |
The bonds file used to start the new network is bonds.txt.
In short, the Hard Fork will have no impact on owners of private keys for the corresponding REV addresses, including REV on exchanges.
MultiSig REV accounts cannot be migrated and must be transferred to a private-key-owned REV address. To our knowledge, there should be no additional MultiSig accounts other than those used in system contracts.
The impact will be on operations on the network. After the Final Snapshot the network will be offline for a short period for the final community validation.
The Cooperative Board established the process to transfer some of the locked RHOC tokens. Description of the adjustments is documented in the Coop Governance-Committee repository and REV balances will be published in a separate file.
The whole process is divided into multiple steps to ensure enough time for the community to check the balances of their REV accounts and to minimize the downtime of the mainnet network.
Two snapshots are planned. The first, the Preparatory Snapshot, will leave more time for community validation because the network will continue working without interruption.
With the second, the Final Snapshot, the network will stop and the exported REV balances will serve as the initial state of the REV vault for the new network.
Before the Final Snapshot, RChain Cooperative will announce the block height at which the snapshot will be taken; the snapshot will be published on GitHub in the rchain/rchain repository in the wallets_REV_BLOCK-nnnnnn.txt file for community validation.
This is also an invitation to all Coop members to check their balance in the specified wallets.txt file by searching for their REV address and comparing it with the balance read from any available wallet.
The Cooperative encourages members to check their balances, and any reported irregularities will be investigated, documented and corrected.
The whole process of REV export (snapshot) and validation is documented (Export of REV vault state (REV balances)) and the supporting repository (tgrospic/rnode-rev-export-hard-fork-1) enables anyone to repeat the process, not only before the Hard Fork but at any time in the future. The Cooperative will ensure that the state before each Hard Fork is preserved and available in the future.
The REV balances exported in the Final Snapshot will serve as the basis to start the new network through the Genesis ceremony. The first block will contain deploys with initial deposits from the wallets.txt file published in the rchain/rchain repository.
Speaking with Jim Whitescarver at the moment. He believes that other forks are working on decentralization of the Tuplespace, but that it's important to add it to the issue list here.
This is a necessary and important performance enhancement feature.
RChain aspires to “content delivery at the scale of Facebook". One of the pain points in RCat was hex-encoding assets as rholang strings and then decoding them on chain.
Several projects involve binary assets: the encrypted ID wallet, dappy, etc.
In addition to the deploy's term string, it would have space for binary attachments; perhaps a list of them, or a map-like structure of names and byte-sequence values.
new song1(`rho:attachment:1`) in {
  new stream in {
    contract stream(payment, ret) = {
      ...
      ret!(song1)
    }
  }
}
This would let the stream contract send a ByteArray to ret. The bytes of the ByteArray would come from the 1st binary attachment in the gRPC message.
Non-trivial development time. More stuff for client devs to learn (even if only to ignore it).
Requires a hard-fork.
Do nothing.
Perhaps the 2x size price for hex-encoding and the compute cost of hexToBytes are not worth the bother? But that cost is ongoing, whereas the cost of this feature is mostly a one-time thing (modulo ongoing maintenance).
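For illustration (plain Scala), this is why hex-encoding roughly doubles the payload: every byte becomes two hex characters before it is embedded in a rholang string.

// 4 bytes become the 8-character string "deadbeef"
val bytes = Array[Byte](0xDE.toByte, 0xAD.toByte, 0xBE.toByte, 0xEF.toByte)
val hex   = bytes.map(b => f"${b & 0xff}%02x").mkString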
Description
Strategic Category: Reduce Risk - 6
In Ethereum, validators are able to change the gas exchange rate every block. The amount by which the exchange rate can change is limited.
https://rchain.atlassian.net/browse/RIP-6?src=confmacro
Created October 9, 2018, 5:56 PM
Governance of the Blockchain - Draft
This proposal evolved from thinking about the deploy gossip protocol and is made under the assumption that all validators in a shard are obliged to provide the same SLA for shard users, and that users do not differentiate between validators. Feel free to question this statement if you do not agree.
Apologies for a not very well formed idea, but I am publishing it to start a discussion. Any objections, best practices, etc. are highly appreciated.
What are the requirements for deploy gossiping/propagation and why do we need it?
Maximal possible decentralisation.
Block finalisation depends on how many blocks from other validators are proposed on top of it. So when all block creation times are about the same, finalisation time is predictable. When one node hoards deploys, it introduces delay because its block creation time grows. So an equal distribution of deploys across validators in a shard is good.
We need (ideally) to not have two conflicting deploys on two different validators at the same time. In general this is about producing fewer conflicting blocks, so we need some analytics when distributing deploys across the validator set. This is a future task, for when we can detect conflicting deploys at compile time; the point is that we need that analytics before assigning a deploy to a particular validator. After some discussion it was decided that with block merging implemented there will be no such thing as conflicting blocks; conflicts will be on a per-deploy basis. Two blocks can always be merged, and only one of the conflicting deploys will go through. Other deploys which do not cause conflicts won't be affected.
A short-term goal might be to make a deploy go through as fast as possible, by sending the deploy to a validator that is going to issue a block soon. This is an example of an analytics function, and there might be many of them aiming at different optimisations.
General point: each shard is a decentralised computer, so the deployer submits code to a shard, not to a particular validator, and should be abstracted from that. The shard is the entity that provides consistent Terms of Service, cost policy, etc.
So the following is suggested:
Currently the deployer has to query for the block height to supply validAfterBlockNumber, so we already have a "pre-deploy" stage where the deployer has to get some information from the network before submitting the actual deploy.
Let's call this stage Reception and introduce a target-validator switch here (let's call the target the ExecutiveValidator). Let's call the node that does reception the Porter. Later, logic can be added that passes the deploy not to just some random validator but to the most suitable one, e.g. to produce fewer conflicting blocks.
A piece of data required to do a deploy is introduced: a Parole, an offer from the ExecutiveValidator to receive a deploy with a particular deploy signature. Parole = {block-height, word}. The word is the passphrase the deployer should supply to the validator to get the deploy accepted. Later we can add more fields to Parole to enable cryptographic proofs, eliminate duplicates and so on. But for now let's KISS.
The deployer issues a ParoleRequest to any shard peer (playing here the role of Porter) before making the deploy.
When receiving the ParoleRequest, the Porter looks into the most recent bonds map, picks a validator, and forwards the ParoleRequest to the potential ExecutiveValidator.
The ExecutiveValidator responds with the Parole. The Porter forwards the Parole to the deployer.
The deployer sends the deploy, providing the Parole, to the ExecutiveValidator.
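A minimal sketch of the handshake (plain Scala; the type and field names are assumptions, only Parole = {block-height, word} comes from the text above):

final case class DeploySignature(bytes: Vector[Byte])
final case class ParoleRequest(deploySig: DeploySignature)
final case class Parole(blockHeight: Long, word: String)

// Porter: pick an ExecutiveValidator from the most recent bonds map and forward the request.
// Here the highest-staked validator is chosen, but any selection/analytics policy would do.
def porter(bondsMap: Map[String, Long], req: ParoleRequest,
           forward: (String, ParoleRequest) => Parole): Parole = {
  val executiveValidator = bondsMap.maxBy(_._2)._1
  forward(executiveValidator, req)
}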
An initial implementation in a trusted environment should be pretty straightforward. Making it robust against malicious actors will take an unknown amount of work. So the questions are: is it a feasible strategy to let any peer act as a Porter? Should we restrict Porters to validators only? Are there any reasons why this is not feasible, so we can kill it fast?
In order to make validation easy, short epochs of, say, 100 blocks with random assignment of validators could be implemented.
All validators available for validating would be eligible for rewards, independent of whether they are selected to participate in the consensus. Nodes not validating would operate read-only and confirm finalized blocks determined by a small subset of the available validators.
Rewards would be higher for stake held longer, to provide incentives for longer staking periods.
"As a user I want to know how much REV my deployment will cost"
-- https://github.com/rchain/rchain/blob/dev/docs/features.md#as-a-user-i-want-to-know-how-much-rev-my-deployment-will-cost
I expected to be able to do this with an exploratory deploy, but in documenting the API ( rchain/rchain#3140 ) I discovered the cost is not reported.
Currently, printing logs to stdio is being used, but it is not the best method for debugging applications.
Some options are
Given the success of rchain, the price of REV can be expected to climb. Buying low and selling high can profit the coop, stabilize the REV price, and enable minting additional REV to prevent the price from rising too quickly. The price can be set to reward early adopters without REV becoming a scarce resource. Over time the price of REV as a utility token can be fixed.
As the REV price stabilizes to a constant value over 10(?) years, investors can trade REV for shard tokens that are in the early reward stage.
It has been stated that rchain will eventually delete data that has not paid for continued storage. We cannot allow the tuple space to grow without bound with reads and writes that are never likely to occur. Currently there is no distinction between items in the tuple space; no record is kept of when they were created or last accessed.
There is no immediate need to solve all the retention issues at this time, but there is a need to ensure we are creating a tuple space that will allow for a retention policy in the future.
A minimum implementation is a modification of the tuple space to keep a least-recently-used list of tuples along with the block number. Additional future fields can be provided for.
At some block height, least recently used tuples could be dropped. Then after each propose, data can be purged for the lower block height plus one. The number of blocks of age always kept may be constant or inflationary, TBD.
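A rough sketch of this minimum implementation (plain Scala, assumed design, not the actual RSpace code): track last-access block heights so a future retention policy can purge least-recently-used entries.

import scala.collection.mutable

final case class ChannelHash(value: String)

class AccessTracker {
  // channel -> block height of last read/write
  private val lastAccess = mutable.Map.empty[ChannelHash, Long]

  def touch(ch: ChannelHash, blockHeight: Long): Unit = lastAccess(ch) = blockHeight

  // drop everything not touched since cutoffHeight (e.g. current height minus a retention window)
  def purgeOlderThan(cutoffHeight: Long): Set[ChannelHash] = {
    val stale = lastAccess.collect { case (ch, h) if h < cutoffHeight => ch }.toSet
    stale.foreach(lastAccess.remove)
    stale
  }
}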
In the future keeping the deployid for tuples could enable refreshing deploys.
A comprehensive solution might include a rholang extension allowing names to listen for the deletion event?
This issue is a place holder to list projects that could potentially connect to the RChain platform.
We must have the possibility to validate strings in rholang.
This is a needed feature for the Dappy main network (not urgent though).
rchain-names is a name system coded in rholang; Dappy uses this name system to manage name purchases/sales/renewals, just like the domain name system.
Right now the rholang code names.rho checks only the String type, therefore anyone is able to register names like "aaa", "aaaa123", "aBC", "a?!..", "eeꀀ€", and maybe even non-latin characters "的的的".
So we must be able to validate strings in a regexp-like way.
Note: this proposal is dedicated to string validation and not string manipulation like .toLowerCase().
Regexp validation can be very resource intensive, so if we provide a transparent rholang -> scala bridge, it will be a very dangerous situation; moreover we do not have an easy way to price a regexp operation.
See https://snyk.io/blog/redos-and-catastrophic-backtracking/
Strings in rholang already have functions like .slice(0, 3). We cannot do a one-to-one bridge like "abc".test(/[a-z]/g).
So my proposition is a limited set of regexp patterns, and therefore validation capabilities much more limited than regexp, but for which a pricing mechanism is easier to set up.
"abc".test(["a-z"]) // true
"abcD".test(["a-z"]) // false
"abcD".test(["a-z", "A-Z"]) // true
"abcD8".test(["a-z", "A-Z"]) // false
"abcD8".test(["a-z", "A-Z", "0-9"]) // true
// from now on, the patterns that are not one of "a-z", "A-Z", "0-9" must be a literal match
"abcD8?".test(["a-z", "A-Z", "0-9"]) // false
"abcD8?".test(["a-z", "A-Z", "0-9", "?"]) // true
// etc...
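A node-side sketch (plain Scala, assumed semantics matching the examples above) showing why this restricted form is easy to price: the cost is linear in string length times the number of allowed patterns.

// Character-class / literal test for the limited pattern set proposed above
def testClasses(s: String, patterns: List[String]): Boolean = {
  def allowed(c: Char): Boolean = patterns.exists {
    case "a-z" => c >= 'a' && c <= 'z'
    case "A-Z" => c >= 'A' && c <= 'Z'
    case "0-9" => c >= '0' && c <= '9'
    case lit   => lit.length == 1 && lit.head == c // literal single-character match
  }
  s.forall(allowed)
}

// testClasses("abcD8?", List("a-z", "A-Z", "0-9"))      == false
// testClasses("abcD8?", List("a-z", "A-Z", "0-9", "?")) == true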
There is no way to validate that a string starts with x or ends with x, or that a string respects a certain pattern followed by another pattern; just dead-simple string validation.
Just like every change in rholang, this implies a "hard fork" and that all nodes upgrade. It could be triggered at a specific block height, independently of the release number.
We must be very careful https://snyk.io/blog/redos-and-catastrophic-backtracking/
"aa".containsLowercases() // true
"aa".containsNonAscii() // false
"aâ".containsNonAscii() // true
"aa".containsUppercases() // false
"aa".containsNumbers() // false
"aa12".containsNumbers() // true
"aa12".containsSpecialCharacters() // false
"aa12?".containsSpecialCharacters() // true
Proxy channels are valuable for controlling access to a capability, e.g. one-time capabilities, revocable capabilities, remote channels (#45), decorator patterns, etc. Currently, when we do not know how many arguments might be sent to a proxy, we must implement a contract for every possible number of arguments. Unless we limit the number of arguments in a send, the proxy will fail if the channel receives a message with too many arguments.
The proposal is that arg... binds the name arg to a tuple and arg... in a send sends the arguments in the tuple.
contract proxy(tuple...) = {
channel!(tuple...)
}
Currently a simple rholang proxy is awkward and inefficient. It requires a contract for each possible number of arguments:
contract proxy(one) = { channel!(one) } |
contract proxy(one, two) = { channel!(one, two) } |
contract proxy(one, two, three) = { channel!(one, two, three) } |
...
Match can allow simplified control of proxy alternatives. e.g.
match tuple {
(one,two) => { proxy!(one, two) }
* => { proxy!(tuple...) }
}
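For comparison (plain Scala, hypothetical names), a variadic forwarder needs only one definition, which is the ergonomics the arg... proposal aims to give rholang proxies:

// A single proxy definition that forwards any number of arguments
def proxy(forward: Seq[Any] => Unit)(args: Any*): Unit = forward(args)

// usage: proxy(sendOnChannel)("one", 2, true)   // sendOnChannel is a stand-in for the target send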
Description
Reference: https://rchain.atlassian.net/browse/RIP-1#icft=RIP-1
Strategic Category: Reduce Risk - 8
Background:
To reduce the risk of a bulk of validators abruptly leaving the platform, it is advisable to throttle the rate at which unbonding requests can be processed.
Please specify the rate by which validators can unbond from the platform. The rate will need to include the quantity in the numerator as well as the number of blocks in the denominator.
https://rchain.atlassian.net/browse/RIP-2?src=confmacro
Created October 9, 2018, 5:31 PM
Governance of the Blockchain - Draft
IMPLEMENTATION CONSIDERATIONS COPIED FROM ISSUE 3:
To reduce the risk of 'all' validators abruptly leaving the platform, it is advisable to throttle the rate at which unbonding requests can be processed. In order to specify a rate, it is necessary to specify what unit the rate will use. There are 2 options:
The amount of staked token / count of blocks OR
The number of validators / count of blocks
Subtasks are attached. Please vote on which one should be used.
https://rchain.atlassian.net/browse/RIP-1
Created October 9, 2018, 5:31 PM
Governance of the Blockchain - Draft
This document seeks to propose a process by which updates to the RChain blockchain are defined, approved, prioritized and released. Link
Tooling:
The RChain feature backlog is captured in Jira ( http://rchain.atlassian.net ) at: https://rchain.atlassian.net/secure/RapidBoard.jspa?projectKey=RIP&rapidView=19&view=planning.nodetail
The RChain Improvement Proposals are captured in Confluence at: RChain Improvement Proposals
Description
None
Ned Robinson
November 1, 2018, 8:03 PM
Is Phlo expected to be a certain percent of Rev?
https://rchain.atlassian.net/browse/RIP-5?src=confmacro
Created October 9, 2018, 5:55 PM
Governance of the Blockchain - Draft
Users need to be able to see the Rholang code they are executing, even when the contract has already been stored on-chain. To support this, the Rholang source code can be stored alongside the contract itself. That leaves a security hole: the stored source can be maliciously different from the stored contract.
Access to the interpreter/parser would allow execution to validate the stored Rholang source against the stored contract and detect such tampering.
I would like to be able to do something like:
new interpreter(`rho:rholang:interpreter`), someBugger(`someBugger`),
    compiledCh in {
  // ask the interpreter to parse the stored source text ...
  interpreter!("new i in { for (_ <- i) { i!(\"blah\") } }", *compiledCh) |
  for (@compiled <- compiledCh) {
    // ... and compare the result with the stored contract
    match compiled == *someBugger {
      true  => { Nil } // stored source is honest: safe to display
      false => { Nil } // stored source differs: flag tampering
    }
  }
}
Where someBugger is a previously stored contract.
Data limitations create bugs. Without BigInt we are forever plagued by overflow and underflow errors; one of them may blow up the world. Without these types we are also unable to marshal objects of those types faithfully over an inter-blockchain protocol.
@dckc pointed out these issues and how Agoric contracts convert easily to Rholang; there is a lot there we can use: ERTP, IBC, CapTP, etc. But wherever BigInts and floats are used we will introduce bugs, as already happened with the PoS contract and again recently. See video: https://youtu.be/IbW6uvgCSv4?t=5463
BigInt support requires a virtually global change of Int to BigInt in the RNode source.
Adding floating-point operations and constants is fairly mechanical and requires an update to the Rholang spec.
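For illustration, a minimal sketch of the 64-bit hazard; Rholang's Int is a 64-bit signed integer, and the exact failure mode (wrap-around versus an interpreter error) depends on the RNode version:
new stdout(`rho:io:stdout`) in {
  // 9223372036854775807 is the largest value a 64-bit Int can hold;
  // adding 1 cannot be represented, so instead of yielding 2^63 the
  // deploy either wraps to a negative number or faults
  stdout!(9223372036854775807 + 1)
}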
Calling insertArbitrary without bundle+ around the argument is a huge hazard.
Please add insertBundled (by any name) that applies the bundle for the caller.
Then hide the documentation for insertArbitrary, or something :)
I suppose something analogous for insertSigned is in order.
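A sketch of the requested helper, written in Rholang on top of the existing rho:registry:insertArbitrary; the name insertBundled and its signature are the proposal's, not an existing system contract:
new insertBundled, ri(`rho:registry:insertArbitrary`) in {
  // wrap the target in bundle+ before registering, so registry readers
  // can send to it but never receive on it
  contract insertBundled(target, ret) = {
    ri!(bundle+{*target}, *ret)
  } |
  // usage: register a channel and get its registry URI back
  new myCh, uriCh in {
    insertBundled!(*myCh, *uriCh) |
    for (@uri <- uriCh) { Nil } // uri is the rho:id:... lookup key
  }
}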
As far as I can tell, the current RevVault contract makes this impossible.
I hope to evolve this into a proposal to make it possible, as discussed last night with @leithaus .
It would be more straightforward if we could withdraw and deposit purses from Rholang, as well as perform signed transfers.
new myevent(`rho:io:event`) in { myevent!("stuff happened") }
#Event + + ("stuff happened") -> events.log
https://www.tutorialspoint.com/scala/scala_bitwise_operators.htm
For connected networks to act as a single network, any part of any connected network must be reachable from any other. The success of the WWW is partly due to URLs being usable from anywhere on the web to reach anywhere on the web. There are good arguments that RChain networks should be fully connected, with homogeneous access to resources. For a programming concept to be reusable, programmed only once, it must not require knowledge of the location of its parameters: it must work no matter where on the network the resources it references live. Rholang is perfect for decentralized computing across shards, given access to remote channels.
To be fully connected, remote names need to be supported transparently, as do remote deploys.
A remote deploy would need to translate local names to remote names, and the receiving shard would translate remote names pointing to it back into local names.
To deploy remotely, the REV account deploying must own REV on the receiving shard, or be allowed sponsored access.
A capability transport protocol (CapTP) may be used to implement security across shards.
The results of remote deploys need to be cached in the tuple space, so that the same remote deploy yields the same result for every validator without redundant execution on the remote shard. For critical transactions, a means of querying the finalization status of the remote block containing the deploy ought to be provided.
Remote names may be implemented as proxy channels that transparently forward the entire message, e.g. (first, ...rest), to the remote deploy process; a sketch follows below. RChain needs proxy-channel support for many such reifications, e.g. decorator patterns, revocable capabilities, and tokenization. Using proxies keeps all the sharding logic in Rholang, and thus customizable per shard.
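A sketch of that reification, assuming the variadic args... proposal above and a hypothetical transport system channel (rho:shard:transport is an illustrative name, not an existing URN):
new remoteName, transport(`rho:shard:transport`) in {
  contract remoteName(args...) = {
    // forward the whole message to shard B; the receiving shard
    // translates "targetName" back to its own local name
    transport!("shardB", "targetName", args...)
  }
}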
Names and even whole processes could be migrated across shards to minimize cross-shard access in an evolutionary manner.
In the EIES effort (including Jim, Rao, Kunj, GSJ, and w2vy), a decentralized object store was developed that supported remote objects transparently across the network of user agents and group agents; it successfully supported the virtual classroom and other group-system research at NJIT. Unfortunately we have been unable to get it open-sourced. RChain, however, provides the layer-one security our system never had. A number of us are here because we see RChain as a potential platform for scalably developing agents for users and groups, as we have done before. Yet reviewing the available documentation on RChain sharding, we have not found a design that includes the transparent decentralization we had on EIES2 at NJIT, where the programmer was not required to know, at run time, the location of any of the objects in use. Without that capability, developing a coherent networked system can be a nightmare.
Programmers who are unaware of which of the names they access are remote could generate an excess of network traffic. They need to be aware that not considering what is likely to be remote could be expensive.
09:51:05 From Rao Bhamidipati to Everyone : @ian could you please give me recording permission ?
10:47:23 From Tomislav Grospić to Everyone : https://rchain.atlassian.net/wiki/spaces/CORE/pages/515899462/Sharding+proposal
11:00:59 From Tomislav Grospić to Everyone : Obsolete
https://rchain.atlassian.net/wiki/spaces/CORE/pages/298549249/Sharding+Via+Localized+Processes
11:01:08 From Tomislav Grospić to Everyone : https://rchain.atlassian.net/wiki/spaces/CORE/pages/478445641/Shards+in+RChain
11:01:14 From Tomislav Grospić to Everyone : https://rchain.atlassian.net/wiki/spaces/CORE/pages/311722016/Powerset+shards
11:02:25 From Tomislav Grospić to Everyone : ETH cross shards
https://ethresear.ch/t/atomic-cross-shard-function-calls-using-system-events-live-parameter-checking-contract-locking/7114
Higher-order Smart Contracts across Chains - Mark Miller https://www.youtube.com/watch?v=iyuo0ymTt4g
dan's saturday RChain Rgov/smart contract call - dan's wisdom, ocap, capTP, erights and cross chain transactions https://youtu.be/OrEZzF2t8Uk?t=4458
In most popular blockchains I know of, there is a light-client specification: a way that a client can verify any transaction given a recent block header and a Merkle proof. Does RChain supply something analogous?
zsluedem/transaction-server shows how to find REV transfers from COMM events in RChain blocks. Is there something analogous to a Merkle proof for these transactions (or the COMM events that define them)?
As shown in this IBC relayer diagram, a light client for each blockchain is needed in order to relay between them. If we had an RChain light client, we could use IBC to participate in Gravity, a bridge to Ethereum.
See also cross-chain peg; e.g. to Agoric · Issue #12 · rchain-community/rstake
Description
None
https://rchain.atlassian.net/browse/RIP-3?src=confmacro
Created October 9, 2018, 5:38 PM
Governance of the Blockchain - Draft
Specify expressions for strings (Arthur Greef PR merge - convert into an RChip)
Dappy needs Scala regular expressions to be exposed in Rholang - e.g. to check for domain names
https://www.tutorialspoint.com/scala/scala_regular_expressions.htm
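A sketch of what the exposure could look like; the .matches method is an assumption mirroring Scala's String.matches and does not exist in Rholang today:
new stdout(`rho:io:stdout`) in {
  // check a dotted domain name against a Scala-style regex
  stdout!("this.is.my.domain".matches("([a-z]+\\.)+[a-z]+")) // true
}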