
protocol's Introduction

Cryptographic primitives, hosted on the decentralized nodes of the Threshold network, offering accessible, intuitive, and extensible runtimes and interfaces for secrets management and dynamic access control.



TACo Access Control

TACo (Threshold Access Control) is end-to-end encrypted data sharing and communication, without the need to trust a centralized authority that might unilaterally deny service or even decrypt private user data. It is the only access control layer available to Web3 developers that can offer a decentralized service, through a live, well-collateralized and battle-tested network. See more here: https://docs.threshold.network/applications/threshold-access-control

Getting Involved

NuCypher is a community-driven project and we're very open to outside contributions.

All our development discussions happen in our Discord server, where we're happy to answer technical questions, discuss feature requests, and accept bug reports.

If you're interested in contributing code, please check out our Contribution Guide and browse our Open Issues for potential areas to contribute.

Security

If you identify vulnerabilities in any nucypher code, please email [email protected] with the relevant information about your findings. We will work with researchers to coordinate vulnerability disclosure between our stakers, partners, and users to ensure successful mitigation of vulnerabilities.

Throughout the reporting process, we expect researchers to honor an embargo period that may vary depending on the severity of the disclosure. This ensures that we have the opportunity to fix any issues, identify further issues (if any), and inform our users.

Sometimes vulnerabilities are of a more sensitive nature and require extra precautions. We are happy to work together to use a more secure medium, such as Signal. Email [email protected] and we will coordinate a communication channel that we're both comfortable with.

A great place to begin your research is by working on our testnet. Please see our documentation to get started. We ask that you please respect testnet machines and their owners. If you find a vulnerability that you suspect has given you access to a machine against the owner's permission, stop what you're doing and immediately email [email protected].

protocol's People

Contributors

arjunhassard, cygnusv, derekpierre, mswilkison


protocol's Issues

A skeleton-logic proposal for another economic model of grant (probabilistic micropayments)

Note: this conversation originally took place on NuCypher's forum.

@jMyles [Sep 19]
When Ursula joins, she commits a sequence of hidden lottery numbers.

Alice creates KFrags, a TreasureMap, and a lottery punch. Gives the map and punch to Bob.

Alice proposes Arrangement, providing unsigned (and thus not yet valid) lottery ticket

Ursula checks contract to see how much funding Alice has for that ticket

Ursula says sure

Alice gives Ursula KFrag and signs lottery ticket

Bob seeks retrieval:

  • Gives Ursula WorkOrder consisting of hash of Capsules + punch-holes
  • Ursula responds by signing WorkOrder and saying “ready”
  • Bob sends clear Capsules and punch-holes
  • Ursula sends signed CFrags

Every block, every punch hole has a 1/1000 chance of winning for Ursula’s hidden numbers. If it’s a winner, Ursula reveals and receives the money from the contract.
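The per-block check in this flow could be sketched roughly as follows. The hash construction, the encoding of the 1/1000 odds, and all names are our own assumptions, since the proposal doesn't specify them:

```python
import hashlib
import secrets

WIN_ODDS = 1000  # each punch hole has a 1/WIN_ODDS chance per block

def is_winner(block_hash: bytes, ursula_hidden_number: bytes, punch_hole: bytes) -> bool:
    # Hypothetical winner rule: hash the block hash, Ursula's pre-committed
    # hidden number, and the punch hole; a digest divisible by WIN_ODDS wins.
    digest = hashlib.sha256(block_hash + ursula_hidden_number + punch_hole).digest()
    return int.from_bytes(digest, "big") % WIN_ODDS == 0

# When joining, Ursula commits only hash(hidden_number); she reveals the
# preimage to the contract if (and only if) she claims a winning ticket.
hidden_number = secrets.token_bytes(32)
commitment = hashlib.sha256(hidden_number).hexdigest()
```

Because Ursula's numbers are committed before any punch holes exist, neither party can bias the outcome after the fact (modulo the signature-grinding caveat raised later in the thread).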


@arjunhassard [Sep 19]
This is a really interesting idea, and also stimulates some broader thoughts about our payment channel(s) and fee structure. I’ve argued previously that compensating Ursulas (= workers, stakers, nodes, proxies) based on only one of a. policy duration or b. re-encryption requests, leads to various problems – see #7. In short, there is a risk of abuse if Alice and Bob are separate end-users with distinct, discretionary economic motives, and there isn’t a prevailing application layer channeling funds between users. Moreover:

  1. If Ursula is only paid when re-encrypting (or for some % of re-encryptions), but Bob issues very few or no requests over the course of the policy duration, then she will spend resources (the vast majority of her overheads, in fact) maintaining an available policy/state, but receive little or no compensation.
  2. If Ursula is only paid for holding a policy, then this doesn’t help address the uptime problem. The infrequency with which an Alice would pay for a 3 or 6 or 12 month policy also makes probabilistic micropayments unnecessary and unsuitable.

Therefore a hybrid approach, where Alice pays once for the persistence of the policy, and Bob probabilistically pays for each re-encryption, might be a happy medium.

In this flow, Ursula accepts an Arrangement based on the (standardised) cost of being online for the policy’s duration, which is entirely covered by Alice’s deposit up front.

Later, Bob pings Ursula. Ursula checks he has sufficient funds (see verification point below), then commits to some random numbers. The rest of the process mirrors jMyles’s proposal above. This disentangles Bob from Alice: he is no longer restricted by the ‘punch’ bestowed on him, and can request re-encryptions as many times as he is willing to pay for (and collateralise).

Some general points about probabilistic micropayments applied to the NuCypher network, regardless of the viability of this counter-proposal:

  • Ursula needs the ability to verify that the payer (whether it's Alice or Bob) has enough funds to cover the situation in which ALL of the winning tickets are redeemed – i.e. for all their current policies or requests, and for all n Ursulas with whom there is presently a commercial relationship.
  • The risk remains of Ursula receiving the capsule and random numbers, withholding the CFrag, and proceeding to claim payouts.
  • In terms of sourcing randomness, why not use something akin to LivePeer’s approach – winning tickets are selected based on a random value produced by hashing the payer’s random number with the payee’s secret random number.

@michwill [Oct 19]
Let’s define a protocol. How about this.

We have Bob, and we have someone who pays (could be Alice, or Bob, or some third party - let’s call him Patrick). The protocol looks like the following:

Patrick escrows some money;

  • Patrick makes a ticket pack where he defines a range of counters for Bob (ctr_min and ctr_max) and signs: s_p = patrick.sign(hash(ctr_min | ctr_max | pubkey_ursula | pubkey_bob)), and the preticket is t_0 = s_p, ctr_min, ctr_max, pubkey_bob;
  • Bob gets t_0 and verifies that Patrick’s account in an escrow (where the Ethereum address is derived from Patrick’s pubkey) has money locked for some time;
  • When Bob pays, he produces t_1 = t_0, ctr, bob.sign(t_0 | ctr) and gives it to Ursula;
  • Ursula checks that Bob is indeed allowed to pay with this ticket pack (e.g. Patrick signed t_0) and that Bob signed t_1. Then, Ursula produces what is actually a ticket number: t_2 = t_1, ursula.sign(t_1). If the remainder of dividing hash(t_2) by p (where p sets how rare the lucky tickets are) is 0, it means that Ursula won. She shows this t_2 to the escrow and claims her reward;
  • If she didn’t win – too bad for her!
  • The smart contract tracks the claimed hashes, so that the same prize cannot be claimed twice.

In this protocol, no one knows what hash Ursula will get (except for Ursula herself) before she actually calculates and broadcasts it. Neither can anyone guess what Bob will produce (because no one can forge his signature). True randomness isn’t required.

Patrick makes tickets for Bob and for specific Ursulas (so that Bob can spend tickets only for the n Ursulas).

Does it work, or am I missing something?
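For concreteness, the t_0 / t_1 / t_2 chain might be sketched as below. This is only an illustration of the structure: HMACs stand in for real (ideally deterministic) signatures, and the byte encodings and function names are assumptions.

```python
import hashlib
import hmac

P = 1000  # rarity: one in P tickets wins

def sign(secret_key: bytes, message: bytes) -> bytes:
    # Stand-in for a real signature scheme (a keyed hash is NOT verifiable
    # by third parties); determinism matters here, per the reply below.
    return hmac.new(secret_key, message, hashlib.sha256).digest()

def make_preticket(patrick_key, ctr_min, ctr_max, pubkey_ursula, pubkey_bob):
    # t_0 = s_p, ctr_min, ctr_max, pubkey_bob
    body = b"|".join([str(ctr_min).encode(), str(ctr_max).encode(), pubkey_ursula, pubkey_bob])
    s_p = sign(patrick_key, hashlib.sha256(body).digest())
    return (s_p, ctr_min, ctr_max, pubkey_bob)

def pay(bob_key, t0, ctr):
    # t_1 = t_0, ctr, bob.sign(t_0 | ctr)
    return (t0, ctr, sign(bob_key, repr(t0).encode() + str(ctr).encode()))

def redeem(ursula_key, t1):
    # t_2 = t_1, ursula.sign(t_1); Ursula wins iff hash(t_2) % P == 0
    t2 = (t1, sign(ursula_key, repr(t1).encode()))
    digest = hashlib.sha256(repr(t2).encode()).digest()
    return t2, int.from_bytes(digest, "big") % P == 0
```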


@cygnusv [Oct 19]
Two things:

  • Perhaps s_p also needs a nonce, to avoid any kind of replay attack.
  • Unless we use deterministic signatures, Ursula can sign t_1 multiple times until she finds a t_2 that is a winning ticket.

Entry points for lazy Ursulas

Just writing down some things we noticed regarding the issue of lazy Ursulas that only confirm activity but perform no work.

These are some entry points for such Ursulas:

  • consider_arrangement() in ProxyRESTServer, which allows Ursulas to decide whether or not they want to participate in a particular Policy.
  • PolicyManager.setMinRewardRate(), which allows Ursulas to set a reward rate so high that it discourages Alices from selecting them (whether handpicked or randomly sampled). Related: nucypher/nucypher#1021.

@arjunhassard Not sure which previous issue this is related to. Feel free to reorganize this into one of the already-open issues.

Cloudflare to authy auto connect

I am having a hard time: every time I sign in to my Cloudflare account it keeps asking for an authentication password number, so I downloaded the Authy two-factor authentication app. Is it possible to synchronize my Cloudflare account with Authy so it auto-connects at login?

Customer <> service-provider commercial engagement

First of multiple Issues relating to NuCypher market design, with the aim of exploring the system constraints and crystallising the engagement between customers (Alices) and service-providers (Ursulas). Starting with how this flow looks today:

  1. Alice has created and parameterized an Arrangement.
    def __init__(self, alice, expiration, ursula=None, arrangement_id=None, kfrag=UNKNOWN_KFRAG, value=None, alices_signature=None) -> None:
  • Duration is specified (today -> expiration), this is the input for calculating the price (per policy).
  • A deposit is included to compensate participating Ursulas (value). Is this reassurance that the data owner has enough funds? Should this cover the entirety of the eventual payment? If not, how does Ursula know they will be paid in full?
  • m and n are not included in the arrangement, but have been specified via the grant function
  2. This arrangement is now broadcast out to various Ursulas – selected at random, but weighted by the size of the segments of their respective stakes that satisfy the duration criteria.
    *If many Alices choose specific nodes (handpicked_ursulas), it may change the nature of the market, as it introduces competition between service-providers. More on this in #5.
    *The number of jobs/policies an Ursula is currently responsible for has no impact on the selection process. In the future this could even out demand.

  3. The Ursulas who receive the arrangement choose whether or not to accept it.
    def consider_arrangement():
        from nucypher.policy.models import Arrangement
        arrangement = Arrangement.from_bytes(request.data)
    *On what basis are Ursulas accepting/rejecting arrangements, given their contents? Arrangement durations will equal or be shorter than an existing commitment to the network, so that's not a good reason to reject. Certain Ursulas may seek to optimise their margins (i.e. avoid high overheads), but blindly rejecting arrangements is unlikely to help; the only exception is an Ursula who anticipates a particularly large volume of access requests (e.g. because they work out that an arrangement was sent by a Telegram-like application). If that became a widespread strategy, it would jeopardise network scaling and likely require changing the compensation model.
    *Giving Ursulas the right to reject arrangements introduces uncertainty and latency for data owners / developers, who are accustomed to ~100% predictability and immediate responses from existing key management systems.
    *Ursulas rejecting arrangements may also clash with an over-arching objective to commoditise the access control service – i.e. provide a predictable, uniform service for a uniform payment. Indeed, a single network price for a given policy duration would make it even less rational to reject an arrangement, as there is definitely no prospect of a better offer around the corner.

  4. Alice waits until n Ursulas accept her arrangement proposal.
    *Is the system sequentially approaching Ursulas (or batches of Ursulas) until enough accept? If so, how long does it wait before the next Ursula/batch is contacted? Is there a situation where an Ursula accepts, only to find that the "position has been filled"?

  5. Once n Ursulas accept the arrangement....
    *Before publishing, Alice can still decide to cancel the arrangement and create a new one, without paying out anything.
    *If the deposit is not enough to cover the total cost of the participating Ursulas for the duration, does Alice now add more funds (now that she knows how many Ursulas to pay)?

  6. The policy is live. Participating Ursulas perform their duties and are compensated.
    *Is compensation split into tranches, spaced out to fit the duration of the corresponding policy?

  7. Alice can revoke the arrangement for all or any of the participating Ursulas. She has to 'pay for the period in which the revocation is requested'.
    *A 'period' is a single cycle, so just one day – is this enough of a disincentive? The ditched Ursula may be gearing up for a long commitment, only to be dropped with zero notice or severance pay.
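The duration-filtered, stake-weighted selection described in this flow might look roughly like the following sketch. The data layout and names here are hypothetical, not the actual nucypher implementation:

```python
import random

def sample_ursulas(stakes, policy_expiration, n, rng=random):
    """Pick n Ursulas at random, weighted by the stake segments that
    remain locked at least until the policy expires.

    stakes: dict of ursula_address -> (locked_amount, unlock_date)
    """
    eligible = {u: amount for u, (amount, unlock) in stakes.items()
                if unlock >= policy_expiration}
    if len(eligible) < n:
        raise ValueError("not enough Ursulas satisfy the duration criteria")
    chosen = []
    pool = dict(eligible)
    for _ in range(n):  # weighted sampling without replacement
        candidates = list(pool)
        u = rng.choices(candidates, weights=[pool[c] for c in candidates], k=1)[0]
        chosen.append(u)
        del pool[u]
    return chosen
```

Note that nothing here prevents failure when too few stakes satisfy the duration criterion, which is exactly the "not enough willing/compatible Ursulas" concern raised in step 4.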

Decentralization

Following @cygnusv's initial look at testnet decentralization.

Some risks of overcentralization that impact NuCypher service quality & network health:

  • Collusion
    Stakers might collude to compromise data privacy, to deny revocation, to game payout mechanisms, or to facilitate other (unknown) attacks.
  • Censorship
    Blocking access to data by refusing to re-encrypt. This is hard to orchestrate – see nucypher/nucypher#803
  • High threshold service discontinuation
    For whatever reason, if a dominant staker or staker cartel decides to spin up the minimum number of workers (1), then users (and indeed, mechanisms) relying on a large n may suffer.
  • Weakening of economic mechanisms (i.e. stake-based sybil resistance)

Across which planes should we measure and monitor the Gini coefficient / Lorenz curve? At least these four:

  1. Primary ownership of tokens
    Fairly obvious, since reward allocation and job assignment are stake-weighted
  2. Control of delegated tokens
    Depending on governance rules, this could equate to the political power @michwill mentioned
  3. Total number of workers
    More of a functional issue (i.e. are there enough workers to satisfy the threshold choices of users) – doesn't have much impact on censorship or power consolidation risks
  4. Client diversity
    Including dependency on underlying clients (i.e. Geth dominance)
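For reference, a minimal Gini computation that could be applied to any of the four planes above (token ownership, delegated tokens, workers per staker, client share):

```python
def gini(values):
    # Gini coefficient via the sorted-rank identity:
    #   G = (2 * sum(rank_i * x_i)) / (n * sum(x)) - (n + 1) / n,
    # with x sorted ascending and rank_i the 1-based rank.
    # 0 = perfect equality, (n - 1) / n = maximal concentration.
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    ranked = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * ranked) / (n * total) - (n + 1) / n
```

For example, four stakers with equal stakes give gini([1, 1, 1, 1]) == 0.0, while gini([0, 0, 0, 1]) gives 0.75, the maximum for n = 4.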

The risks of overcentralization are also affected by:

  • the reliability/security of the randomness with which stakers are selected for policies (initially, and if a user requests a worker re-shuffle)
  • 'permanence of employment' - how long stakers are tied into (relatively) lucrative policies, during which they cannot be displaced
  • liquidity and price of token: how feasible is it for a staker to join later and reduce a trend towards consolidation
  • typical thresholds chosen by users

NuCypher’s “free market”

Since it is near-impossible, and not necessarily desirable, to completely prevent providers from adjusting the price point they offer to customers, we examine the incentives compelling providers to proactively implement ‘independent’ pricing, and conversely, to passively follow ‘standardised’ pricing. We begin with possible engagement flows.

Engagement flow
The ‘engagement flow’ is the protocol through which customers and providers discover one another, agree to the price and other parameters of the service, and confirm service commencement/compensation. This flow is important to a decentralized network in many respects, including UX, security and redundancy – but crucially, it impacts price convergence/divergence trends and the degree to which the market is ‘free’.

Customer-driven engagement
A simple engagement flow involves the customer first constructing a job offer package (in NuCypher this is called an Arrangement), which specifies attributes of the required service (in NuCypher's case the policy duration and security threshold n [number of providers] are the most important) and includes a deposit to cover the total cost. The customer submits this to the network and waits for the specified number of providers to accept it. This choice of engagement flow has the following characteristics and possible reactions from customers:

  • Without a formal, advertised price discovery tool, there is no obvious way to know a priori what the distribution of price points is, or whether there are a sufficient number of providers willing to serve at the desired price.
  • Without burdensome extra-protocol actions – i.e. contacting and orchestrating providers individually – there is no straightforward means to pay different providers different amounts for the same service (i.e. managing the same policy).
  • The onus is on the customer to bid a price. If they want to fish for a deal – i.e. lower than the default or dominant price – then they must submit a deposit of commensurately smaller size.
  • Fishing for a deal may lead to failed offers, if <n providers are willing to accept the proposed price.
  • This may lead to sophisticated customers starting with the issuance of very low priced offers and steadily increasing the price until n providers accept. They may also opt to decrease n. The former could come at the expense of a non-trivial time delay, the latter decreases security/redundancy.
  • It may be more difficult for customers who require longer duration policies, since there will be fewer compatible providers (i.e. with locked-tokens for that length), and therefore a greater likelihood of failed offers and time wasted. A customer cannot get a true price signal for a long duration policy by testing with short duration policies, since the offer acceptance rate will differ even for the same set of providers.

The lack of formal or provider-driven price discovery means that some customers will stick to the default pricing and not attempt to ‘shop around’, particularly those with end-users who are sensitive to slow or unreliable UX. On the other hand, the ability for developers to ‘sponsor’ policies and abstract the payment away from end-users greatly facilitates this kind of strategy. If customers do employ the deal-search method described above, low-cost providers will accept offers only to see those jobs time out or be withdrawn due to the lack of other willing providers. This may lead to proactive banding together at certain lower-than-default price points.
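The deal-search strategy described above amounts to a simple loop. A hypothetical sketch, where offer(price) stands in for submitting an Arrangement at that price and counting accepting providers:

```python
def probe_price(start_price, step, n_required, offer, max_rounds=20):
    # Start with a low-ball bid and raise it until n providers accept.
    # Each failed round costs the customer time and a withdrawn offer.
    price = start_price
    for _ in range(max_rounds):
        if offer(price) >= n_required:
            return price
        price += step
    return None  # no viable price found within the budgeted rounds
```

This is exactly the behaviour that imposes a time delay on the customer and leaves low-cost providers with timed-out acceptances.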

Alternative: bidding
A common pricing model for decentralised networks is some sort of auction system. This appears to be inappropriate for the NuCypher network for a very fundamental reason: the service is not scarce. Unlike in a consensus mechanism with finite block sizes, the number of transactions that can be processed by a NuCypher provider in a given time period is practically unlimited. Nor do NuCypher providers incur extra overheads above and beyond the minimum – i.e. the costs of staying online and answering requests promptly – if demand increases. Conversely, services like data storage, market-making or heavy computation are all highly capacity-sensitive (financially and/or in terms of risk) and therefore encourage, or in some cases necessitate, fishing around to find the highest customer bids. In terms of COGS, NuCypher is similar to a SaaS product – the demand-overhead (x-y) relationship is sublinear. The conclusion is that, in the absence of a price-fixing cartel, it is rational to accept all non-zero bids/offers, and hence an auction system (whether first-price, second-price or some other configuration) is fairly pointless.

Alternative: formal price discovery layer
To be discussed.

Pricing strategies on a provider level

Independent pricing
Reasons to diverge from the default or dominant price point:

  • In order to serve a customer segment that cannot currently afford NuCypher.
    This incentive can be harnessed to increase adoption of the network – see 'Demand-driven pricing' section of the full pricing analysis.
  • In order to stand out, in lieu of other options for a provider to differentiate themselves. Perception of service ‘quality’ for a given engagement is relatively binary. Arguably, providers either deliver a correct re-encryption or revocation when expected, or they don’t. Hence, the continuum along which a provider can stand out to customers is very limited. A public scoreboard logging the percentage of answered requests would likely show high performance amongst the majority of providers, and therefore not provide customers with sufficient incentive to pay more for a marginally more reliable provider (though they may blacklist low-performers). This incentive is stronger in epochs of oversupply.
    Note: One exception to the relative homogeneity of NuCypher service quality is the latency with which request calls are answered, so a major avenue for differentiation, besides price, may be the strategic placing of worker machines around the world, plus other optimisations (e.g. higher RAM). This is only a differentiator if customers both need and are optimising for low latency, and is a costly exercise if they are not – this chicken-and-egg situation means latency differentiation will probably not occur in early epochs of the network.
  • In order to undercut other providers and/or undermine their business sustainability.
    Note: unlike in the traditional economy, there is a lack of information on other providers – in particular, their funds and operational efficiency – so sacrificing revenue to put another provider out of business is very risky (they may have deeper pockets than expected).
  • External, macro changes necessitating increases in revenue to stay afloat – including the value of subsidies (inflation) decreasing due to a. scheduled decay, b. large increases in the stake ratio or c. non-temporary depreciation of the native token.

Standardised pricing
Reasons to stick with the default (or converge to the dominant) price point:

  • The majority of customers stick to the default price point, because ‘choosing’ providers involves proactivity, risk and extra-protocol work. Although, as discussed, it is conceivable that an application developer could create a rule to only select providers at a lower-than-default price point, this may be limited to a minority of sophisticated customers, especially in early epochs.
  • There is a lack of ‘relationship stickiness’ between customer and provider. Although some customers may want their policies to perpetuate for a long time, e.g. 1-2 years, and the entire cost of said policies is agreed upon and paid for up-front via the deposit, there is almost nothing preventing the customer switching to another provider once these policies have expired (or if they need other policies prior to expiry) or even revoking the initial policies for a near-full refund if a cheaper provider emerges. In other words, there is almost zero customer ‘lock-in’, either contractually or in terms of time/effort (for example, the time it takes a salesforce to learn a new CRM interface). This means that offering unsustainably low prices in an attempt to secure customer lock-in/loyalty is a poor strategy.
  • Efficiency-driven (i.e. sustainable) undercutting strategies are difficult due to the nature of the service. Beyond a low minimum of outgoing expenses (hardware + internet connection + electricity OR server rental), there is little opportunity to leverage economies of scale, or other strategies, to increase the efficiency of service. (The exception to this is the cost of maintenance and upgrades, which is a time/expertise/salary-driven overhead and can be made leaner). This means that differentiation based on price can only go so far (i.e. as close as possible to universal minimum expenses) if it is to be sustainable. In other words, there is not a great deal of wiggle room for one provider to be significantly more operationally efficient than another. Hence price wars may involve providers running at a loss, which, as examined in points (2) and (4), is risky in the long-term.
  • Some providers may want to expediently maximise the predictability/stability of the service to customers. For example, if many providers choose to offer prices that imply unsustainable operations below a given subsidy value, and this inevitably necessitates price rises, this risks irrevocable damage to the network if customers with large sunk costs are suddenly, or even gradually, unable to afford the service. Providers vested in the network (e.g. those with NU tokens locked for 1+ years) will seek to avoid this scenario, as even the prospect of this could seriously hamper medium/long-term demand.

Full pricing analysis

Migrating future unminted issuance on a stake-by-stake basis

At a given snapshot in time (e.g. right this moment), the future unminted supply can be thought of as having two distinct segments:

(1) A sum of tokens that is currently ‘reserved’ for all active sub-stakes. 

(2) A sum of tokens which is ‘unclaimed’ because no current sub-stakes have an unlock duration stretching far enough into the future to reserve the issuance scheduled to be distributed in those future periods.

One can even calculate, for each sub-stake, the sum of tokens reserved for it on each forthcoming period until it unlocks, based on the current snapshot of all other sub-stakes' sizes and unlock dates, and the maximum period-to-period issuance – itself based on the combination of kappa coefficients on relevant future periods, also calculated in this current moment.
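A sketch of that per-sub-stake calculation, assuming the per-period issuance caps (with the relevant kappa coefficients already folded in) are given as inputs; the pro-rata split rule is our simplifying assumption:

```python
def reserved_issuance(substakes, period_caps):
    """For each future period, split that period's maximum issuance
    pro rata among the sub-stakes still locked, and accumulate each
    sub-stake's total reserved tokens.

    substakes:   list of (size, periods_until_unlock)
    period_caps: list of maximum issuance per forthcoming period
    """
    reserved = [0.0] * len(substakes)
    for period, cap in enumerate(period_caps):
        locked = [(i, size) for i, (size, unlock) in enumerate(substakes)
                  if unlock > period]
        total = sum(size for _, size in locked)
        if total == 0:
            continue  # nothing locked: this issuance is 'unclaimed' (segment 2)
        for i, size in locked:
            reserved[i] += cap * size / total
    return reserved
```

E.g. two equal sub-stakes unlocking after 2 and 1 periods, with a cap of 10 per period, reserve 15 and 5 tokens respectively; a third period's cap would fall into segment (2) above.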



It is therefore possible, in theory, to port future, reserved issuance over to a new network/protocol/contract with sub-stake level granularity. In this scenario, stakers migrate over at their own discretion, bringing their subsidies – all taken from supply segment (1) above.

If each sub-stake initialized on the new network is forced to have the exact same unlock date as it did on the old network, it's harder to game. It doesn't matter how many sub-stakes are migrated over: each sub-stake’s individual issuance rate/schedule doesn't change, relative to the old network [1]. Similarly, the issuance rate/schedule of sub-stakes remaining on the old network shouldn’t change, because only the portion of future issuance that was already reserved for the departing staker has gone with them.

Because NuCypher doesn’t allow you to decrease an unlock duration, only extend it, one can make some sort of claim to unminted tokens. Of course, receiving an exact sum calculated today relies on the breakdown of sub-stake size and unlock duration not changing, which is unrealistic. However, if both the old and new networks have very similar rules with respect to the divvying up of tokens on a given period, and both disallow decreasing unlock durations, then the main difference is that the new network doesn't have segment (2) above.

[1] However, once any sub-stake's parameters are reconfigured, it could change the issuance rate/schedule. For example, if sub-stake A unlocks 100 days into the future, and sub-stake B in 200 days, then if A decides to prolong, it could eat into sub-stake B's future issuance – depending on the rules. Arguably, one shouldn't be able to touch issuance one didn't bring. However, this means brand new stakers (who buy tokens on the market, for example) would only be able to earn fees on the new network. This isn't necessarily terrible, but it would create a big pull factor to the new network, because your future issuance wouldn't be dilutable (unlike in the old network). However, you wouldn't have access to supply segment (2), unless some part of it was also migrated over based on DAO consensus. Whether these incentives balance out needs more thinking.

A system like this would also allow for a staker to return to the old network unilaterally, bringing their unminted future issuance with them. It reduces the issuance-based dilution pressure on old network non-stakers, while preserving optionality on an individual staker level.


Number of workers in the network – determining max_stake_size, min_stake_size & default m/n

Stimulated by a discussion with @derekpierre, @mswilkison, @vepkenez and @szotov, plus an interchange between Gaia and @cygnusv & @michwill on the #staking channel.

The protocol cannot fully control the number of individual workers/Ursulas in the network. However, it can push this towards desirable bounds through careful parametrization – in particular max_stake_size and min_stake_size, and, less directly, via the default and/or recommended threshold.

There are many incentivizing forces underpinning how a staker chooses to split their stake. Here are some possible objectives:
(1) Maximise exposure to sharing policies and therefore fees. In general, due to the threshold scheme, the more workers a staker runs, the more policies it will be selected for. This holds if: a) 5 workers with x/5 the stake earn exactly the same as 1 node with x stake, b) there is nothing preventing multiple workers controlled by the same staker from being selected for the same policy, and c) some policies have an n such that larger stakes are excluded (even if these are rare).
(2) Minimise downtime. In general, the more workers a staker runs, the more redundant their set-up is and the fewer work opportunities they will squander. This may, if high-throughput policies become common, trade off against the capacity of each individual worker, and is therefore bound by overheads – see (6)
(3) Maximise exposure to policy issuers optimising for latency. Although this isn't yet a formal feature, it's probable that certain use cases (e.g. Staker KMS) will necessitate this user optionality, and furthermore, the protocol may not be able to prevent network users from testing latency with various nodes and selecting accordingly. The more spread out (and strategically placed) a staker's workers are, the better.
(4) Minimise slashing downside. The more workers, the smaller the amount of stake at risk of being ceded in the case of an incorrect re-encryption.
(5) Minimise attack downside. The more workers, the smaller the amount of stake at risk if a worker is compromised and forced to re-encrypt incorrectly.
(6) Minimise overheads. Generally, the more workers, the greater the cost – both in terms of time+effort and cloud/hardware overheads.
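Objective (1) can be probed with a quick Monte Carlo: split a fixed stake across k workers and estimate how many of them land in a stake-weighted sample of n (assuming, per condition b) above, that multiple workers of the same staker may serve one policy). All names and parameters here are hypothetical.

```python
import random

def expected_selections(my_stake, k, other_workers, n, trials=500, seed=42):
    # Estimate the mean number of this staker's workers chosen when n
    # workers are sampled per policy, weighted by stake, without replacement.
    rng = random.Random(seed)
    weights = [my_stake / k] * k + list(other_workers)
    mine = set(range(k))  # indices of this staker's workers
    total_hits = 0
    for _ in range(trials):
        pool = dict(enumerate(weights))
        for _ in range(min(n, len(pool))):
            idxs = list(pool)
            idx = rng.choices(idxs, weights=[pool[i] for i in idxs], k=1)[0]
            if idx in mine:
                total_hits += 1
            del pool[idx]
    return total_hits / trials
```

With 10 equal-stake workers and n = 5, a staker running one of them is selected in roughly half of the policies, as expected; varying k shows how splitting interacts with the sampling rules.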

The net economic pressure might be to run as many workers as possible, increasing over the long term as fees overtake rewards. Hence min_stake_size must be set to counter-balance this, or there may be a race to the bottom – compromising a key protocol objective: that workers handling policies and kfrags have sufficient collateral attached to incentivise a minimum level of competence and reliability.

There are other constraining factors on the max number of workers, such as the scalability of the learning/discovery loop and worker sampling before policy creation, and the capacity of dependencies – but I'll leave it to others to elaborate on these!

Advanced worker selection conditions (sampling algorithm)

Setting the scene:

  • NuCypher’s sampling algorithm is already more sophisticated than selection weighted purely by stake size – it in fact reconciles unpredictable user preferences (policy duration and threshold) with a sybil-resistant, incontrovertible network state (all staked tokens, their lock durations and the workers to which they pertain) – this is a great foundation to experiment with other selection conditions.
  • Changing selection conditions is a lighter, lower-risk nudge towards good behaviour than slashing, which from an economic (and psychological) perspective is less repeatable and uni-directional. In other words, the probability of a given worker being assigned a policy could continually adjust over time, in both a positive and negative direction, but conversely there are only so many times you can slash a worker, and there’s no mechanism to reverse a slash. An imperfect selection rule is less likely to cause an exodus of workers.

Performance indicators that could be incorporated into worker selection:

  1. service quality
    measurable via: % correct re-encryptions, % ignored re-encryption requests (i.e. no answer within a globally agreed time-box), median time to answer a re-encryption request. Regardless of which measure or combination defines service quality, it makes sense to weight recent activity as more indicative of quality than older activity, so some kind of ageing function can be utilised
  2. service compatibility
    based on specified user preferences; e.g. geographical proximity (measured as latency) or capacity (e.g. for a particularly high throughput). Rather than selecting exact workers by address, a user could get a set of workers that best matches their stated requirements
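As a sketch of such an ageing function, the following is a hypothetical exponentially-decayed quality score. The half-life, the outcome encoding (1.0 for a correct re-encryption, 0.0 for an ignored or incorrect one), and all names are illustrative assumptions, not protocol values:

```python
import math
import time

HALF_LIFE = 7 * 24 * 3600  # assumed: one week, in seconds

def quality_score(events, now=None):
    """Weighted average of re-encryption outcomes, where the weight of
    each event halves every HALF_LIFE seconds, so recent activity is
    more indicative of quality than older activity.

    events: iterable of (timestamp, outcome) pairs, outcome in {0.0, 1.0}.
    """
    now = now if now is not None else time.time()
    num = den = 0.0
    for ts, outcome in events:
        weight = math.exp(-math.log(2) * (now - ts) / HALF_LIFE)
        num += weight * outcome
        den += weight
    return num / den if den else 0.0
```

A score like this could then be multiplied into the stake-weighted sampling probability, so that poor recent service lowers (but never permanently zeroes) a worker's chance of selection.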

Open questions:

  • Will rewards dwarf the incentive of higher probability selection?
  • How do we generate statistics on worker performance in a cheap and ungameable way?
  • How does the selection algorithm handle price differentials between workers (i.e. a free market)?
  • Can stakers with negatively scored workers simply spin up new ones?
  • What other factors could be inputs into the selection algorithm?
  • Given the imperfect attribution associated with worker downtime, might selection-based punishments be fairer than slashing?

Pricing (supply-side overheads analysis)

Firstly, we confirm that, at launch, the network service will be fully commoditised. In other words, initially there will be a default, universal relationship between a policy's duration (+ # Ursulas involved) and the price of that policy, regardless of other factors (e.g. historical reliability of those Ursulas). Next, we need to narrow in on this uniform cost in real terms (i.e. in ETH or USD), through:

(1) analysing the overheads required to run a reliable Ursula
(2) comparing to existing services and their pricing models
(3) testing potential pricing (outcomes of 1 & 2) with adopter budgets
(4) taking into account other third-party costs (e.g. fiat->ETH conversion fees)

(1) Overheads
Assumptions based on network expectations:
a. Ursulas must be online (nearly) 100% of the time – minimise interruptions
b. Ursulas should dedicate a machine or server to re-encrypting – minimise security/downtime risks
c. Ursulas will ideally have an anti-DDoS solution running – minimise censorship/downtime

Most Ursulas will achieve these requirements in one of two ways: with a cloud solution or a physical machine.

Cloud Monthly Costs

  • AWS t2 micro (1 vCPU, 1GB): $8.47
    this may be too little memory to derive a key from passphrase at bootup
  • AWS Compute Optimised Instance (2 vCPU, 4GB, Linux): $62.05
  • AWS Dedicated Instance (1 vCPU, 2GB, Linux): $1,479.71 (includes $2/h region fee)
  • Amazon Dedicated Host: $327.77
  • Google Cloud Compute Optimised (2 vCPU, 4GB, Linux): $72
  • Azure Cloud Compute Optimised (2 vCPU, 4GB, Linux): $63
  • Digital Ocean Compute Optimised (2 vCPU, 4GB, Linux): $40
  • Digital Ocean Standard (2 vCPU, 4GB, Linux): $20
  • Digital Ocean Standard (2 vCPU, 2GB, Linux): $10
    (all prices on-demand and US cheapest)

Cloud Ursulas will still need an internet connection, which varies worldwide from $10 to $150 per month, but we can assume that most nodes do not need to upgrade their internet to service the network. Given minimum memory constraints, the minimum spend for a cloud approach is at least $10/month, but more likely to be between $20 and $40/month.

Physical Machine Costs

  • Electricity costs vary between ~$0.04 and ~$0.33 per kWh worldwide
  • Typical CPU wattage varies between ~300 and ~100 depending on the model/usage.
  • CPUs cost between ~$80 and ~$1,000.

Assuming that Ursulas will predominantly be located in regions with cheaper energy, and that re-encryptions do not require high wattage nor an expensive machine, and that the Ursula spreads out their payments for the hardware over 1 year with 0% interest: Cost/month = ( [$0.10/kwh]/1000 * 150W * 730h ) + ($450/12m) = $10.95 + $37.5 = ~$50
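The back-of-the-envelope calculation above can be reproduced directly (all figures are the illustrative assumptions from the text, not measurements):

```python
# Assumed inputs from the analysis above
energy_price = 0.10   # $/kWh, cheaper-energy region
wattage = 150         # W, modest CPU under light re-encryption load
hours = 730           # hours in a month
hardware = 450        # $ machine cost, amortised over 12 months at 0% interest

electricity = energy_price * (wattage / 1000) * hours  # $10.95/month
amortised_hw = hardware / 12                           # $37.50/month
monthly_cost = electricity + amortised_hw              # ~$48.45, i.e. ~$50
```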

Conclusion
There will be lots of variance, but we can assume that if an Ursula is earning less than $10 per month in revenue, they'll be making a loss. Put differently, if most Ursulas aren't making more than $10, node operation will be unsustainable or pointless in all but a few super-cheap regions. If they're earning less than $50, then they're not covering the cost of their hardware, which is significant because we may expect a dedicated machine that shouldn't be used for anything else (TBD).

It's important to note that this floor of outbound costs is largely unaffected by the size of the Ursula's stake, since the vast majority of expense goes into ensuring high availability rather than handling a big re-encryption load. In other words, we've identified the minimum cost of running a node, regardless of how much work is taken on.

(2) Pricing of alternative products
It is likely that customers will pay more for better security, trustlessness, decentralization, a wider range of operations, or the unique capabilities of Ursulas, Enricos, etc. However, our prices are still bounded to some extent by the cost of alternative providers (centralized or otherwise) and their pricing structure:

Software protected

  • AWS KMS - AES/GCM only | $1 per key per month, $0.03 / 10k decryptions
  • GC KMS - EC, others | $2.5 per key per month for first 2k keys ($1/month thereafter), $0.15 / 10k decryptions

HSM-protected

  • AWS CloudHSM | $1,058.50 per month (x2 for replication), $5,000 set up fee
    An AWS CloudHSM cluster can store a maximum of approximately 3,300 keys
  • Azure KeyVault - EC, others | $5 - $0.40 per key, $0.15 / 10k decryptions

Use case
Let's walk through hypothetical use cases and see if our nodes are profitable. We'll assume we have 500 nodes in our network. A gaming DApp has 10,000 users who play 2 games per day. In each game, 50 decryptions occur per user. We will also assume that the game developer wants to provide a semblance of trustlessness to users, so generates an individual 'master key' for each (Note: this is a big assumption, as most developers use a small number of master keys to handle access for a large population of users). In any case, using AWS KMS would cost the app $91 in decryptions and $10,000 for maintaining unique keys each month, which equals $1.01/user/month. Using GC KMS would cost $456 for decryptions and $13,000 for keys, which equals $1.35/user/month. If NC matched GC KMS on total cost, this would yield $27 in revenue for each of the 500 nodes. Looking at other use cases with the same number of users and matching GC KMS's per-user price:

  • Data marketplace: (25 data buyers, 50 new data points per user per day): $31 per Ursula per month
  • Chat app (average # of groupchats/members/messages, shared secret updated every 5 minutes): $39 per Ursula per month
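The gaming DApp comparison above works out as follows (an average month of 30.42 days is an assumption chosen to match the quoted figures; the KMS rates are those listed earlier in this section):

```python
# Workload assumptions from the use case above
users = 10_000
games_per_day = 2
decryptions_per_game = 50
days_per_month = 30.42  # assumed average month length

decryptions = users * games_per_day * decryptions_per_game * days_per_month

# AWS KMS: $1/key/month, $0.03 per 10k decryptions
aws_monthly = users * 1.00 + decryptions / 10_000 * 0.03   # ~$10,091

# GC KMS: $2.50/key/month for first 2k keys, $1 thereafter, $0.15 per 10k
gc_monthly = 2_000 * 2.50 + 8_000 * 1.00 + decryptions / 10_000 * 0.15  # ~$13,456

# If NuCypher matched GC KMS on total cost, split across 500 nodes:
nodes = 500
per_node_revenue = gc_monthly / nodes  # ~$27 per node per month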

Other assumptions:

  • These figures assume that all stakes are equal. In reality, some Ursulas will have 5-10x less stake, and therefore far less revenue capture, but the same minimum overheads.
  • All adopters choose the same number of Ursulas for each policy. A big variance here makes choosing a price per policy more difficult.

(3) Adopter Budgets
Our prices are also bounded by our adopters' budgets, which in turn are bounded by their revenue models – e.g. how much does a DApp make per user, and does this justify the overhead of each user's sharing flows? So the next questions are:

  • How much revenue per user will typical adopters be seeking in each use case?
  • What percentage of this revenue can justifiably be spent on decentralized access control, even where it is critical to the functioning of the platform?

TBD: Changing the pricing
How easy is it to dynamically alter the price in response to demand (or lack thereof), or other factors?

Sybil/collusion-resistant subsidies

TLDR
A protocol may subsidise the earning of fees by providers (thereby enhancing the incentive to perform correspondingly useful work), without opening the door for sybil/collusion attacks, by taking into account the historical, verifiable destinations of payments made by users.

Proposal motivation

  • Decentralized networks typically utilise a growing monetary supply to incentivize nodes/workers/service-providers (hereafter: ‘providers’) to initially join the network and then help fund their operations. However, the basis for earning these subsidies often aligns poorly with the actual work performed on behalf of network users. This misalignment risks some providers configuring their machines to execute the bare minimum required to collect the subsidy, and neglecting real service requests from users (e.g. by being offline).
  • A growing supply depreciates the value of the native token, so it should incentivise good service, punish poor service, or both. If it is profitable to collect the subsidy without actually serving users, then it is probable that a greater proportion of providers will do this, relative to a network with zero subsidies. This makes the network less reliable and delays or impedes adoption.
  • Importantly, misaligned subsidies may also blunt the power of other incentives. In early epochs, when subsidies dwarf actual service fees (i.e. the sums paid by users to providers directly in exchange for work), and providers earn fees and subsidies based on different actions, user engagement (e.g. requesting service) is rendered a relatively impotent economic incentive. This limits its effectiveness in motivating good behaviour (e.g. maintaining a high uptime in order to reliably answer service requests).

Service layer sybil attack



In layer 2 networks that offer an infrastructural resource or service (e.g. access control, outsourced computation), it is particularly difficult to align the allocation of subsidies to the delivery of useful work, because commercial engagements between users and providers are point-to-point, unlimited and often extra-protocol. This means that subsidy mechanisms can be profitably sybil attacked, wherein a provider sets up two addresses and continually requests their own service (hereafter: ‘self-requesting’). They receive back the service fee and earn the subsidy on top for every fake transaction – theoretically repeatable until the supply is drained.

Proposal assumptions



  1. Providers must stake collateral (i.e. lock tokens) in order to participate and service users. Stakes are sybil-resistant (i.e. they can’t be cheaply duplicated).
  2. Service fees are paid in return for provider actions that constitute real work – aligned with good service and network health. For example, earning a micropayment in exchange for a correct re-encryption.
  3. All fee transactions have the following publicly verifiable attributes:
    1. User address
    2. Provider address
    3. Timestamp
    4. Value of transaction (e.g. denominated in ETH or native token)



Note: an efficient architecture for transactions – which allows the regular verification of the four attributes above – involves state channels and linked signatures. Users and providers pass signed hashes of each transaction back and forth, which includes the amount paid and the total paid so far. This enables the total sum of fees between each user and provider to be aggregated and evaluated by the protocol – this can occur as regularly as is practical (e.g. once a month). For NuCypher, a ratcheted re-encryption channel is appropriate – see this proposal from @tuxxy, with the addition of a counter to each signed message to keep track of the agreed sum of fees thus far.
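A minimal sketch of such linked, counted payment messages follows. This is purely illustrative of the hash-chained counter-and-running-total idea, not the actual state channel design; the `sign` stand-in and all field names are assumptions:

```python
import hashlib
import json

def sign(secret: bytes, payload: bytes) -> str:
    # Stand-in for a real signature scheme (e.g. ECDSA); illustrative only.
    return hashlib.sha256(secret + payload).hexdigest()

def next_message(prev_hash: str, counter: int, amount: float,
                 total_so_far: float, secret: bytes) -> dict:
    """Build the next signed channel message. Each message references the
    previous one by hash and carries a counter plus the agreed running
    total of fees, so the latest message alone proves the sum owed."""
    body = json.dumps({
        "prev": prev_hash,               # hash-link to prior message
        "counter": counter,              # monotonically increasing
        "amount": amount,                # fee for this re-encryption
        "total": total_so_far + amount,  # aggregate fees agreed so far
    }, sort_keys=True).encode()
    return {"body": body,
            "sig": sign(secret, body),
            "hash": hashlib.sha256(body).hexdigest()}
```

The protocol could then aggregate and evaluate only the final message of each channel per settlement period (e.g. once a month), rather than every individual transaction.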

Proposal outline & example

To make the reasoning simple, let's discuss a highly idealised network in which:

  • There are 1,000 tokens staked evenly by 10 providers (all for the same duration).
  • There are 10 total users of the network.
  • Each period (1 month), each user spends $10 on the service.
  • Jobs and fees are assigned and split evenly by providers.
  • We don’t know which providers are also users – i.e. there are between 10 and 20 entities interacting with the network. 


Hence in this network, providers earn a total of $10 per month – $1 from each user. This is obviously a toy example, and we will discuss ways to deviate from this unrealistic uniformity later.

To avoid opening the door to colluders, let’s add our final assumption, expressible in multiple ways:

  • With what % of other providers/stakers would it be impossible or near-impossible to collude?
  • What % of the network can we assume to be virtuous or incorruptible?
  • At which point (in % of the network) do the costs of orchestrating collusion massively outweigh the gains?

For now, let’s assume it’s unfeasible to collude with more than 50% of the network. We can now safely assume that for each of the 10 users, at least $5 of the $10 they pay out each month is legitimate – i.e. they cannot get it back via a side channel from their colluder group.


Now let's zoom into a single, misbehaving provider – we’ll call her Alicesula. Alicesula wishes to sybil attack the subsidy mechanism posing as a user. She approaches other providers, and convinces 4 of the 10 to follow suit and collude with her. What happens? She pays out $10 in fees just like the other 9 users. She gets back $1 from herself, and $4 from her fellow colluders. This means: 


Alicesula’s net expenditure: self-requested fees + fees returned by colluders - outgoing fees 
$1 + $4 - $10  = -$5



Let’s say we add a $0.5 subsidy for every $1 earned in fees. Since, like other providers, Alicesula earned $10 in fees, she now additionally receives a subsidy of $5 for this month. This means that her illegitimate net earnings, and the net earnings of her colluders, sum to zero – despite their involvement in a cartel controlling 50% of the staked tokens. 
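The arithmetic of the toy example can be checked directly (all quantities are from the idealised network described above):

```python
# Idealised network from the example above
providers = 10
users = 10
spend_per_user = 10.0                       # $ per month
fee_share = spend_per_user / providers      # $1 of each user's spend per provider

# Alicesula's sybil attack: she poses as a user and recruits colluders
# controlling (with her) 50% of the staked tokens.
colluder_group = 5                          # Alicesula + 4 others
outgoing_fees = spend_per_user              # she pays out $10 like any user
returned_fees = colluder_group * fee_share  # $1 from herself + $4 from colluders

# Like every provider, she earns $1 from each of the 10 users in fees.
fees_earned = users * fee_share             # $10
subsidy = 0.5 * fees_earned                 # $0.5 subsidy per $1 of fees

net_attack_profit = returned_fees - outgoing_fees + subsidy  # $5 - $10 + $5 = $0
```

With the subsidy rate tied to the assumed collusion bound (here, 50%), the attack breaks exactly even, so there is no profit motive for self-requesting.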



Subsidy calculation function

We seek a function that calculates a subsidy amount for each unit of value earned in fees by providers during a predefined period, accommodating inevitable deviation from perfect payment uniformity. This formula is a first attempt.

Other solutions

There are not many attempts to square this circle. However, some projects that involve a ‘protocol rebate’ mechanism – where service fees are paid into a pool and then rebated (distributed) out every so often, based on both the provider’s relative contribution to the pool and their relative stake size – have discussed appending a subsidy to the rebate. However, this is not sybil-resistant, and the only partial solution thus far has been to limit the size of the subsidy such that it is always lower than third-party costs (i.e. Ethereum gas) involved in each transaction and incurred by the provider – thereby ensuring that attempts to self-request would result in net negative earnings.

Delegated re-encryption requests to allow for better availability checking

The current mechanism for checking Ursulas' availability only verifies that they are online, not whether they are effectively following the protocol, i.e. providing the re-encryption service under the terms of the policies deployed in the network.

There should be some mechanism in place to assure nodes are indeed following the protocol. Ideally, the mechanism should be decentralized, where peers agree on the performance of other peers. I like the idea of setting up a reputation system, but I am not sure whether reputation should be measured by outside clients (Alices and Bobs), by other peers (i.e. other Ursulas), or by both. In any case, it is not straightforward to implement a fair decentralized reputation system.

In my opinion, Ursulas should play an active role in testing other Ursulas' performance. Apart from being required to respond to Alice and Bob requests, they should also be in charge of challenging other Ursulas. That will increase Ursulas' workload but will also provide a decentralized mechanism to evaluate Ursulas' performance.

Whereas posing as an Alice is fairly simple – Ursulas would only need some allowance to create random policies – it is not clear how an Ursula can act as Bob in order to ask for re-encryptions. To begin with, the challenging Ursula would need to know where the re-encryption key fragments are, but I think this information is currently only known by Alice and the actual Bob. To avoid that limitation, we could make this information public, but that might produce some collateral effects.

My proposal is based on allowing delegated requests between Ursulas. In this case, Bobs won’t be in charge of contacting the right Ursulas (i.e. the ones with their fragments); they will instead contact any Ursula in the network based on parameters such as proximity, latency or perhaps lower fees. This Ursula will act as a gateway for Bob and will be in charge of finding the right Ursulas and collecting the cFrags to finally pass them to Bob. In this way, Ursulas would have to answer other Ursulas' requests regularly without knowing whether a given request is an availability challenge or a real request originating from the actual Bob. This would also allow Bob and Alice to run off-chain: they don't need to learn anything from the NuCypher network if they use a gateway Ursula.

Ursulas would be able to assess whether:

  • An Ursula is denying re-encryption for expired policies
  • An Ursula is providing re-encryption for active policies
  • An Ursula is not responding if it doesn’t hold the corresponding kFrag.

Under some circumstances, the gateway Ursula could even combine cFrags and pass the capsule directly to Bob. If we implement that, then when reconstructing the capsule, the gateway Ursula could check whether there was a problem with any of the shares in order to identify a misbehaving Ursula, i.e. one that is providing faulty re-encryptions for active policies. This would require some changes to the protocol and may have implications for the security model.

All this information could be reported to the network or stored on-chain in order to build metrics on individual Ursulas' performance. Of course, there is also the problem of falsely reporting a well-behaved Ursula. To mitigate that, there should be some kind of consensus among a group of Ursulas before reporting bad behavior.

If we opt for this challenging mechanism, it is important that challengers and targets are selected randomly using some fair and secure mechanism. An interesting approach would be to use random beacons to select the sample (https://www.cloudflare.com/leagueofentropy/ or https://csrc.nist.gov/projects/interoperable-randomness-beacons). That would assure that all Ursulas contribute to the resilience of the network equally and that we get enough "opinions" on every Ursula in order to reach an agreement. Taking M as a security level, for every period each target Ursula should be challenged by M randomly selected challenging Ursulas. As M grows, we gain more confidence in the reports, but we also place a greater load on the NuCypher network.
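A minimal sketch of beacon-seeded challenger selection, assuming the beacon value is a public random byte string agreed for the period (the function name, parameters and use of Python's PRNG are illustrative assumptions, not a proposed implementation):

```python
import hashlib
import random

def select_challengers(beacon: bytes, ursulas, target: str, m: int):
    """Deterministically derive M challengers for `target` from a public
    random beacon, so every node computes (and can verify) the same set."""
    # Mix the beacon with the target's identity so each target gets an
    # independent, unpredictable challenger set each period.
    seed = hashlib.sha256(beacon + target.encode()).digest()
    rng = random.Random(seed)
    candidates = sorted(u for u in ursulas if u != target)  # exclude self
    return rng.sample(candidates, m)
```

Because the selection is a pure function of public inputs, a challenger can prove it was legitimately assigned to a target, and targets cannot predict their challengers before the beacon value is published.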

All these ideas would have a significant impact on the network architecture, and even on the token economy, that might not be worth the effort. I understand they would be difficult to implement, but I hope this issue serves as a basis for good discussions on how to improve the mechanisms for checking Ursula availability and improve the NuCypher network in general.

Pricing structure (what is paid work?)

Let's assume that all remuneration calculations discussed in this issue incorporate the following inputs in precisely the same way:

  1. The number of Ursulas assigned to the policy (n)
  2. The number of recipients (which equals the number of policies until we have multi-Bob policies)

Primary Calculation Inputs

[Input 1: Policy duration] Our current calculation takes one further input, the policy duration, measured in periods of 24 hours. This enables the total cost of a policy to be calculated up front (value) and paid into an escrow, with the sum split across the number of periods within the policy's duration, each portion paid out to participating Ursulas every time they confirm activity.

  • This calculation leaves Ursulas vulnerable to an attacker Bob who can flood a policy with unlimited requests, with near-zero economic consequences.
  • This also puts high-throughput users at risk of being ignored by Ursulas (this is only an issue if the number of requests begins to impact Ursula's overheads). It's also possible for Ursulas to anticipate high-throughput – this is mentioned in issue #4.

[Input 2: Access requests] The number of access requests sent to a policy becomes the variable determining the policy's price, replacing duration.

  • The number of access requests must be tallied up at the end of each period, and Alice/Bob billed. This raises the question: what happens if the bill is not paid? Alternatively, Alices/Bobs could be required to pay up front for the number of access requests expected within a given period. Then the question is: what happens if the number of access requests exceeds this sum? Do unused requests roll over? If we follow the rules of Ethereum gas payments, then users would have to deposit at least as much as they expect the sum of access requests to cost, with any leftover funds returned once the policy expires.
  • How do we measure the number of access requests? Relying on Bob to report it isn't ideal – but neither is a system that totally ignores Bob, who may be subject to throttling by misbehaving Ursulas.
  • This calculation could leave Alices vulnerable to an attacker Bob who can overwhelm a policy with requests, running up the bill for Alice, unless the application automatically passes this cost onto Bob.

[Input 3: Users] The number of users is the primary input into the cost calculation. This can be calculated as the sum of Alices, sum of Bobs, or sum of both, or the sum of unique keys (i.e. users/devices can be Alices and Bobs without incurring extra costs). This would make it easier for network adopters to budget for access control, since for many applications, revenue (and/or the venture's fundraising potential) is a function of the total user population.

  • This approach leaves Ursulas vulnerable to being flooded with both arrangements/policies and requests, where the former can also enable the latter.

Additional Logic, Calculation Modifiers & Combinations

  • For any of the three primary inputs, we also need a coefficient to bring the cost into line with real-world expectations – for example, pricing that roughly resembles the monthly cost of AWS Cloud HSM – see issue #6. This is currently the rewardRate.
  • One solution to Ursulas being flooded with requests may be to program hard caps on policies or users. These can be set based on the number of requests that would overwhelm a small Ursula (e.g. 500k requests per user per day, or 10k per policy per day), but is unlikely to impact the needs of legitimate users.
  • Combining users and requests as inputs would bring NuCypher in line with the pricing structure of most major KMS services.
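A hypothetical sketch of such a combined calculation, with a per-user rate, a per-request rate, and a hard daily per-user request cap in the spirit of the points above (all rates and the cap are placeholder values, not protocol parameters):

```python
# Placeholder rates – stand-ins for a calibrated rewardRate-style coefficient
PRICE_PER_USER = 1.00            # $/user/month
PRICE_PER_10K_REQUESTS = 0.10    # $ per 10k access requests
DAILY_CAP_PER_USER = 500_000     # hard cap to bound flooding

def monthly_policy_price(users: int, requests: int, days: int = 30) -> float:
    """Combined users + requests pricing: a KMS-like structure where the
    request component is capped so no Bob can run up an unbounded bill."""
    billable_requests = min(requests, users * DAILY_CAP_PER_USER * days)
    return (users * PRICE_PER_USER
            + billable_requests / 10_000 * PRICE_PER_10K_REQUESTS)
```

Under this structure, an attacker Bob flooding a policy hits the cap rather than draining Alice, while legitimate high-throughput adopters pay roughly in proportion to usage.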

Real-world scenarios

Which calculation/combination we settle on hinges to some extent on the nature of adopter use of the network. In a scenario where hardly any adopters/users require highly frequent access requests, policy duration may suffice as the key input variable/resource. However, this would imply that a major selling point of the NuCypher network – that it can scale to handle high-volume, frequent sharing – is not being leveraged. The only exception would be an application with a very high number of policies but a low number of requests per policy (can we think of a real-world application with this characteristic?).

Excluding that particular type of application, this may also imply that we need to change our pricing significantly – since, in basic terms, revenue is a function of (a) price of service and (b) frequency of usage – if we assume a low frequency of usage, then we will have to increase the price significantly to generate enough revenue. The problem with this approach, from a product perspective, is that if we target applications with very low throughputs but a strong need for trustlessness, we then start competing with client-side PKI, which is free. Our value proposition is reduced to "Alice can go offline".

Customer choice (handpicking nodes)

There are 'legitimate' reasons to handpick Ursulas, such as:

  • Regulation requiring that consumer data remain in a certain jurisdiction [although the extent to which this applies to the ciphertext of a symmetric key associated with the underlying data is unclear]
  • Regulation requiring data storage to comply with some criteria (e.g. HIPAA)

However, even where prices are centrally-set and uniform, there are other reasons to prioritise jobs to certain Ursulas. Developers/users can find each Ursula's address and use this to discern:

  • historical uptime, expressed in terms of the total cycles where their activity was confirmed (i.e. checking to see if they missed a reward)
  • any punishments received (checking for slashes)
  • typical latency (this can be tested)
  • size and length of stake (i.e. how much they have to lose, how committed they are)

We must consider this 'market information' and how much we encourage, enable or attempt to hide.

  • How much of an edge case will discretionary choice of Ursulas be? 
In other words, how many of our adopters are likely to do this?
  • What impact does this have on the market and Ursula behaviour, particularly in demand-scarce epochs?

Calculating the genesis slashing penalty

I'd like to open a discussion on this crucial component of the slashing protocol, and indeed our overall economic design. Our current logic and parameters originate from back-of-the-envelope reasoning in the nucryptoeconomics compilation doc:

penalty = basePenalty.add(penaltyHistoryCoefficient.mul(penaltyHistory[_miner]));
penalty = Math.min(penalty, _minerValue.div(percentagePenaltyCoefficient));

&

BASE_PENALTY = 100
PENALTY_HISTORY_COEFFICIENT = 10
PERCENTAGE_PENALTY_COEFFICIENT = 8

(see PR nucypher/nucypher#507 for more context )
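The Solidity logic above can be transcribed into Python for discussion purposes (`stake` stands in for `_minerValue`; the parameters are the genesis values quoted above):

```python
# Genesis parameters from the nucryptoeconomics compilation doc
BASE_PENALTY = 100
PENALTY_HISTORY_COEFFICIENT = 10
PERCENTAGE_PENALTY_COEFFICIENT = 8

def penalty(stake: int, prior_offences: int) -> int:
    """Penalty grows linearly with the offender's history, but is capped
    at stake / PERCENTAGE_PENALTY_COEFFICIENT (i.e. 12.5% of the stake)."""
    p = BASE_PENALTY + PENALTY_HISTORY_COEFFICIENT * prior_offences
    return min(p, stake // PERCENTAGE_PENALTY_COEFFICIENT)
```

Note the hybrid character this already has: a fixed base amount for most stakers, switching to a percentage-of-stake cap for small stakes – which is relevant to the combination ideas discussed below.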

Background

In general, it's not straightforward to design penalty calculation algorithms (i.e. determining the amount X to decrease a stake Y by, given offence Z) that yield fair punishments uniformly across all Y and Z, and maximise the likelihood of behaviour aligned with the protocol's goals. It's hence unsurprising that there's some variance in the approaches taken by prominent staking+slashing networks. For example:

Livepeer:

if (del.bondedAmount > 0) {
    uint256 penalty = MathUtils.percOf(delegators[_transcoder].bondedAmount, _slashAmount);
    del.bondedAmount = del.bondedAmount.sub(penalty);
... }

where the _slashAmount coefficient depends on the offence (failed verification, missed verification, double claiming a segment).

Casper:

fraction_to_slash: decimal = convert(convert(recently_slashed * self.SLASH_FRACTION_MULTIPLIER, "int128"), "decimal") / \
    convert(convert(self.validators[validator_index].total_deposits_at_logout, "int128"), "decimal")

where self.SLASH_FRACTION_MULTIPLIER = 3.

Polkadot's most recent testnet version:

Slashing starts with a 0.001 DOTs penalty and doubles. If your node successfully passes a session, the current penalty is reduced by 50% with a minimum of 0.001 DOTs.

And if this post is accurate, then a Tezos baker who screws up will immediately lose their entire deposit, including what they would have earned from the 'offending operation'.

Discussion

Penalties calculated with absolute figures (e.g. a fixed number of tokens) run into issues:

  • Volatile token conversion rate to fiat means punishment in real terms has an unpredictable multiplier
  • Fixed penalties can be overly punitive to small stake holders and potentially irrelevant to large stake holders. Making them large enough to be disincentivizing to large stake holders might wipe out smaller stake holders for a single error.

However, calculations involving the percentage of the offenders' stakes are also problematic:

  • They can be overly punitive to large stake holders. For example, consider a set 10% decrease in stake for 2 nodes who commit the same offence/error, where node A holds $100k worth of tokens and node B $1k – the larger node could reasonably consider it unfair that there's a $9,900 difference in absolute terms between their respective penalties. This could lead to a disproportionate favouring of smaller stakes, depending on each individual's perceived risk that they might commit a slashable offence. This is especially important if the network ever ventures into attributing offences imperfectly, or punishes offences that could be caused by network issues or censorship (i.e. nodes failing to come online).
  • Traditional crime and punishment does not generally take into account the perpetrator's wealth, as percentage based penalties do (in the sense that theoretically the letter of the law treats, or should treat, everyone the same. Of course in practice wealthier citizens have many advantages with respect to criminal justice requirements such as bail / counsel costs).

Combination ideas

A natural solution to balance the tension between absolute and percentage based penalties is to combine them.
Broad ideas:

  • Fixed amount OR Percentage of stake, whichever is lower
    more or less what we have now
  • Fixed amount OR Percentage of stake, whichever is greater
  • Fixed amount AND Percentage of stake

More complex ideas:

  • Percentage-of-stake input to the penalty calculation, modified by the size of the stake relative to others
    i.e. a tiered system wherein the greater the percentage of the total staked tokens you control, the smaller the percentage of your stake slashed given offence Z.
    Although this sounds slightly unpalatable, it would avoid large nodes abandoning the network when they are punished too severely (in absolute terms).
  • Fixed amount calculated using the current fiat value of tokens
    i.e. using a price oracle to calculate a base punishment – e.g. a minimum $500 fine for offence Z
    this would avoid uneven real-world severity of punishment due to token volatility
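The three broad combination ideas can be stated precisely as follows (the fixed amount and percentage are illustrative placeholders, pending the parameter discussion below):

```python
# Placeholder parameters for illustration, not proposed values
FIXED = 100    # tokens
PCT = 0.05     # fraction of stake

def penalty_whichever_lower(stake: float) -> float:
    """Fixed OR percentage, whichever is lower – roughly current behaviour:
    large stakes pay the fixed amount, small stakes pay the percentage."""
    return min(FIXED, stake * PCT)

def penalty_whichever_greater(stake: float) -> float:
    """Fixed OR percentage, whichever is greater – the fixed amount acts
    as a floor, so large stakes still feel a proportional sting."""
    return max(FIXED, stake * PCT)

def penalty_both(stake: float) -> float:
    """Fixed AND percentage – every offender pays a flat fee plus a
    stake-proportional component."""
    return FIXED + stake * PCT
```

Plotting these against stake size makes the trade-off explicit: "whichever is lower" caps large-staker exposure, "whichever is greater" guarantees a deterrent floor for small stakers, and "both" does each at the cost of being the harshest overall.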

Choosing the exact parameters

e.g. 5% of stake if stake < 0.01% of the network, plus a fixed penalty of $100 in NU tokens.
This is a to-do following a discussion of the approach. I also think this will require scenario modelling.

Confirm activity: problem definition, threat model, analysis of potential solutions

The goal of this issue is to cover the following:

  • Agree on the formulation/description of the confirm activity problem; define the problem and what should a solution achieve.
  • Outline a threat model for this problem (i.e., what we consider a plausible threat/attack in practice to defend against).
  • Discuss the potential of existing ideas (namely, #15 and the Ratcheting Re-Encryption State Channel), as well as PCD-, CoSi-, and challenge/open-based approaches (see here), and under what assumptions they do or do not solve the problem.
