
contracts-v2's Introduction

Notional Contracts V2

Notional is a fixed rate lending and borrowing platform built on Ethereum. Fixed rates differ from variable or stable rates because the interest rate will not change for the term of the loan. Fixed rate loans are 25x the size of variable rate loans in traditional financial markets because of the superior guarantees they offer. Notional V1 first introduced our concepts of how to achieve fixed rates on Ethereum, and Notional V2 extends those concepts to add support for fixed term loans out to 20 years. Notional V2 also introduces many improvements upon Notional V1, including superior capital efficiency, better upgradeability, and on-chain governance.

Deployments

Most Notional transactions (lending, borrowing, providing liquidity) go through the main Notional proxy. Currently, Notional is deployed only on Ethereum Mainnet and the Kovan and Goerli testnets. Contract addresses can be found here.

Notional is undergoing a process of gradual decentralization. Due to the complexity of the system, the creators of the protocol feel that it is necessary to be able to respond quickly in case of issues during the initial launch of the system. Because many economic conditions in Notional only arise at specific periods (e.g. during market initialization every three months), the period of administrative control may last longer than in other DeFi protocols.

Currently Notional is owned by a three-of-five Gnosis multisig contract, where two signers are the founders of the protocol (Jeff and Teddy) and the other three signers are community members. A longer discussion of administrative controls can be found here.

Codebase

A full protocol description can be found in the whitepaper. Detailed code walkthroughs can be found at:

The codebase is broken down into the following modules; each directory has a _README.md file that describes the module.

contracts
|
└── external: all externally deployable contracts
|   └── actions: implementations of externally callable methods
|   └── adapters: adapter contracts used to interface between systems
|   └── governance: on chain governance system, forked from Compound Governor Alpha and COMP token (thanks!)
|
|       FreeCollateralExternal.sol: deployed library for checking account free collateral positions
|       Router.sol: implementation for proxy contract that delegates calls to appropriate action contract
|       SettleAssetsExternal.sol: deployed library for settling matured assets to cash
|       Views.sol: view only methods for inspecting system state
|
└── global: storage, struct and constant definitions
└── internal: shared internal libraries for handling generic system wide functionality
|   └── balances: encapsulates all internal balance and token transfer logic
|   └── liquidation: contains calculations for determining liquidation amounts
|   └── markets: contains logic for defining tradable assets and fCash liquidity curve
|   └── portfolio: handlers for account portfolios and transferring assets between them
|   └── settlement: contains logic for settling matured assets in portfolios
|   └── valuation: calculations for determining account free collateral positions
|
|       AccountContextHandler.sol: manages per account metadata
|       nTokenHandler.sol: manages nToken configuration and metadata
|
└── math: math libraries
└── mocks: mock contracts for testing internal libraries

Statistics

Code Size

Module File Code Comments Total Lines Complexity / Line
Actions AccountAction.sol 92 54 170 10.9
Actions BatchAction.sol 322 37 404 16.8
Actions ERC1155Action.sol 250 86 381 18.0
Actions GovernanceAction.sol 243 116 399 11.5
Actions InitializeMarketsAction.sol 488 159 725 10.2
Actions LiquidateCurrencyAction.sol 316 52 399 0.9
Actions LiquidatefCashAction.sol 178 42 243 1.7
Actions TradingAction.sol 470 42 571 11.3
Actions nTokenAction.sol 194 57 290 9.3
Actions nTokenMintAction.sol 215 53 303 14.9
Actions nTokenRedeemAction.sol 212 57 308 14.6
Adapters CompoundToNotionalV2.sol 66 19 99 10.6
Adapters NotionalV1ToNotionalV2.sol 149 27 203 3.4
Adapters NotionalV2FlashLiquidator.sol 369 46 473 18.4
Adapters NotionalV2ifCashLiquidator.sol 169 19 213 7.7
Adapters cTokenAggregator.sol 35 12 59 2.9
Adapters nTokenERC20Proxy.sol 74 43 136 0.0
Balances BalanceHandler.sol 339 73 470 17.4
Balances Incentives.sol 67 16 98 10.4
Balances TokenHandler.sol 199 25 260 21.6
External FreeCollateralExternal.sol 50 17 77 6.0
External PausedRouter.sol 36 18 65 22.2
External Router.sol 183 34 239 35.0
External SettleAssetsExternal.sol 104 7 127 6.7
External Views.sol 448 54 564 5.8
Global Constants.sol 67 32 116 0.0
Global StorageLayoutV1.sol 19 23 49 0.0
Global Types.sol 181 139 347 0.0
Governance GovernorAlpha.sol 344 129 546 13.1
Governance NoteERC20.sol 295 94 456 14.2
Governance Reservoir.sol 35 25 73 5.7
Internal AccountContextHandler.sol 174 47 262 30.5
Internal nTokenHandler.sol 396 73 535 8.1
Liquidation LiquidateCurrency.sol 388 91 539 12.9
Liquidation LiquidatefCash.sol 357 88 506 11.2
Liquidation LiquidationHelpers.sol 176 33 236 13.6
Markets AssetRate.sol 189 36 263 11.6
Markets CashGroup.sol 298 46 387 6.0
Markets DateTime.sol 139 20 190 28.8
Markets Market.sol 553 180 835 10.1
Math ABDKMath64x64.sol 173 52 250 46.2
Math Bitmap.sol 68 10 87 22.1
Math FloatingPoint56.sol 14 14 34 7.1
Math SafeInt256.sol 44 18 87 31.8
Portfolio BitmapAssetsHandler.sol 250 19 310 11.6
Portfolio PortfolioHandler.sol 303 46 400 20.1
Portfolio TransferAssets.sol 85 9 106 11.8
Settlement SettleBitmapAssets.sol 64 28 106 15.6
Settlement SettlePortfolioAssets.sol 135 20 181 23.7
Valuation AssetHandler.sol 203 33 275 16.7
Valuation ExchangeRate.sol 70 23 108 14.3
Valuation FreeCollateral.sol 391 45 495 15.9

These are ported from OpenZeppelin Contracts and do not require audit:

Module File Code Comments Total Lines Complexity / Line
Proxy Proxy.sol 29 46 85 3.4
Proxy nProxy.sol 12 1 18 0.0
Utils StorageSlot.sol 35 39 83 0.0
Utils UUPSUpgradeable.sol 13 38 56 0.0
Erc1967 ERC1967Proxy.sol 12 16 32 8.3
Erc1967 ERC1967Upgrade.sol 51 43 107 11.8

Test Coverage

  • Running tests using Ganache can run out of memory; use export NODE_OPTIONS=--max-old-space-size=16000
  • debug_traceTransaction can also fail due to string length errors. You will see an error from brownie saying that the environment has crashed. This error is unavoidable; the plan is to test whether Hardhat has the same issue.
  • Coverage reports are incomplete due to the above string length issue in Ganache.

Gas Costs

Gas costs for various scenarios are in gas_stats.json. This report can be regenerated by running brownie run scripts/gas_stats.py.

contracts-v2's People

Contributors

elnilz, jeffywu, t-woodward, weitianjie2000


contracts-v2's Issues

Make bitNum in bitmap portfolios more efficient

Now you extract bit indexes from bitmap currency, and then convert these bit indexes to/from maturity. Extracting bit indexes from a word is quite expensive. It would be cheaper to extract bits, rather than bit indexes (bit = 2^bitIndex). This would make iteration through set bits significantly cheaper, maturity->bit conversion about the same cost, and bit -> maturity conversion a bit more expensive.
06:07

not sure i totally follow
06:12

Mikhail Vladimirov
OK. You have a word used as a bitmap. You want to iterate through all the “one” bits in it.
06:17

Currently, you just check bits one by one. This is suboptimal.
edited 06:17

Taking into account that there couldn't be too many “one” bits, you may optimize by checking several bits at once via an “and” mask, and in case all the checked bits are “zero”, skip them all at once.
06:19

right i see
06:19

Mikhail Vladimirov
However, there is a very efficient way to find the lowest “one” bit in a word:

x & (~x + 1)
06:20

So you may jump directly to the next “one” bit, process it, remove it from the word (just by subtracting) and proceed to the next “one” bit.
06:21
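The lowest-bit trick described above can be sketched in Python (illustrative and language-agnostic; a Solidity version would use the same `x & (~x + 1)` expression on a uint256):

```python
def lowest_set_bit(x: int) -> int:
    """Isolate the lowest 'one' bit of x via x & (~x + 1) (two's complement negation)."""
    assert x > 0
    mask = (1 << 256) - 1          # emulate a 256-bit word
    return x & ((~x + 1) & mask)

def iter_set_bits(x: int):
    """Jump directly from one set bit to the next, clearing each by subtraction."""
    while x != 0:
        bit = lowest_set_bit(x)
        yield bit
        x -= bit                   # remove the processed bit from the word
```

For example, `list(iter_set_bits(0b10110))` yields the bits themselves, `[2, 4, 16]`, rather than their indexes, which is exactly the trade-off discussed in this thread.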

yeah that makes sense to me
06:21

Mikhail Vladimirov
The only problem is that this way you will get the bits themselves (i.e. 1, 2, 4, 8, 16, 32, ...) instead of their indexes (0, 1, 2, 3, 4, 5, ...)
06:21

But in your case there is no big difference whether to use bit indexes or bits themselves to identify maturities.
06:22

right, we just need to change the calculation in date time a little
06:22

Mikhail Vladimirov
Only bitnum -> maturity conversion will become more expensive...
06:22

But you may convert bit to its index quite efficiently using binary search
edited 06:23

maturity => bitnum is just going to be something like 1 << offset
06:24

Mikhail Vladimirov
Right, you need to use << and >> instead of + and -, and start with one rather than with zero.
06:24

Should be about the same cost.
06:24

right so the other way, we can use something like this:

function getMSB(uint256 x) internal pure returns (uint256 msb) {
    if (x >= 0x100000000000000000000000000000000) {
        x >>= 128;
        msb += 128;
    }
    if (x >= 0x10000000000000000) {
        x >>= 64;
        msb += 64;
    }
    if (x >= 0x100000000) {
        x >>= 32;
        msb += 32;
    }
    if (x >= 0x10000) {
        x >>= 16;
        msb += 16;
    }
    if (x >= 0x100) {
        x >>= 8;
        msb += 8;
    }
    if (x >= 0x10) {
        x >>= 4;
        msb += 4;
    }
    if (x >= 0x4) {
        x >>= 2;
        msb += 2;
    }
    if (x >= 0x2) msb += 1; // No need to shift x anymore
}
06:24

Mikhail Vladimirov
(replying to Jeff Wu: "right so the other way, we can use something like this function getMSB(uint2…")
This is for backward conversion (bitnum -> maturity)
06:25

yes
06:25

so i pass in something like 00010000 and i want to convert that to "5"
06:25

or..."4" i guess if im going from the big end

Variable renaming to clarify what is in asset terms and what is in underlying terms

Variables to rename:

int256 totalCurrentCash;

int256 totalCurrentCash;

Questions:
Is this asset cash or underlying?

int256 preFeeCashToAccount =

Is this asset cash or underlying?

netCashToAccount.mul(rateAnchor).div(Constants.RATE_PRECISION).neg();

Is this asset cash or underlying?

derivative = cashAmount

Are these asset cash or underlying?

(int256 netCashToAccount, int256 netCashToMarket, int256 netCashToReserve) =

Do these functions return asset cash or underlying values?

function _getNTokenHaircutValue(

function _getPortfolioAndNTokenValue(

function _getBitmapBalanceValue(

function _getBitmapPortfolioValue(

Mistake in benefitRequired calculation in fCashLiquidation

I made a mistake in the spec. In the case where localAvailable < 0, the benefit required is simply -localAvailable. That will bring localAvailable to 0. We don't need to scale it by the buffer as we currently do; that would bring localAvailable above 0.

// If local available is negative then we can bring it up to zero
c.benefitRequired = c
.factors
.localAvailable
.neg()
.mul(Constants.PERCENTAGE_DECIMALS)
.div(c.factors.localETHRate.buffer);
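A minimal sketch of the fix described above (Python for illustration; the function name is hypothetical):

```python
def benefit_required(local_available: int) -> int:
    # Bringing localAvailable up to exactly zero requires -localAvailable;
    # scaling by the buffer (as the current code does) would overshoot past zero.
    assert local_available < 0
    return -local_available
```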

Look at how to simplify decimals usage

Also, I see that you use several different fixed point formats across the code: with 2, 8, 9 decimals. There are several issues with how these numbers are used:

  1. There are too many formats used. It is hard to know what format each value is, thus code is error-prone, as it is easy to use wrong number of decimals.
  2. Two decimals is too little. You use values with 2 decimals for percentages, so you may represent, say 2% and 3% , but not 2.5%. This is weird.
  3. You implement calculations with these numbers manually every time, so there are lots of code fragments like this: x.mul(RATE_PRECISION).div(y) or x.mul(y).div(RATE_PRECISION). Extracting such things to utility functions would make the code simpler, easier to read, less error-prone, and more efficient, as only one function call will be needed instead of two. Also, the implementation could be made more efficient and resistant to phantom overflows.

I would recommend using the same format everywhere, say with 9 or 18 decimals, and implementing arithmetic functions for this format. This is all for numbers stored in memory and on the stack. You may still use a shorter format in storage if needed to save storage space.

Actually, the variety of number formats used is quite large. There are signed and unsigned, integer and fractional, binary and decimal fixed point with 2, 5, 8, 9, 10, and 18 decimals, also different bit widths... All these formats are converted to each other and computed manually, i.e. not through functions...

I agree that in storage it does make sense to use the smallest number of bits possible for each value, but this doesn't necessarily mean that all these formats should also be used in memory and on the stack. Probably all the numbers should be unpacked to one of a few formats when read from storage and packed back when written to it.
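A sketch of the kind of utility the reviewer suggests (Python for illustration; RATE_PRECISION and the helper names are assumptions, not the codebase's API):

```python
RATE_PRECISION = 10**9  # assumed unified 9-decimal fixed-point format

def fp_mul(x: int, y: int) -> int:
    # Compute x * y at full precision, then scale back down. In Solidity a
    # mulDiv-style routine with a wider intermediate would avoid phantom overflow.
    return (x * y) // RATE_PRECISION

def fp_div(x: int, y: int) -> int:
    # Scale up before dividing so precision is not lost.
    return (x * RATE_PRECISION) // y
```

Centralizing the arithmetic this way replaces scattered `x.mul(RATE_PRECISION).div(y)` fragments with a single audited code path.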

Unnecessary invocation of calculateLocalToPurchase

Why invoke calculateLocalToPurchase twice here?

int256 localToPurchase;
(collateralToRaise, localToPurchase) = LiquidationHelpers.calculateLocalToPurchase(
factors,
liquidationDiscount,
collateralToRaise,
collateralToRaise
);
// Enforce the user specified max liquidation amount
if (maxCollateralLiquidation > 0 && collateralToRaise > maxCollateralLiquidation) {
collateralToRaise = maxCollateralLiquidation;
// prettier-ignore
(
/* collateralToRaise */,
localToPurchase
) = LiquidationHelpers.calculateLocalToPurchase(
factors,
liquidationDiscount,
collateralToRaise,
collateralToRaise
);
}

Instead, you could just do this check:

// Enforce the user specified max liquidation amount
if (maxCollateralLiquidation > 0 && collateralToRaise > maxCollateralLiquidation) {
collateralToRaise = maxCollateralLiquidation;

And then calculateLocalToPurchase:

(collateralToRaise, localToPurchase) = LiquidationHelpers.calculateLocalToPurchase(
factors,
liquidationDiscount,
collateralToRaise,
collateralToRaise
);

I think that would be equivalent and would allow you to drop the first invocation of that function.

Incorrect Liquidity Fee Calculation

We still calculate the liquidity fee assuming the old (non-compounding) rate framework. If we want to charge XX bps on the annualized continuously compounded interest rate, we have to do it slightly differently.

We first call getExchangeRate here:

(tradeExchangeRate, success) = getExchangeRate(

Then we separately calculate the liquidity fee and add/subtract that to/from the exchange rate here:

if (fCashAmount > 0) {
    uint fee = cashGroup.getLiquidityFee(timeToMaturity);
    tradeExchangeRate = tradeExchangeRate.add(int(fee));
} else {
    uint fee = cashGroup.getLiquidityFee(timeToMaturity);
    tradeExchangeRate = tradeExchangeRate.sub(int(fee));
}

Ultimately, what we want to solve for is this:

tradeExchangeRate = exp((tradeInterestRateNoFee + Fee) * t)

What we have access to after we call getExchangeRate is tradeExchangeRateNoFee, defined like this

tradeExchangeRateNoFee = exp((tradeInterestRateNoFee) * t)

Some math:

tradeExchangeRate = exp((tradeInterestRateNoFee + Fee) * t)
= exp(tradeInterestRateNoFee * t + Fee * t)
= exp(tradeInterestRateNoFee * t) * exp(Fee * t)
= tradeExchangeRateNoFee * exp(Fee * t)

So instead of adding/subtracting the liquidity fee, we need to calculate exp(Fee * t) and multiply that by tradeExchangeRateNoFee
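The identity above is easy to check numerically (Python sketch; the rate, fee, and time values are illustrative):

```python
import math

def exchange_rate_with_fee(rate_no_fee: float, fee: float, t_years: float) -> float:
    # exp((r + fee) * t) == exp(r * t) * exp(fee * t), so the fee enters
    # multiplicatively, not by adding it to the exchange rate.
    no_fee_exchange_rate = math.exp(rate_no_fee * t_years)
    return no_fee_exchange_rate * math.exp(fee * t_years)

# The multiplicative form matches exp((r + fee) * t) exactly:
r, fee, t = 0.05, 0.003, 0.25
correct = exchange_rate_with_fee(r, fee, t)
assert abs(correct - math.exp((r + fee) * t)) < 1e-12
```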

Switch to continuous compounding

Overview:
Using continuous compounding keeps interest rates consistent across periodSizes and ensures that we don't run into any issues if we extend the maturity horizon significantly

Changes:

  1. Change the implied rate calculation here: https://github.com/swapnet-protocol/swapnet-lite/blob/6b782f91be1d2b453fbdbdd9ebda282a6c9683d2/packages/contracts/contracts/CashMarket.sol#L1157-L1159
periodNum = timeToMaturity / G_MATURITY_LENGTH
rate = math.log(exchangeRate) / periodNum
  2. Change the initial anchor calculation here: https://github.com/swapnet-protocol/swapnet-lite/blob/6b782f91be1d2b453fbdbdd9ebda282a6c9683d2/packages/contracts/contracts/CashMarket.sol#L326-L334
periodNum = timeToMaturity / secondsInYear
anchor = math.exp(periodNum * (G_RATE_ANCHOR - 1))
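The two changes can be sketched together in Python (the seconds-per-year constant is an assumption for illustration; the contract may use a different year convention):

```python
import math

SECONDS_IN_YEAR = 360 * 86400   # assumed year length for illustration

def implied_rate(exchange_rate: float, time_to_maturity: int) -> float:
    # Continuously compounded annualized rate: rate = ln(exchangeRate) / periodNum
    period_num = time_to_maturity / SECONDS_IN_YEAR
    return math.log(exchange_rate) / period_num

def initial_anchor(rate_anchor: float, time_to_maturity: int) -> float:
    # anchor = exp(periodNum * (G_RATE_ANCHOR - 1)) per change 2 above
    period_num = time_to_maturity / SECONDS_IN_YEAR
    return math.exp(period_num * (rate_anchor - 1))
```

With continuous compounding, a 5% rate over half a year gives an exchange rate of exp(0.025), and taking the log recovers the same 5% regardless of the period size, which is the consistency property the overview describes.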

Add a check for deleted assets to _hasLiquidityTokens in LiquidateCurrency

During liquidation, assets are first settled and finalized before reading from storage again during getLiquidationFactors. As a result, there should not be any deleted assets when we go to look for liquidity tokens in a portfolio. However, it still doesn't hurt to add a check for deleted assets in this method.

Update getAnnualizedSupplyRate to recommended formula

It seems that the “getAnnualizedSupplyRate” function in “cTokenAggregator.sol” uses the formula that is different from the formula recommended in Compound docs.

Used formula: Supply rate per block * blocks per year * notional rate precision / supply rate precision
Recommended formula: APY = ((((Rate / ETH Mantissa * Blocks Per Day + 1) ^ Days Per Year)) - 1) * 100
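The recommended formula can be sketched as follows (Python; the block-time and day-count constants follow Compound's published example, and the input rate is illustrative):

```python
ETH_MANTISSA = 10**18
BLOCKS_PER_DAY = 6570          # ~13.15 second block time, per Compound's docs
DAYS_PER_YEAR = 365

def supply_apy(supply_rate_per_block: int) -> float:
    # APY = ((Rate / ETH Mantissa * Blocks Per Day + 1) ^ Days Per Year - 1) * 100
    per_day = supply_rate_per_block / ETH_MANTISSA * BLOCKS_PER_DAY
    return ((per_day + 1) ** DAYS_PER_YEAR - 1) * 100
```

Unlike the simple `rate * blocks per year` scaling currently used, this compounds the per-block rate daily, which is what Compound's documentation recommends.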

Update comments

  • Add additional commentary on calculateLiquidationAmounts function
  • Fix comment about rateScalars / liquidity Token Haircuts size in cash groups
  • Update comments in remapBitmap code

incorrect calculation in liquidityToken withdrawal during liquidation

These calculations are slightly incorrect:

w.incentivePaid = assetAmountRemaining
.mul(Constants.TOKEN_REPO_INCENTIVE_PERCENT)
.div(Constants.PERCENTAGE_DECIMALS);
// Otherwise remove a proportional amount of liquidity tokens to cover the amount remaining.
int256 tokensToRemove =
asset.notional.mul(assetAmountRemaining).div(w.netCashIncrease);

This calculation treats assetAmountRemaining as if it were netCashIncrease as opposed to netCashToAccount per the below nomenclature:

// We can only recollateralize the local currency using the part of the liquidity token that
// between the pre-haircut cash claim and the post-haircut cash claim. Part of the cash raised
// is paid out as an incentive so that must be accounted for.
// netCashIncrease = cashClaim * (1 - haircut)
// netCashIncrease = netCashToAccount + incentivePaid
// incentivePaid = netCashIncrease * incentive

If we do it this way, assetAmountRemaining wouldn't actually be exactly zeroed out, so it would be erroneously set to zero here:

Instead we can convert assetAmountRemaining to netCashIncrease before calculating incentivePaid:

netCashIncrease = netCashToAccount + incentivePaid
netCashIncrease = netCashToAccount + netCashIncrease * incentive
netCashIncrease * (1 - incentive) = netCashToAccount
netCashIncrease = netCashToAccount / (1 - incentive)

incentivePaid = netCashToAccount * (incentive / (1 - incentive))
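The algebra above can be verified numerically (Python sketch; the incentive fraction is illustrative):

```python
def incentive_paid(net_cash_to_account: float, incentive: float) -> float:
    # netCashIncrease = netCashToAccount / (1 - incentive)
    # incentivePaid   = netCashIncrease * incentive
    net_cash_increase = net_cash_to_account / (1 - incentive)
    return net_cash_increase * incentive
```

Converting first means `netCashToAccount + incentivePaid` reconstructs `netCashIncrease` exactly, so `assetAmountRemaining` is zeroed out rather than left with a residual.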

Confusing terminology in calculateMaxLiquidationAmount

  • This function calculates the liquidationAmount, not the maxLiquidationAmount.
  • MAX_LIQUIDATION_PORTION does not represent a maximum amount. It represents instead a default amount.
  • maxAllowedAmount is not the maximum amount that a user is allowed to liquidate. If initialAmountToLiquidate is greater than maxAllowedAmount, the result can be greater than maxAllowedAmount.

Rounding issues when converting from internal to external

From ABDK:

BTW, I see across the code that you deal with token decimals as if token amounts were fractional. And you convert amounts from external precision to internal (8 decimals) and back. However, for me this is the wrong approach. Token amounts are actually integers and should be treated as such. The (optional) "decimals" property was initially introduced as a hint for UIs to help render token amounts in a more human-friendly way. But smart contracts should usually just ignore this property.
Treating token amounts as fractional numbers just makes code more complicated and introduces rounding errors.

This is true only if you assume that a whole token has a reasonable price (not too high, not too low), while this assumption could be false. There could be a token with 18 decimals, where even a whole token (10^18 base units) costs almost nothing and real amounts even measured in whole tokens are very large. And there could be a token with, say, 18 decimals, where 1 base unit costs something, and a whole token is actually a very large value.
So what you are basically talking about is a scale factor that you could introduce to make a whole fCash token have a reasonable price even if the underlying token's price is unreasonable.
Now you try to figure out this scale factor from decimals, but this is not reliable.
This scale factor is similar to the "lot" concept on traditional exchanges.

Several DeFi protocols had problems listing certain tokens because of similar assumptions. They assumed either that a whole token has a reasonable price or that a base unit has an extremely low price. It is better not to rely on such things.

Also, I see that when converting between internal and external precision you always round down. This could lead to a situation, when the contract doesn't have enough assets to transfer due to accumulated rounding errors. A common best practice is to always round towards protocol, i.e. against user. This would require two versions of conversion functions, one that rounds up and another that rounds down.
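The two-direction rounding suggestion can be sketched like this (Python; the function name and the 8-decimal internal precision follow the issue text, everything else is illustrative):

```python
def to_external(internal_amount: int, external_precision: int,
                internal_precision: int = 10**8, round_up: bool = False) -> int:
    """Convert an internal (8 decimal) amount to external token precision.

    round_up=True rounds toward the protocol (use when pulling funds in);
    round_up=False rounds down (use when paying funds out).
    """
    num = internal_amount * external_precision
    if round_up:
        return -((-num) // internal_precision)   # ceiling division
    return num // internal_precision
```

With a 6-decimal external token, 1 internal unit is 0.01 external units: the round-down version pays out 0 while the round-up version collects 1, so rounding errors accumulate in the contract's favor rather than draining it.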

Aztec <> Notional bridge

We are looking to have a bridge contract written for Aztec <> Notional that would allow DeFi users to lend on Notional via Aztec's zk.money. This would introduce a layer of privacy and a lower gas option to using Notional on eth mainnet. The grantee will need to fill out an application for the grant here: https://medium.com/aztec-protocol/private-defi-with-the-aztec-connect-bridge-76c3da76d982

The grant recipient will be eligible for a 3k grant (in NOTE) from Notional and a 2k grant from Aztec on completion.

More info on the resources available: https://aztecnetwork.notion.site/aztecnetwork/Aztec-Grants-19896948e74b4024a458cdeddbe9574f

Here's more on the bridge contracts: https://medium.com/aztec-protocol/private-defi-with-the-aztec-connect-bridge-76c3da76d982

And finally a repo https://github.com/AztecProtocol/aztec-connect-starter

This grant will be posted on Gitcoin.

Updates to Portfolio Handler gas efficiency

  • When dealing with arrays in memory, store the current element into a local variable rather than accessing via the array index every time.
  • Store the initial slot in the portfolio state object so that we don't need to recalculate later when we store it back.
  • Use the initial slot to determine the maxActiveSlot via something like maxActiveSlot = initialSlot + storedAssetLength

Inconsistency in underlying vs asset cash in liquidateCurrency

In liquidation, the benefit required figure that is calculated at the beginning of each branch is defined in underlying terms but it is not converted into asset terms before math is done with other values that are defined in asset terms. The root of the problem is that exchange rates between currencies are stored in underlying terms, but Notional's balances and the collateral/liquidation calculations are denominated in asset terms. This means that the issue is relatively simple in the local liquidation branches, but is more complicated in the cross-currency liquidation branch.

Liquidate Local Currency:
benefitRequired is defined in underlying terms here:

int256 benefitRequired =
factors
.localETHRate
.convertETHTo(factors.netETHValue.neg())
.mul(Constants.PERCENTAGE_DECIMALS)
.div(factors.localETHRate.buffer);

benefitRequired is then used in calculations with values that are defined in asset terms:

(w, benefitRequired) = _withdrawLocalLiquidityTokens(
nTokensToLiquidate = benefitRequired
.mul(balanceState.storedNTokenBalance)
.mul(int256(uint8(factors.nTokenParameters[Constants.PV_HAIRCUT_PERCENTAGE])))
.div(factors.nTokenValue.mul(haircutDiff));

The solution is to convert benefitRequired to assetCash terms before continuing on.

Liquidate Collateral Currency:
All the math that governs the calculation of collateralToRaise and localToPurchase is underlying terms, but the actual balances are both in asset terms so we need to convert in two places. First, convert collateralToRaise to asset terms after calculating it here:

collateralToRaise = benefitRequired.mul(Constants.PERCENTAGE_DECIMALS).div(denominator);

The issue is that we still have to calculate localToPurchase. That depends on exchange rates, which means calculations in underlying terms. So when we call localToPurchase, we either need to transform collateralToRaise back into underlying terms to pass it as an argument to the function, or we need to transform collateralToRaise to underlying terms within the localToPurchase function itself.

Whichever way, we need to convert localToPurchase into asset terms before we conduct this check:

if (localToPurchase > factors.localAvailable.neg()) {

Given that we will call localToPurchase again in this branch later on with a collateral figure that is in asset terms, I think it might be easiest to convert the collateral figure to underlying terms within the localToPurchase function. But either way, we need to be sure that both the collateral and local figures that localToPurchase returns are in asset terms.

Incorrect calculation when getting nTokensToLiquidate

The following code does not match the commented math:

// benefitGained = perpTokensToLiquidate * (liquidatedPV - freeCollateralPV)
// benefitGained = perpTokensToLiquidate * perpTokenPV * (liquidationHaircut - pvHaircut)
// perpTokensToLiquidate = benefitGained / (perpTokenPV * (liquidationHaircut - pvHaircut))
nTokensToLiquidate = benefitRequired.mul(Constants.INTERNAL_TOKEN_PRECISION).div(
factors.nTokenValue.mul(haircutDiff).div(Constants.PERCENTAGE_DECIMALS)
);
}

The "perpTokenPv" in the math describes the Pv of a single perpToken without any valuation haircut applied. But "nTokenValue" is the aggregate PV of the account's total nTokenBalance with the valuation haircut applied, not one single nToken without any haircut applied.

So to get the code to match the math, you would have to first divide nTokenValue by nTokenBalance and "nToken.parameters[Constants.PV_HAIRCUT_PERCENTAGE]" before solving for nTokensToLiquidate. Similar to what you do here:

nTokensToLiquidate
.mul(int256(uint8(factors.nTokenParameters[Constants.LIQUIDATION_HAIRCUT_PERCENTAGE])))
.mul(factors.nTokenValue)
.div(int256(uint8(factors.nTokenParameters[Constants.PV_HAIRCUT_PERCENTAGE])))
.div(balanceState.storedNTokenBalance);

Update average incentive calculation to use time weighted integral values

Here is how it works:

  1. You have the current total supply value stored
  2. You have the integral total supply value stored
  3. You have the timestamp stored of when the total supply value last changed
    When you are about to change the total supply you do the following:

integral_total_supply += current_total_supply * (current_time - total_supply_last_changed)
total_supply_last_changed = current_time

So if you know the integral total supply values at the beginning and the end of the interval (S0 and S1) and the interval length in seconds (T), then the average total supply for the interval is (S1 - S0) / T.
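The steps above can be sketched as follows (Python; the class and method names are illustrative, not the contract's API):

```python
class SupplyTracker:
    """Time-weighted average total supply via a running integral (sketch)."""

    def __init__(self, supply: int, now: int):
        self.total_supply = supply           # 1. current total supply
        self.integral_total_supply = 0       # 2. integral of supply over time
        self.last_changed = now              # 3. timestamp of last change

    def set_supply(self, new_supply: int, now: int):
        # Accumulate the integral before every supply change.
        self.integral_total_supply += self.total_supply * (now - self.last_changed)
        self.last_changed = now
        self.total_supply = new_supply

def average_supply(s0: int, s1: int, interval: int) -> float:
    # Average total supply over an interval of length `interval` seconds,
    # given the integral values s0 and s1 at its endpoints.
    return (s1 - s0) / interval
```

For example, a supply of 100 for 10 seconds followed by 200 for 10 seconds averages to 150 over the 20-second interval.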

Update totalBitsSet to be more efficient

The way you count “one” bits in a word is very inefficient. Here is how I would do this:

function setBitsCount (uint x) public pure returns (uint) {
    x = (x & 0x5555555555555555555555555555555555555555555555555555555555555555) + (x >> 1 & 0x5555555555555555555555555555555555555555555555555555555555555555);
    x = (x & 0x3333333333333333333333333333333333333333333333333333333333333333) + (x >> 2 & 0x3333333333333333333333333333333333333333333333333333333333333333);
    x = (x & 0x0707070707070707070707070707070707070707070707070707070707070707) + (x >> 4);
    x = (x & 0x000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F) + (x >> 8 & 0x000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F000F);
    x = x + (x >> 16);
    x = x + (x >> 32);
    x = x  + (x >> 64);
    return (x & 0xFF) + (x >> 128 & 0xFF);
}
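Porting the snippet to Python (with the same 256-bit masks) lets us sanity-check it against a naive bit count:

```python
# Masks from the Solidity snippet above, built by repetition of the hex pattern.
M1 = int("55" * 32, 16)
M2 = int("33" * 32, 16)
M3 = int("07" * 32, 16)
M4 = int("000F" * 16, 16)

def set_bits_count(x: int) -> int:
    """Python port of the proposed 256-bit parallel popcount."""
    x = (x & M1) + ((x >> 1) & M1)   # 2-bit fields hold counts 0..2
    x = (x & M2) + ((x >> 2) & M2)   # 4-bit fields hold counts 0..4
    x = (x & M3) + (x >> 4)          # per-byte counts 0..8 in low nibbles
    x = (x & M4) + ((x >> 8) & M4)   # per-16-bit counts 0..16
    x = x + (x >> 16)
    x = x + (x >> 32)
    x = x + (x >> 64)
    return (x & 0xFF) + ((x >> 128) & 0xFF)
```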

Add all liquidity curve calculations

  1. Calculate traded exchange rate, calculate implied rate, cash etc
  2. Add convert to underlying for totalCurrentCash
  3. Calculate rateOracle update
  4. Calculate incentives
  5. Update market
  6. Update portfolio

Incorrect conversion between exchange rates and implied rates

In market.sol we convert between exchangeRates and impliedRates in two places - here:

function getImpliedRate(

and here:
function getExchangeRateFromImpliedRate(

The math in these functions is fine, but the way they are invoked is incorrect because timeToMaturity is passed as an absolute number of seconds (i.e. timeToMaturity = maturity - blockTime). Because the rates we're working with are annualized, the timeToMaturity parameter needs to instead be in the form of a "yearCount". Here is how you would transform the timeToMaturity parameter into the proper form to be used in the two above functions as written:

yearCount = timeToMaturity / YEAR

Incorrect calculation in limitPurchaseByAvailableAmounts

The check on collateralAvailable is not properly applied:

if (fCashLiquidationPV > c.factors.collateralAvailable.add(fCashBenefit)) {
fCashToLiquidate = c.factors.collateralAvailable.mul(Constants.RATE_PRECISION).div(
liquidationDiscountFactor
);

The net change to collateralAvailable is -fCashCollateralPV + fCashBenefit, not -fCashLiquidationPV + fCashBenefit where fCashCollateralPV equals the fCashPV after applying the riskAdjustedDiscountFactor, not the liquidationDiscountFactor.

collateralAvailableNew = collateralAvailable - fCashCollateralPV + fCashBenefit
collateralAvailableNew = collateralAvailable - (fCashCollateralPV - fCashBenefit)

So you want to check that fCashCollateralPV - fCashBenefit is less than collateralAvailable. Equivalently you can check if fCashCollateralPV > collateralAvailable + fCashBenefit, and then modify fCashToLiquidate if it is as you do. Solving for the new value for fCashToLiquidate:

fCashCollateralPV - fCashBenefit = collateralAvailable
fCashCollateralPV = fCashToLiquidate * riskAdjustedDiscountFactor
fCashBenefit = fCashToLiquidate * (liquidationDiscountFactor - riskAdjustedDiscountFactor)

fCashToLiquidate * riskAdjustedDiscountFactor - fCashToLiquidate * (liquidationDiscountFactor - riskAdjustedDiscountFactor) = collateralAvailable
fCashToLiquidate * (2 * riskAdjustedDiscountFactor - liquidationDiscountFactor) = collateralAvailable
fCashToLiquidate = collateralAvailable / (2 * riskAdjustedDiscountFactor - liquidationDiscountFactor)

And then we also need to change this:

c.factors.collateralAvailable = c.factors.collateralAvailable.sub(
fCashToLiquidate.mul(riskAdjustedDiscountFactor).div(Constants.RATE_PRECISION)

collateralAvailableNew = collateralAvailable - (fCashCollateralPV - fCashBenefit)
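The solved value for fCashToLiquidate can be checked numerically (Python; the discount factors are illustrative fractions rather than RATE_PRECISION integers):

```python
def fcash_to_liquidate(collateral_available: float,
                       risk_adjusted_discount: float,
                       liquidation_discount: float) -> float:
    # fCashToLiquidate = collateralAvailable /
    #                    (2 * riskAdjustedDiscountFactor - liquidationDiscountFactor)
    return collateral_available / (2 * risk_adjusted_discount - liquidation_discount)

# Verify: the net collateral change exactly consumes collateralAvailable.
ca, risk, liq = 1000.0, 0.95, 0.97
f = fcash_to_liquidate(ca, risk, liq)
fcash_collateral_pv = f * risk                 # PV at the risk-adjusted discount
fcash_benefit = f * (liq - risk)               # benefit from the discount spread
assert abs((fcash_collateral_pv - fcash_benefit) - ca) < 1e-9
```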

Incorrectly summing underlying values with asset values in freeCollateral calculation

The summing is done in three places:

int256 netLocalAssetValue = netCashBalance.add(nTokenValue).add(portfolioBalance);

int256 netLocalAssetValue = netCashBalance.add(nTokenValue).add(portfolioBalance);

int256 netLocalAssetValue = netCashBalance.add(nTokenValue).add(portfolioValue);

The portfolioBalance is denominated in underlying terms, but it is aggregated with the netCashBalance and nTokenValue figures, which are denominated in asset terms; the portfolio value must be converted into asset terms before summing.
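A minimal Python sketch of the fix, converting the underlying-denominated portfolio value into asset terms before summing. The exchange-rate handling here is a simplified stand-in, not Notional's actual AssetRate math:

```python
RATE_PRECISION = 10**9  # assumed scale for the asset exchange rate

def net_local_asset_value(net_cash_balance: int, ntoken_value: int,
                          portfolio_underlying_value: int,
                          asset_rate: int) -> int:
    # underlying -> asset: in this sketch one asset token is worth
    # asset_rate / RATE_PRECISION units of underlying.
    portfolio_asset_value = (portfolio_underlying_value
                             * RATE_PRECISION // asset_rate)
    return net_cash_balance + ntoken_value + portfolio_asset_value
```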

setBalanceStorageForSettleCashDebt may set active currency to false if nToken balance is still active

In the "setBalanceStorageForSettleCashDebt" function inside "BalanceHandler.sol":

if (cashBalance == 0) {
    accountContext.setActiveCurrency(
        cashGroup.currencyId,
        false,
        Constants.ACTIVE_IN_BALANCES
    );
}

This code may reset the "ACTIVE_IN_BALANCES" flag even if nTokenBalance is nonzero. Elsewhere, this flag is cleared only when both the cash balance and the nToken balance are zero.
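The corrected condition can be sketched in Python; the flag bit and the flags integer are simplified stand-ins for the account context:

```python
ACTIVE_IN_BALANCES = 0x80  # assumed flag bit, for illustration only

def maybe_clear_active(account_flags: int, cash_balance: int,
                       ntoken_balance: int) -> int:
    # Clear the flag only if the account holds neither cash nor nTokens,
    # matching how the flag is handled elsewhere in the codebase.
    if cash_balance == 0 and ntoken_balance == 0:
        return account_flags & ~ACTIVE_IN_BALANCES
    return account_flags
```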

Governance Proposal Vulnerabilities

From ABDK:

Looking at the “cancelProposal” function inside “GovernorAlpha.sol”: it seems that a proposer has the ability to cancel his proposals at any time, regardless of how many people voted for them. This makes the voting potentially unfair, as the proposer has a “veto” right over the proposal, while other voters don't have such a right.
The relevant code:

require(
    msg.sender == guardian ||
        note.getPriorVotes(proposal.proposer, blockNumber - 1) < proposalThreshold,
    "GovernorAlpha::cancel: proposer above threshold"
);

The idea is that if the proposer's voting power drops below the threshold before the proposal is executed, anybody may cancel the proposal. But the proposer may intentionally reduce his voting power in order to cancel his own proposal, even if the proposal has already succeeded. This looks like an abuse that should not be permitted.

Different attacks are possible, I believe. Here is one example:

  1. A good proposal is discussed on social media. The proposal will most probably succeed. Someone just needs to propose it.
  2. A malicious user, who wants to sabotage the proposal, proposes it from his own address and starts advertising it.
  3. People vote for the proposal, the proposal succeeds and is queued.
  4. Just before the execution window opens, the proposer cancels the proposal.
  5. As a result, the community lost several days and needs to start over.
  6. Now other users make the same proposal, probably several people will do this at once.
  7. The malicious user also makes several copies of his proposal from different addresses.
  8. Now users need to somehow distinguish proposals made by honest users from those made by the malicious user, and only vote for the proposals from trusted proposers. This breaks the original idea (as I understand it) that users don't need to trust proposers, but only trust their proposals.
  9. Also, in case there are several similar proposals made by several honest proposers, users will need to somehow coordinate off-chain and choose which one to vote for. This breaks the idea that voting is made on-chain and doesn't require any off-chain coordination.
  10. All this could disrupt the voting and no proposal could achieve quorum.
  11. In such case the community will need to start over etc.
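One possible mitigation, sketched here in Python with simplified stand-ins for GovernorAlpha's state machine (the state names and guard are assumptions, not the audited code), is to refuse the below-threshold cancellation path once a proposal has succeeded, so a proposer cannot veto a passed proposal by dropping his own voting power:

```python
from enum import Enum

class ProposalState(Enum):
    ACTIVE = 0
    SUCCEEDED = 1
    QUEUED = 2
    EXECUTED = 3

def can_cancel(sender: str, guardian: str, proposer_votes: int,
               threshold: int, state: ProposalState) -> bool:
    # The guardian retains an unconditional cancellation right.
    if sender == guardian:
        return True
    # Below-threshold cancellation is only allowed while voting is live,
    # closing the "drop votes, then cancel a passed proposal" path.
    return proposer_votes < threshold and state == ProposalState.ACTIVE
```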

It seems that the voting scheme implemented in “GovernorAlpha.sol” is vulnerable to certain kinds of attacks.
One attack is when a bad proposal is proposed by a malicious user. Because the proposal is bad, almost nobody votes for it and it does not achieve quorum. As long as the proposal is not going to succeed, most of the users who are actually against it will not waste gas voting against it. However, the malicious proposer may vote for the proposal with a large number of votes a few seconds before the end of the voting period, so other users simply don't have enough time to vote against it.
Another attack involves proposals that are good when executed once, but bad when executed more than once. Once such a proposal is proposed, a malicious user may propose a copy of it and start advertising it. As the proposals are basically the same, and are good when considered in isolation, some users will vote for the original proposal while others will vote for the copy. Probably some users will vote for both. It is very possible that both proposals will succeed, resulting in a kind of double spending or similar.
