
pegnetd's Introduction



A Network of Pegged Tokens

This is the main repository for the PegNet application.

Pegged tokens reflect real market assets such as currencies, precious metals, commodities, and cryptocurrencies. The conversion rates on PegNet are determined by a decentralized set of miners who submit values based on current market data. These values are recorded in the Factom blockchain and then graded based upon accuracy and mining hashpower.

The draft proposal paper is available here.

For any questions, troubleshooting, or further information, head to Discord.

Mining

Requirements

Setup

Create a .pegnet folder inside your home directory. Copy the config/defaultconfig.ini file there.

On Windows this is your %USERPROFILE% folder

Linux example:

mkdir ~/.pegnet
wget https://raw.githubusercontent.com/pegnet/pegnet/master/config/defaultconfig.ini -P ~/.pegnet/
  • Sign up for an API key from https://currencylayer.com and replace APILayerKey in the config with your own

  • Replace either ECAddress or FCTAddress with your own

  • Modify the IdentityChain name to one of your choosing.

  • Have a factomd node running on mainnet.

  • Have factom-walletd open

  • Start Pegnet

On first startup there will be a delay while the hash bytemap is generated. Mining will only begin at the start of each ten-minute block.

Contributing

  • Join Discord and chat about it with lovely people!

  • Run a testnet node

  • Create a github issue because they always exist.

  • Fork the repo and submit your pull requests, fix things.

Development

A Docker guide can be found here for an automated solution.

Manual Setup

Install the factom binaries

The Factom developer sandbox setup overview is here; it covers the first parts. Otherwise, use the steps below.

# In first terminal
# Change blocktime to whatever suits you 
factomd -blktime=120 -network=LOCAL

# Second Terminal
factom-walletd

# Third Terminal
fa=$(factom-cli importaddress Fs3E9gV6DXsYzf7Fqx1fVBQPQXV695eP3k5XbmHEZVRLkMdD9qCK)
ec=$(factom-cli newecaddress)
factom-cli listaddresses # Verify addresses
factom-cli buyec $fa $ec 100000
factom-cli balance $ec # Verify Balance

# Fork Repo on github, clone your fork
git clone https://github.com/<USER>/pegnet

# Add main pegnet repo as a remote
cd pegnet
git remote add upstream https://github.com/pegnet/pegnet

# Sync with main development branch
git pull upstream develop 

# Initialize the pegnet chain
cd initialization
go build
./initialization

# You should be ready to roll from here

pegnetd's People

Contributors

davidajohnston, emyrk, mberry, ormembaar, paulbernier, paulsnow, sambarnes, sigrlami, starneit, whosoup


pegnetd's Issues

Lending of pAssets

Addition of the ability to lend pAssets for crypto. This can be a second-layer solution; it does not have to be on-chain at this time.

Database is Locked panic

The easiest way to reproduce this issue is to start syncing a node from scratch while hammering the api with slower queries (pegnetd get txs ADDRESS --asset PEG or something).

The easiest fix might be to just add a retry to the sync loop.
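
For illustration only, a retry in the sync loop could look roughly like this (syncHeight is a hypothetical stand-in for the per-height sync step, not an actual pegnetd function):

import (
	"strings"
	"time"
)

// retrySync retries the sync step a few times when SQLite reports that the
// database is locked, instead of letting the node panic.
func retrySync(height uint32, syncHeight func(uint32) error) error {
	var err error
	for attempt := 1; attempt <= 5; attempt++ {
		err = syncHeight(height)
		if err == nil || !strings.Contains(err.Error(), "database is locked") {
			return err
		}
		time.Sleep(time.Duration(attempt) * 500 * time.Millisecond)
	}
	return err
}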

Balance Module

The goal is to have a way of handling balances that can be easily rolled back in case of errors. My first thought was to implement it similarly to the grader module, in the following way (using an interface for illustration purposes only; we'll never have two versions of balances, so a simple struct would suffice):

type Balance interface {
	NewBlock() Balance
	SetRate(rates []opr.AssetUint)
	AddCoinbase(coinbase Transaction)
	AddTransaction(t Transaction)
	Finalize() (successful, pending, failed []Transaction)
}

The idea is to have a Balance block at a specific height and to build on top of that one for the next block. NewBlock() would deep-copy the Balance and return a new one for height + 1.

To start with, you load the most recent Balance (for height - 1) into memory, then run NewBlock(). You set the rates derived from grading for that height, add the burns and grading rewards as special coinbase Transactions, then add all of the Transactions (which are an umbrella for both transfers and conversions). Transactions know their own height and number of tries left.

The Balance struct would have the following fields:

  • balance map[factom.FAAddress]uint64: self-explanatory
  • pool []Transaction: a slice of all transactions
  • rates map[uint32]map[string]uint64 (or something): basically, the Balance block would be capable of tracking rates for multiple heights. Only heights back to height minus the maximum number of tries would need to be tracked

Once done, you call Finalize(). This does:

  1. Add the coinbase Transactions first (these can never fail)
  2. Go through the pool and apply all the transactions. If they succeed or have no more tries left, they are removed, otherwise they stay in the pool. Repeat until there are no possible transactions left.
  3. Return a copy of the transactions: successful includes the coinbase transactions, pending are the ones that remain in the pool, and failed are the ones with no tries left (they'll have an error message explaining why they failed). This is mostly for application feedback / console notifications.

This would allow us to do the logic of conversions and transfers in one cycle. A conversion can just check whether rates exist for its height+1, and it will remain in the pool if they don't, so the next block can handle it.
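
A rough sketch of that Finalize flow, purely illustrative (coinbases, apply, and TriesLeft are assumed helpers, not existing pegnetd code):

// Finalize applies the coinbase transactions first, then repeatedly sweeps the
// pool until no more transactions can be applied.
func (b *Balance) Finalize() (successful, pending, failed []Transaction) {
	for _, c := range b.coinbases {
		b.apply(c) // coinbase transactions can never fail
		successful = append(successful, c)
	}
	for progress := true; progress; {
		progress = false
		var remaining []Transaction
		for _, t := range b.pool {
			if err := b.apply(t); err == nil {
				successful = append(successful, t)
				progress = true
			} else if t.TriesLeft() == 0 {
				failed = append(failed, t) // keeps its error message for feedback
			} else {
				remaining = append(remaining, t)
			}
		}
		b.pool = remaining
	}
	pending = b.pool
	return successful, pending, failed
}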

The Balances struct should be persistable and for a basic "build-as-you-go" state, we only need to keep the most recent one around. However if we persist all of the Balance blocks, it means we can roll back the state to whatever position we desire and calculate metrics like volume and supply for a specific height on-demand (though it would still make more sense to have that in a dedicated table).

Thoughts on this approach?

The downside is that we're not really leveraging SQL transactions for anything. We'd be keeping the entire balance book in memory rather than having the state live in the tables, applying individual transactions as SQL queries, and rolling back if necessary.

"get rates" doesn't understand v1 rates

The pegnetd node errors on V1 rates:

(v2 activation height)

$ pegnetd get rates 210330
{"PEG":"0","pADA":"0.04641384","pBNB":"20.30758377","pBRL":"0.24451683","pCAD":"0.75503038","pCHF":"1.00763281","pCNY":"0.14149677","pDASH":"89.35304181","pDCR":"21.87815282","pETH":"192.06360392","pEUR":"1.10054587","pFCT":"3.12754958","pGBP":"1.24254781","pHKD":"0.12790258","pINR":"0.01395666","pJPY":"0.00926388","pKRW":"0.00084379","pLTC":"70.85025702","pMXN":"0.05139827","pPHP":"0.01912669","pRVN":"0.03167611","pSGD":"0.72679542","pUSD":"1","pXAG":"17.92114695","pXAU":"1503.75939849","pXBC":"304.07238158","pXBT":"10210.52956099","pXLM":"0.05876461","pXMR":"73.05056356","pZEC":"48.03699448"}

one height earlier:

$ pegnetd get rates 210329
Failed to make RPC request
Details:
json.Unmarshal({"jsonrpc":"2.0","result":{"pADA":4630000,"pBNB":2029990000,"pBRL":24490000,"pCAD":75500000,"pCHF":100770000,"pCNY":14140000,"pDASH":8912040000,"pDCR":2194750000,"pETH":19193120000,"pEUR":110000000,"pFCT":313090000,"pGBP":124200000,"pHKD":12790000,"pINR":1390000,"pJPY":920000,"pKRW":80000,"pLTC":7064430000,"pMXN":5140000,"pPHP":1910000,"pPNT":0,"pRVN":3170000,"pSGD":72670000,"pUSD":100000000,"pXAG":1792850000,"pXAU":150375930000,"pXBC":30384980000,"pXBT":1021009450000,"pXLM":5870000,"pXMR":7299770000,"pXPD":160410650000,"pXPT":93374160000,"pZEC":4805240000},"id":662}): invalid token type
exit status 1
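
A possible client-side workaround, sketched below: decode each rate as raw JSON and accept either the V1 integer form (which looks like 1e-8 fixed-point, e.g. pUSD is 100000000 above) or the string form. Names here are illustrative, not the actual pegnetd code.

import (
	"encoding/json"
	"strconv"
)

// parseRate accepts either a JSON number (V1 blocks) or a quoted decimal
// string and returns the value in 1e-8 fixed-point units.
func parseRate(raw json.RawMessage) (uint64, error) {
	var s string
	if err := json.Unmarshal(raw, &s); err == nil {
		f, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return 0, err
		}
		return uint64(f*1e8 + 0.5), nil
	}
	var n uint64
	err := json.Unmarshal(raw, &n)
	return n, err
}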

balance uses uint64

The functions in pegnet/addresses.go all use uint64 for balances, but SQLite doesn't support uint64. They need to be changed to int64, or a workaround needs to be found.
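
One possible workaround, sketched here: keep uint64 in the Go API but convert to int64 at the database boundary with an explicit overflow check.

import (
	"fmt"
	"math"
)

// toSQLBalance converts a uint64 balance to the int64 that SQLite can store.
func toSQLBalance(b uint64) (int64, error) {
	if b > math.MaxInt64 {
		return 0, fmt.Errorf("balance %d overflows int64", b)
	}
	return int64(b), nil
}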

Yyy

How to get paid?

Better error message when node not synced

My local mainnet node's first pass was 100% synced, but its second pass was lagging 3 blocks behind (I started it recently), which resulted in the following set of messages:

time="2019-10-06T15:29:22+02:00" level=debug msg=synced height=213120 took=80.0045ms
time="2019-10-06T15:29:22+02:00" level=debug msg=synced height=213121 took=104.006ms
time="2019-10-06T15:29:22+02:00" level=debug msg=synced height=213122 took=86.0049ms
time="2019-10-06T15:29:22+02:00" level=debug msg=synced height=213123 took=106.0061ms
time="2019-10-06T15:29:22+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:29:27+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:29:32+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:29:37+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:29:42+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:29:48+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:29:53+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:29:58+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:03+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:08+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:13+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:18+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:23+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:28+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:33+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:38+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:43+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:48+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:53+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:30:58+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:03+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:08+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:13+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:18+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:23+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:28+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:33+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:38+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:43+02:00" level=error msg="failed to sync height" error="jsonrpc2.Error{Code:-32008, Message:"Object not found"}" height=213124
time="2019-10-06T15:31:48+02:00" level=debug msg=synced height=213124 took=68.0039ms
time="2019-10-06T15:31:48+02:00" level=debug msg=synced height=213125 took=91.0052ms
time="2019-10-06T15:31:58+02:00" level=debug msg=synced height=213126 took=81.0047ms

If the height is available but the data is not, we should print out a better message, something like "data not yet downloaded on node". Or we could use entryblockheight instead of directoryblockheight.
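
A sketch of the kind of translation that would help (illustrative only; a real implementation would inspect the jsonrpc2 error code -32008 rather than the message text):

import (
	"fmt"
	"strings"
)

// syncErrorMessage turns the opaque "Object not found" RPC error into a hint
// that factomd has not finished downloading that block yet.
func syncErrorMessage(height uint32, err error) string {
	if strings.Contains(err.Error(), "Object not found") {
		return fmt.Sprintf("height %d is known but not yet downloaded by the factomd node; retrying", height)
	}
	return fmt.Sprintf("failed to sync height %d: %v", height, err)
}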

Error message is not completely clear

factom@arisen:~$ pegnetd newcvt EC2DKSYyRcNWf7RS963VFYgMExoHRYLHVeCfQ9PGPmNzwrcmgm2r FA2uoLUndqhou4VNcra8btU8M2qV67YTosktGRAD3SL2pjHV9gZ8 pFCT 100 PEG
failed to parse input: jsonrpc2.Error{Code:-32603, Message:"Internal error", Data:"wallet: No such address"}

There is no information regarding which address is missing from the wallet. It would be good to provide that additional information.
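
A sketch of the kind of wrapping that would help (names are hypothetical):

import "fmt"

// wrapWalletError adds the offending address to the opaque wallet error so the
// user can see which address is missing.
func wrapWalletError(address string, err error) error {
	return fmt.Errorf("address %s not found in factom-walletd: %w", address, err)
}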

Very Slow Get Transactions

The get transactions query is very slow, on some addresses over 20s when also filtering by asset. This query is just very inefficient.

I tried using joins instead of the current syntax and got the execution itself fast, but the number of rows requested affected how long the fetch took. If you were to query 1 row, I could get the query to run in ~150ms. But when you ask for 25 rows, the query time was at 3s, and 50 pushes it close to 6s.

I don't have much information as to what is making things so slow, so some investigation is needed. I have noticed that things are slower with more tables. We currently keep batches and transactions in different tables, but it might be better to merge them into one table. The duplication of data wouldn't be that much, given that most txs are a batch of 1. If it improves our query time, the duplicated data is worth the cost.
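
One way to start that investigation, sketched below: ask SQLite for the query plan of the slow statement to see whether it is doing full table scans (the query passed in is a placeholder, not the actual pegnetd query).

import (
	"database/sql"
	"fmt"
)

// explainQuery prints SQLite's query plan for a statement so missing indexes
// or full scans are easy to spot.
func explainQuery(db *sql.DB, query string, args ...interface{}) error {
	rows, err := db.Query("EXPLAIN QUERY PLAN "+query, args...)
	if err != nil {
		return err
	}
	defer rows.Close()
	for rows.Next() {
		var id, parent, notused int
		var detail string
		if err := rows.Scan(&id, &parent, &notused, &detail); err != nil {
			return err
		}
		fmt.Println(detail)
	}
	return rows.Err()
}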

Address table

Just saw the following:

const createTableAddresses = `CREATE TABLE "pn_addresses" (
"id" INTEGER PRIMARY KEY,
"address" BLOB NOT NULL UNIQUE,
"peg_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("peg_balance" >= 0),
"pusd_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pusd_balance" >= 0),
"peur_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("peur_balance" >= 0),
"pjpy_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pjpy_balance" >= 0),
"pgbp_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pgbp_balance" >= 0),
"pcad_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pcad_balance" >= 0),
"pchf_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pchf_balance" >= 0),
"pinr_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pinr_balance" >= 0),
"psgd_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("psgd_balance" >= 0),
"pcny_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pcny_balance" >= 0),
"phkd_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("phkd_balance" >= 0),
"pkrw_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pkrw_balance" >= 0),
"pbrl_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pbrl_balance" >= 0),
"pphp_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pphp_balance" >= 0),
"pmxn_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pmxn_balance" >= 0),
"pxau_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pxau_balance" >= 0),
"pxag_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pxag_balance" >= 0),
"pxbt_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pxbt_balance" >= 0),
"peth_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("peth_balance" >= 0),
"pltc_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pltc_balance" >= 0),
"prvn_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("prvn_balance" >= 0),
"pxbc_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pxbc_balance" >= 0),
"pfct_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pfct_balance" >= 0),
"pbnb_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pbnb_balance" >= 0),
"pxlm_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pxlm_balance" >= 0),
"pada_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pada_balance" >= 0),
"pxmr_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pxmr_balance" >= 0),
"pdas_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pdas_balance" >= 0),
"pzec_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pzec_balance" >= 0),
"pdcr_balance" INTEGER NOT NULL
CONSTRAINT "insufficient balance" CHECK ("pdcr_balance" >= 0)
);
`

That defeats the whole purpose of using relational databases in the first place. You have to add/remove columns to modify the assets (leading to data loss and backward incompatibility) and you can't run automated queries. This is what k-v dbs are for.

The SQL way of doing it would be three tables:

  1. An address / id mapping (this is only for FATd compatibility, the address is already a unique id)
  2. An asset / assetid mapping table (eg 1 is USD, 2 is PEG, 3 is EUR, ...)
  3. A balance table that consists of (addressid, assetid, balance) with a unique key of (addressid, assetid)

That would let us remove assets without having to delete records from the database, and it would also allow easier comparative queries, like listing the assets sorted by their supply.
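
Sketched in the same style as the existing schema constant, the three tables could look roughly like this (illustrative only):

const createNormalizedTables = `
CREATE TABLE "pn_address" (
	"id"      INTEGER PRIMARY KEY,
	"address" BLOB NOT NULL UNIQUE
);
CREATE TABLE "pn_asset" (
	"id"     INTEGER PRIMARY KEY,
	"ticker" TEXT NOT NULL UNIQUE
);
CREATE TABLE "pn_balance" (
	"address_id" INTEGER NOT NULL REFERENCES "pn_address",
	"asset_id"   INTEGER NOT NULL REFERENCES "pn_asset",
	"balance"    INTEGER NOT NULL
		CONSTRAINT "insufficient balance" CHECK ("balance" >= 0),
	UNIQUE ("address_id", "asset_id")
);
`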

Make denomination consistent for pFCT

pegnetd balances denominates pFCT in factoshis. However, pegnetd newcvt expects the amount argument to be denominated in factoids.

Consistent denomination should be used for both in order to avoid confusion. I suggest denominating all pFCT balances/arguments in factoids in order to make sure it is consistent with pegnet burn.
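
For reference, 1 factoid (FCT) is 1e8 factoshis; a tiny sketch of normalizing a CLI amount given in factoids to the factoshi denomination used by pegnetd balances:

// FactoshisPerFactoid is the number of factoshis in one FCT.
const FactoshisPerFactoid = 1e8

// factoidsToFactoshis converts a user-facing factoid amount to factoshis.
func factoidsToFactoshis(fct float64) uint64 {
	return uint64(fct*FactoshisPerFactoid + 0.5)
}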

Provide better feedback for syncing and operations

Just wanted to see how long it would take to sync from the open node. The status report of syncing happens every 50 blocks, but the time it takes to process a single block from the open node is between ~3 and ~10 seconds. It didn't give me the first status update until 4:30 minutes had elapsed:

time="2019-10-06T09:28:33+02:00" level=info msg="Listening on :8070..."
time="2019-10-06T09:28:42+02:00" level=debug msg=synced height=206422 took=8.8035035s
time="2019-10-06T09:28:49+02:00" level=debug msg=synced height=206423 took=6.7853881s
time="2019-10-06T09:28:53+02:00" level=debug msg=synced height=206424 took=3.7572149s
time="2019-10-06T09:28:57+02:00" level=debug msg=synced height=206425 took=4.5542605s
time="2019-10-06T09:29:18+02:00" level=debug msg=synced height=206426 took=3.7092121s
time="2019-10-06T09:29:22+02:00" level=debug msg=synced height=206427 took=3.8902225s
time="2019-10-06T09:29:26+02:00" level=debug msg=synced height=206428 took=4.0652325s
time="2019-10-06T09:29:29+02:00" level=debug msg=synced height=206429 took=3.4901997s
time="2019-10-06T09:29:33+02:00" level=debug msg=synced height=206430 took=3.5952056s
time="2019-10-06T09:29:36+02:00" level=debug msg=synced height=206431 took=3.4221957s
time="2019-10-06T09:29:40+02:00" level=debug msg=synced height=206432 took=3.9762274s
time="2019-10-06T09:29:44+02:00" level=debug msg=synced height=206433 took=3.1281789s
time="2019-10-06T09:29:47+02:00" level=debug msg=synced height=206434 took=3.198183s
time="2019-10-06T09:29:51+02:00" level=debug msg=synced height=206435 took=3.9082235s
time="2019-10-06T09:29:55+02:00" level=debug msg=synced height=206436 took=4.3762503s
time="2019-10-06T09:29:59+02:00" level=debug msg=synced height=206437 took=3.8482201s
time="2019-10-06T09:30:09+02:00" level=debug msg=synced height=206438 took=10.5626042s
time="2019-10-06T09:30:14+02:00" level=debug msg=synced height=206439 took=4.2612437s
time="2019-10-06T09:30:18+02:00" level=debug msg=synced height=206440 took=3.9982287s
time="2019-10-06T09:30:23+02:00" level=debug msg=synced height=206441 took=4.8192757s
time="2019-10-06T09:30:27+02:00" level=debug msg=synced height=206442 took=4.2992459s
time="2019-10-06T09:30:30+02:00" level=debug msg=synced height=206443 took=3.2961886s
time="2019-10-06T09:30:34+02:00" level=debug msg=synced height=206444 took=3.9352251s
time="2019-10-06T09:30:38+02:00" level=debug msg=synced height=206445 took=4.3222472s
time="2019-10-06T09:30:42+02:00" level=debug msg=synced height=206446 took=3.741214s
time="2019-10-06T09:31:13+02:00" level=debug msg=synced height=206447 took=4.4012518s
time="2019-10-06T09:31:17+02:00" level=debug msg=synced height=206448 took=3.8692214s
time="2019-10-06T09:31:21+02:00" level=debug msg=synced height=206449 took=4.0612322s
time="2019-10-06T09:31:25+02:00" level=debug msg=synced height=206450 took=4.6522661s
time="2019-10-06T09:31:29+02:00" level=debug msg=synced height=206451 took=3.724213s
time="2019-10-06T09:31:32+02:00" level=debug msg=synced height=206452 took=2.7751587s
time="2019-10-06T09:31:35+02:00" level=debug msg=synced height=206453 took=3.3431912s
time="2019-10-06T09:31:39+02:00" level=debug msg=synced height=206454 took=3.5502031s
time="2019-10-06T09:31:43+02:00" level=debug msg=synced height=206455 took=4.0952343s
time="2019-10-06T09:31:46+02:00" level=debug msg=synced height=206456 took=3.7292133s
time="2019-10-06T09:31:51+02:00" level=debug msg=synced height=206457 took=4.3242474s
time="2019-10-06T09:31:55+02:00" level=debug msg=synced height=206458 took=4.3192471s
time="2019-10-06T09:31:59+02:00" level=debug msg=synced height=206459 took=3.9082235s
time="2019-10-06T09:32:08+02:00" level=debug msg=synced height=206460 took=9.2275278s
time="2019-10-06T09:32:13+02:00" level=debug msg=synced height=206461 took=4.6022632s
time="2019-10-06T09:32:18+02:00" level=debug msg=synced height=206462 took=5.2933028s
time="2019-10-06T09:32:22+02:00" level=debug msg=synced height=206463 took=3.8262188s
time="2019-10-06T09:32:28+02:00" level=debug msg=synced height=206464 took=5.9693415s
time="2019-10-06T09:32:33+02:00" level=debug msg=synced height=206465 took=4.5802619s
time="2019-10-06T09:32:37+02:00" level=debug msg=synced height=206466 took=4.8542776s
time="2019-10-06T09:32:42+02:00" level=debug msg=synced height=206467 took=4.6242645s
time="2019-10-06T09:32:46+02:00" level=debug msg=synced height=206468 took=4.4832564s
time="2019-10-06T09:32:51+02:00" level=debug msg=synced height=206469 took=4.510258s
time="2019-10-06T09:32:55+02:00" level=debug msg=synced height=206470 took=4.2022404s
time="2019-10-06T09:33:00+02:00" level=debug msg=synced height=206471 took=4.7092693s
time="2019-10-06T09:33:00+02:00" level=info msg="sync stats" avg=4.467575534s elapsed=4m26.5082434s height=206471 left=8h12m46.414884012s syncing-to=213089

I think we should make it time-based rather than block-based, or at least caution people that it's going to take a while for the first message to show up if they're syncing from the open node.
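
A sketch of the time-based variant (syncHeight is a hypothetical stand-in for the per-height sync step):

import (
	"log"
	"time"
)

// syncWithProgress reports status whenever a fixed interval has elapsed,
// rather than every 50 blocks.
func syncWithProgress(start, target uint32, syncHeight func(uint32) error) error {
	lastReport := time.Now()
	for height := start; height <= target; height++ {
		if err := syncHeight(height); err != nil {
			return err
		}
		if time.Since(lastReport) >= 30*time.Second {
			log.Printf("sync stats: height=%d syncing-to=%d", height, target)
			lastReport = time.Now()
		}
	}
	return nil
}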

Potential problem with price average calculation algorithm

Hey. I'd like to make sure you guys are aware of a potential problem with calculating average prices the naive way.

Basically, what can happen is that miners may submit prices which differ from the average price by orders of magnitude. Let's say we have 5 miners submitting a BTC/USD rate:
[8800,8900,9000,9100,90000000000000000000000000000000]

If you calculate the average the normal way, it will be way off. This problem is not mitigated by increasing the number of miners, since an orders-of-magnitude deviation like 9e30 will skew the calculation even if thousands of miners report the correct price.

Does PegNet use an averaging algorithm that is sensitive to deviations like that?
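
To illustrate with the numbers above: the mean of [8800, 8900, 9000, 9100, 9e30] is roughly 1.8e30 (essentially the outlier divided by five), while the median stays at 9000. A quick sketch of the two:

import "sort"

// mean is dominated by a single extreme outlier.
func mean(xs []float64) float64 {
	var sum float64
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

// median is unaffected by how far away the outliers are.
func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}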

Add asset filter on get-transactions

Currently, get-transactions returns all transactions for an address/height/hash. We should be able to filter the get-transactions response by asset type.

Golang library for pegnetd api

There is currently no library / JSON-RPC client for the pegnetd API along the lines of FactomProject/factom or Factom-Asset-Tokens/factom.

A lot of the functionality is already implemented in the pegnetd cli, but it should be spun out into its own library for shared use in other projects.
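
A minimal sketch of what such a client could look like over net/http; the endpoint URL is an assumption based on the default :8070 listener, and error handling is kept to the basics.

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Client is a bare-bones JSON-RPC 2.0 caller for a pegnetd endpoint.
type Client struct {
	URL string // e.g. "http://localhost:8070/v1" (assumed)
}

// Call sends a single JSON-RPC request and unmarshals the result into result.
func (c *Client) Call(method string, params, result interface{}) error {
	body, err := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0", "id": 1, "method": method, "params": params,
	})
	if err != nil {
		return err
	}
	resp, err := http.Post(c.URL, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var envelope struct {
		Result json.RawMessage `json:"result"`
		Error  *struct {
			Code    int    `json:"code"`
			Message string `json:"message"`
		} `json:"error"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&envelope); err != nil {
		return err
	}
	if envelope.Error != nil {
		return fmt.Errorf("rpc error %d: %s", envelope.Error.Code, envelope.Error.Message)
	}
	if result == nil {
		return nil
	}
	return json.Unmarshal(envelope.Result, result)
}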

API method "get-transaction-status" returns an error for failed transactions

The API method get-transaction-status will return an error for any transaction where executed <0. It appears that an unsigned integer is used for the execution response where a signed integer should be used.

  error: {
    code: -32602,
    message: 'Invalid params',
    data: 'sql: Scan error on column index 1, name "executed": converting driver.Value type int64 ("-1") to a uint32: invalid syntax'
  },

var height, executed uint32 // bug: "executed" is -1 for failed transactions, which cannot fit in a uint32
err := p.DB.QueryRow("SELECT height, executed FROM pn_history_txbatch WHERE entry_hash = ?", hash[:]).Scan(&height, &executed)
if err != nil {
	if err == sql.ErrNoRows {
		return 0, 0, nil
	}
	return 0, 0, err
}
return height, executed, nil

Can be reproduced with the following transaction hash: 5145f90dabce81bd963758f2bf2d836f663add6fa154121206d5d18014fdd13a

I will submit a PR shortly to fix the issue.
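
Not the actual PR, just a sketch of the signed-integer change the report describes:

// Scan "executed" into a signed type so the -1 marker used for failed
// transactions survives the conversion.
var height uint32
var executed int32
err := p.DB.QueryRow("SELECT height, executed FROM pn_history_txbatch WHERE entry_hash = ?", hash[:]).Scan(&height, &executed)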

PegNet Trader Tools

Development of automated trading and arbitrage tools, both for use within pegnetd and for pAssets on exchanges.

PIP 18 Delegated Staking

  1. PIP-18 uses the PEG balance of each address, which is more complicated. It is the PEG balance of the address (assuming it has not been delegated).
  2. It stops looking at the rich list and just considers the top 100 submissions with the highest stake.
  3. PIP-18 pays out in proportion to the total PEG staked. (This removes the old top-100 PEG addresses' staking reward and gives a staking opportunity to all PEG holders.)

Failed to make RPC request — pegnetd get rates

I have written a script that calculates the average FCT price using PegNet.

pegnetd is fully synced:

pegnetd[5412]: time="2020-09-04T02:22:28Z" level=info msg="status report" height=261018
pegnetd[5412]: time="2020-09-04T03:22:05Z" level=info msg="status report" height=261024
pegnetd[5412]: time="2020-09-04T04:22:15Z" level=info msg="status report" height=261030
pegnetd[5412]: time="2020-09-04T05:22:06Z" level=info msg="status report" height=261036
pegnetd[5412]: time="2020-09-04T06:22:02Z" level=info msg="status report" height=261042
pegnetd[5412]: time="2020-09-04T07:22:04Z" level=info msg="status report" height=261048

My script makes 500 sequential pegnetd get rates {height} requests.
pegnetd returns this error, and always for exactly the same heights:

pegnetd get rates 260814
Failed to make RPC request
Details:
jsonrpc2.Error{Code:ErrorCode{-32809}, Message:"Not Found", Data:"could not find what you were looking for"}

It's not a flaky bug, because on each run I get exactly the same "failed" heights:

500 blocks range [260548-261047] read:

Missing PegNet blocks count: 132 (260814, 260815, 260817, 260823, 260837, 260852, 260857, 260858, 260859, 260860, 260861, 260862, 260863, 260865, 260866, 260868, 260869, 260871, 260872, 260873, 260874, 260875, 260876, 260877, 260878, 260879, 260880, 260881, 260882, 260883, 260884, 260885, 260886, 260887, 260888, 260889, 260890, 260891, 260892, 260893, 260894, 260896, 260899, 260901, 260902, 260903, 260906, 260907, 260908, 260909, 260910, 260911, 260912, 260917, 260919, 260932, 260933, 260935, 260936, 260937, 260945, 260946, 260947, 260948, 260949, 260950, 260951, 260952, 260953, 260954, 260955, 260956, 260957, 260958, 260959, 260960, 260961, 260962, 260963, 260964, 260965, 260966, 260967, 260968, 260969, 260970, 260971, 260972, 260973, 260974, 260975, 260976, 260977, 260978, 260979, 260980, 260981, 260982, 260984, 260985, 260986, 260987, 260988, 260989, 260990, 260991, 260992, 260993, 260994, 260995, 260996, 260997, 260998, 260999, 261000, 261001, 261002, 261003, 261004, 261005, 261006, 261007, 261008, 261009, 261010, 261011, 261012, 261013, 261014, 261015, 261016, 261017)

Second point of exit of PegNet

  1. PegNet is supposed to be a store-of-value network. But right now it's very easy to enter PegNet and hard to exit (the PEG buy side is tiny, pUSD sells at a huge discount, and even then there is a lack of buying).

  2. The same reason drives people to sit in PEG rather than use the network: once you leave PEG, you're unable to exit (see point 1).

  3. I think we need to focus on building a strong second point of entry to and exit from PegNet. Many would agree that it should be pUSD.

Logic:
3.1. If people feel they can exit via pUSD without a huge discount, let's say 1-3%, they would feel safe converting PEG into any pAssets and trading pAssets and pUSD.
3.2. Therefore, PEG supply would drop. Selling pressure on exchanges would also be reduced.
3.3. At a certain moment (supply under the curve line) we would be able to implement the PEG bank and eliminate the conversion limit. This would drive arbitragers to start arbitraging PEG, significantly increasing its liquidity.

  4. As a result: PEG supply is normalized, selling pressure is reduced, pUSD is an additional entry/exit point to PegNet, liquidity is increased, the conversion limit is eliminated, and people trade on PegNet and use it as a store of value without fear of being unable to exit.

Did I miss something?
Should we focus on it?

Hardcode Mainnet, don't use TestNet

Since this is a stopgap measure, I propose that we remove all the references to the TestNet and hardcode it for MainNet only. That way we only need dev flags to set things like activation heights, and we can test it on local nodes just fine if we want to. Doing all of the work to support the different heights, p- vs t-assets, chain names, etc. is wasted time. If we need TestNet support, we can more easily add it later.

Yay or nay?

fat-103 signing timestamp randomizer

Crossposting this PR: Factom-Asset-Tokens/fatd#52

Once this bug is fixed in the fatd code, we'll need to update pegnetd's import in order to pull in the changes. This affects anyone who sends multiple identical transactions via the CLI in the same second: the timestamp would be the same due to the bug, so they would count as replay transactions.
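
For illustration only (not the actual fatd change): the idea behind a timestamp randomizer is to shift the signing timestamp by a random offset inside an accepted window, so two otherwise identical transactions created in the same second no longer share a timestamp.

import (
	"math/rand"
	"time"
)

// randomizedTimestamp moves the signing timestamp backwards by a random amount
// within the given window; the window size here is arbitrary for illustration.
func randomizedTimestamp(window time.Duration) time.Time {
	offset := time.Duration(rand.Int63n(int64(window)))
	return time.Now().Add(-offset)
}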
