
ortelius's Introduction

🔴WARNING: This has been deprecated, please read this. 🔴

Ortelius

A data processing pipeline for the Avalanche network.

Features

  • Maintains a persistent log of all consensus events and decisions made on the Avalanche network.
  • Indexes Exchange (X), Platform (P), and Contract (C) chain transactions.
  • An API allowing easy exploration of the index.

Prerequisite

https://docs.docker.com/engine/install/ubuntu/

https://docs.docker.com/compose/install/

Quick Start with Standalone Mode on Fuji (testnet) network

The easiest way to get started is to try out the standalone mode.

git clone https://github.com/ava-labs/ortelius.git $GOPATH/github.com/ava-labs/ortelius
cd $GOPATH/github.com/ava-labs/ortelius
make dev_env_start
make standalone_run

ortelius's People

Contributors

anilcse, arturrez, collincusce, dhrubabasu, learyce, liraxapp, patrick-ogrady, rajranjan0608, stephenbuttolph, tasinco, tyler-smith, yulin-dong


ortelius's Issues

Withdraw order

I have withdrawn AVAX from Binance to my AVAX wallet. It has shown as PROCESSING for 1 day. When I contacted Binance Support, they said it is related to the AVAX blockchain.

Information is as below:
AVAX
Processing
2687.0068 2020-09-24 12:18:53
Address:
X-avax1em3dqvxglyxavtyt5zplyk4fl0tmqpszldwvyx
TxID:
LAwKv7p7UQ1xZsHJTQqaqamK1MGEpDZB3DGswX5cs6dwnb9HH

Indexer Erroring out with Duplicate Entries for Production Build

Hi,

We are trying to run the data processing pipeline in production mode.

Everything seems to be running alright; however, when we check the production logs, we continue to see the following:

indexer_1      | [2021-02-11T21:23:52.46329294Z]: job:bootstrap event:dbr.exec time:1746 μs kvs:[sql:INSERT INTO `address_chain` (`address`,`chain_id`,`created_at`) VALUES ('JYecDLqzKt7DUtrmLXnuk55k2o561AjW7','11111111111111111111111111111111LpoYY','2020-09-10 00:00:00.000000')]
indexer_1      | [2021-02-11T21:23:52.465773084Z]: job:bootstrap event:dbr.exec.exec err:Error 1062: Duplicate entry 'VD4KE3To81AfgGR4RGNooLhR49vRXt3UbWmzskgDuUn27isyK-JYecDLqzKt7DUt' for key 'avm_output_addresses.avm_output_addresses_output_id_addr' kvs:[sql:INSERT INTO `avm_output_addresses` (`output_id`,`address`,`created_at`) VALUES ('VD4KE3To81AfgGR4RGNooLhR49vRXt3UbWmzskgDuUn27isyK','JYecDLqzKt7DUtrmLXnuk55k2o561AjW7','2020-09-10 00:00:00.000000')]
indexer_1      | [2021-02-11T21:23:52.465882991Z]: job:bootstrap event:dbr.exec time:2 ms kvs:[sql:INSERT INTO `avm_output_addresses` (`output_id`,`address`,`created_at`) VALUES ('VD4KE3To81AfgGR4RGNooLhR49vRXt3UbWmzskgDuUn27isyK','JYecDLqzKt7DUtrmLXnuk55k2o561AjW7','2020-09-10 00:00:00.000000')]
indexer_1      | [2021-02-11T21:23:52.466110647Z]: job:bootstrap event:dbr.exec time:3 ms kvs:[sql:INSERT INTO `avm_output_addresses` (`output_id`,`address`,`created_at`) VALUES ('VD4KE3To81AfgGR4RGNooLhR49vRXt3UbWmzskgDuUn27isyK','JYecDLqzKt7DUtrmLXnuk55k2o561AjW7','2020-09-10 00:00:00.000000')]
indexer_1      | [2021-02-11T21:23:52.469401347Z]: job:bootstrap event:dbr.exec.exec err:Error 1062: Duplicate entry '2urSTLvfPC1GpRBvprQvgt4VTB2HJUrxA7iTSW7aFYWSxobN1A' for key 'avm_outputs.PRIMARY' kvs:[sql:INSERT INTO `avm_outputs` (`id`,`chain_id`,`transaction_id`,`output_index`,`asset_id`,`output_type`,`amount`,`locktime`,`threshold`,`group_id`,`payload`,`stake_locktime`,`stake`,`frozen`,`created_at`) VALUES ('2urSTLvfPC1GpRBvprQvgt4VTB2HJUrxA7iTSW7aFYWSxobN1A','11111111111111111111111111111111LpoYY','2k7yszSgGEw25wbK7DApfg9A181NysrTjN245YviqRCWZHcujs',126,'FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z',7,1642588235294,0,1,0,?,1638576000,1,0,'2020-09-10 00:00:00.000000')]
indexer_1      | [2021-02-11T21:23:52.469488256Z]: job:bootstrap event:dbr.exec time:3 ms kvs:[sql:INSERT INTO `avm_outputs` (`id`,`chain_id`,`transaction_id`,`output_index`,`asset_id`,`output_type`,`amount`,`locktime`,`threshold`,`group_id`,`payload`,`stake_locktime`,`stake`,`frozen`,`created_at`) VALUES ('2urSTLvfPC1GpRBvprQvgt4VTB2HJUrxA7iTSW7aFYWSxobN1A','11111111111111111111111111111111LpoYY','2k7yszSgGEw25wbK7DApfg9A181NysrTjN245YviqRCWZHcujs',126,'FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z',7,1642588235294,0,1,0,?,1638576000,1,0,'2020-09-10 00:00:00.000000')]
indexer_1      | [2021-02-11T21:23:52.469850122Z]: job:bootstrap event:dbr.exec time:3 ms kvs:[sql:INSERT INTO `avm_outputs` (`id`,`chain_id`,`transaction_id`,`output_index`,`asset_id`,`output_type`,`amount`,`locktime`,`threshold`,`group_id`,`payload`,`stake_locktime`,`stake`,`frozen`,`created_at`) VALUES ('2urSTLvfPC1GpRBvprQvgt4VTB2HJUrxA7iTSW7aFYWSxobN1A','11111111111111111111111111111111LpoYY','2k7yszSgGEw25wbK7DApfg9A181NysrTjN245YviqRCWZHcujs',126,'FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z',7,1642588235294,0,1,0,?,1638576000,1,0,'2020-09-10 00:00:00.000000')]

The containers are freshly started with MySQL having no data.

Can someone please point out what we are doing wrong?

DSNs that are not valid URLs do not work

We need the DSN to include the ?parseTime=true parameter. We attempt to ensure this is set here with a hack; trying to parse the DSN as a URL:

u, err := url.Parse(dsn)

Unfortunately, DSNs are not actually URLs, so something like an underscore in the username will prevent this from working. We need to address this in a better way.
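A possible direction, assuming the project keeps using the standard github.com/go-sql-driver/mysql driver: that package ships ParseDSN/FormatDSN helpers that understand the DSN grammar, so underscores in usernames survive and parseTime can be forced explicitly. A minimal sketch, not the current implementation:

package db

import "github.com/go-sql-driver/mysql"

// forceParseTime rewrites a MySQL DSN so that parseTime=true is always set,
// without treating the DSN as a URL.
func forceParseTime(dsn string) (string, error) {
	cfg, err := mysql.ParseDSN(dsn)
	if err != nil {
		return "", err
	}
	cfg.ParseTime = true
	return cfg.FormatDSN(), nil
}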

Unknown Tx Shows up in Explorer

This transaction shows up as unknown when requesting its status from a bootstrapped node: 2quRTmhcevcUhiMX8As42Y5wrapet4W2qxhhtQibWGM6AtpPrv

dhcp-vl2042-15972:gecko aaronbuchwald$ curl -X POST --data '{
>     "jsonrpc":"2.0",
>     "id"     :1,
>     "method" :"avm.getTxStatus",
>     "params" :{
>         "txID":"2quRTmhcevcUhiMX8As42Y5wrapet4W2qxhhtQibWGM6AtpPrv"
>     }
> }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
{"jsonrpc":"2.0","result":{"status":"Unknown"},"id":1}
dhcp-vl2042-15972:gecko aaronbuchwald$ 

The explorer displays the transaction at this link: https://explorer.avax.network/tx/2quRTmhcevcUhiMX8As42Y5wrapet4W2qxhhtQibWGM6AtpPrv

Get transaction response for a different tx ID

Hi,

I sent a request via /x/transactions/:id on Ortelius to the Fuji testnet. I gave this method a Fuji TXID, and then another tx ID, but I got the same response for every request, like this:

{ "count": 1, "transactions": [ { "id": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "chainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM", "type": "create_asset", "inputs": null, "outputs": [ { "id": "RgbFSG3d2RfK3KmmMF5LkY8zFEmgKh724Mrg96yw3Z2ksbBqb", "transactionID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputIndex": 2766, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputType": 7, "amount": "2698470588235", "locktime": 0, "threshold": 1, "addresses": [ "avax1ww2qmnuzxzg5tzhljj60xc2h9avptdej4s57nv" ], "timestamp": "2020-09-10T00:00:00Z", "redeemingTransactionID": "" }, { "id": "2WfS9HHBUhnGoyzUVt9p2euEBFpr8EFRPpb9xo381ryBZWPnKk", "transactionID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputIndex": 286, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputType": 7, "amount": "39900000000", "locktime": 0, "threshold": 1, "addresses": [ "avax15th7g8s42dzr003ehc79n934fktn3f45559lux" ], "timestamp": "2020-09-10T00:00:00Z", "redeemingTransactionID": "" }, { "id": "sSe23trxC7kQusKRzUq8dMc65qbdek7AnosYYET2ifKiShT7Z", "transactionID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputIndex": 704, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputType": 7, "amount": "192600000000", "locktime": 0, "threshold": 1, "addresses": [ "avax1quu4vj5vna0ujawam5dv5zjxsjhymcpt5up3w6" ], "timestamp": "2020-09-10T00:00:00Z", "redeemingTransactionID": "" }, { "id": "2e5XAaDb5X8J7JbH9atfSzWvoAguR9vT6NJHhXzsV7SmVbNue7", "transactionID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputIndex": 1501, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "outputType": 7, "amount": "1000100000000", "locktime": 0, "threshold": 1, "addresses": [ "avax1ryvxr2zvzptlwmpdawgsawl8vev0uduch4sjjs" ], "timestamp": "2020-09-10T00:00:00Z", "redeemingTransactionID": "" },......
How can this be fixed?

Also, my request is this:

http://0.0.0.0:8080/X/transactions/:2qdDV39k1SoxMnius2fFUEtaRn12iHUuyFjbH8QyFnf4PYyjCc

The response is not my tx ID... why?

I also set "network_id = 5" and set the chainId to 2JVSBoinj9C2J33VntvzYtVJNZdN2NKiwwKjcumHUWEb5DbBrm (the Fuji network X-Chain ID, but the response chainID is the mainnet X-Chain ID) in the standalone docker_compose.yml.

thank you

Enhance /assets endpoint with aggregation data

In the Avalanche Explorer, an asset map is necessary for:

  • parsing asset IDs in utxos, transactions, and address balances

When each asset in the asset map is enhanced with metrics, we can describe the activity on a chain through:

  • ordered lists, where assets are sorted by a metric
  • tallies (e.g. volume) and descriptive statistics (e.g. avg), which are calculated across assets

Currently, the Explorer FE accomplishes this by:

  • recursively calling /assets?offset=...&limit=... to get a full map of indexed assets
  • calling /x/transactions/aggregates?assetID=... for each asset to get its metrics

How can we improve load times of this information?

I propose to enhance /assets or create a new endpoint with aggregation data for each asset. The response would include aggregation data for fixed time periods like 1d, 1m, 1y, etc.

"aggregates": {
   "1d": {
        startTime: "2020-09-02T14:52:59.874Z"
        endTime: "2020-09-03T14:52:59.874Z" 
        addressCount: 0
        assetCount: 0
        outputCount: 0
        transactionCount: 0
        transactionVolume: "0"
   },
   "1m": {...},
    "1y": {...}
}

The time periods could be scheduled on a job by the backend or specified by the request.
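A hypothetical Go shape for the proposed response, mirroring the JSON sketch above; the type and field names are illustrative only and do not exist in Ortelius:

package models

import "time"

// AssetAggregates maps a period label ("1d", "1m", "1y", ...) to its metrics.
type AssetAggregates map[string]PeriodAggregate

// PeriodAggregate mirrors the fields in the JSON sketch above.
type PeriodAggregate struct {
	StartTime         time.Time `json:"startTime"`
	EndTime           time.Time `json:"endTime"`
	AddressCount      uint64    `json:"addressCount"`
	AssetCount        uint64    `json:"assetCount"`
	OutputCount       uint64    `json:"outputCount"`
	TransactionCount  uint64    `json:"transactionCount"`
	TransactionVolume string    `json:"transactionVolume"`
}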

transaction ID was not found in the Avalanche Explorer: AcHrhZQFHnAkbp5U4LmbpjjUnSFAZqnDcLwNr54NYJo8WAbjU

Transaction Details Not Found
A record for this transaction ID was not found in the Avalanche Explorer

AcHrhZQFHnAkbp5U4LmbpjjUnSFAZqnDcLwNr54NYJo8WAbjU

"Your withdrawal showed "Unconfirmed" or "Pending confirmation" on the blockchain, which means that the transaction has already been processed by Binance and waiting to be confirmed by the blockchain. After the transaction has been confirmed by the blockchain and gained a certain amount of confirmations, it will be credited to the recipient account."

AVAX transfer

I made two AVAX withdrawals from Paribu to Binance (January 8, 14:33). The transfer hasn't happened yet, and a lot of time has passed since.

Transaction Details Not Found

A record for this transaction ID was not found in the Avalanche Explorer

2kyPiJdGkYLskNEXPyYX6f5iMBy9m2ZW1z6HGhoyWvJmbHxEyP

Can you help me?

producer and indexer error

Producer operation error, the log is as follows:

(docker logs -f ava_producer_1)

INFO [03-01|02:17:32] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#319: starting processing block -1
[2021-03-01T02:17:32.982266296Z]: job:update-tx-pool event:dbr.exec.exec err:Error 1146: Table 'ortelius.tx_pool' doesn't exist kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('2PnhAqiebtfg8uQ4gV7R8jeptxS7YgreiTbTrUJRn22yz7gLhs',1,'2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5','2pAo2eMuKaMyhKWAyyKZe6FKwfEnDczfYjgYcLbJcJKzrLGb6F',?,0,'1-2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5-cchain','2021-03-01 02:17:32.925920')]
[2021-03-01T02:17:32.982280517Z]: job:update-tx-pool event:dbr.exec time:607 μs kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('2PnhAqiebtfg8uQ4gV7R8jeptxS7YgreiTbTrUJRn22yz7gLhs',1,'2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5','2pAo2eMuKaMyhKWAyyKZe6FKwfEnDczfYjgYcLbJcJKzrLGb6F',?,0,'1-2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5-cchain','2021-03-01 02:17:32.925920')]
ERROR[03-01|02:17:32] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#475: Unknown error: Error 1146: Table 'ortelius.tx_pool' doesn't exist (tx_pool)
INFO [03-01|02:17:32] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#393: close producer 1 2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5 cchain
[2021-03-01T02:17:32.982322394Z]: job:general event:close
INFO [03-01|02:17:32] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#509: Exiting worker for cchain
ERROR[03-01|02:17:32] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#340: Error running worker: Error 1146: Table 'ortelius.tx_pool' doesn't exist (tx_pool)
[2021-03-01T02:17:33.00088649Z]: job:write-buffer event:dbr.exec.exec err:Error 1146: Table 'ortelius.tx_pool' doesn't exist kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('ZJyGCcpBWXgaFgib1mFEaoQq86eVdV7GpBgnTbYSwSk3VLcib',1,'11111111111111111111111111111111LpoYY','2YHbbBzAtykroNag7jKToe5XZsnRHifJsNtfiMYJjgnLANXzpL',?,0,'1-11111111111111111111111111111111LpoYY-decisions','2021-03-01 02:17:28.997657')]
[2021-03-01T02:17:33.000899854Z]: job:write-buffer event:dbr.exec time:429 μs kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('ZJyGCcpBWXgaFgib1mFEaoQq86eVdV7GpBgnTbYSwSk3VLcib',1,'11111111111111111111111111111111LpoYY','2YHbbBzAtykroNag7jKToe5XZsnRHifJsNtfiMYJjgnLANXzpL',?,0,'1-11111111111111111111111111111111LpoYY-decisions','2021-03-01 02:17:28.997657')]
WARN [03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/write_buffer.go#256: Error writing to db (retry):%!(EXTRA *fmt.wrapError=Error 1146: Table 'ortelius.tx_pool' doesn't exist (tx_pool))
[2021-03-01T02:17:33.001367943Z]: job:write-buffer event:dbr.exec.exec err:Error 1146: Table 'ortelius.tx_pool' doesn't exist kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('2Cc9zanTncTyeenYHF7fXj1b4gm5QgUkdfy3aVHxmN6eMToKrm',1,'11111111111111111111111111111111LpoYY','2YHbbBzAtykroNag7jKToe5XZsnRHifJsNtfiMYJjgnLANXzpL',?,0,'1-11111111111111111111111111111111LpoYY-consensus','2021-03-01 02:17:27.997342')]
[2021-03-01T02:17:33.001379821Z]: job:write-buffer event:dbr.exec time:399 μs kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('2Cc9zanTncTyeenYHF7fXj1b4gm5QgUkdfy3aVHxmN6eMToKrm',1,'11111111111111111111111111111111LpoYY','2YHbbBzAtykroNag7jKToe5XZsnRHifJsNtfiMYJjgnLANXzpL',?,0,'1-11111111111111111111111111111111LpoYY-consensus','2021-03-01 02:17:27.997342')]
WARN [03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/write_buffer.go#256: Error writing to db (retry):%!(EXTRA *fmt.wrapError=Error 1146: Table 'ortelius.tx_pool' doesn't exist (tx_pool))
INFO [03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#412: Starting worker for cchain
[2021-03-01T02:17:33.183998403Z]: job:get-block event:dbr.select time:298 μs kvs:[sql:SELECT cast(case when max(block) is null then -1 else max(block) end as char) as block FROM cvm_blocks]
INFO [03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#319: starting processing block -1
[2021-03-01T02:17:33.246287019Z]: job:update-tx-pool event:dbr.exec.exec err:Error 1146: Table 'ortelius.tx_pool' doesn't exist kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('2PnhAqiebtfg8uQ4gV7R8jeptxS7YgreiTbTrUJRn22yz7gLhs',1,'2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5','2pAo2eMuKaMyhKWAyyKZe6FKwfEnDczfYjgYcLbJcJKzrLGb6F',?,0,'1-2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5-cchain','2021-03-01 02:17:33.187759')]
[2021-03-01T02:17:33.24630214Z]: job:update-tx-pool event:dbr.exec time:502 μs kvs:[sql:INSERT INTO `tx_pool` (`id`,`network_id`,`chain_id`,`msg_key`,`serialization`,`processed`,`topic`,`created_at`) VALUES ('2PnhAqiebtfg8uQ4gV7R8jeptxS7YgreiTbTrUJRn22yz7gLhs',1,'2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5','2pAo2eMuKaMyhKWAyyKZe6FKwfEnDczfYjgYcLbJcJKzrLGb6F',?,0,'1-2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5-cchain','2021-03-01 02:17:33.187759')]
ERROR[03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#475: Unknown error: Error 1146: Table 'ortelius.tx_pool' doesn't exist (tx_pool)
INFO [03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#393: close producer 1 2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5 cchain
[2021-03-01T02:17:33.246354515Z]: job:general event:close
INFO [03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#509: Exiting worker for cchain
ERROR[03-01|02:17:33] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#340: Error running worker: Error 1146: Table 'ortelius.tx_pool' doesn't exist (tx_pool)

Indexer operation error, the log is as follows:

(docker logs -f ava_indexer_1)

2021/03/01 02:20:08 Failed to run: Failed to index avm genesis tx 0
 --- at /go/src/github.com/ava-labs/ortelius/services/indexes/avm/writer.go:253 (Writer.insertGenesis) ---
Caused by: Error 1054: Unknown column 'transaction_id' in 'field list' (output_addresses_accumulate_out)
INFO [03-01|02:21:10] /go/src/github.com/ava-labs/ortelius/stream/consumers/indexer.go#100: bootstrap 1 vm pvm chain 11111111111111111111111111111111LpoYY
INFO [03-01|02:21:11] /go/src/github.com/ava-labs/ortelius/stream/consumers/indexer.go#108: bootstrap complete 1 vm avm chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
INFO [03-01|02:21:11] /go/src/github.com/ava-labs/ortelius/stream/consumers/indexer.go#100: bootstrap 1 vm avm chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
INFO [03-01|02:21:12] /go/src/github.com/ava-labs/ortelius/stream/consumers/indexer.go#108: bootstrap complete 1 vm avm chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
[2021-03-01T02:21:12.684095903Z]: job:general event:close
2021/03/01 02:21:12 Failed to run: Failed to index avm genesis tx 0
 --- at /go/src/github.com/ava-labs/ortelius/services/indexes/avm/writer.go:253 (Writer.insertGenesis) ---
Caused by: Error 1054: Unknown column 'transaction_id' in 'field list' (output_addresses_accumulate_out)

The docker ps output is as follows:

CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS                          PORTS                                              NAMES
054b1d9f196c        avaplatform/avalanchego:v1.2.0          "/bin/sh -cx 'exec .…"   44 minutes ago      Up 44 minutes                   127.0.0.1:9650->9650/tcp                           ava_avalanche_1
ee66b7f97d1d        avaplatform/ortelius:c562648            "/opt/orteliusd api …"   44 minutes ago      Up 43 minutes                   127.0.0.1:8080->8080/tcp                           ava_api_1
893335b0da78        avaplatform/ortelius:c562648            "/opt/orteliusd stre…"   44 minutes ago      Restarting (1) 40 seconds ago                                                      ava_indexer_1
77b52de7d235        avaplatform/ortelius:c562648            "/opt/orteliusd stre…"   44 minutes ago      Up 44 minutes                                                                      ava_producer_1
45203d1bd38e        confluentinc/cp-kafka:5.4.3             "/etc/confluent/dock…"   44 minutes ago      Up 44 minutes                   0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp   ava_kafka_1
973f0b9b1c39        mysql:8.0.22                            "docker-entrypoint.s…"   44 minutes ago      Up 44 minutes                   0.0.0.0:3306->3306/tcp, 33060/tcp                  ava_mysql_1
c724993dc3b6        redis:6.0.9-alpine3.12                  "docker-entrypoint.s…"   44 minutes ago      Up 44 minutes                   0.0.0.0:6379->6379/tcp                             ava_redis_1
0a3d1e1f44d8        confluentinc/cp-zookeeper:5.4.3         "/etc/confluent/dock…"   44 minutes ago      Up 44 minutes                   2888/tcp, 3888/tcp, 0.0.0.0:32774->2181/tcp        ava_zookeeper_1

This is my docker-compose.yml file

version: '3.5'
volumes:
  avalanche-ipcs:
services:
  mysql:
    image: "mysql:8.0.22"
    volumes:
      - ./data/mysql:/var/lib/mysql
      - ./my.cnf:/etc/mysql/my.cnf
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: ortelius
    restart: on-failure
  migrate:
    image: "migrate/migrate:v4.13.0"
    volumes:
      - ./ortelius/services/db/migrations:/migrations
    depends_on:
      - mysql
    entrypoint: ["/bin/sh"]
    command: |
      -c 'while ! migrate -path=/migrations/ -database "mysql://root:password@tcp(mysql:3306)/ortelius" up; do
        sleep 1
      done'
    restart: on-failure
  redis:
    image: "redis:6.0.9-alpine3.12"
    command: redis-server
    ports:
      - "6379:6379"
    restart: on-failure
  zookeeper:
    image: "confluentinc/cp-zookeeper:5.4.3"
    ports:
      - 2181
    environment:
      - ZOOKEEPER_SERVER_ID=1
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_SERVERS=zookeeper:4182:5181
    volumes:
      - ./data/zookeeper/data:/var/lib/zookeeper/data/
      - ./log/zookeeper/logs:/var/lib/zookeeper/log/
    restart: on-failure
  kafka:
    image: "confluentinc/cp-kafka:5.4.3"
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    volumes:
      - ./data/kafka:/var/lib/kafka/data/
    restart: on-failure
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://127.0.0.1:29092

      KAFKA_BROKER_ID: 1
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_HEAP_OPTS: -Xms256M -Xmx256M -verbose:gc
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_NUM_PARTITIONS: 8
      KAFKA_OFFSETS_RETENTION_MINUTES: 446400

      # Disable replication and lower thread count
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_MIN_INSYNC_REPLICAS: 1

      KAFKA_NUM_RECOVERY_THREADS_PER_DATA_DIR: 1
      KAFKA_NUM_NETWORK_THREADS: 3
      KAFKA_NUM_IO_THREADS: 3

      # Set retention policies
      KAFKA_LOG_CLEANUP_POLICY: compact
      KAFKA_LOG_RETENTION_BYTES: -1
      KAFKA_LOG_RETENTION_CHECK_INTERVAL_MS: 300000
      KAFKA_LOG_RETENTION_HOURS: -1
      KAFKA_LOG_ROLL_HOURS: 24
      KAFKA_LOG_SEGMENT_BYTES: 1048576
      KAFKA_LOG_SEGMENT_DELETE_DELAY_MS: 60000
  avalanche:
    env_file:
      - ./production.env
    image: "avaplatform/avalanchego:v1.2.0"
    command: /bin/sh -cx "exec ./build/avalanchego
      --network-id=$${NETWORKID}
      --db-dir=/var/lib/avalanche
      --log-level=info
      --http-host=0.0.0.0
      --ipcs-chain-ids=$${P_CHAINID},$${X_CHAINID}
      --coreth-config='{\"rpc-gas-cap\":2500000000,\"rpc-tx-fee-cap\":100,\"eth-api-enabled\":true,\"debug-api-enabled\":true,\"tx-pool-api-enabled\":true}'
      "
    ports:
      - 127.0.0.1:9650:9650
    volumes:
      - ./data/avalanche:/var/lib/avalanche
      - avalanche-ipcs:/tmp
    depends_on:
      - producer
    restart: always
  kafkatopics:
    env_file:
      - ./production.env
    depends_on:
      - kafka
    image: "confluentinc/cp-kafka:5.4.3"
    command: bash -cx "kafka-topics --bootstrap-server $${KAFKA_HOST} --list &&
      kafka-topics --create --if-not-exists --zookeeper $${ZOOKEEPER_HOST} --replication-factor $${KAFKA_REPLICATIONFACTOR} --partitions $${KAFKA_NUMPARTITIONS} --topic $${NETWORKID}-$${C_CHAINID}-cchain &&
      kafka-topics --create --if-not-exists --zookeeper $${ZOOKEEPER_HOST} --replication-factor $${KAFKA_REPLICATIONFACTOR} --partitions $${KAFKA_NUMPARTITIONS} --topic $${NETWORKID}-$${P_CHAINID}-consensus &&
      kafka-topics --create --if-not-exists --zookeeper $${ZOOKEEPER_HOST} --replication-factor $${KAFKA_REPLICATIONFACTOR} --partitions $${KAFKA_NUMPARTITIONS} --topic $${NETWORKID}-$${P_CHAINID}-decisions &&
      kafka-topics --create --if-not-exists --zookeeper $${ZOOKEEPER_HOST} --replication-factor $${KAFKA_REPLICATIONFACTOR} --partitions $${KAFKA_NUMPARTITIONS} --topic $${NETWORKID}-$${X_CHAINID}-consensus &&
      kafka-topics --create --if-not-exists --zookeeper $${ZOOKEEPER_HOST} --replication-factor $${KAFKA_REPLICATIONFACTOR} --partitions $${KAFKA_NUMPARTITIONS} --topic $${NETWORKID}-$${X_CHAINID}-decisions
      "
  producer: &ortelius-app
    image: "avaplatform/ortelius:c562648"
    command: ["stream", "producer", "-c", "/opt/config.json"]
    external_links:
      - zookeeper
      - kafka
      - mysql
      - redis
    depends_on:
      - kafkatopics
    volumes:
      - avalanche-ipcs:/tmp
    restart: on-failure
  indexer:
    <<: *ortelius-app
    command: ["stream", "indexer", "-c", "/opt/config.json"]
  api:
    <<: *ortelius-app
    command: ["api", "-c", "/opt/config.json"]
    ports:
      - 127.0.0.1:8080:8080

How should I solve this problem?

Duplicate txs

Saw a bunch of dups appear in the transactions endpoint around 2020-08-31 at 8:25:19 PM.


Example ID: 2jBHZ8fb8LfeCdSGyNwWw2yrKRyeNeLZ7DUDPVKD6hbTTbs5ea

AVAX: Transaction Details Not Found

I made two AVAX withdrawals from Paribu to TRBinance. Transaction Details Not Found:
A record for this transaction ID was not found in the Avalanche Explorer

  1. 2DTi9ernZWF1Y7wXQp6MBAirxhS5CejQuJpMcENcxBaGjcXFwF

  2. 2Qvymv7WmAfAFf3v3TxEFRCY1TBpr6Nu9Q7NBMdxrGmuBJ2rVg

Can you help me?

Redis and mysql too many open files error

I have set up avalanchego v1.1.1 using the config below.

node config
{
	"plugin-dir": "/root/avalanchego/build/plugins",
	"public-ip": <my-public-ip>,
	"log-level": "Info",
	"log-dir":"logs",
	"signature-verification-enabled":"true",
	"staking-enabled":"true",
	"api-admin-enabled": "true",
	"api-info-enabled": "true",
	"api-ipcs-enabled":"true",
	"api-metrics-enabled": "false",
	"db-dir":"db",
	"p2p-tls-enabled":"true",
	"http-port": "8000",
	"ipcs-path":"/tmp",
	"ipcs-chain-ids":["2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM","2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5","GF5KEgiGs2yhDFTL1KgfmszU61U3ipSMpQJZsra5nFysots68","29ZujU1g8h2qc2JgQjFo7Wby3F2hJhVGohkbeQ9iZ8LjMy2fmZ","iybTYo3sgDgL8c9hyH8qBJaHJ4V5x5Pzy8mXcTSqneoHHPPfN","2ghHT5a5kbtm4rn2vfod7sZcLRBQHLFa5827X72hKRAGXCKkAq"],
	"network-id":"mainnet",
	"coreth-config": {
		  	"snowman-api-enabled": true,
        		"coreth-admin-api-enabled": true,
		        "net-api-enabled": true,
		        "rpc-gas-cap": 2500000000,
		        "rpc-tx-fee-cap": 100,
		        "eth-api-enabled": true,
		        "tx-pool-api-enabled": true,
		        "debug-api-enabled": true,
	        	"web3-api-enabled": true
			}
}	

I also built and ran the Ortelius master branch with this config:

{
  "networkID": 1,
  "logDirectory": "/var/log/ortelius",
  "ipcRoot": "ipc:///tmp",
  "listenAddr": "localhost:8080",
  "chains": {
    "11111111111111111111111111111111LpoYY": {
      "id": "11111111111111111111111111111111LpoYY",
      "alias": "P",
      "vmType": "pvm"
    },
    "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM": {
      "id": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM",
      "alias": "X",
      "vmType": "avm"
    },
    "2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5":{
	    "id":"2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5",
	    "alias":"C",
	    "vmType":"evm"
    }
  },
  "stream": {
    "kafka": {
      "brokers": [
        "127.0.0.1:9092"
      ],
      "groupName": "indexer"
    }
  },
  "services": {
    "redis": {
      "addr": "127.0.0.1:6379"
    },
    "db": {
      "dsn": "myuser:pass@tcp(127.0.0.1:3306)/ortelius",
      "driver": "mysql"
    }
  }
}

After indexing for several minutes, the indexer starts to produce "too many open files" logs.

Ortelius indexer logs
INFO [12-05|15:36:16] /root/ortelius/stream/processor.go#134: Exiting worker for chain 11111111111111111111111111111111LpoYY
ERROR[12-05|15:36:16] /root/ortelius/stream/processor.go#83: Error running worker: dial tcp 127.0.0.1:3306: socket: too many open files
[2020-12-05T15:36:16.762975572Z]: job:general event:connect.redis err:dial tcp 127.0.0.1:6379: socket: too many open files kvs:[addr:127.0.0.1:6379 db:0]
INFO [12-05|15:36:16] /root/ortelius/stream/processor.go#134: Exiting worker for chain 2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5
ERROR[12-05|15:36:16] /root/ortelius/stream/processor.go#83: Error running worker: dial tcp 127.0.0.1:6379: socket: too many open files
INFO [12-05|15:36:16] /root/ortelius/stream/processor.go#128: Starting worker for chain 11111111111111111111111111111111LpoYY
[2020-12-05T15:36:16.957194407Z]: job:general event:connect.redis kvs:[addr:127.0.0.1:6379 db:0]
[2020-12-05T15:36:16.957294503Z]: job:general event:connect.db err:dial tcp 127.0.0.1:3306: socket: too many open files kvs:[driver:mysql dsn:root:[removed]@tcp(127.0.0.1:3306)/ortelius rodsn:root:[removed]@tcp(127.0.0.1:3306)/ortelius]
INFO [12-05|15:36:16] /root/ortelius/stream/processor.go#134: Exiting worker for chain 11111111111111111111111111111111LpoYY
ERROR[12-05|15:36:16] /root/ortelius/stream/processor.go#83: Error running worker: dial tcp 127.0.0.1:3306: socket: too many open files
INFO [12-05|15:36:16] /root/ortelius/stream/processor.go#128: Starting worker for chain 2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5
[2020-12-05T15:36:17.030333158Z]: job:general event:connect.redis err:dial tcp 127.0.0.1:6379: socket: too many open files kvs:[addr:127.0.0.1:6379 db:0]
INFO [12-05|15:36:17] /root/ortelius/stream/processor.go#134: Exiting worker for chain 2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5
ERROR[12-05|15:36:17] /root/ortelius/stream/processor.go#83: Error running worker: dial tcp 127.0.0.1:6379: socket: too many open files
INFO [12-05|15:36:17] /root/ortelius/stream/processor.go#128: Starting worker for chain 11111111111111111111111111111111LpoYY
[2020-12-05T15:36:17.158067059Z]: job:general event:connect.redis kvs:[addr:127.0.0.1:6379 db:0]
[2020-12-05T15:36:17.158188056Z]: job:general event:connect.db err:dial tcp 127.0.0.1:3306: socket: too many open files kvs:[driver:mysql dsn:root:[removed]@tcp(127.0.0.1:3306)/ortelius rodsn:root:[removed]@tcp(127.0.0.1:3306)/ortelius]
INFO [12-05|15:36:17] /root/ortelius/stream/processor.go#134: Exiting worker for chain 11111111111111111111111111111111LpoYY
ERROR[12-05|15:36:17] /root/ortelius/stream/processor.go#83: Error running worker: dial tcp 127.0.0.1:3306: socket: too many open files                                  
INFO [12-05|15:36:17] /root/ortelius/stream/processor.go#128: Starting worker for chain 2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5
[2020-12-05T15:36:17.291841428Z]: job:general event:connect.redis err:dial tcp 127.0.0.1:6379: socket: too many open files kvs:[addr:127.0.0.1:6379 db:0]
INFO [12-05|15:36:17] /root/ortelius/stream/processor.go#134: Exiting worker for chain 2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5
ERROR[12-05|15:36:17] /root/ortelius/stream/processor.go#83: Error running worker: dial tcp 127.0.0.1:6379: socket: too many open files
INFO [12-05|15:36:17] /root/ortelius/stream/processor.go#128: Starting worker for chain 11111111111111111111111111111111LpoYY
[2020-12-05T15:36:17.368255495Z]: job:general event:connect.redis kvs:[addr:127.0.0.1:6379 db:0]
[2020-12-05T15:36:17.36838255Z]: job:general event:connect.db err:dial tcp 127.0.0.1:3306: socket: too many open files kvs:[driver:mysql dsn:root:[removed]@tcp(127.0.0.1:3306)/ortelius rodsn:root:[removed]@tcp(127.0.0.1:3306)/ortelius]

It seems that connections are not handled correctly and are leaking, because I set the max open files number to 100500 on the system before running the indexer.

root@avalanche:~# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63917
max locked memory       (kbytes, -l) 65536
max memory size         (kbytes, -m) unlimited
open files                      (-n) 100500
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63917
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

root@avalanche:~# tail -15 /etc/security/limits.conf 
# hard limit for max opened files for linuxtechi user
root       hard    nofile          100500
# soft limit for max opened files for linuxtechi user
root       soft    nofile          100500

mysql       hard    nofile          512
# soft limit for max opened files for linuxtechi user
mysql       soft    nofile           512

redis       hard    nofile          2048
# soft limit for max opened files for linuxtechi user
redis       soft    nofile          2048
#
#
# End of file

MySQL connection and open file limits are set to 512; Redis limits for open files are set to 2048.
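The symptom looks like a new MySQL/Redis client being dialed on every worker retry instead of a shared pool being reused. A minimal sketch of capping connections with database/sql is below; this is an assumption about the cause, not Ortelius's actual connection code:

package db

import (
	"database/sql"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

// open returns a single shared pool with bounded connection counts, so file
// descriptors cannot grow without limit under retries.
func open(dsn string) (*sql.DB, error) {
	pool, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	pool.SetMaxOpenConns(64)
	pool.SetMaxIdleConns(64)
	pool.SetConnMaxLifetime(5 * time.Minute)
	return pool, nil
}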

Discrepancy between count and offset in /x/addresses endpoint

Description

There is a discrepancy between count and offset in the /x/addresses endpoint.

How to reproduce

  1. curl -s https://explorerapi.ava.network/x/addresses | jq .count
    It returns 654754. So I expect there are 654754 addresses and I can retrieve each of them.

  2. curl -s https://explorerapi.ava.network/x/addresses?offset=654753 | jq
    I skip the first 654_753 addresses and I expect to retrieve the next addresses, but it returns:

{
  "count": 654753,
  "addresses": []
}

Indexer service returns error

Error: ERROR[08-31|09:12:58] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded

failed to load resource

I started the node using docker-compose, and then there were two problems when requesting the API:

  • some transactions could not get transaction information
  • after the node was running for a period of time, the request interface returned the following error:
{
    "code": 500,
    "message": "failed to load resource"
}

MySQL and API listening publicly intended?

Is it intended behaviour that MySQL and the Ortelius API are by default listening publicly and not only on localhost?
I tried to block port 8080 and the MySQL port with UFW, but due to Docker's direct interaction with iptables, the ports remain accessible from the internet.

I think it could be sufficient for the default configuration to only listen on 127.0.0.1:8080.

TWEAK: Don't dress Genesis

Genesis is special both in that it's "well known" (hardcoded in avalanchego) and in that it's huge (~1.4MB). We don't show its details in the explorer in most places because of this, and adding this data to any query that returns Genesis is just wasted.

We should have a special flag/endpoint to make Genesis available, but we should not dress it otherwise.

Binance withdrawal pending

I initiated a withdrawal from Binance but the transaction never materialized and it still says processing on the Binance website.
TxID: 2wQpkJ52L3Ss9F6say2zwGXA5d6mZUwPd7R1av9MyYJagGGi5P

What can I do?

PANIC when calling X/addresses

Hello,

I finally managed to start Ortelius locally, but when I call /X/addresses it crashes with the following error:

api_1              | INFO [07-10|16:17:06] /go/src/github.com/ava-labs/ortelius/api/server.go#59: Server listening on :8080
stream-consumer_1  | [2020-07-10T16:17:06.489773002Z]: job:bootstrap event:dbr.exec time:1427 μs kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L sql:INSERT INTO `avm_outputs` (`id`,`chain_id`,`transaction_id`,`output_index`,`asset_id`,`output_type`,`amount`,`created_at`,`locktime`,`threshold`) VALUES ('yDzUcttbbkkedkFfF3SewV7SM1kKhNaH52Hfkqkmw15LKSuSK','rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L','21d7KVtPrubc5fHr6CGNcgbUb4seUjmZKr35ZX7BZb5iP8pXWA',9,'21d7KVtPrubc5fHr6CGNcgbUb4seUjmZKr35ZX7BZb5iP8pXWA',255,45000000000000000,'2020-07-10 16:17:06.000000',0,1)]
stream-consumer_1  | [2020-07-10T16:17:06.490852831Z]: job:bootstrap event:dbr.exec time:1003 μs kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L sql:INSERT INTO `avm_output_addresses` (`output_id`,`address`) VALUES ('yDzUcttbbkkedkFfF3SewV7SM1kKhNaH52Hfkqkmw15LKSuSK','DpL8PTsrjtLzv5J8LL3D2A6YcnCTqrNH9')]
stream-consumer_1  | [2020-07-10T16:17:06.492047253Z]: job:bootstrap event:dbr.exec time:1124 μs kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L sql:INSERT INTO `avm_outputs` (`id`,`chain_id`,`transaction_id`,`output_index`,`asset_id`,`output_type`,`amount`,`created_at`,`locktime`,`threshold`) VALUES ('Rb3ysWb2GcsHXGU6wePDZwKU6y1hEhRsQzjfyZ9EFsYENHtKf','rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L','21d7KVtPrubc5fHr6CGNcgbUb4seUjmZKr35ZX7BZb5iP8pXWA',10,'21d7KVtPrubc5fHr6CGNcgbUb4seUjmZKr35ZX7BZb5iP8pXWA',255,45000000000000000,'2020-07-10 16:17:06.000000',0,1)]
stream-consumer_1  | [2020-07-10T16:17:06.498880712Z]: job:bootstrap event:dbr.exec time:6 ms kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L sql:INSERT INTO `avm_output_addresses` (`output_id`,`address`) VALUES ('Rb3ysWb2GcsHXGU6wePDZwKU6y1hEhRsQzjfyZ9EFsYENHtKf','JLrYNMYXANGj43BfWXBxMMAEenUBp1Sbn')]
stream-consumer_1  | [2020-07-10T16:17:06.519417243Z]: job:bootstrap event:dbr.exec time:20 ms kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L sql:INSERT INTO `avm_assets` (`id`,`chain_Id`,`name`,`symbol`,`denomination`,`alias`,`current_supply`) VALUES ('21d7KVtPrubc5fHr6CGNcgbUb4seUjmZKr35ZX7BZb5iP8pXWA','rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L','AVA','AVA',9,'AVA',450000000000000000)]
stream-consumer_1  | [2020-07-10T16:17:06.525772515Z]: job:bootstrap event:dbr.exec time:6 ms kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L sql:INSERT INTO `avm_transactions` (`id`,`chain_id`,`type`,`created_at`,`canonical_serialization`) VALUES ('21d7KVtPrubc5fHr6CGNcgbUb4seUjmZKr35ZX7BZb5iP8pXWA','rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L','create_asset','2020-07-10 16:17:06.000000',?)]
stream-consumer_1  | [2020-07-10T16:17:06.545563423Z]: job:bootstrap event:dbr.commit kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L]
stream-consumer_1  | [2020-07-10T16:17:06.545619909Z]: job:bootstrap status:success time:128 ms kvs:[chain_id:rrEWX7gc7D9mwcdrdBxBTdqh1a7WDVsMuadhTZgyXfFcRz45L]
stream-consumer_1  | ERROR[07-10|16:17:16] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:17:26] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:17:36] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:17:46] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:17:56] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:18:06] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:18:16] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:18:26] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:18:36] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:18:46] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:18:56] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:19:06] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:19:16] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:19:26] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:19:36] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:19:46] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
stream-consumer_1  | ERROR[07-10|16:19:56] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
api_1              | [2020-07-10T16:19:59.331351046Z]: job:request.x.addresses status:success time:554 μs
api_1              | [2020-07-10T16:19:59.662912197Z]: job:request.favicon.ico status:success time:38 μs
api_1              | [2020-07-10T16:20:05.459470988Z]: job:request. status:success time:22 μs
stream-consumer_1  | ERROR[07-10|16:20:06] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
api_1              | [2020-07-10T16:20:09.872044923Z]: job:request.X. status:success time:147 μs
stream-consumer_1  | ERROR[07-10|16:20:16] /go/src/github.com/ava-labs/ortelius/stream/processor.go#153: Unknown error: context deadline exceeded
api_1              | ERROR 2020/07/10 16:20:16.710169 panic_handler.go:26: PANIC
api_1              | URL: /X/addresses
api_1              | ERROR: runtime error: invalid memory address or nil pointer dereference
api_1              | STACK:
api_1              | goroutine 11 [running]:
api_1              | github.com/gocraft/web.(*Router).handlePanic(0xc0004ffba0, 0xc000198180, 0xc0001981a0, 0xce3100, 0x16c1110)
api_1              | 	/go/pkg/mod/github.com/gocraft/[email protected]/router_serve.go:277 +0x4fc
api_1              | github.com/gocraft/web.(*Router).ServeHTTP.func1(0xc0004ffba0, 0xc000198180)
api_1              | 	/go/pkg/mod/github.com/gocraft/[email protected]/router_serve.go:43 +0x6f
api_1              | panic(0xce3100, 0x16c1110)
api_1              | 	/usr/local/go/src/runtime/panic.go:969 +0x166
api_1              | github.com/ava-labs/ortelius/services/cache.(*Cache).Get(0x0, 0x1059320, 0xc0002f81e0, 0xc00027c180, 0x59, 0x59, 0xc00001e000, 0xd06840, 0xd71420, 0x1055160)
api_1              | 	/go/src/github.com/ava-labs/ortelius/services/cache/cache.go:46 +0x4a
api_1              | github.com/ava-labs/ortelius/api.(*RootRequestContext).WriteCacheable(0xc000186700, 0x1055160, 0xc000198180, 0xc0002f83c0, 0x5, 0x6, 0xc0001924f8)
api_1              | 	/go/src/github.com/ava-labs/ortelius/api/root.go:55 +0xb5
api_1              | github.com/ava-labs/ortelius/services/indexes/avm.(*APIContext).ListAddresses(0xc000072820, 0x10624c0, 0xc000198180, 0xc0001981a0)
api_1              | 	/go/src/github.com/ava-labs/ortelius/services/indexes/avm/api.go:169 +0x263
api_1              | reflect.Value.call(0xcb8660, 0xf4b950, 0x13, 0xded953, 0x4, 0xc000192980, 0x3, 0x3, 0x45a9c7, 0xcda140, ...)
api_1              | 	/usr/local/go/src/reflect/value.go:460 +0x8ab
api_1              | reflect.Value.Call(0xcb8660, 0xf4b950, 0x13, 0xc000192980, 0x3, 0x3, 0x101000007dc20, 0xc88c20, 0xc000192990)
api_1              | 	/usr/local/go/src/reflect/value.go:321 +0xb4
api_1              | github.com/gocraft/web.middlewareStack.func1(0x10624c0, 0xc000198180, 0xc0001981a0)
api_1              | 	/go/pkg/mod/github.com/gocraft/[email protected]/router_serve.go:139 +0x3a1
api_1              | github.com/ava-labs/ortelius/services/indexes/avm.NewAPIRouter.func1(0xc000072820, 0x10624c0, 0xc000198180, 0xc0001981a0, 0xc0001b22f0)
api_1              | 	/go/src/github.com/ava-labs/ortelius/services/indexes/avm/api.go:43 +0xa7
api_1              | reflect.Value.call(0xcce860, 0xc0000bd380, 0x13, 0xded953, 0x4, 0xc000192d60, 0x4, 0x4, 0xc000388205, 0xd3c500, ...)
api_1              | 	/usr/local/go/src/reflect/value.go:460 +0x8ab
api_1              | reflect.Value.Call(0xcce860, 0xc0000bd380, 0x13, 0xc000192d60, 0x4, 0x4, 0xc000186700, 0x16, 0xc000192dc0)
api_1              | 	/usr/local/go/src/reflect/value.go:321 +0xb4
api_1              | github.com/gocraft/web.(*middlewareHandler).invoke(0xc0003d33e0, 0xdac240, 0xc000072820, 0x16, 0x10624c0, 0xc000198180, 0xc0001981a0, 0xc0001b22f0)
api_1              | 	/go/pkg/mod/github.com/gocraft/[email protected]/router_serve.go:159 +0x26a
api_1              | github.com/gocraft/web.middlewareStack.func1(0x10624c0, 0xc000198180, 0xc0001981a0)
api_1              | 	/go/pkg/mod/github.com/gocraft/[email protected]/router_serve.go:148 +0x18b
api_1              | github.com/ava-labs/ortelius/api.(*RootRequestContext).setHeaders(0xc000186700, 0x10624c0, 0xc000198180, 0xc0001981a0, 0xc0001b22f0)
api_1              | 	/go/src/github.com/ava-labs/ortelius/api/root.go:158 +0x26d
api_1              | reflect.Value.call(0xcce800, 0xf4b920, 0x13, 0xded953, 0x4, 0xc000193350, 0x4, 0x4, 0x0, 0x3, ...)
api_1              | 	/usr/local/go/src/reflect/value.go:460 +0x8ab
api_1              | reflect.Value.Call(0xcce800, 0xf4b920, 0x13, 0xc000193350, 0x4, 0x4, 0x7097432def9a94c6, 0x300000002, 0x203000)
api_1              | 	/usr/local/go/src/reflect/value.go:321 +0xb4
api_1              | github.com/gocraft/web.(*middlewareHandler).invoke(0xc0002f7ad0, 0xd3c500, 0xc000186700, 0x16, 0x10624c0, 0xc000198180, 0xc0001981a0, 0xc0001b22f0)
api_1              | 	/go/pkg/mod/github.com/gocraft/[email protected]/router_serve.go:159 +0x26a
api_1              | github.com/gocraft/web.middlewareStack.func1(0x10624c0, 0xc000198180, 0xc0001981a0)
api_1              | 	/go/pkg/mod/github.com/gocraft/[email protected]/router_serve.go:148 +0x18b
api_1              | github.com/ava-labs/ortelius/api.newContextSetter.func1(0xc000186700, 0x10624c0, 0xc000198180, 0xc0001981a0, 0xc0001b22f0)
api_1              | 	/go/src/github.com/ava-labs/ortelius/api/root.go:115 +0x1b3
api_1              | reflect.Value.call(0xcce800, 0xc000188f80, 0x13, 0xded953, 0x4, 0xc000193958, 0x4, 0x4, 0x412f0b, 0xc000193920, ...)
api_1              | 	/usr/local/go/src/reflect/value.go:460 +0x8ab
api_1              | reflect.Value.Call(0xcce800, 0xc000188f80, 0x13, 0xc000193958, 0x4, 0x4, 0x0, 0x0, 0xc0001939a8)
api_1              | 	/usr/local/go/src/reflect/value.go:321 +0xb4
api_1              | github.com/gocraft/web.(*middlewareHandler).invoke(0xc0002f7aa0, 0xd3c500, 0xc000186700, 0x16, 0x10624c0, 0xc

dial unix /tmp/1-2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM-decisions: connect: no such file or directory

Follow the steps below:

git clone https://github.com/ava-labs/ortelius.git $GOPATH/github.com/ava-labs/ortelius
cd $GOPATH/github.com/ava-labs/ortelius
make dev_env_start
make standalone_run

Ortelius is running, but I get an error:
producer_1 | INFO [09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#123: Starting worker for chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
producer_1 | INFO [09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#133: Exiting worker for chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
producer_1 | ERROR[09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#84: Error running worker: dial unix /tmp/1-2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM-decisions: connect: no such file or directory
producer_1 | INFO [09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#123: Starting worker for chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
producer_1 | INFO [09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#133: Exiting worker for chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
producer_1 | ERROR[09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#84: Error running worker: dial unix /tmp/1-2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM-consensus: connect: no such file or directory
producer_1 | INFO [09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#123: Starting worker for chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
producer_1 | INFO [09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#133: Exiting worker for chain 2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM
producer_1 | ERROR[09-24|06:07:39] /go/src/github.com/ava-labs/ortelius/stream/processor.go#84: Error running worker: dial unix /tmp/1-2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM-decisions: connect: no such file or directory

I make a request with curl http://localhost:8080/X/transactions and always get the same response...
So what is the problem?

Thanks.

Why did I get a Fuji address from an Ortelius X-Chain request?

Hi everyone,

I sent a request for the createAddress method via Ortelius standalone mode.

I'm using this command:

curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.createAddress", "params": { "username":"myUsername", "password":"myPassword" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X

I use "/ext/bc/X" for the request URL (I would like an X-Chain address), but I got a Fuji testnet address after the request (like this: "X-fuji1njlfwas3queakekdgf6jcv30wux249xjx5n6k7").

Why?

How can I set the parameter for an X-Chain address?

thank you

TPS calculation

Hello!

For the last 36 hours I was sending transactions at an average rate of around 30 TPS in the following way: I created 10k users on a gecko node, then funded 5k users and started sending 1 nAVA from the first 5k users to the second 5k users. That many users are intended so that I don't have to wait for transaction confirmation after a single send. But the explorer shows only 2 TPS for the last 24 hours.

Is there some problem or not? Or did I do something wrong? For each user, I send a transaction only if the previous one for that user was Accepted.

Missed transaction

As you can see this address returns:
https://explorerapi.avax.network/x/addresses/gbAzVifoVBeUDQYNptfdeMkZCimxShZz

  • totalReceived: 2000
  • totalSent: 20000

However, /x/transactions returns only a received transaction (and not even a sent one):
https://explorerapi.avax.network/x/transactions?address=gbAzVifoVBeUDQYNptfdeMkZCimxShZz

There is another example of a missed transaction. Let's take this transaction:
https://explorerapi.avax.network/x/transactions/2fvcavmcBd4JvE7WYzGMMuSnjKwSmeRCnDoqZ1KsnPE82M3tVt

As you can see, this transaction uses as input an output generated in transaction 2sHJpXEXdoVzcsX3cqYBj53gKbDRzyyaVLXVWabAbKwqkzwUEZ, which seems to be missing:
https://explorerapi.avax.network/x/transactions/2sHJpXEXdoVzcsX3cqYBj53gKbDRzyyaVLXVWabAbKwqkzwUEZ

Why is the ortelius DB empty?

hi,
I installed Ortelius on my machine like this:

> git clone https://github.com/ava-labs/ortelius.git $GOPATH/github.com/ava-labs/ortelius
> make dev_env_start
> make standalone_run 

and then I checked the ortelius DB like this:


mysql> use ortelius;
mysql> show tables;
Empty set (0.00 sec)

There are no tables in the ortelius DB. Why?

I also changed the ortelius_standalone, dev_env_mysql_1, and standalone_avalanche default ports, because I use their default ports for other containers:

  • avaplatform/avalanchego:v1.0.0: 9651/tcp, 0.0.0.0:9651->9650/tcp
  • avaplatform/ortelius:516bb13: 0.0.0.0:8081->8080/tcp
  • redis:6.0.5-alpine3.12: 0.0.0.0:6380->6379/tcp
  • dev_env_mysql_1: 33060/tcp, 0.0.0.0:3307->3306/tcp

I updated these new ports and hosts in all project documents. Am I making a mistake because I don't use those default ports?

If not, what should I do to fix this?

thank you

Ortelius missing transactions

hi everyone,

I updated my avalanchego node to avalanchego 1.1.1 approximately 10 days ago, but Ortelius is missing quite a lot of transactions (some transactions aren't in Ortelius; it didn't index them). It was missing transactions on earlier versions (1.0 and 1.1) too, but not this much. I'm thinking of returning to avalanche 1.0 (the first version), or what else can I do? I guess if I update avalanchego to 1.1.4, this problem will continue again :(

thank you

Decide Kafka stream deduplication strategy

Description

Kafka does not deduplicate incoming items, so it's inevitable that we'll get duplicate items. To deal with this we have two mutually inclusive options:

  1. Write to an input topic and have a deduplication consumer write unique items to the final canonical topic.
  2. Require all consumers to handle duplicate items

Segment.io wrote in 2017 about their implementation of Option 1 (https://segment.com/blog/exactly-once-delivery/). However, it may now be possible and better to use a kstream rather than an external consumer.

If Option 1 can only be done over some non-complete window (as suggested in the Segment article), then Option 2 is going to be necessary regardless.

Action Items

  • Decide our deduplication strategy
  • Create Issues to implement the decided strategy

Suggestion

I suggest we go with both options: We ensure all consumers are written to handle duplicate items when possible, but we also implement a deduplication consumer between an ingress topic and the canonical topic to protect the integrity of our canonical data as much as feasible.
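For Option 2, a minimal consumer-side sketch, assuming message IDs are stable across duplicates and that an in-memory window is acceptable (a real deployment would likely use Redis or an on-disk store, as in the Segment approach):

package dedup

import (
	"sync"
	"time"
)

// Window remembers message IDs seen within a TTL so duplicates can be skipped.
type Window struct {
	mu   sync.Mutex
	seen map[string]time.Time
	ttl  time.Duration
}

func NewWindow(ttl time.Duration) *Window {
	return &Window{seen: make(map[string]time.Time), ttl: ttl}
}

// ShouldProcess reports whether id has not been seen within the window,
// recording it as seen. Expired entries are pruned lazily.
func (w *Window) ShouldProcess(id string) bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	now := time.Now()
	for k, t := range w.seen {
		if now.Sub(t) > w.ttl {
			delete(w.seen, k)
		}
	}
	if _, ok := w.seen[id]; ok {
		return false
	}
	w.seen[id] = now
	return true
}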

Not enough arguments in call to client.cmdable.Ping

Running docker-compose yields an error:

sudo docker-compose -f docker/docker-compose.yml up

Building stream-consumer
Step 1/12 : FROM golang:1.14-alpine
 ---> 3289bf11c284
Step 2/12 : WORKDIR /go/src/github.com/ava-labs/ortelius
 ---> Using cache
 ---> 4bd53078b885
Step 3/12 : RUN apk add git
 ---> Using cache
 ---> 6560f4274925
Step 4/12 : COPY . .
 ---> Using cache
 ---> ae70d19018f4
Step 5/12 : RUN go get ./cmds/...
 ---> Running in 8c72d7b6d846
# github.com/ava-labs/ortelius/services
services/connections.go:69:23: not enough arguments in call to client.cmdable.Ping
	have ()
	want (context.Context)
ERROR: Service 'stream-consumer' failed to build: The command '/bin/sh -c go get ./cmds/...' returned a non-zero code: 2

Running ava 0.5.1
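For context, this build error comes from the go-redis v8 API change: every command now takes a context.Context, so Ping() became Ping(ctx). A minimal sketch of the newer call shape (the exact fix in Ortelius may differ, e.g. pinning go-redis to an older version):

package services

import (
	"context"

	"github.com/go-redis/redis/v8"
)

// pingRedis checks connectivity using the v8 signature Ping(ctx).
func pingRedis(addr string) error {
	client := redis.NewClient(&redis.Options{Addr: addr})
	return client.Ping(context.Background()).Err()
}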

Can Ortelius call an HTTP API on X-Chain transactions?

Hi, I opened the same issue for avalanchego too, but I'm not sure it will get considered.
Could such a feature be done?
When a successful transfer that has been accepted by the network happens on the X-Chain,
the node would post the details to an HTTP API that can be specified by a command-line argument.

ortelius stream callbacker -c path/to/config.json
In the config file:
"networkID": 1, "callback_http": "http://127.0.0.1:8080",
and Ortelius would post something like this to our endpoint:

POST http://127.0.0.1:8888/new_transaction HTTP/1.1
Host: 127.0.0.1
Content-Length: 1056
Content-Type: application/json

{
    "transaction_id": "2KwLJBZXsNa9tDgZbJix1DfYBBfmTRmqnyT4nPZk2n1HDntai",

    "inputs": [{
            "input_address": "avax1lmfwfq68f5autmpdx0j0ag3mvr94dqjr9ptjks",
            "asset_id": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
            "amount": 100223
        }, {
            "input_address": "avax1wdpn22pu2qaud9uxca4hdj5e90t6grk064xn49",
            "asset_id": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
            "amount": 200000
        }
    ],

    "outputs": [{
            "output_address": "avax1llaexasqcmlpsn3l254pzrf5qu8gqun323xgcx",
            "asset_id": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
            "amount": 100223
        }, {
            "output_address": "avax1l3dv3vrk0l8pjyhysz8kv354s6aqxw4nlmv0yl",
            "asset_id": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
            "amount": 200000
        }
    ]
}

Currently, all exchanges and other websites regularly scan their users' addresses for new UTXOs, which is why exchanges take time to credit users; we need a better way to notify backends. Without one, users have to wait an uncertain amount of time, which makes AVAX feel slower than even the slowest blockchains. Even if this doesn't fit your roadmap, I would kindly ask for the name of the file that writes new transactions to the MySQL DB; maybe I can find a way to do it myself somehow.
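
This callback does not exist in Ortelius today, but as a thought experiment, a receiving endpoint for the payload proposed above could look like the sketch below. The struct fields mirror the example JSON and are hypothetical, not an Ortelius or AvalancheGo API:

// Hypothetical receiver for the proposed new_transaction callback.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type txInput struct {
	InputAddress string `json:"input_address"`
	AssetID      string `json:"asset_id"`
	Amount       uint64 `json:"amount"`
}

type txOutput struct {
	OutputAddress string `json:"output_address"`
	AssetID       string `json:"asset_id"`
	Amount        uint64 `json:"amount"`
}

type txCallback struct {
	TransactionID string     `json:"transaction_id"`
	Inputs        []txInput  `json:"inputs"`
	Outputs       []txOutput `json:"outputs"`
}

func main() {
	http.HandleFunc("/new_transaction", func(w http.ResponseWriter, r *http.Request) {
		var tx txCallback
		if err := json.NewDecoder(r.Body).Decode(&tx); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		// An exchange backend would credit the relevant accounts here.
		log.Printf("received tx %s with %d output(s)", tx.TransactionID, len(tx.Outputs))
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8888", nil))
}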

Get-transaction request returns an empty response on Ortelius

hi everyone,

I sent a request to the X chain via Ortelius (standalone mode), like this:

curl http://localhost:8080/X/transactions/:24my4bhxn56rxcy2ZP7aAtdFf57aHv5U3qJ5DQdt83dBSzCmMt

but the server returns an empty response:

curl: (52) Empty reply from server

what should I do?

thank you
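
One possibility worth ruling out (an assumption, not a confirmed cause): the leading colon in the path looks like the path-parameter placeholder copied from the API docs, and the request path should contain only the transaction ID. A quick check from Go, assuming the standalone API is listening on localhost:8080:

// Sketch: query the transaction endpoint without the ":" placeholder.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	txID := "24my4bhxn56rxcy2ZP7aAtdFf57aHv5U3qJ5DQdt83dBSzCmMt"
	resp, err := http.Get("http://localhost:8080/X/transactions/" + txID)
	if err != nil {
		log.Fatal(err) // "empty reply" style failures surface here
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}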

Add optional address whitelist to indexer

It has been requested that the indexer be able to optionally skip data the operator is not interested in, specifically data related to unknown addresses.

Note that this presents some potential operational challenges around address management: if an address is added to the whitelist after it has already had transactions, those transactions will not have been kept, and a re-index would be required in this case.
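
A minimal sketch of what such a whitelist filter could look like at the indexer boundary; the type and hook point are assumptions for illustration, not existing Ortelius code:

// Hypothetical address whitelist: only records touching whitelisted
// addresses are handed to the indexer.
package services

type AddressWhitelist map[string]struct{}

// Allow reports whether a record touching the given addresses should be
// indexed. An empty whitelist disables filtering and indexes everything.
func (wl AddressWhitelist) Allow(addrs ...string) bool {
	if len(wl) == 0 {
		return true
	}
	for _, a := range addrs {
		if _, ok := wl[a]; ok {
			return true
		}
	}
	return false
}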

Transaction Details Not Found

A record for this transaction ID was not found in the Avalanche Explorer

2RmTUwWsRXx86BHm6ELGkUGjxE49pRKzH3sub3mD8zNj152yHn

I'm waiting for your kind help regarding the incomplete transaction given above.

Cache collisions on requests with time params

Design bug.

Because startTime/endTime have so many potential valid values, we round the time params to 15s intervals for the cache key, in an attempt to reduce the surface area for blowing up the cache. This was done originally when our only time params were for the aggregates endpoint, where the slight imprecision is tolerable, but we now use these params for ListTransactions and potentially others in the future, and this becomes a noticeable issue.

Specifically, a request with startTime=0 caches results under that rounded key, but a subsequent request for e.g. startTime=14 will show the old results for 5 minutes. After the entry expires, a request for startTime=14 will cache its own results, which then collide with startTime values 0-13.
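
A runnable illustration of the collision; the 15s interval and the endpoint:start:end key layout are assumptions for illustration, not the exact Ortelius cache-key code:

// Demonstrates how rounding time params into the cache key makes distinct
// queries share one cache entry.
package main

import (
	"fmt"
	"time"
)

func cacheKey(endpoint string, start, end time.Time) string {
	const interval = 15 * time.Second
	return fmt.Sprintf("%s:%d:%d", endpoint, start.Truncate(interval).Unix(), end.Truncate(interval).Unix())
}

func main() {
	end := time.Unix(1600000000, 0)
	a := cacheKey("ListTransactions", time.Unix(0, 0), end)
	b := cacheKey("ListTransactions", time.Unix(14, 0), end)
	fmt.Println(a == b) // true: startTime=0 and startTime=14 collide
}

Including the exact (un-rounded) time values in the key, or limiting the rounding to the aggregates endpoint only, would avoid the collision at the cost of a larger cache.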

Total burned AVAX

Two important metrics for users are:

  • the circulating supply of AVAX
  • the total burned amount of AVAX

Currently Ortelius can't provide this information. Can it be implemented somehow?

Migration for PostgreSQL is not correct

The migrations are not suitable for PostgreSQL because of differing data types; for example, PostgreSQL has no varbinary or unsigned smallint/bigint types. They should be corrected in order to work correctly with PostgreSQL.

Indexer uses a method (debug_traceTransaction) that doesn't exist

Indexer Version: avaplatform/ortelius:v1.7.10-rc.2
Core Version: 1.7.15

The log output of Indexer is as follows:

INFO [08-04|02:36:57.242] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#353: Starting worker for cchain
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#131: starting processing block 18 cnt 9
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 2
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 3
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 5
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 6
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 8
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 9
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 10
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 12
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 13
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#183: refill 14
INFO [08-04|02:36:57.243] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#190: catchup complete
ERROR[08-04|02:36:57.246] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#408: Catchup error: the method debug_traceTransaction does not exist/is not available
INFO [08-04|02:36:57.248] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#451: Exiting worker for cchain
ERROR[08-04|02:36:57.248] /go/src/github.com/ava-labs/ortelius/stream/producer_cchain.go#316: Error running worker: the method debug_traceTransaction does not exist/is not available

The configuration file of the core service is as follows:

{
    "snowman-api-enabled": false,
    "coreth-admin-api-enabled": false,
    "coreth-admin-api-dir": "",
    "eth-apis": [
      "admin",
      "eth",
      "eth-filter",
      "net",
      "web3",
      "internal-eth",
      "internal-blockchain",
      "internal-transaction",
      "debug",
      "debug-tracer",
      "internal-tx-pool",
      "internal-debug",
      "internal-account",
      "internal-personal"
    ],
    "continuous-profiler-dir": "",
    "continuous-profiler-frequency": 900000000000,
    "continuous-profiler-max-files": 5,
    "rpc-gas-cap": 50000000,
    "rpc-tx-fee-cap": 100,
    "preimages-enabled": false,
    "snapshot-async": true,
    "snapshot-verification-enabled": false,
    "pruning-enabled": false,
    "allow-missing-tries": false,
    "populate-missing-tries-parallelism": 1024,
    "metrics-enabled": false,
    "metrics-expensive-enabled": false,
    "local-txs-enabled": false,
    "api-max-duration": 0,
    "ws-cpu-refill-rate": 0,
    "ws-cpu-max-stored": 0,
    "api-max-blocks-per-request": 0,
    "allow-unfinalized-queries": false,
    "allow-unprotected-txs": false,
    "keystore-directory": "",
    "keystore-external-signer": "",
    "keystore-insecure-unlock-allowed": false,
    "remote-tx-gossip-only-enabled": false,
    "tx-regossip-frequency": 60000000000,
    "tx-regossip-max-size": 15,
    "log-level": "info",
    "http-host": "0.0.0.0",
    "index-enabled": "true",
    "network-minimum-timeout": "3s",
    "max-outbound-active-requests": 8,
    "state-sync-enabled": true,
    "state-sync-skip-resume": false,
    "state-sync-min-blocks": 300000,
    "state-sync-ids": "",
    "state-sync-server-trie-cache": 64,
    "offline-pruning-enabled": false,
    "offline-pruning-bloom-filter-size": 1024,
    "offline-pruning-data-directory": "/mnt/avaxmain/node/offline-pruning"
}
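
debug_traceTransaction is only served when the node actually exposes the debug tracer API on the endpoint the indexer talks to, so it is worth probing that endpoint directly. A sketch of such a probe, assuming the C-chain RPC is at the default path below (the transaction hash is a placeholder):

// Probe whether debug_traceTransaction is reachable on the C-chain RPC.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://127.0.0.1:9650/ext/bc/C/rpc")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var result interface{}
	txHash := "0x0000000000000000000000000000000000000000000000000000000000000000"
	err = client.CallContext(context.Background(), &result, "debug_traceTransaction", txHash, map[string]interface{}{})
	// "the method debug_traceTransaction does not exist/is not available"
	// means the tracer API is not exposed here; a different error (e.g. a
	// not-found transaction) means the method itself is reachable.
	fmt.Println(err)
}

If the probe fails, likely causes include the eth-apis config not being read (it generally needs to live in the node's C-chain config file under its chain config directory), the node not having been restarted after the change, or the indexer pointing at a different node than the one configured above.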

Weird result when using the API to fetch C-Chain transactions

I was having a look at the documentation for Ortelius, and while testing this query I got some weird results.

First it only yielded 500 errors, then it suddenly started to give back a response, but with unexpected results.

You can see in the screenshot below that the block number of the first transaction we get is outside the range we passed as input.

[screenshot omitted]

Here you can see that the start time is... weird?

[screenshot omitted]
