blockstack-explorer-api's People

Contributors

charliec3, dependabot[bot], hstove, kantai, kyranjamie, semantic-release-bot, yknl, zone117x

blockstack-explorer-api's Issues

Process pending transactions

Currently we're consuming the Blockstack Core API for transaction data. Core doesn't process a block until it has 7 confirmations, so the explorer and wallet are unaware of transactions with fewer than 7 confirmations. This makes it hard to keep an accurate balance in the wallet, and it takes 1.5+ hours for transactions to show up for the recipient. Can the explorer query a Bitcoin node directly to get data for unconfirmed and <7-confirmation transactions, and expose that through the API?

Originally posted by @yknl in deprecated repo
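
For context, a transaction's confirmation count can be derived from the chain tip height; a minimal sketch (the helper names and threshold constant are hypothetical, not from the codebase):

```javascript
// Hypothetical helpers: classify a transaction by confirmation count,
// given the current chain tip height and the block height the tx was
// mined at (null when it is still in the mempool).
const CORE_CONFIRMATION_THRESHOLD = 7; // what core currently waits for

function confirmations(tipHeight, txBlockHeight) {
  if (txBlockHeight == null) return 0; // unconfirmed / mempool
  return tipHeight - txBlockHeight + 1;
}

function isVisibleToCore(tipHeight, txBlockHeight) {
  return confirmations(tipHeight, txBlockHeight) >= CORE_CONFIRMATION_THRESHOLD;
}
```

Anything below the threshold is exactly the data the explorer would need to fetch from a Bitcoin node itself.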

Dynamically calculate 'unlocked supply'

In a time crunch, we hard-coded the 'unlocked supply' number on the home page. This number changes every two weeks, so we need to automate it. I have a query, but it's slightly off from the number Jude and Jesse came up with.

Need to work with @jcnelson on the right query. Here is his response:

Two things:

  • It doesn't count value that was granted immediately, without locks.
  • It counts placeholder accounts that don't have valid addresses and are currently unlocking tokens.

I don't think this can be done as a single query. The script I wrote just queried the balance of every address directly and summed them. (The way to do that is to first get the list of all well-formed account addresses, and then run SELECT credit_value - debit_value FROM accounts WHERE address = ?1 ORDER BY block_id DESC LIMIT 1 for each one.)

So, it seems we will need to build an aggregator that does this. Shouldn't be a problem.
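
The per-address summation could be sketched in the aggregator like this. The row shape and the BigInt handling are assumptions (the real aggregator would page through Postgres rather than take an in-memory array):

```javascript
// Sketch of the aggregation described above: for each well-formed
// address, take its latest accounts row (highest block_id) and sum
// credit_value - debit_value. `rows` stands in for the result of a
// query over the accounts table; values are strings, so BigInt is
// used to avoid precision loss.
function unlockedSupply(rows, validAddresses) {
  const latest = new Map();
  for (const row of rows) {
    if (!validAddresses.has(row.address)) continue; // skip placeholder accounts
    const prev = latest.get(row.address);
    if (!prev || row.block_id > prev.block_id) latest.set(row.address, row);
  }
  let total = 0n;
  for (const row of latest.values()) {
    total += BigInt(row.credit_value) - BigInt(row.debit_value);
  }
  return total;
}
```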

Some naming and keys missing

As originally posted by @aulneau in deprecated repo (see conversation on that issue for more context and input):


const tx = {
  address: "mrifWsShtgpyQ7kUpPQFvuddZ2vzbDaQxc", // should this be named sender?
  block_id: 549703,
  credit_value: "100", // what does this mean? is this balance at time of tx?
  debit_value: "10",
  lock_transfer_block_id: 0,
  txid: "fca73d510b083311291623773a69febf0dd236e395f1aeab64bf197ce0111ad9",
  type: "STACKS",
  vtxindex: 1091, // what is this?
  operationType: "$",
  consensusHash: "038a1541a4f314ae37cf3e43d3ad3e54",
  tokenType: "STACKS",
  tokensSent: "10",
  scratchData: "",
  recipientBitcoinAddress: "1BXSdgcRLtrjw5AXZPe1eawmQr75qckfMo",
  recipient: "SP1SQ68HK9N4GFT3CG694FBFAYWS28C39NF1TZG9X",
  tokenSentHex: "000000000000000a",
  tokensSentSTX: "0", // what is this?
  operation: "SENT" // what is the opposite operation? 'RECEIVED'?
};

We should also include a confirmed key with a timestamp of the confirmation.
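
If we settle on new names, a thin normalizer could map the raw shape to the clearer one. Everything below, the target key names included, is a proposal for discussion, not an agreed schema:

```javascript
// Hypothetical normalizer applying the renames suggested in this issue.
function normalizeTx(tx, confirmedAt) {
  return {
    sender: tx.address,             // `address` is the sending address
    recipient: tx.recipient,
    tokensSent: tx.tokensSent,
    tokenType: tx.tokenType,
    txid: tx.txid,
    blockHeight: tx.block_id,
    operation: tx.operation,        // "SENT" | "RECEIVED"
    confirmed: confirmedAt || null, // timestamp of confirmation, if known
  };
}
```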

Set up CircleCI to run the full test suite

Unfortunately, we don't run the full test suite on CircleCI right now, because it requires a bunch of setup: you need access to both Bitcore and the full Postgres database.

I see two options here:

  • Give CircleCI read-only access to both our Bitcore and Postgres databases
  • Manually set up Postgres during the test run. This is only practical for Postgres, not Bitcore
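
For the second option, the Postgres half might look like the following CircleCI config fragment. The image tags, environment variables, and commands are illustrative assumptions, not the project's actual setup:

```yaml
# Illustrative .circleci/config.yml fragment: run a throwaway Postgres
# as a secondary container alongside the test container.
version: 2.1
jobs:
  test:
    docker:
      - image: circleci/node:12
      - image: circleci/postgres:11-alpine
        environment:
          POSTGRES_USER: explorer
          POSTGRES_DB: explorer_test
    steps:
      - checkout
      - run: yarn install
      - run:
          name: Wait for Postgres
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run: yarn test
```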

Code coverage

Enable unit test code coverage generation, with integration into Codecov and GitHub.

Remove dependency on blockchain.info

Over the summer, we migrated most of our internal data sources away from external APIs, which were causing a bunch of issues. It appears we missed a spot: we still rely on blockchain.info for fetching information about a BTC address.

My recommendation for how to handle this:

  • Write tests around the BTC address API to get an idea of the interface
  • Refactor the BTC address code to only fetch information internally - likely from Bitcore's DB.
  • Make sure the tests pass, which will ensure backwards compatibility

In terms of backwards compatibility, we just want to make sure the front-end doesn't break. I don't think there are other consumers of that API. An alternative to backwards compatibility is to also update the front-end to handle any new API interface that we build from Bitcore data.
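
One cheap way to pin the interface before refactoring is a key-presence check over recorded responses. The key list below is illustrative, not the actual blockchain.info-derived shape:

```javascript
// Hypothetical compatibility check: assert that a BTC-address response
// still carries the keys the front-end reads.
const REQUIRED_KEYS = ['address', 'balance', 'totalReceived', 'txCount'];

function missingKeys(response) {
  return REQUIRED_KEYS.filter((key) => !(key in response));
}
```

A test that asserts `missingKeys(response)` is empty before and after the refactor gives a concrete definition of "the front-end doesn't break".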

Create new v2 controller with new data sources

Since we have new data sources, the exact data returned might not match. Instead of changing the existing API, it's better to create a 'v2' API for our new data sources. This also ensures that we won't break projects that use our existing API, like the explorer.
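
The routing side of this can stay simple: register the new controllers under a /api/v2 prefix while leaving v1 untouched. A sketch with hypothetical handlers:

```javascript
// Sketch of side-by-side versioning: build a route table keyed by
// versioned path, so v1 and v2 handlers for the same resource coexist.
function mountVersioned(routes) {
  const table = new Map();
  for (const [version, handlers] of Object.entries(routes)) {
    for (const [path, handler] of Object.entries(handlers)) {
      table.set(`/api/${version}${path}`, handler);
    }
  }
  return table;
}
```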

Update total supply

The current value doesn't capture tokens materialized in the latest hard fork.

Optimize home page query

Currently, 100+ Postgres queries are performed to gather names. These could be collapsed into a single query.
The home page also performs a couple of queries against the core node API, which can be replaced with faster and more reliable Postgres queries.
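
Collapsing the per-name lookups into one parameterized query could look like this (the table and column names are illustrative, not the actual schema):

```javascript
// Sketch: instead of one query per name, pass the whole list as a
// single array parameter and match with ANY($1).
function buildNamesQuery(names) {
  return {
    text: 'SELECT name, owner FROM name_records WHERE name = ANY($1)',
    values: [names],
  };
}
```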

Demo for pub/sub cache maintenance

Implement a demo of #30 for one of the Bitcoin (MongoDB) data endpoints.

Note: the Stacks 1.0 data is "brute forced" into Postgres with table dumps and bulk inserts. The Stacks 2.0 version will not do this. For now, an architecture demo could be implemented for one of the Bitcoin data endpoints.

Efficient cache updating with pub/sub pattern

The two primary data-ingest sources are Postgres (for Stacks-related data) and MongoDB (for Bitcoin-related data). The Stacks 2.0 data-stream sidecar will also likely use Postgres.

Both of these data stores support building an efficient cache-maintenance service on top of their pub/sub features: LISTEN/NOTIFY in Postgres, and Change Streams in MongoDB.

The Explorer will likely require this architecture for the Stacks 2.0 upgrades. The Explorer's existing aggregator-cache pattern can be augmented to build out this architecture.

For more context on pub/sub cache maintenance, see this article, which outlines how to architect such a system with Postgres: https://tapoueh.org/blog/2018/07/postgresql-listen-notify/

PostgreSQL LISTEN and NOTIFY support is perfect for maintaining a cache. Because notifications are only delivered to client connections that are listening at the moment of the notify call, our cache maintenance service must implement the following behavior, in this exact order:

  1. Connect to the PostgreSQL database we expect notifications from and issue the listen command.

  2. Fetch the current values from their single source of truth and reset the cache with those computed values.

  3. Process notifications as they come and update the in-memory cache, and once in a while synchronize the in-memory cache to its materialized location, as per the cache invalidation policy.
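
The listen-then-reset-then-apply ordering above can be sketched with an in-memory cache, abstracting the Postgres client behind plain callbacks (`listen` and `fetchAll` are stand-ins, not a real driver API):

```javascript
// Sketch of the three-step startup order. Subscribing before the
// initial load ensures no notification is missed; the reset then
// establishes the source-of-truth baseline that later notifications
// update incrementally.
class NotifyCache {
  constructor() {
    this.data = new Map();
  }

  start(listen, fetchAll) {
    // 1. Issue the listen command first.
    listen((payload) => this.apply(payload));
    // 2. Reset the cache from the single source of truth.
    this.data = new Map(Object.entries(fetchAll()));
  }

  // 3. Apply notifications as they arrive.
  apply({ key, value }) {
    this.data.set(key, value);
  }

  get(key) {
    return this.data.get(key);
  }
}
```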
