
core's People

Contributors

cfb-qubic, cyber-pc, frog-rabbit, j0et0m, krypdkat, leonalabel, olteanusorin, philippwerner, qsilver97, swissnerd, vagabondkjo


core's Issues

Making more data available to SC

Via QPI, access to internal data is constrained.

In the past the spectrum data was accessible, and restoring that access would be very valuable for many smart contracts, along with universe data.
The more data that is safe for SCs to access, the better; any other data that can be made available (tick data?) would also help.

[QA] are the publicKeys same?

unsigned long long solutionScore = (*::score)(processorNumber, transaction->sourcePublicKey, solution_nonce);

and

core/src/qubic.cpp, lines 3503 to 3504 at 1d3dacd:

getPrivateKey(computorSubseeds[i].m256i_u8, computorPrivateKeys[i].m256i_u8);
getPublicKey(computorPrivateKeys[i].m256i_u8, computorPublicKeys[i].m256i_u8);

Are the public keys the same?

Unable to fetch log on/after endEpoch event

A few third-party services reported that they were unable to fetch logs and data for the last 3-5 ticks of an epoch. I suspect this is because the endEpoch event (which clears all tick data and logs of the old epoch) fires too quickly.
Solution: reset tick data only after the first tick of the new epoch has 451+ votes.

Persisting node state

Currently, every time the node restarts it needs to sync from scratch (from the initial tick); network overhead takes more than 70% of the syncing time, and this puts some stress on the network.
We need a feature to save all important data (quorum ticks, tick data, transaction data) from RAM to disk so that the node can reload it on startup.
We should save the data every 1500 ticks (approx. 1 hour), or whenever the user presses F8.

Real use case:

  • Node operator runs 2 nodes MAIN (good machine) and AUX (weak machine). He enables "auto persisting node state" on the AUX machine. When his MAIN machine crashes because of unexpected incidents (lost electricity, faulty RAM, overheated CPU...), he can copy state files from AUX machine to MAIN machine and start from it.
  • Operators can share the state files with someone else to have faster syncing.

TokenList data structure to reduce Universe size by 75%

The Universe file has low entropy due to its design; by splitting the Universe into two parts, TokenList and Universe2, the capacity of the Universe file increases 4x.

This is because each entry of the Universe file is 48 bytes, 40 of which are the (name + pubkey), but this can be encoded into 32 bits as follows:

  • For SC shares, use the contract index directly.
  • For user-created tokens, set bit 31 and assign a TokenIndex in order of creation tick (alphabetical order if more than one token is created in the same tick).

This halves the available SC index space from 4 billion to 2 billion, which seems acceptable.

With this change each Universe2 entry would be 12 bytes instead of 48 bytes. Currently there are about 10 total tokens (SC shares + user defined) so the total size of the TokenList would be less than 512 bytes! A very small cost to expand the Universe capacity 4x

It also solves the issue of quickly finding all existing tokens, via a network request that just returns the TokenList. As the number of tokens increases, we might want to add "pagination" to the response to keep its maximum size reasonable.

As it is, our AIRDROP SC is on hold, because after 42 airdrops the Universe file is completely full. By my analysis the Universe data has (by far) the lowest entropy in Qubic and is thus the weakest link.

NewToken(name, pubkey) -> returns the token index in order of creation; it would be called during tick processing when issuance of a new user-defined asset is detected.

Convenience function: TokenIndex(name, pubkey) would return the 32-bit index and could handle both SC and user tokens.

GetTokenList would be a network command that returns the full TokenList.

Universe2 would use the token index in place of (name + pubkey).

The above is just one example of how this could be implemented efficiently. An alternative would be to store all the asset-creation Universe entries directly in the TokenList, with an added TokenIndex field; that would probably minimize the required code changes.

Add `qxListAssets` or similar to list all assets

Hi,

playing around I noticed that to get Qx bids/asks for an asset, one needs to specify both the asset name and the issuer. That is because it is apparently possible to have multiple assets with the same name (CfB).

While for SC shares this is easy, as the issuer is always the all-A address (AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFXIB), getting the issuer of an arbitrary asset is more complicated (*). Furthermore, with more and more tokens being created it will be hard to keep track of which assets exist, and we most probably do not want UI/tool developers to maintain a hard-coded list of assets and their issuers.

It would therefore be great to have a way of asking nodes for a list of all existing assets, both SC shares and tokens. Opening this issue on CfB's advice.

I haven't found where this data can be found in memory yet, but judging from the other Qx calls the implementation could be something like:

struct Assets_input
{
    uint64 offset;
};

struct Assets_output
{
    struct Asset
    {
        id issuer;
        uint64 assetName;
    };

    array<Asset, 256> assets;
};

struct Assets_locals
{
    // ...
};

PUBLIC_FUNCTION_WITH_LOCALS(Assets)
    // ...
_

This assumes it is a Qx function/responsibility; it could also just be a core network message, which seems even more plausible after reading #102... Again, I'm not sure where this info lives in memory yet; I'll investigate more and, if able, I may try to provide a PR.

(*) I haven't found a way to do it yet

tick value in entity response does not always match the siblings data

some interesting stats about tick alignment in entity info:

  1. prevSpectrumDigest as voted in tick N is put into slot N.
  2. Whenever entity data comes in, it is compared against a range of slots from 1): N-1, N, N+1, N+2.
  3. Based on this, we would expect the match to be at slot N+1, since 1) stores the prevSpectrumDigest.
  4. Results:
    N-1: 175
    N: 5644
    N+1: 21481
    N+2: 52

So N+1 indeed wins, but a surprising number match N, due to off-by-one reporting of entity data.

I'm not sure of the reason, but it seems that the tick number in the entity response is not obtained in the same context as the sibling Merkle calculations.

missing QX transaction types and rcf

Qwallet is building a QX UI, and while the current functionality is sufficient for completing a trade, several areas need extra functionality to deliver the user experience people expect.

  1. A permissionless way to get the most recent orders. Otherwise there is currently no way to determine the results of an order match.
  2. A way to cancel pending orders, with options for ALL or a specific asset, and for bids/asks/both. Cancelling a single order takes one transaction, so someone with many orders across all assets could need many minutes to complete the cancellation, and the restriction of one pending tx per address cannot be worked around, since the orders are all from the same address. In a fast-rising market you might want to cancel all your asks with just one tx, or limit it to a single asset, or simply cancel all orders when you are done day trading, etc.

Complete Freeze of Qubic Core

From yura: "Hello Qubic team. My BM server freezes about once a day. 7950X / 128 GB / loaded from USB flash. SMT mode disabled. MB X670 AORUS ELITE AX. The latest BIOS version is installed. Internet bandwidth is 1 GBit (in/out). What else can I check? (The screenshot shows the moment of freezing.)"

The node freezes completely and must be restarted with a hardware reset.


qpi function request to return the total supply for a token

It is important in some cases to know how many tokens exist.
The total issued amount can be returned if that is easier than the current unburned supply.

The use case is fitting a microtoken count into 64 bits. For SC shares, which have only 676 shares, this is not a problem, but if an asset has over 18 trillion tokens then its microtoken supply would overflow 64 bits.

Since there is no need to make a microtoken out of such a large supply asset, we can just return an error if the user tries to make a microtoken out of an asset that could overflow 64 bits.

Currently this is not possible, so we need to restrict microtokenization to SC shares, even though it would be quite useful for expensive tokens in general.

Add Special Command to set time

An Operator should be able to set the current time of the core-node via a special command (similar to shutdown).

An accurate time on the node is important because the network peers currently must be in sync (~5s tolerance).

more of Collection implementation

  1. Change the code so that the head has higher priority than the tail, if that is not already the case.
  2. Create sint64 headIndex(const id& pov, sint64 maxPriority), similar to sint64 headIndex(const id& pov); it should ignore elements with priority greater than maxPriority.
  3. Create sint64 tailIndex(const id& pov, sint64 minPriority), similar to sint64 tailIndex(const id& pov); it should ignore elements with priority less than minPriority.
    The idea behind #2 and #3 is to gain the ability to work with subcollections.

Add a Network Message to get basic System Information

We want to be able to query a Qubic node with a REQUEST_SYSTEM_INFO message.

This should return:

version;
epoch;
tick;
initialTick;
latestCreatedTick;

initialMillisecond;
initialSecond;
initialMinute;
initialHour;
initialDay;
initialMonth;
initialYear;

randomMiningSeeds;
numberOfEntities;

QX get bids/asks field request

Maintaining the state of the order book externally, from a remote node, poses some challenges.
The returned information does not include the starting offset of the returned quotes. While it is possible to use the dejavu to correlate this, it would be better to have a field carrying that offset to enable verification. It would also allow a stateless implementation, since the starting offset would be in the returned data, and stateless is more reliable.

More importantly, the total number of entries is not returned anywhere, so it seems we must keep querying as long as each response comes back with all 256 quotes filled, and there is no way to allocate memory ahead of time. If the total number of quotes were included with each response (very small overhead considering the 20+ KB response size), memory for the entire order book could be allocated after the initial response.

Both a starting-offset field and a total-entries field would help remote nodes a lot.

Improvement on computor list

Currently the list of future computors is simply copied as-is. We need the following improvement: if a computor has kept its status, it must keep its computor index.

Qx transaction limit

There is now a limit of 1024 SC txs per tick, which makes things better, but to handle much more Qx volume I think we need a multi-command Qx transaction, similar to the way QUTIL batches 25 normal sends to boost throughput 25x.

Qx processing takes around 25 microseconds per swap, so it seems we can easily boost throughput by a similar factor with a combined command: maybe a vector of commands that allows any command type within the 1024 extra bytes, or one limited to a single Qx transaction type.

Anyway, there is a lot of room to encode many more Qx transactions, especially if shortcuts are allowed for specifying the token, since when trading SC shares the pubkey and issuer fields are currently mostly zeros. Could we get it down to:
1 byte for the action
1 byte for the asset (or 41 bytes encoding asset name + issuer when it is not an SC share)
8 bytes for the price
8 bytes for the amount

At 18 bytes per command, up to 56 QX commands could be squeezed into a single tx!
Even full token commands (58 bytes each) fit around 17.

trustless txid confirmations

https://discord.com/channels/768887649540243497/768890555564163092/1237734073103810571

The above is a discussion about making the current txstatus bits part of one of the voting digests.
A zero-footprint method would hash the txstatus bits into the transactionDigest; alternatively, a new digest could be created.

The incremental overhead to cryptographically validate the txstatus bits is very small, though the overhead used by the addon might need to be reduced to handle large numbers of transactions. Since the order of returned txids matches the TickData order, all that is needed is 1 bit per txid.
