
neo-go's People

Contributors

fyrchik, kevaundray, pmdcosta, stevenjack, suzumi49n


neo-go's Issues

Clean up utils

There are some functions that need to be moved into their respective util files; this will be an ongoing process.

Transactioner Interface

The Transactioner interface can contain all of the common fields of a transaction. Doing this means we only need a type switch when we need to access the special fields.
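A minimal sketch of the idea, with assumed names (Transactioner, BaseTx, ClaimTx are illustrative, not the project's actual types):

```go
package main

import "fmt"

// Hypothetical Transactioner interface exposing the common fields, so
// callers only need a type switch for transaction-specific data.
type Transactioner interface {
	Hash() string
	Version() uint8
}

// BaseTx holds the common fields shared by every transaction type.
type BaseTx struct {
	hash    string
	version uint8
}

func (t BaseTx) Hash() string   { return t.hash }
func (t BaseTx) Version() uint8 { return t.version }

// ClaimTx embeds BaseTx and adds a special field.
type ClaimTx struct {
	BaseTx
	Claims []string // only reachable via a type switch/assertion
}

func describe(t Transactioner) string {
	// Common fields need no type switch.
	s := fmt.Sprintf("tx %s v%d", t.Hash(), t.Version())
	// Only special fields require one.
	if c, ok := t.(ClaimTx); ok {
		s += fmt.Sprintf(" (%d claims)", len(c.Claims))
	}
	return s
}

func main() {
	fmt.Println(describe(ClaimTx{BaseTx{"abc", 1}, []string{"c1"}}))
}
```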

Refactor Uint256 and Uint160

The Fixed8 package was refactored to be cleaner and more explicit in its implementation. It would be great if the uint256 and uint160 files were refactored in the same way.

Wire Protocol

The common framework for the wire protocol has now been completed.

There are a couple more network message commands to implement.

This issue tracks the list of network message commands left to implement.

Commands Left to implement are:

[ ] CMDVerack
[ ] CMDGetAddr
[ ] CMDAddr
[ ] CMDGetHeaders
[ ] CMDHeaders
[ ] CMDGetBlocks
[ ] CMDInv
[ ] CMDGetData
[ ] CMDBlock
[ ] CMDTX
[ ] CMDConsensus

Please check the README in that package for progress. There is a template for how to do the rest of the methods; as of writing, it is the VersionMessage.
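The remaining commands could be declared together as typed constants. This is a sketch only; the constant names follow the checklist above, and the 12-byte zero-padded command field mirrors Bitcoin-style wire headers (an assumption about this codebase's framing):

```go
package main

import "fmt"

// CommandType names a wire command; values are illustrative.
type CommandType string

const (
	CMDVerack     CommandType = "verack"
	CMDGetAddr    CommandType = "getaddr"
	CMDAddr       CommandType = "addr"
	CMDGetHeaders CommandType = "getheaders"
	CMDHeaders    CommandType = "headers"
	CMDGetBlocks  CommandType = "getblocks"
	CMDInv        CommandType = "inv"
	CMDGetData    CommandType = "getdata"
	CMDBlock      CommandType = "block"
	CMDTX         CommandType = "tx"
	CMDConsensus  CommandType = "consensus"
)

// pad renders a command as a fixed 12-byte, zero-padded field.
func pad(c CommandType) [12]byte {
	var b [12]byte
	copy(b[:], c)
	return b
}

func main() {
	b := pad(CMDVerack)
	fmt.Printf("%q\n", string(b[:]))
}
```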

Review of the Wire Protocol

The wire protocol is now usable; however, there are still things that I believe need to be implemented, some now and some further down the line:

MSGPING, MSGPONG: These will be used to verify that the node is still alive without having to do the handshake with it.

MSGReject: This is an important message for those looking to debug code: instead of silently disconnecting, the node will send a MSGReject stating why it has rejected the request.

MSGFilters: These will be implemented later down the line, once either GCS or Bloom filters have been implemented.


This package has been designed so that the inner workings can be easily changed without outside code that relies on it having to be edited. I believe that optimisations can always be made, so this package should be revisited periodically to review it, add more tests, and optimise.

Writing to an io.writer is duplicated a lot

There is a lot of code duplication with the binary writer. For example, a lot of:

	 if err := binary.Write(w, binary.LittleEndian, v.Version); err != nil {
	 	return err
	 }

We can create a struct which holds the writer and the error, and encapsulates the above code.

Adding Logs

Currently, fmt.Println statements are being used in place of logs. When logging is implemented, we would just need to change all fmt.Println statements to log calls.

Wire - Redundancy

In message.go:

we are passing an io.Writer, which comes from a binWriter, into message.EncodePayload:

message.EncodePayload(bw.w)

Then in message.EncodePayload, we convert the io.Writer back into a binWriter:

func (v *VersionMessage) EncodePayload(w io.Writer) error {
	bw := &binWriter{w: w}

	bw.Write(v.Version)
...
}

AddressManager - Blacklisting

It is almost impossible to successfully blacklist unresponsive nodes, because some nodes are run on users' PCs; if blacklisted, those users would have no way to connect to the network from their laptops as a full node.

This should not be a problem, however, as running a full node implies that it will be kept up for a long time.

Peer manager

Having a peer manager would allow each module to call the peer manager rather than deal with the Peer abstraction itself. Modules would just need to know what message they would like to send.

Doing this, however, adds an extra layer of dependency, where moduleX depends on PeerMgr and PeerMgr depends on Peer.

Changing Peer would then cause the manager to behave differently if part of the changed code affects the exposed functionality of Peer. This could in turn cause moduleX to behave differently and/or break.
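A sketch of the dependency chain described above (all names here are illustrative assumptions): moduleX talks only to PeerMgr, and PeerMgr is the only layer that touches Peer.

```go
package main

import "fmt"

// Peer is the low-level abstraction modules should not touch directly.
type Peer interface {
	Send(msg string) error
}

// PeerMgr hides peer selection and error handling from callers.
type PeerMgr struct {
	peers []Peer
}

// Broadcast sends a message to every peer; callers only name the
// message they want sent.
func (m *PeerMgr) Broadcast(msg string) int {
	sent := 0
	for _, p := range m.peers {
		if p.Send(msg) == nil {
			sent++
		}
	}
	return sent
}

// fakePeer stands in for a real connection in this sketch.
type fakePeer struct{}

func (fakePeer) Send(msg string) error { return nil }

func main() {
	mgr := &PeerMgr{peers: []Peer{fakePeer{}, fakePeer{}}}
	fmt.Println(mgr.Broadcast("getheaders"))
}
```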

Orphan Blocks

It's still unclear whether the block manager should cater for orphan blocks. Once everything is done, this can be added later on.

Removal of Subscription model

The current pubsub model was adapted from geth. After using it, and trying the original unmodified version, we will have to remove all of it: a more functional approach would work better, allowing us to give the publisher a function, with the locks and pre-defined behaviour for execution, to run when we see a certain event.
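A minimal sketch of that functional approach (names assumed): the subscriber hands the publisher a function, and the publisher simply invokes it on each event.

```go
package main

import (
	"fmt"
	"sync"
)

type Event string

// Publisher calls registered handler functions on each event; the
// handlers themselves can carry whatever locking they need.
type Publisher struct {
	mu       sync.Mutex
	handlers []func(Event)
}

// On registers a handler function to run on every published event.
func (p *Publisher) On(fn func(Event)) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.handlers = append(p.handlers, fn)
}

// Publish runs each handler with the event.
func (p *Publisher) Publish(e Event) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, fn := range p.handlers {
		fn(e)
	}
}

func main() {
	var pub Publisher
	pub.On(func(e Event) { fmt.Println("saw:", e) })
	pub.Publish("newBlock")
}
```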

Separate Repo for wallet - Discussion

I think that we can separate concerns by making the wallet a separate repo that this repo will call upon.

This keeps the node's core logic focused on the important parts of the app.

Inconsistency in the Wire package

MSG_INV and MSG_GetData essentially are the same except for the command type.

Because of this, one is embedded within the other, and the command type is changed.

This is not done with MSG_Verack and MSG_GetAddr. The reasoning was that the abstraction brings added complexity while shortening code; since verack and getaddr do not have a payload, the code is already short, so this was not needed.

Make sure we have 70% + Test coverage

No other feature should be implemented until we have at least 70% test coverage for each component in Wire.

Once the 70% is reached, we will continue to refactor and increase.

I think a TDD style of development would mix well with the interface approach that we are aiming for. Although I have not stuck to it with this package, I want to stick to it moving forward as a convention, or at the very least write package_test.go for each package before moving on.

Wire Protocol redundancies

Currently the wire protocol has a GetLength func, which encodes the payload and returns the length.

It also has a checksum function, which does the same thing.

We could avoid this by embedding the MessageHeader into the message type structs, calculating its values once upon init, and then fetching them from the MessageHeader.
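A sketch of the embedding idea with assumed names and a stand-in checksum (not the project's real one): the header values are computed once at construction and read back afterwards, instead of re-encoding the payload in both GetLength and the checksum function.

```go
package main

import "fmt"

// MessageHeader caches the values both helpers currently recompute.
type MessageHeader struct {
	Length   uint32
	Checksum uint32
}

// VersionMessage embeds the header, filled in once at init.
type VersionMessage struct {
	MessageHeader
	payload []byte
}

// NewVersionMessage computes length and checksum a single time.
func NewVersionMessage(payload []byte) *VersionMessage {
	var sum uint32
	for _, b := range payload { // toy checksum, for illustration only
		sum += uint32(b)
	}
	return &VersionMessage{
		MessageHeader: MessageHeader{
			Length:   uint32(len(payload)),
			Checksum: sum,
		},
		payload: payload,
	}
}

func main() {
	m := NewVersionMessage([]byte{1, 2, 3})
	fmt.Println(m.Length, m.Checksum) // fetched, not recomputed
}
```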

Case Study To Focus on the Discrepancy with Syncing

The case study below gives the rationale for continuing development while spending a large amount of time on the syncmanager.

Sync times on blockchains take a relatively large amount of time in general; with NEO, however, the problem seems more pronounced.

Bitcoin Core saw a massive improvement in sync time in release 0.14.0. Aside from the headers-first IBD that Bitcoin uses, this is where NEO's and BTC's syncing algorithms started to diverge.

0.14 Release notes: https://github.com/bitcoin/bitcoin/blob/0.14/doc/release-notes.md

Article on changes: https://bitcoincore.org/en/2017/03/08/release-0.14.0/#ibd

The main reason for this was due to Bitcoin replacing checkpoints with "Assumed Valid Block".

The article states that Bitcoin Core 0.13.2 took 1 day, 12 hours, and 40 minutes to complete the sync, while Bitcoin Core 0.14.0 took 6 hours and 24 minutes.

At the time of the article, Bitcoin had ~250 million transactions: https://www.blockchain.com/charts/n-transactions-total

NEO had ~15 million: https://neodepot.org/charts/transactions-count-total

The average size of a Bitcoin tx in 2017 was ~600 bytes: https://tradeblock.com/bitcoin/historical/1w-f-tsize_per_avg-01101

The average size of a NEO transaction in 2017 was roughly 225 bytes.

Noting that all things are not equal: Bitcoin's scripting language and smart contract support are much more limited, so Bitcoin will always sync faster than NEO if all other variables are equal. The gap should, however, be reasonable, and what counts as reasonable is unfortunately subjective.

With all this in mind, syncing NEO to block 1.5 million should take a day at most: NEO's average tx size was almost a third of Bitcoin's, while Bitcoin had roughly 15 times more transactions, which should more than offset the fact that NEO has more expressive smart contract languages.
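The arithmetic behind that estimate, using the rough 2017 figures quoted above (tx counts and average sizes are approximations from the linked charts):

```go
package main

import "fmt"

// chainBytes estimates total transaction data from a tx count and an
// average tx size.
func chainBytes(txCount, avgTxSize float64) float64 {
	return txCount * avgTxSize
}

func main() {
	neo := chainBytes(15_000_000, 225)  // ~3.4 GB of transaction data
	btc := chainBytes(250_000_000, 600) // ~150 GB of transaction data

	fmt.Printf("NEO: ~%.1f GB, BTC: ~%.1f GB, ratio: ~%.0fx\n",
		neo/1e9, btc/1e9, btc/neo)
	// With ~44x less raw data than a chain that synced in ~36 hours on
	// 0.13.2, a one-day NEO sync to block 1.5M looks like a loose upper
	// bound, even allowing for more expensive script execution.
}
```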

From syncing I have done in the past, it takes at least a couple of days to reach this height, and NEO's sync increasingly slows down over time. Note that the comparison is being made against Bitcoin 0.13.2, when assumed-valid-block was off.

In order to match or possibly beat the Bitcoin 0.14.0 benchmark, assumed-valid-block would be a beneficial feature to have. Assumed-valid-block is not the same as a checkpoint.

Article on assumed-valid-block: https://bitcoincore.org/en/2017/03/08/release-0.14.0/#assumed-valid-blocks

PR: bitcoin/bitcoin#9779

In addition to this:

  • The modified IBD would need to be altered to monitor block times rather than peer stalling times. One reason Bitcoin can have a better syncing process is that the software better detects when a peer dies or is stalling. This allows the peer-reliant syncing algorithm to be more efficient.

Notes

  • Assumed-valid-block will have to be set around block 1.8 million.

Critique of this case study is welcome.

BinReader and Writer Optimisation

The BinReader's Read method looks like this:

func (r *BinReader) Read(v interface{}) {
	if r.Err != nil {
		return
	}
	// Record the error so subsequent calls become no-ops.
	r.Err = binary.Read(r.R, binary.LittleEndian, v)
}

If an error occurs, it will not read any more values; it will just keep returning early.

Therefore, we could change the signature to take variadic values, e.g. br.Read(val1, val2, val3, val4).

And call it like so:

// Proposed signature:
func (r *BinReader) Read(v ...interface{}) error

// Usage:
br := binreader.New(r)
br.Read(val1, val2, val3, val4)

This optimisation was suggested by anth; for reference:

   BinReader will keep reading all values, even if an error occurred before.
   You should wrap it all in one function:
   binreader.Read(v ...interface{}) error
   br := binreader.New(r)
   br.Read(val1, val2, val3, val4)

BlockHeader - Dependency

The BlockHeader struct is used both in the decoding and encoding of blocks, which happens in the core; it is also used in the wire.

We will define it inside the wire, so that the core depends on the wire.

The alternative would have been to define it inside the core package; however, we want the wire to be standalone.

Incompatibility With Docs

For the InvPayload:

The docs state that we have:

   type inv_vect struct {
      Type 
      Hash
   }

Then an InvPayload is []inv_vect.

However, in all implementations we have:

InvPayload as a Type plus a Hashes[] slice.

So one type with an array of hashes.
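The two shapes side by side, as an illustrative sketch (real code would use Uint256 hashes, not strings):

```go
package main

import "fmt"

// As the docs describe it: a list of (type, hash) vectors.
type InvVect struct {
	Type uint8
	Hash string
}
type DocsInvPayload []InvVect

// As the implementations actually do it: one type, many hashes.
type InvPayload struct {
	Type   uint8
	Hashes []string
}

func main() {
	doc := DocsInvPayload{{Type: 1, Hash: "h1"}, {Type: 1, Hash: "h2"}}
	impl := InvPayload{Type: 1, Hashes: []string{"h1", "h2"}}
	fmt.Println(len(doc), len(impl.Hashes))
}
```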

Radix Tree Implementation

This issue will be left open to discuss where to put the radix tree. It is not currently supported in NEO; however, it can be added and hidden from other nodes.

The only actors who need to know about it are light clients. If implemented, we will also be able to roll back state a lot more easily.

I will keep this open for discussion on where to put it; ideally it should be calculated at the same time as the Merkle tree. However, there was a recent issue raised in the neo-project on decoupling the state changes from the VM and implementing state_height, so we may have to make it standalone as well.

Connection Manager - Actor

The actor model used removes the need for any mutexes; however, I would like to keep an eye on this, as I think we may need to add some due to the way that maps are handled in Go (they are not safe for concurrent use).
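A minimal sketch of why a mutex may still be needed: any map touched from outside the actor's single goroutine needs a lock. Names here are assumptions for illustration.

```go
package main

import (
	"fmt"
	"sync"
)

// connRegistry guards a peer map with an RWMutex, since Go maps are
// not safe for concurrent use.
type connRegistry struct {
	mu    sync.RWMutex
	peers map[string]bool
}

func (c *connRegistry) add(addr string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.peers[addr] = true
}

func (c *connRegistry) count() int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return len(c.peers)
}

func main() {
	reg := &connRegistry{peers: make(map[string]bool)}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			reg.add(fmt.Sprintf("peer-%d", n)) // concurrent writes are safe
		}(i)
	}
	wg.Wait()
	fmt.Println(reg.count())
}
```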

New Testing Convention

Possible new convention for organising tests

For testing external methods :

package name : package_test
file name: package_external_test.go

For testing internal methods:

package name : package
file name: package_internal_test.go

Name Change into New Repo

This implementation differs from the original neo-go implementation by a large measure. It is not a refactor; it is more of a rewrite, similar to how neo-sharp is a rewrite of the core code. For this reason, I think that instead of overwriting the original neo-go, a new repo will be made, renaming this node to NEO-DIG.

[V2] - Sync Manager

The first thing we will work on is the sync manager.

The sync manager will be broken down further into separate modules:

  • DownloadManager: Will be responsible for downloading data from peers
  • UploadManager: Will be responsible for sending data to peers
  • ConnectionManager: Will be responsible for maintaining a pool of connections to peers; the handshake protocol will live in this package, along with a timeout for slow peers.

V3: Ideally we would like a timeout that adjusts itself for each node, as we do not want to disconnect nodes that are naturally just slow; we would then route more requests to the faster nodes. This is more advanced, and a static timeout is fine for now.

How do these three packages communicate?

The ConnectionManager first establishes a connection with the peers and performs the handshake. It then exposes an interface like GetPeer() Peer, which sends back a peer for the DownloadManager to use. If the peer is not working or is slow (the timeout is defined in the DM), the DownloadManager will ask for another peer. The ConnectionManager is in charge of checking whether peers are invalid or slow; when a peer is returned to the pool, it is checked to see if it is still good.
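A sketch of that handoff, with assumed names and a toy "bad peer" flag standing in for the real liveness checks:

```go
package main

import (
	"errors"
	"fmt"
)

// Peer is a minimal stand-in for a handshaked connection.
type Peer struct {
	Addr string
	Bad  bool // stands in for "invalid or slow" in this sketch
}

// ConnectionManager hands out usable peers from its pool.
type ConnectionManager struct {
	pool []Peer
}

// GetPeer returns the next usable peer; the DownloadManager retries
// on error or when a peer times out.
func (cm *ConnectionManager) GetPeer() (Peer, error) {
	for len(cm.pool) > 0 {
		p := cm.pool[0]
		cm.pool = cm.pool[1:]
		if !p.Bad { // returned peers are re-checked before reuse
			return p, nil
		}
	}
	return Peer{}, errors.New("no usable peers")
}

func main() {
	cm := &ConnectionManager{pool: []Peer{{Addr: "a", Bad: true}, {Addr: "b"}}}
	p, err := cm.GetPeer()
	fmt.Println(p.Addr, err)
}
```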

getData and inv payload duplicate

GetData and InvPayload are essentially the same except for the command type.

We could refactor, but at the expense of readability.

For now, I have left it as is.
