Comments (18)

rverdier commented on July 21, 2024

I'm not actively working on the project these days, but that doesn't mean it is "stopped".

rverdier commented on July 21, 2024

Hi,

Yes, sorry about that. Zerio currently depends on the not-yet-released UnmanagedDisruptor<T> and UnmanagedRingBuffer<T> types. My colleague (and accomplice) @ocoanet will release a new version of the disruptor soon, but until then you need to clone the disruptor-net repo locally and reference the project manually to make Zerio compile.

I will reference the NuGet package as soon as possible.

I'm currently working on updating the readme to document the main design changes.

rverdier commented on July 21, 2024

The purpose of busy-waiting strategies is to achieve better latencies. But if you do not have strong requirements on that side, you can very easily get Zerio's CPU usage down to almost zero when idle, for example by using blocking strategies and a SpinWait in the reception loop.

Note that I pushed a new HybridWaitStrategy that only busy-spins for the first, more critical, RequestProcessor event handler.

> Is there anything that can be done without too many headaches to adapt to Zerio (using a simple TCP network layer)?

It is very easy to write a simple TCP client for Zerio (you could take the TcpFeedClient as an example); you don't have to use RIO on the client side. The only "protocol" constraint is that the ZerioServer will echo back the first message it receives, as a handshake.
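
For illustration, here is a minimal sketch of such a plain TCP client performing the handshake. The length-prefixed framing, port, and message encoding below are assumptions; check MessageFramer in the Zerio sources for the actual wire format.

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

// Connect, send a first message, and wait for the server to echo it back.
var endpoint = new IPEndPoint(IPAddress.Loopback, 48654); // hypothetical port
using (var client = new TcpClient())
{
    client.Connect(endpoint);
    var stream = client.GetStream();

    // Handshake message, framed as [4-byte little-endian length][payload] (assumed framing)
    var payload = Encoding.ASCII.GetBytes("hello");
    stream.Write(BitConverter.GetBytes(payload.Length), 0, sizeof(int));
    stream.Write(payload, 0, payload.Length);

    // The server echoes the first message back as a handshake; read it fully
    var header = ReadExactly(stream, sizeof(int));
    var echo = ReadExactly(stream, BitConverter.ToInt32(header, 0));
    Console.WriteLine($"Handshake echo: {Encoding.ASCII.GetString(echo)}");
}

static byte[] ReadExactly(Stream stream, int count)
{
    var buffer = new byte[count];
    var offset = 0;
    while (offset < count)
    {
        var read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed during handshake");
        offset += read;
    }
    return buffer;
}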

rverdier commented on July 21, 2024

No problem! And yes, there is currently an issue with the queue sizing. I'm working on it and will try to push a fix soon.

rverdier commented on July 21, 2024

@jclitwin It should be a bit better now. Don't forget to change ZerioConfiguration.SessionCount if you want more than 2 client sessions. Right now session contexts are preallocated, so you have to set the maximum number of sessions a ZerioServer can handle concurrently.
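
For example (the CreateDefault factory and the ZerioServer constructor shape below are assumptions; adjust them to the actual API):

var configuration = ZerioConfiguration.CreateDefault(); // hypothetical factory method
configuration.SessionCount = 8;                         // preallocate contexts for up to 8 concurrent sessions
var server = new ZerioServer(48654, configuration);     // hypothetical constructor taking a port and the configuration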

rverdier commented on July 21, 2024

Yes, I need to make more values configurable. The TCP-based implementations will be used for temporary benchmarks, but I will get rid of them at some point.

I have no roadmap yet, but I plan to work full time on the project next week. I guess I'll see things more clearly after that, and maybe I will be able to update the documentation and create a bunch of issues directly on GitHub.

rverdier commented on July 21, 2024

I just pushed new configuration options to make the disruptor wait strategy configurable, as well as the polling wait strategies used for both sends and receives.

If you want minimal CPU usage when idle, you can try these settings for example:

config.RequestEngineWaitStrategyType = RequestEngineWaitStrategyType.BlockingWaitStrategy;
config.ReceiveCompletionPollingWaitStrategyType = CompletionPollingWaitStrategyType.SpinWaitWaitStrategy;
config.SendCompletionPollingWaitStrategyType = CompletionPollingWaitStrategyType.SpinWaitWaitStrategy;

> There is a problem: when you start the server and client and then close the client, you can't connect a new client.

Yes, I'm aware of it. Both ZerioServer and ZerioClient are pretty broken regarding starts and stops for now; I'll be addressing these issues this week, I think. I'm afraid you'll have to dispose and re-instantiate the client each time you want to reconnect in the meantime.
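
As a rough sketch of that workaround (the ZerioClient constructor and Start signature below are assumptions):

// On disconnection, throw the old instance away instead of reusing it
oldClient.Dispose();
var client = new ZerioClient(serverEndpoint); // hypothetical constructor taking the server endpoint
client.Start("client-1");                     // hypothetical peer-id argument; assumed to block until connected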

juliolitwin commented on July 21, 2024

Hi @rverdier ,

Given the current state of this experimental project, is high CPU consumption expected? When I start the server and the client, my CPU reaches 90% usage (i7).

Thanks.

rverdier commented on July 21, 2024

Hello,

Currently, Zerio uses 3 threads per peer:

  • One for the RequestProcessor (first event handler of the disruptor) responsible for dispatching I/O requests to the RIO request queue
  • One for the SendCompletionProcessor (second and last event handler of the disruptor) responsible for polling send request completions
  • One for the ReceiveCompletionProcessor, acting as the reception loop (polling receive request completions and resubmitting receive requests after incoming message handling)

With the current implementation, the CPU usage is expected to be quite high (especially if you run both the client and the server on your local machine), because I use a very aggressive WaitStrategy in the disruptor: the BusySpinWaitStrategy. You can play with other wait strategies by modifying the RequestProcessingEngine.CreateDisruptor method.
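
For example, swapping the wait strategy in RequestProcessingEngine.CreateDisruptor could look roughly like this (the event type and the other constructor arguments are illustrative; keep whatever the method currently passes):

var disruptor = new Disruptor<RequestEntry>(   // RequestEntry is assumed; use the engine's actual event type
    () => new RequestEntry(),
    ringBufferSize,
    TaskScheduler.Default,
    ProducerType.Single,                       // keep whatever the current code uses here
    new BlockingWaitStrategy());               // instead of new BusySpinWaitStrategy()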

Also, you can reduce the aggressiveness of the reception loop by using a SpinWait in ReceiveCompletionProcessor.ProcessCompletion:

var spinWait = new SpinWait();
while (_isRunning)
{
    // Poll the RIO completion queue for completed receive requests
    var resultCount = completionQueue.TryGetCompletionResults(results, maxCompletionResults);
    if (resultCount == 0)
    {
        // Nothing completed yet: back off progressively instead of busy spinning
        spinWait.SpinOnce();
        continue;
    }

    for (var i = 0; i < resultCount; i++)
    {
        var result = results[i];
        var sessionId = (int)result.ConnectionCorrelation;
        var bufferSegmentId = (int)result.RequestCorrelation;

        OnRequestCompletion(sessionId, bufferSegmentId, (int)result.BytesTransferred);
    }
    spinWait.Reset();
}

juliolitwin commented on July 21, 2024

Thanks again for answering @rverdier .

I need to experiment with the Disruptor settings, because running alongside other servers on the same machine with such high CPU usage is a bit complicated. But Zerio is amazing!

I want to use Zerio without depending on Zerio's own client, because I would like to use it with Unity, and pulling in the Disruptor and other reflection-heavy dependencies would only bring headaches under IL2CPP. The high CPU usage is not a serious problem for the server, but the client is another story. Is there anything that can be done without too many headaches to adapt to Zerio (using a simple TCP network layer)?

Happy New Year!
Regards.

juliolitwin commented on July 21, 2024

Thanks, you are very receptive. xD

The problem I am seeing now is that I cannot connect 2 sessions at the same time; even after changing SessionCount, Winsock returns error 10055 (WSAENOBUFS).

juliolitwin commented on July 21, 2024

@rverdier Wow! It is working perfectly!

There are some buffers that are initialized but whose sizes are not defined by ZerioConfiguration itself; is that intentional? Examples: MessageFramer, TcpFrameReceiver, TcpFrameSender.

Do you have any kind of roadmap, or a list of known bugs and planned improvements? Zerio already looks quite stable.

I am willing to risk adapting Zerio into my project behind #if directives and see how the performance differs.
Thanks again for this great work!

juliolitwin commented on July 21, 2024

Would it be a good idea to expose the disruptor configuration on ZerioServer, for example through the constructor, in order to choose the WaitStrategy type? Some kinds of servers (a login server, for example) do not need such aggressive waiting, and this would avoid having to change the core directly.
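
Something along these lines, purely as a hypothetical API sketch (this overload does not exist today):

var server = new ZerioServer(48654, new BlockingWaitStrategy()); // hypothetical overload; less aggressive than the default BusySpinWaitStrategy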

@rverdier There is a problem: when you start the server and client and then close the client, you can't connect a new client.

juliolitwin commented on July 21, 2024

@rverdier

Yo,

Has the project been stopped?
Cheers.

juliolitwin commented on July 21, 2024

Thanks for answering. I see enormous potential in Zerio, which is why I was concerned that the project had been stopped.

sgf commented on July 21, 2024

> The purpose of busy-waiting strategies is to achieve better latencies.

Because of latency, I chose the UDP protocol.
The latency problem is really a TCP protocol problem.
Compared with the latency introduced by the network protocol, the latency introduced by program logic is negligible.

sgf commented on July 21, 2024

The UDP protocol combined with an aggressive ARQ algorithm can achieve very good results (lower latency). KCP is one such protocol: https://github.com/skywind3000/kcp.

sgf commented on July 21, 2024

For me, the most important goal is using RIO to provide higher throughput while also reducing CPU utilization.
If you are willing to spend money, server resources are abundant, but all of that comes at a cost.
