Comments (7)

myzhan commented on August 23, 2024

If the service being tested and the network are fast enough, it is very easy to hit 100 RPS, even 1000+ RPS. You don't need premature optimization.

But indeed, things like garbage collection and the goroutine scheduler do affect response times, adding several milliseconds and making the measurement inaccurate. Usually, the service itself should record how long it takes to handle each request in its access log.

If you are using Go's built-in http client with connection pooling, make sure the pool size is greater than the number of users, to avoid internal locking.
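
For instance, a minimal sketch of sizing the standard library client's pool above the concurrency on one slave; the limit of 256, the timeouts, and the URL are illustrative assumptions, not values from this thread:

    package main

    import (
        "net/http"
        "time"
    )

    // newLoadClient builds a single shared client whose idle-connection limits
    // sit above the assumed concurrency, so goroutines rarely wait for a free
    // connection.
    func newLoadClient() *http.Client {
        transport := &http.Transport{
            MaxIdleConns:        256, // idle connections kept across all hosts
            MaxIdleConnsPerHost: 256, // keep-alive connections to the single target host
            IdleConnTimeout:     90 * time.Second,
        }
        return &http.Client{Transport: transport, Timeout: 10 * time.Second}
    }

    func main() {
        client := newLoadClient()
        resp, err := client.Get("http://target.example.com/ping") // placeholder URL
        if err == nil {
            resp.Body.Close()
        }
    }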

If you want to do profiling on the client side, have a look at profiling.
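
One generic way to profile the load generator itself is the standard net/http/pprof endpoint; this is only a sketch, not necessarily the mechanism the boomer profiling docs describe:

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/* handlers
    )

    func main() {
        // Expose the profiling endpoint on a side port of the slave process.
        go func() {
            log.Println(http.ListenAndServe("localhost:6060", nil))
        }()
        // ... start the boomer tasks here ...
        select {}
    }

A longer CPU profile can then be captured with: go tool pprof -seconds 60 http://localhost:6060/debug/pprof/profile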

mangatmodi commented on August 23, 2024

Oops, my bad, sorry. I meant 100k RPS.

mangatmodi commented on August 23, 2024

make sure the pool size is greater than the number of users, to avoid internal locking.

Greater than the number of users per slave, or overall?

Usually, the service itself should record how long it takes to handle each request in its access log.

I was simply using the default client as a proof of concept. The service reported 2 ms (p99) to handle a request, while the loader reported 400 ms. I believe this is due to blocking on the client side, and hence I am looking for the best possible way to generate load on the slave.

myzhan commented on August 23, 2024

The number of users you set in the web UI is divided by the number of slaves. If you run ten users with two slaves, each slave will spawn five goroutines to run your task function in a loop.

I believe it is due to blocking at the client

You can do CPU profiling over a longer duration to confirm that. If you hit lock contention inside the http client, you can use a pool of clients instead of a connection pool inside a single client.
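
A minimal sketch of that idea, assuming one worker goroutine per user and giving each worker its own client; the worker count, pool size, and URL are placeholders:

    package main

    import (
        "net/http"
        "time"
    )

    // newClientPool builds one dedicated http.Client per worker goroutine, so
    // workers never contend on a shared Transport's internal locks.
    func newClientPool(workers int) []*http.Client {
        clients := make([]*http.Client, workers)
        for i := range clients {
            clients[i] = &http.Client{
                Transport: &http.Transport{MaxIdleConnsPerHost: 8},
                Timeout:   10 * time.Second,
            }
        }
        return clients
    }

    func main() {
        for _, c := range newClientPool(100) {
            go func(c *http.Client) {
                for {
                    resp, err := c.Get("http://target.example.com/ping") // placeholder URL
                    if err == nil {
                        resp.Body.Close()
                    }
                }
            }(c)
        }
        select {}
    }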

Is there any queueing happening on your service side? Some web frameworks put client requests in a queue and handle them in another thread pool; the queueing time is not included in the response time in the access log.

And it is not easy to hit 100K RPS without OS-level tuning (CPU, memory, TCP backlog, etc.).

mangatmodi commented on August 23, 2024

@myzhan Thanks for being quick and active. So basically I optimized with a bigger connection pool (1000) and switched to fasthttp. My test data is all in memory and I pick a random data point for each request. I got around 10,000 RPS on the slave node.
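
Roughly what that setup might look like, assuming fasthttp with MaxConnsPerHost set to 1000 and an in-memory slice of test payloads; the URL and payloads are placeholders, not the real test data:

    package main

    import (
        "math/rand"
        "time"

        "github.com/valyala/fasthttp"
    )

    var client = &fasthttp.Client{
        MaxConnsPerHost: 1000, // pool sized so workers rarely block waiting for a connection
        ReadTimeout:     5 * time.Second,
        WriteTimeout:    5 * time.Second,
    }

    var testData = []string{"payload-1", "payload-2", "payload-3"} // loaded into memory up front

    func fire(rng *rand.Rand) error {
        req := fasthttp.AcquireRequest()
        resp := fasthttp.AcquireResponse()
        defer fasthttp.ReleaseRequest(req)
        defer fasthttp.ReleaseResponse(resp)

        req.Header.SetMethod("POST")
        req.SetRequestURI("http://target.example.com/ingest") // placeholder URL
        req.SetBodyString(testData[rng.Intn(len(testData))])  // random in-memory data point

        return client.Do(req, resp)
    }

    func main() {
        rng := rand.New(rand.NewSource(time.Now().UnixNano()))
        if err := fire(rng); err != nil {
            panic(err)
        }
    }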

Without OS-level tuning, like CPU, memory, TCP backlog, etc.

I am able to generate 10K+ RPS on a single slave. Do you think that with 10+ more slaves I would be able to produce 100K+ RPS? Basically I am running on a big Kubernetes cluster.

Is there any queueing happening on your service side?

My server nodes are written in Kotlin + Vert.x. There is always some queuing and blocking. The time is measured manually, and I am sure it is not just the time after the queue in the web framework, since we measure from the load balancer until we write to Kafka.

myzhan commented on August 23, 2024

Do you think that with 10+ more slaves I would be able to produce 100K+ RPS?

Yes, if you have enough machines. BTW, try to avoid locking in math/rand.
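
The package-level helpers in math/rand (rand.Intn and friends) share a single mutex-protected source, so under heavy concurrency that mutex becomes a hot spot. A minimal sketch of one way around it, giving each worker goroutine its own source; the worker count and data set are placeholders:

    package main

    import (
        "math/rand"
        "time"
    )

    var testData = []string{"a", "b", "c"} // placeholder in-memory data set

    func worker(id int) {
        // A private *rand.Rand is not safe for concurrent use, which is exactly
        // why each goroutine owns one and never shares it.
        rng := rand.New(rand.NewSource(time.Now().UnixNano() + int64(id)))
        for {
            _ = testData[rng.Intn(len(testData))] // no global lock taken here
            // ... build and send the request ...
        }
    }

    func main() {
        for i := 0; i < 4; i++ {
            go worker(i)
        }
        select {} // in a real slave, boomer.Run(...) would keep the process alive
    }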

mangatmodi commented on August 23, 2024

Thanks for all the help, myzhan. I will close the ticket with the following conclusions:

  1. Use a fast HTTP library with enough pooled connections to avoid blocking (see the sketch after this list).
  2. Profile the client to understand where the blocking is.
  3. Verify that the time is measured correctly at the server.
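
A hedged sketch of how these points could fit together in a boomer task — a fasthttp client with a large pool, with the response time measured on the client so it can be compared against the server's access log; the task name, URL, and pool size are illustrative assumptions:

    package main

    import (
        "time"

        "github.com/myzhan/boomer"
        "github.com/valyala/fasthttp"
    )

    var client = &fasthttp.Client{MaxConnsPerHost: 1000}

    func fire() {
        req := fasthttp.AcquireRequest()
        resp := fasthttp.AcquireResponse()
        defer fasthttp.ReleaseRequest(req)
        defer fasthttp.ReleaseResponse(resp)

        req.SetRequestURI("http://target.example.com/ingest") // placeholder URL

        start := time.Now()
        err := client.Do(req, resp)
        elapsed := time.Since(start).Milliseconds()

        if err != nil {
            boomer.RecordFailure("http", "ingest", elapsed, err.Error())
            return
        }
        boomer.RecordSuccess("http", "ingest", elapsed, int64(len(resp.Body())))
    }

    func main() {
        boomer.Run(&boomer.Task{Name: "ingest", Weight: 1, Fn: fire})
    }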

I will ask more questions if required. Thanks again!
