
bench's People

Contributors

manuelcoenenvwd, newro, ngconsulti, pims, tylertreat, wilriker


bench's Issues

MQTT

Do you have plans to also implement a requester for MQTT? I have no experience with Go, but I have tested some MQTT.
Here is an mqtt_requester, though I'm not sure what I'm doing :)

package requester

import (
    "fmt"

    MQTT "git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
    "github.com/satori/go.uuid"
    "github.com/tylertreat/bench"
)

// MQTTRequesterFactory implements RequesterFactory by creating a Requester
// which publishes messages to MQTT topics and subscribes to consume them.
type MQTTRequesterFactory struct {
    URL    string
    Topics []string
}

// GetRequester returns a new Requester, called for each Benchmark connection.
func (r *MQTTRequesterFactory) GetRequester(num uint64) bench.Requester {
    return &mqttRequester{
        url:    r.URL,
        topics: r.Topics,
    }
}

// mqttRequester implements Requester by publishing a message to MQTT topics
// and waiting for the publish tokens to complete.
type mqttRequester struct {
    url    string
    topics []string
    client *MQTT.Client
}

// f prints incoming messages; it can be installed for debugging via
// opts.SetDefaultPublishHandler(f) in Setup.
var f MQTT.MessageHandler = func(client *MQTT.Client, msg MQTT.Message) {
    fmt.Printf("TOPIC: %s\n", msg.Topic())
    fmt.Printf("MSG: %s\n", msg.Payload())
}

// Setup prepares the Requester for benchmarking by connecting to the broker
// and subscribing to the configured topics.
func (r *mqttRequester) Setup() error {
    u1 := uuid.NewV4()

    opts := MQTT.NewClientOptions().AddBroker(r.url).SetClientID(u1.String())
    //opts.SetDefaultPublishHandler(f)

    c := MQTT.NewClient(opts)
    r.client = c
    if token := c.Connect(); token.Wait() && token.Error() != nil {
        return token.Error()
    }

    for _, topic := range r.topics {
        if token := c.Subscribe(topic, 0, nil); token.Wait() && token.Error() != nil {
            return token.Error()
        }
    }

    return nil
}

// Request performs a synchronous request to the system under test by
// publishing one message per topic and waiting for each publish to complete.
func (r *mqttRequester) Request() error {
    text := fmt.Sprintf("this is msg #%d!", 1)

    for _, topic := range r.topics {
        if token := r.client.Publish(topic, 0, false, text); token.Wait() && token.Error() != nil {
            return token.Error()
        }
    }

    return nil
}

// Teardown is called upon benchmark completion.
func (r *mqttRequester) Teardown() error {
    r.client.Disconnect(250)
    return nil
}

and main.go

package main

import (
    "fmt"
    "time"

    "github.com/tylertreat/bench"

    "./requester"
    // "github.com/tylertreat/bench/requester"
)

func main() {
    /*
        r := &requester.WebRequesterFactory{
            URL: "http://localhost:8080/",
        }
    */

    r := &requester.MQTTRequesterFactory{
        URL:    "tcp://127.0.0.1:1883",
        Topics: []string{"topic1", "topic2"},
    }

    benchmark := bench.NewBenchmark(r, 10000, 1, 30*time.Second)
    summary, err := benchmark.Run()
    if err != nil {
        panic(err)
    }

    fmt.Println(summary)
    summary.GenerateLatencyDistribution(bench.Logarithmic, "mqtt.txt")
}

How about tracking error rate?

Currently, if a Requester returns an error, bench bails on the run. It could be useful (and interesting) to continue and capture some statistics around error rates, and error-case latency distribution as well. What do you think?
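For what it's worth, here is a minimal sketch of how this could work today without changing bench itself: a wrapper Requester that swallows and counts errors. The countingRequester type and ErrorRate helper are hypothetical names, and this only captures an error rate, not the error-case latency distribution.

package requester

import "github.com/tylertreat/bench"

// countingRequester wraps another Requester and records failures instead of
// letting them abort the benchmark run.
type countingRequester struct {
    inner    bench.Requester
    requests uint64
    errors   uint64
}

func (c *countingRequester) Setup() error    { return c.inner.Setup() }
func (c *countingRequester) Teardown() error { return c.inner.Teardown() }

// Request swallows errors so bench keeps going; the counters are only touched
// from the single connection goroutine that owns this Requester.
func (c *countingRequester) Request() error {
    c.requests++
    if err := c.inner.Request(); err != nil {
        c.errors++
    }
    return nil
}

// ErrorRate reports the fraction of failed requests after a run.
func (c *countingRequester) ErrorRate() float64 {
    if c.requests == 0 {
        return 0
    }
    return float64(c.errors) / float64(c.requests)
}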

Kafka Requester Produce/Consume Concerns

It's not super-clear to me exactly what kind of round-trip behaviour you're trying to model, but I suspect the kafka requester isn't doing exactly what you think it's doing (or what you want it to do) for a few reasons:

  • Consuming the produced message is a whole different thing from waiting for the producer request to be ACKed at the protocol level - if the ACK is all you're after, use the SyncProducer instead and drop the consumer entirely (see the sketch after this list).
  • There's no guarantee that the consumer is returning the message you produced. I suppose if you lock down the cluster such that this is the only process talking to it, and you never call Request concurrently, then you're probably OK, but I'm not sure.
  • The consumer sends its consume requests to the server asynchronously where they are held until messages become available, so you're missing 1/2 of one RTT worth of network latency if you really did mean to measure two RTTs per request in the first place.
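On the first point, here is a minimal sketch of measuring only the produce ACK, assuming the Shopify/sarama client; the broker address and topic are placeholders.

package main

import (
    "log"

    "github.com/Shopify/sarama"
)

func main() {
    config := sarama.NewConfig()
    config.Producer.RequiredAcks = sarama.WaitForAll // wait for the full ISR to ack
    config.Producer.Return.Successes = true          // required by SyncProducer

    producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, config)
    if err != nil {
        log.Fatal(err)
    }
    defer producer.Close()

    // SendMessage blocks until the broker acknowledges the produce request,
    // so the measured round trip is exactly one produce RTT.
    _, _, err = producer.SendMessage(&sarama.ProducerMessage{
        Topic: "bench-topic",
        Value: sarama.StringEncoder("hello"),
    })
    if err != nil {
        log.Fatal(err)
    }
}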

Kinesis?

Would you consider doing an Amazon Kinesis requester? It's "very similar to Kafka".
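If it helps, here is a rough sketch of what the publish side of such a requester might look like, assuming the aws-sdk-go v1 Kinesis client; the kinesisRequester type, stream name, and region are all hypothetical.

package requester

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/kinesis"
    "github.com/tylertreat/bench"
)

// kinesisRequester implements bench.Requester by putting a record onto a
// Kinesis stream and waiting for the PutRecord call to return.
type kinesisRequester struct {
    stream string
    svc    *kinesis.Kinesis
}

// Setup creates the Kinesis client.
func (r *kinesisRequester) Setup() error {
    sess, err := session.NewSession(aws.NewConfig().WithRegion("us-east-1"))
    if err != nil {
        return err
    }
    r.svc = kinesis.New(sess)
    return nil
}

// Request puts a single record; PutRecord blocks until Kinesis acknowledges it.
func (r *kinesisRequester) Request() error {
    _, err := r.svc.PutRecord(&kinesis.PutRecordInput{
        Data:         []byte("benchmark payload"),
        PartitionKey: aws.String("bench"),
        StreamName:   aws.String(r.stream),
    })
    return err
}

// Teardown is a no-op; the AWS client needs no explicit close.
func (r *kinesisRequester) Teardown() error { return nil }

var _ bench.Requester = (*kinesisRequester)(nil) // compile-time interface check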

Performance concerns with many concurrent clients

Forgive me for using an issue for something more conversational at this point - if there's a better or separate channel we can discuss this in, let me know.

I'm working on benchmarking a system that has (expectedly) fairly high latency characteristics (think on the order of 100ms), but should easily handle a large number of concurrent requests and overall throughput (1000+ TPS). In order to achieve those kinds of rates with those latencies, it'd require at least 100 concurrent clients. We'd like to test at larger scales as well.

Per bench.go#L229, each client uses a busy-spin loop to perform rate limiting. Can you discuss a bit more of the rationale for that? I'm assuming it's to minimize potential scheduler delays that could be incurred by using a time.Ticker or similar mechanism to sleep/wake the goroutine - but as soon as we have more client goroutines than CPU cores / GOMAXPROCS, I'd think contention from this busy-spin loop would make matters worse.

Did you have actual issues that necessitated that busy-spin? Have you thought through other potential designs? I'm going to take a stab at an alternative approach (while still correcting for the coordinated omission problem), but I want to be sure I'm not overlooking something subtle that would invalidate the results, so I'd appreciate any insights you may have offhand.
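For reference, a minimal sketch of the ticker-free pacing loop I have in mind: it sleeps until each request's intended start time and measures latency from that intended time rather than from wake-up, so coordinated omission is still corrected. doRequest, interval, and duration are placeholders here, not bench's actual API.

package main

import "time"

// pacedLoop issues doRequest at a fixed interval for the given duration and
// returns latencies measured against each request's intended start time.
func pacedLoop(interval, duration time.Duration, doRequest func() error) []time.Duration {
    var latencies []time.Duration
    start := time.Now()
    next := start // intended start time of the next request
    for time.Since(start) < duration {
        if now := time.Now(); now.Before(next) {
            // Sleep instead of busy-spinning; any oversleep is absorbed
            // because latency is measured from next, not from wake-up.
            time.Sleep(next.Sub(now))
        }
        if err := doRequest(); err != nil {
            // Error handling omitted in this sketch.
        }
        // A slow request pushes completion past next, inflating the recorded
        // latency of the requests queued behind it, as intended.
        latencies = append(latencies, time.Since(next))
        next = next.Add(interval)
    }
    return latencies
}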
