tylertreat / bench
A generic latency benchmarking library.
License: Apache License 2.0
Do you have plans to also implement a requester for MQTT? I have no experience with Go, but I have tested some MQTT. Here is an mqtt_requester, though I'm not sure what I'm doing :)
package requester

import (
	"fmt"

	"github.com/satori/go.uuid"
	"github.com/tylertreat/bench"

	MQTT "git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
)

// MQTTRequesterFactory implements RequesterFactory by creating a Requester
// which publishes messages to MQTT topics and waits to consume them.
type MQTTRequesterFactory struct {
	URL    string
	TOPICs []string
}

// GetRequester returns a new Requester, called for each Benchmark connection.
func (r *MQTTRequesterFactory) GetRequester(num uint64) bench.Requester {
	return &mqttRequester{
		url:    r.URL,
		topics: r.TOPICs,
	}
}

// mqttRequester implements Requester by publishing a message to MQTT
// topics and subscribing so it can consume it.
type mqttRequester struct {
	url    string
	topics []string
	client *MQTT.Client
}

// f prints incoming messages; useful for debugging via
// opts.SetDefaultPublishHandler(f) in Setup below.
var f MQTT.MessageHandler = func(client *MQTT.Client, msg MQTT.Message) {
	fmt.Printf("TOPIC: %s\n", msg.Topic())
	fmt.Printf("MSG: %s\n", msg.Payload())
}

// Setup prepares the Requester for benchmarking.
func (r *mqttRequester) Setup() error {
	u1 := uuid.NewV4()
	opts := MQTT.NewClientOptions().AddBroker(r.url).SetClientID(u1.String())
	//opts.SetDefaultPublishHandler(f)
	c := MQTT.NewClient(opts)
	r.client = c
	if token := c.Connect(); token.Wait() && token.Error() != nil {
		return token.Error()
	}
	for _, topic := range r.topics {
		if token := c.Subscribe(topic, 0, nil); token.Wait() && token.Error() != nil {
			return token.Error()
		}
	}
	return nil
}

// Request performs a synchronous request to the system under test.
func (r *mqttRequester) Request() error {
	text := fmt.Sprintf("this is msg #%d!", 1)
	for _, topic := range r.topics {
		// Wait on the delivery token so publish errors are surfaced.
		if token := r.client.Publish(topic, 0, false, text); token.Wait() && token.Error() != nil {
			return token.Error()
		}
	}
	return nil
}

// Teardown is called upon benchmark completion.
func (r *mqttRequester) Teardown() error {
	r.client.Disconnect(250)
	return nil
}
And here is main.go:
package main

import (
	"fmt"
	"time"

	"github.com/tylertreat/bench"

	"./requester"
)

func main() {
	// r := &requester.WebRequesterFactory{
	// 	URL: "http://localhost:8080/",
	// }
	r := &requester.MQTTRequesterFactory{
		URL:    "tcp://127.0.0.1:1883",
		TOPICs: []string{"topic1", "topic2"},
	}

	benchmark := bench.NewBenchmark(r, 10000, 1, 30*time.Second)
	summary, err := benchmark.Run()
	if err != nil {
		panic(err)
	}
	fmt.Println(summary)
	summary.GenerateLatencyDistribution(bench.Logarithmic, "mqtt.txt")
}
Currently, if a Requester returns an error, bench bails on the run. It could be useful (and interesting) to continue and capture some statistics around error rates, and the error-case latency distribution as well. What do you think?
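In the meantime, error rates could be prototyped outside the library with a wrapping Requester. None of this is bench's actual API; the interface below just mirrors the Setup/Request/Teardown shape, and the counting wrapper is a hypothetical sketch of how a run could continue past errors while tallying them:

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// requester mirrors the three-method shape of bench.Requester.
type requester interface {
	Setup() error
	Request() error
	Teardown() error
}

// errorCounting wraps a requester, swallowing Request errors so the
// benchmark keeps running, while counting successes and failures for
// an error-rate summary afterwards.
type errorCounting struct {
	inner               requester
	successes, failures uint64
}

func (e *errorCounting) Setup() error    { return e.inner.Setup() }
func (e *errorCounting) Teardown() error { return e.inner.Teardown() }

func (e *errorCounting) Request() error {
	if err := e.inner.Request(); err != nil {
		atomic.AddUint64(&e.failures, 1)
		return nil // report success so the run continues
	}
	atomic.AddUint64(&e.successes, 1)
	return nil
}

// ErrorRate is meant to be read after the run completes.
func (e *errorCounting) ErrorRate() float64 {
	total := e.successes + e.failures
	if total == 0 {
		return 0
	}
	return float64(e.failures) / float64(total)
}

// flaky fails every third request, standing in for a real system.
type flaky struct{ n int }

func (f *flaky) Setup() error    { return nil }
func (f *flaky) Teardown() error { return nil }
func (f *flaky) Request() error {
	f.n++
	if f.n%3 == 0 {
		return errors.New("boom")
	}
	return nil
}

func main() {
	ec := &errorCounting{inner: &flaky{}}
	for i := 0; i < 9; i++ {
		ec.Request()
	}
	fmt.Printf("error rate: %.2f\n", ec.ErrorRate())
}
```

The obvious limitation is that swallowed errors still get counted into the success-path latency distribution, which is exactly why first-class support in bench would be more interesting.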
The amqp example seems to use the same channel for publishing and consuming, which is an anti-pattern (see streadway/amqp#327 and streadway/amqp#270). Is this by design?
It's not super-clear to me exactly what kind of round-trip behaviour you're trying to model, but I suspect the kafka requester isn't doing exactly what you think it's doing (or what you want it to do), for a few reasons. You could use SyncProducer instead and drop the consumer entirely. If Request is called concurrently, then you're probably OK, but I'm not sure.
Would you consider doing an Amazon Kinesis requester? It's "very similar to Kafka".
Forgive me for using an issue for something more conversational at this point - if there's a better / separate channel we can discuss in let me know.
I'm working on benchmarking a system that has (expectedly) fairly high latency characteristics (think on the order of 100ms), but should easily handle a large number of concurrent requests and overall throughput (1000+ TPS). In order to achieve those kinds of rates with those latencies, it'd require at least 100 concurrent clients. We'd like to test at larger scales as well.
Per bench.go#L229, each client uses a busy-spin loop to perform rate limiting. Can you discuss a bit more of the rationale for that? I'm assuming it's to minimize potential scheduler delays that could be incurred by using a time.Ticker or similar mechanism to sleep/wake the goroutine - but as soon as we have more client goroutines than CPU cores / GOMAXPROCS, I'd think contention from this busy-spin loop would make matters worse.
Did you have actual issues that necessitated that busy-spin? Have you thought through other potential designs? I'm going to take a stab at an alternative approach (while still correcting for the coordinated omission problem), but I want to be sure I'm not overlooking something subtle that would invalidate the results, so I'd appreciate any insights you may have offhand.