bojand / ghz
Simple gRPC benchmarking and load testing tool
Home Page: https://ghz.sh
License: Apache License 2.0
When using this great tool to test my service, I got an rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: tls: first record does not look like a TLS handshake".
My service works well:
#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ecd78ab381d test:my-test "/bin/sh -c 'python β¦" 32 minutes ago Up 32 minutes 0.0.0.0:50051->50051/tcp sad_goodall
And the command I used:
./ghz -proto ../api/grpc/test_service.proto -call mytest.TEST.evaluate -c 5 -n 15 -D ./input.json -o ./test_result.html -O html -name emacs-load-testing localhost:50051
I tried to follow your example, but ran into the following issues.
[root@test /root/Workspace/go/src/github.com/ghz/testdata]
#netstat -lanp|grep httpd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 751/httpd
unix 3 [ ] STREAM CONNECTED 18157 751/httpd
[root@test /root/Workspace/go/src/github.com/ghz/testdata]
#ghz -proto ./greeter.proto -call helloworld.Greeter.SayHello -d '{"name":"Joe"}' 0.0.0.0:80
Summary:
Count: 200
Total: 13.97 ms
Slowest: 0.00 ms
Fastest: 0.00 ms
Average: -9223372036854.78 ms
Requests/sec: 14319.87
Response time histogram:
Latency distribution:
Status code distribution:
Error distribution:
[200] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: tls: oversized record received with length 20527"
[root@test /root/Workspace/go/src/github.com/ghz/testdata]
#ghz -insecure -proto ./greeter.proto -call helloworld.Greeter.SayHello -d '{"name":"Joe"}' 0.0.0.0:80
Summary:
Count: 200
Total: 18.78 ms
Slowest: 0.00 ms
Fastest: 0.00 ms
Average: -9223372036854.78 ms
Requests/sec: 10649.79
Response time histogram:
Latency distribution:
Status code distribution:
Error distribution:
[150] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: <nil>
[50] rpc error: code = Unavailable desc = transport is closing
How can I solve this issue? Thanks in advance!
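A "first record does not look like a TLS handshake" error typically means the client attempted a TLS handshake against a plaintext port, which matches the container above exposing 50051 without TLS. A sketch of the same command with the -insecure flag added (an assumption about the cause, not a confirmed diagnosis):

```sh
./ghz -insecure -proto ../api/grpc/test_service.proto \
  -call mytest.TEST.evaluate \
  -c 5 -n 15 -D ./input.json \
  -o ./test_result.html -O html \
  -name emacs-load-testing \
  localhost:50051
```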
It may be useful to be able to input data via stdin.
Some tools (e.g. vegeta) provide an option to name the run. This can be useful for organizing the results.
Add option:
-name string
Test name
And add it to the reporting.
I would like to increase the default size that can be received by the API that I am testing. Is there a way to provide this as a CLI option?
While the current flags are compact and succinct, it may be worthwhile to change (some) flags to a longer, more descriptive format to improve UX. Potentially we could keep the short format as an optional, quicker alternative.
For example some ideas for potential changes:
-c -> -concurrency
-n -> -requests? keep the same?
-q -> -qps (or -rate?)
-t -> -timeout
-z -> -duration
-x -> -max-duration
-d -> -data
-D -> -data-path
-b -> -binary-data
-B -> -binary-data-path
-m -> -metadata
-M -> -metadata-path
-si -> -stream-interval
-rmd -> -reflect-metadata
-o -> -out
-O -> -format
-i -> -import-paths
-T -> -dial-timeout
-L -> -keepalive
-v -> -version
-h -> -help
It seems most benchmarking tools opt for short options. But from research a handful offer descriptive flags as well. Some ideas or inspirations: autocannon and vegeta.
This may likely be a breaking change for the config file format.
To support both short and long format we may switch from standard flag module to something more robust, like kingpin perhaps.
@peter-edge Feel free to share any thoughts.
Heya. Having been unblocked by the fix to #55 (thanks again), I've tried to use the web frontend. I seem to be having issues with the binary -- could you have a look, please?
NA
web.toml is:
protoset="gateway.protoset"
cert="/hab/svc/gateway/config/service.crt"
key="/hab/svc/gateway/config/service.key"
cacert="/hab/svc/gateway/config/root_ca.crt"
cname="gateway"
call="gateway.api.users.UsersMgmt/GetUsers"
[m]
"api-token"="bASZ1UdqkTjEqK3V-h1npK5tyfs="
host="10.0.2.15:2001"
The CLI call is ./ghz-web -config web.toml
It starts.
# ./ghz-web -config web.toml
panic: Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work. This is a stub
goroutine 1 [running]:
main.main()
/Users/bdjurkovic/dev/golang/ghz/cmd/ghz-web/main.go:60 +0x4c5
#
Get 0.26.0, run it with the config above.
Hi there,
While trying to call my server using ghz, I see on the wire that the initial setup does not have any metadata, and eventually the TCP session gets reset. (It works fine from the ballerina client, but I would like to use the nice load testing and reporting facilities :)
Command in question (tried many -m variations without luck):
ghz -name "Testing" -c 1 -n 100 -insecure -proto ../server/target/grpc/HelloWorld.proto -d '{"req":"Sam"}' -m '{"IsServerStreaming":"true"}' -call service.HelloWorld/lotsOfReplies localhost:9095
.proto in question:
syntax = "proto3";
package service;
import "google/protobuf/wrappers.proto";
service HelloWorld {
rpc lotsOfReplies(google.protobuf.StringValue) returns (stream google.protobuf.StringValue);
}
Your thoughts on this would be highly appreciated!
David
The ghz output documentation provides samples of influxdb-details output. However this is not valid line protocol. If I try to copy-paste one of the influxdb-details examples into a POST, I get the following error:
Request:
POST /write?db=ghz HTTP/1.1
Host: localhost:8086
Content-Type: application/x-www-form-urlencoded
ghz_detail,proto="/testdata/greeter.proto",call="helloworld.Greeter.SayHello",host="0.0.0.0:50051",n=1000,c=50,qps=0,z=0,timeout=20,dial_timeout=10,keepalive=0,data="{"name":"{{.InputName}}"}",metadata="{"rn":"{{.RequestNumber}}"}",hasError=false latency=5157328,error=,status=OK 681023506
Response:
{
"error": "unable to parse 'ghz_detail,proto="/testdata/greeter.proto",call="helloworld.Greeter.SayHello",host="0.0.0.0:50051",n=1000,c=50,qps=0,z=0,timeout=20,dial_timeout=10,keepalive=0,data="{\"name\":\"{{.InputName}}\"}",metadata="{\"rn\":\"{{.RequestNumber}}\"}",hasError=false latency=5157328,error=,status=OK 681023506': missing field value"
}
There are two problems with the request, specifically the last two field values: `error=` has no value at all, and `status=OK` is a string field whose value is not quoted.
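As an illustration of what a well-formed point could look like (a sketch with a shortened tag set; the helper below is an assumption for demonstration, not part of ghz):

```python
# Build an InfluxDB line-protocol point similar to the ghz_detail sample.
# String field values must be double-quoted with embedded quotes escaped,
# and every field must have a value -- `error=` with nothing after it is invalid.
def field(value):
    if isinstance(value, str):
        return '"' + value.replace('"', '\\"') + '"'
    return str(value)

fields = {
    "latency": 5157328,
    "error": "",      # empty string instead of a missing value
    "status": "OK",   # quoted, since it is a string field
}
field_str = ",".join(f"{k}={field(v)}" for k, v in fields.items())
line = f"ghz_detail,call=helloworld.Greeter.SayHello {field_str} 681023506"
print(line)
```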
./ghz -proto -call com.proto.test.pingPong -skipTLS -insecure -D <REQUEST_PATH> -c 10 -n 200 localhost:8080 -O "csv" -o <PATH_TO_CSV>
Expected behavior: create the file if it does not exist and save the output to that file in CSV format.
Actual behavior: nothing happens; the output is shown on stdout.
Steps to reproduce: run the command above.
We need to send binary data in our stringified JSON request metadata.
For example:
./ghz -proto grpc-service.proto -call service.Put -d {"primarykey": "6C-getList:8bb1e1f6f-loadtest1k","value": "Β½Ζ]Γ0/β’)Γ»ΓΒ΄ΕΈΓΉ.Γ₯Β³PΓΒΌΓ±ΓΒ»9Β¦WΒ‘wΒ¦ΓβsβΉΓ]Β±ΓΓβ‘,l`ΓΓtz(Β΄7,ΓΕΈ","ttl": 300} -c 1 -z 60s grpc.endpoint.service.com:8080
We also tried the -D flag with a path to a JSON file containing the binary data. But it seems either the shell or ghz cannot interpret the binary data, so it fails. We get errors like invalid character 'Γ' looking for beginning of value. Are there any suggestions on how we can send binary data using ghz?
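One workaround (an assumption about the schema, i.e. that the server can decode the field itself) is to base64-encode the binary payload so the -d / -D input stays valid UTF-8 JSON; alternatively, the -B flag with a serialized binary message avoids JSON entirely. A sketch of the encoding:

```python
import base64
import json

# Raw binary payload of the kind that breaks JSON parsing when pasted verbatim
# (stand-in bytes; the real payload is whatever the service stores).
raw = bytes(range(0, 256, 7))

# Base64 keeps the payload as valid JSON text; the server would need to
# decode it -- ghz does not do this automatically.
data = {
    "primarykey": "6C-getList:8bb1e1f6f-loadtest1k",
    "value": base64.b64encode(raw).decode("ascii"),
    "ttl": 300,
}
payload = json.dumps(data)
print(payload)
```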
We are load testing an application and getting different results when changing the ghz parameters. Details below.
./ghz -config test.json
test.json:
{
"z": "5m",
"c": 5,
"q": 10,
"protoset": "./some.protoset",
"call": "someservice",
"host": "<POD_IP>:50051",
"D": "../request/request_data.json"
}
Results
Summary:
Count: 14824
Total: 300040.46 ms
Slowest: 2968.93 ms
Fastest: 9.43 ms
Average: 38.53 ms
Requests/sec: 49.41
Response time histogram:
9.430 [1] |
305.380 [14813] |ββββββββββββββββββββββββββββββββββββββββ
601.330 [5] |
897.279 [0] |
1193.229 [0] |
1489.179 [0] |
1785.129 [0] |
2081.079 [0] |
2377.029 [0] |
2672.979 [0] |
2968.929 [5] |
Latency distribution:
10% in 33.73 ms
25% in 34.86 ms
50% in 35.92 ms
75% in 36.94 ms
90% in 38.23 ms
95% in 40.98 ms
99% in 80.23 ms
Status code distribution:
[OK] 14824 responses
A configuration with -n 1000 instead of -z "5m" yielded:
Summary:
Count: 1000
Total: 41512.74 ms
Slowest: 7395.77 ms
Fastest: 10.36 ms
Average: 151.54 ms
Requests/sec: 24.09
Response time histogram:
10.360 [1] |
748.902 [974] |ββββββββββββββββββββββββββββββββββββββββ
1487.443 [10] |
2225.984 [5] |
2964.525 [0] |
3703.066 [0] |
4441.607 [0] |
5180.148 [5] |
5918.690 [0] |
6657.231 [0] |
7395.772 [5] |
Latency distribution:
10% in 26.56 ms
25% in 27.45 ms
50% in 28.55 ms
75% in 42.80 ms
90% in 296.37 ms
95% in 402.27 ms
99% in 5054.40 ms
Status code distribution:
[OK] 1000 responses
Can you please help explain the different behaviors? Is our understanding wrong?
In my understanding, -n represents the number of requests and -z represents the length of the test, and there is no other difference.
Thanks
Proto file(s)
helloworld.proto
syntax = "proto3";
option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";
package helloworld;
// The greeting service definition.
service Greeter {
// Sends a greeting
rpc SayHello (HelloRequest) returns (HelloReply) {
}
}
// The request message containing the user's name.
message HelloRequest {
string name = 1;
}
// The response message containing the greetings
message HelloReply {
string message = 1;
}
Command line arguments / config
config.json is in the same directory as helloworld.proto; its content:
{
"insecure": true,
"proto": "helloworld.proto",
"call": "helloworld.Greeter.SayHello",
"total": 200,
"concurrency": 10,
"data": {
"name": "Joe"
},
"host": "localhost:8099"
}
Describe the bug
When I use the command ghz --config=config.json, I get unknown format.
When I use another command, ghz -config ./config.json, I get ghz: error: strconv.ParseUint: parsing "onfig": invalid syntax, try --help.
I can successfully run the rpc test with the pure command line ghz --insecure --proto ./helloworld.proto --call helloworld.Greeter.SayHello -d '{"name":"Joe"}' localhost:8099.
How can I change my config so that I can use the JSON config file?
Environment
Additional context
My gRPC service is running on localhost, port 8099.
Hi @bojand ,
I am using the ghz tool for single-channel performance testing. I am also looking at the GitHub code and running it from source.
I am getting different output when running the binary and when running from source code.
Running the binary:
ghz -config config.json
Running from source:
go run cmd/ghz/main.go cmd/ghz/config.go -config config.json
As you can see, there is a big difference between the two results. I want to know what is happening and which one is accurate.
Note: I am not changing your source code when running it.
I have config_test.json:
{
"proto": "C:/Users/1/Desktop/GHZ/protorepo/sr.proto",
"call": "grpc.refe.SR.GEInfo",
"host": "localhost:30058",
"c": 2,
"n": 4,
"x": "1s",
"o": "C:/Users/1/Desktop/GHZ/output",
"O": "html",
"insecure": true,
"i": [
"C:/Users/1/Desktop/GHZ/protorepo/grpc-proto/src/"
]
}
my sr.proto looks like:
syntax = "proto3";
package grpc.refe;
import "proto/common/e.proto";
import "proto/common/si.proto";
import "google/protobuf/empty.proto";
option java_multiple_files = true;
option objc_class_prefix = "ABC";
message EResponse {
repeated proto.common.E e = 1;
}
service SR {
rpc GEInfo(google.protobuf.Empty) returns (EResponse) {}
}
my e.proto looks like
syntax = "proto3";
package proto.common;
option java_multiple_files = true;
option objc_class_prefix = "ABC";
message E {
int32 id = 1;
string name = 2;
}
my si.proto looks like:
syntax = "proto3";
package proto.common;
option java_multiple_files = true;
option objc_class_prefix = "ABC";
I start ghz with ghz -config .\config_test.json and get the error:
failed to load imports for "sr.proto": proto/common/si.proto:1:1: syntax error: unexpected $unk
All the files for import are located in: C:\Users\1\Desktop\GHZ\protorepo\grpc-proto\src\proto\common
I can't understand what this error means or how to fix it.
For this method I need only e.proto and empty.proto, but I have other methods that need si.proto.
For now I don't know what to do and need help.
Using ghz v0.22.0 on Windows.
{
"proto": "xxxxx/user.proto",
"call": "user.User.GetVerificationCode",
"n": 20,
"c": 5,
"d": {
"mobile": "1"
},
"insecure": true,
"host": "192.168.1.95:50051"
}
xxx/ghz -config xxx/config.json
Summary:
Count: 20
Total: 79.92 ms
Slowest: 48.07 ms
Fastest: 8.89 ms
Average: 19.70 ms
Requests/sec: 250.26
Response time histogram:
8.886 [1] |βββ
12.804 [12] |ββββββββββββββββββββββββββββββββββββββββ
16.722 [0] |
20.641 [0] |
24.559 [2] |βββββββ
28.477 [0] |
32.396 [0] |
36.314 [2] |βββββββ
40.232 [0] |
44.150 [0] |
48.069 [3] |ββββββββββ
Latency distribution:
10% in 9.06 ms
25% in 9.72 ms
50% in 12.05 ms
75% in 35.45 ms
90% in 48.01 ms
95% in 48.07 ms
0% in 0 ns
Status code distribution:
[OK] 20 responses
Summary:
Count: 20
Total: 19.09 s
Slowest: 0 ns
Fastest: 0 ns
Average: 0 ns
Requests/sec: 0.00
Response time histogram:
Latency distribution:
Status code distribution:
[Unavailable] 20 responses
Error distribution:
[5] rpc error: code = Unavailable desc = transport is closing
[15] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error:
Hi, I get the right result on Mac, but I can't run it on Windows. (version v0.31.0)
I want to use ghz for many subscribe calls in parallel.
Can I use it for this? Is this supported by ghz?
When I try to run ghz on my proto file, I get the above-mentioned error.
ghz -insecure -proto service.proto -call adapter.ScoreService.GetScore -d '{"body":"test", "fields": {"key1":"test1", "key2":"test2"}}' localhost:5300
Grpc Server is running on my local. My sample protobuf file is below -
syntax = "proto3";
package adapter;
service ScoreService {
rpc GetScore(ScoreRequest) returns (ScoreResponse) {}
}
message ScoreRequest {
string body = 1;
map<string, string> fields = 2;
}
message ScoreResponse {
int32 score = 1;
}
Help identifying the missing piece would be appreciated. Thanks.
Hi,
I am unable to hit the GRPC service
Config:
{
"proto": "C:/Users/disha.duggal/Documents/JMeterTests/GRPCTests/MM/my.proto",
"call": "mypackage.myservice.Status",
"n": 2000,
"c": 50,
"d": {
"param1": "adhajdl",
"param2":56750,
"param3":"WEB"
},
"m": {
"foo": "bar",
"trace_id": "{{.RequestNumber}}",
"timestamp": "{{.TimestampUnix}}"
},
"x": "10s",
"host": "localhost:5001"
}
Output:
PS C:\Users\disha.duggal\Documents\JMeterTests\GRPCTests\MM> ghz -config .\MM_test.json
Summary:
Count: 2000
Total: 76.00 ms
Slowest: 0.00 ms
Fastest: 0.00 ms
Average: 0.00 ms
Requests/sec: 0.00
Response time histogram:
Latency distribution:
Status code distribution:
Error distribution:
[2000] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: EOF"
./ghz -insecure -proto document.proto
-call DocService.CreateDoc
-n 2
-c 2
-D ../SummaryDocs.json
0.0.0.0:3000
Expected behavior: 2 independent calls go to the server, with 2 messages picked from the SummaryDocs.json file.
Actual behavior: 2 calls are sent, with the same data.
I want to be able to send custom data for every call, instead of the same data being used for all n calls. How can that be achieved?
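The ghz documentation describes round-robin behavior when the data option is a JSON array (a hedge: verify this against the ghz version in use). A -D file of this shape would then vary the payload per request:

```json
[
  { "doc": "first payload" },
  { "doc": "second payload" }
]
```

With -n 2 -c 2, each call would pick the next message from the array in round-robin order. Template actions such as {{.RequestNumber}} inside the data are another way to vary requests.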
The ghz-web documentation could be improved to provide more details and instructions on the intended workflow and usage. A walk-through would probably also be useful.
It would be useful to support protoset files that encapsulate multiple protocol buffer files.
syntax = "proto3";
package cnnsql;
service Prediction {
rpc Predict(Request) returns (Result){}
}
message Request {
string url = 1;
string ip = 2;
}
message Result {
int32 type = 1;
}
cnnsql.json:
{
"proto": "cnnsql.proto",
"call": "cnnsql.Prediction.Predict",
"d": {
"url": "_%3D1498179095094%26list%3Dsh600030"
},
"insecure": true,
"host": "127.0.0.1:8889"
}
{
"type": 1
}
$ ./ghz -config cnnsql.json
Summary:
Count: 200
Total: 11.94 ms
Slowest: 0 ns
Fastest: 0 ns
Average: 0 ns
Requests/sec: 0.00
Response time histogram:
Latency distribution:
Status code distribution:
[Unavailable] 200 responses
Error distribution:
[200] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing reading server HTTP response: unexpected EOF"
$ pip install grpcio grpcio-tools
$ python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. cnnsql.proto
$ ls
cnnsql_pb2.py cnnsql_pb2_grpc.py
server.py
# coding: utf-8
from concurrent import futures
import time
import logging
import grpc
import cnnsql_pb2
import cnnsql_pb2_grpc
_ONE_DAY_IN_SECONDS = 60 * 60 * 24
class PredictionService(cnnsql_pb2_grpc.PredictionServicer):
    def Predict(self, request, context):
        response = cnnsql_pb2.Result(type=1)
        return response

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    cnnsql_pb2_grpc.add_PredictionServicer_to_server(PredictionService(), server)
    server.add_insecure_port('[::]:8889')
    server.start()
    try:
        print("Server started.")
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)

if __name__ == '__main__':
    logging.basicConfig()
    serve()
$ python server.py
$ ./ghz -config cnnsql.json
syntax = "proto3";
package protobuf;
message Request{
string method_name = 1;
bytes arg = 2;
}
message Response{
bytes res = 1;
}
service RemoteCall{
rpc get_result(Request) returns(Response){}
rpc get_feature(Request) returns(Response){}
rpc get_detectResult(Request) returns(Response){}
rpc get_emotion(Request) returns(Response){}
}
Given the protobuf file above, how can I send bytes with your CLI tool?
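The proto3 JSON mapping encodes bytes fields as standard base64 strings, so the -d payload can carry arbitrary bytes as text. A sketch for the Request message above:

```python
import base64
import json

# proto3 JSON mapping: a `bytes` field is represented as a base64 string,
# so a -d payload for Request{method_name, arg} can carry arbitrary bytes.
arg = base64.b64encode(b"hello").decode("ascii")
payload = json.dumps({"method_name": "get_result", "arg": arg})
print(payload)  # {"method_name": "get_result", "arg": "aGVsbG8="}
```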
See the TODOs in requester.go. It may be OK to swallow these up, or depending on the specific error we may want to do something specific about them. Should investigate further.
It would be better if the Ingest API used transactions to ensure the complete action is atomic.
Presently all settings can be set via a grpcannon.json file if present in the same path as the grpcannon executable. It may be useful to have a flag argument for the settings file. Example:
grpcannon -config /path/to/config.json
Hi! This tool is super useful - thanks for putting it together!
It would be very handy, for the particular use case that I have, if it were possible to use standard Go templating to swap in variables related to the state of the current run. In particular, what I was hoping for is a unique numeric identifier that can be templated in for each individual request.
Not sure if it's feasible or not, but it'd be nifty!
Hi there,
While load testing my server using ghz, I want to call it using multiple connections, or in non-persistent connection scenarios. Will this be supported in the future?
This is a general question more than a bug, but I'm trying to figure out if the ghz framework is capable of running multiple gRPC requests in a flow-type, end-to-end test. This would imply some kind of request/response chaining, which would make it more complex, but I am wondering if this would be possible. Thanks!
It would be useful to have threshold settings within project options for different statistical metrics (i.e. fastest, slowest, average, percentiles) so we can report which ones fail the threshold. Additionally we could have a "key metric" setting that would dictate whether a test run / report fails based on the threshold setting for that metric (in addition to errors). So if the key metric fails the threshold, even with no errors, the test run / report would be considered a failure.
This would involve changes to database and schemas.
We could graph thresholds along with the metrics. For example, the change-over-time chart could include the thresholds (or at least the "key metric" threshold). Additionally we could mark them in the histogram and perhaps the comparison charts.
It may be useful to measure the amount of time between individual messages received in streaming calls. While probably relatively simple to collect, more design and detail is needed on how this would look in the reporting.
Currently when data is provided using a file or stdin we read the full data. We could improve this and support reading and parsing as a stream, probably using json.Decoder. However this may cause some breakage or limitation in the more flexible data handling we support now. For example, presently if a single JSON object is passed in for a client streaming or bidi request, we use it for all writes to the client. This may not be possible with a streaming input; for client streaming calls it would have to be an array input. It is also not clear how replay should work: we should probably send and record until the end of the stream, and then replay the payload as writes for all subsequent calls.
First of all, thanks for a great load testing tool for working with GRPC.
I am trying to test a server that handles backpressure, which involves using the HTTP2 flow control.
For that purpose I prepared the config for ghz that would cause the flow window to fill (grpc-java sets the window to 1MB). Even though the tests should pass, ghz blocks and finishes with
rpc error: code = DeadlineExceeded desc = context deadline exceeded
ghz -config config.json
config.json
contents:
{ "proto": "greeter.proto", "call": "manualflowcontrol.StreamingGreeter.SayHelloStreaming", "n": 1, "c": 1, "t": 25, "host": "0.0.0.0:50051", "insecure": true, "d": [ {"name":"Joe"} # repeat 200000 times ] }
Status code distribution: [OK] 1 responses
Error distribution: [1] rpc error: code = DeadlineExceeded desc = context deadline exceeded
Here's the server implementation which you can test against (remove the 100ms sleep in line 80 to make your testing easier):
OS: Mac OS
grpcannon version: 0.4.1
When running grpcannon using the -z option flag, grpcannon still defaults to running 200 requests instead of honoring the time duration value.
example:
grpcannon -z 1m -cert cert.pem --proto employeedirectory.proto -call directory.position.GetEmployee -d '{"name":"Steve", "position":"doctor"}' -M metadata.json jobs.search.com:8080
Given a proto message:
message SomeRequest{
string some_field_name = 1;
}
the following would be an acceptable JSON payload when calling grpcannon:
'{"someFieldName":"value"}'
Currently, the above JSON appears to return Unknown field name.
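For reference, the proto3 JSON spec maps a snake_case field name to lowerCamelCase on output and says parsers should accept both forms on input, so both {"someFieldName":"value"} and {"some_field_name":"value"} should parse. A sketch of the name mapping:

```python
# proto3 JSON name mapping: snake_case proto field -> lowerCamelCase JSON key.
# Per the spec, parsers should accept both forms on input.
def json_name(proto_field: str) -> str:
    head, *rest = proto_field.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(json_name("some_field_name"))  # someFieldName
```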
Instead of specifying the proto file, we could use reflection to build the client for making the requests. The user would still have to provide call details that correctly match the reflection results. Reflection is only supported by a subset of languages.
Given a gRPC setup where both the client and the server are required to provide a TLS cert, and given the root CA cert as well as a cert/key pair for the client, I would like to be able to use ghz to benchmark the service's performance.
I can provide a root CA cert using -cert, and I can provide a server name override using -cname, but there's no way to set a client cert/key pair.
Have a server that requires the client to provide a TLS cert, and try to use ghz with it.
Sorry, this is brief -- it could be fleshed out if need be; please let me know if this is already supported in some way I haven't found.
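Later ghz releases document --cacert / --cert / --key flags for mutual TLS; the flag names below are taken from current docs and may not exist in the version this report targets, so treat them as assumptions (the proto path and call name are placeholders):

```sh
ghz --cacert ./root_ca.crt \
  --cert ./client.crt --key ./client.key \
  --cname gateway \
  --proto ./service.proto --call pkg.Service.Method \
  10.0.2.15:2001
```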
It's kinda messy and clunky. Might want to look into Functional Options and something like flagga or ff
Add more metrics, such as durations of different phases and sizes.
The gRPC stats package provides additional types for instrumenting detailed events such as ConnBegin, InHeader, InTrailer, etc., along with size data. It may be useful to collect this information in the results and report. But I am really not sure what info specifically would be useful.
It would be a good thing to be able to set headers.
Hello,
I am using go-micro to develop my micro service and I use consul as service registry (it listens on 8500).
I tried:
ghz -proto ./hello.proto -insecure -call hello.Hello.Hi -d '{"name": "Joe"}' 127.0.0.1:8500
It returned:
Latency distribution:
Status code distribution:
Error distribution:
[143] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: <nil>
[57] rpc error: code = Unavailable desc = transport is closing
If I change 8500 to the port on which the service listens, I get:
Latency distribution:
Status code distribution:
Error distribution:
[200] rpc error: code = Internal desc = transport: received the unexpected content-type "text/plain"
Here are two questions:
1. Why does the error "received the unexpected content-type "text/plain"" happen?
2. How can I make it work with consul?
The tests are missing some bad input and bad condition scenarios. It would be good to add these tests.
Currently the config for ghz-web app does not allow binding to a specific hostname and we automatically bind to localhost. Perhaps it would be useful to specify the host to bind to. This adds a little bit of complication to frontend app as we need to communicate that setting (and probably whole config) to the frontend app, which we currently do not.
I have a service running locally, registered in consul (which I use as service registry):
2018/10/08 16:31:29 [DEBUG] http: Request PUT /v1/agent/check/pass/service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549?note= (299.368Β΅s) from=127.0.0.1:34708
2018/10/08 16:31:38 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2018/10/08 16:31:38 [DEBUG] agent: Service "enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549" in sync
2018/10/08 16:31:38 [DEBUG] agent: Check "service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549" in sync
I tried to run the latest pre-built version of ghz downloaded from the download page and then ran:
$ ./ghz -proto="/home/comtom/Projects/src/github.com/TodayTix/ttproto-provider-interface/proto/ProviderService/ProviderService.proto" -call="enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549.GenerateShows" -d="{}" 127.0.0.1:34708 -insecure
but failed with: **cannot find service "service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549"**
I also tried service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549.GenerateShows as the service name, and just enta-nimax; the same thing happened. What am I doing wrong?
When using this great tool to make 2K concurrent requests, I find that the sock connections are still kept open:
# lsof -p 102372 | grep -c "sock"
1394
After double-checking the code in this tool, it seems that close should be called.
Is this an intended optimization in gRPC, keeping a connection pool on demand, or is something else wrong?
Add godoc to document the API.
1000 requests took 1246 ms, so it should be Requests/sec: 802.14 rather than 802142.94.
Count: 1000
Total: 1246.66 ms
Slowest: 458.03 ms
Fastest: 1.25 ms
Average: 52.86 ms
Requests/sec: 802142.94
Response time histogram:
1.248 [1] |
46.926 [717] |ββββββββββββββββββββββββββββββββββββββββ
92.604 [82] |βββββ
138.282 [54] |βββ
183.960 [46] |βββ
229.638 [38] |ββ
275.316 [29] |ββ
320.994 [21] |β
366.672 [7] |
412.350 [3] |
458.028 [2] |
Latency distribution:
10% in 2.53 ms
25% in 4.75 ms
50% in 12.71 ms
75% in 60.01 ms
90% in 185.27 ms
95% in 244.34 ms
99% in 325.27 ms
Status code distribution:
[OK] 1000 responses
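A quick check of the expected figure, using plain arithmetic on the numbers from the report: dividing the count by the total time in seconds gives roughly 802.14, so the reported value is off by a factor of 1000.

```python
# Requests/sec from the report above: count / total time in seconds.
count = 1000
total_ms = 1246.66

rps = count / (total_ms / 1000.0)
print(round(rps, 2))  # ~802.14
```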