

go-web-framework-benchmark

This benchmark suite compares the performance of Go web frameworks. It is inspired by Go HTTP Router Benchmark, but it differs in scope: Go HTTP Router Benchmark compares only routers, while this suite compares whole HTTP request processing.

Last Test Updated: 2020-05

test environment

  • CPU: KVM virtual CPU (2 GHz, 4 cores)
  • Memory: 16 GB
  • Go: go1.18.5 linux/amd64
  • OS: Ubuntu 22.04.1 LTS, kernel 5.15.0-41-generic

Tested web frameworks (in alphabetical order)

Only stable web frameworks are tested.

Some libraries are no longer maintained and have been removed from the test code.

Motivation

When I investigated the performance of Go web frameworks, I found Go HTTP Router Benchmark, created by Julien Schmidt, who also developed the high-performance HTTP router httprouter. I thought I had the performance picture until I wrote a piece of code to mock real business logic:

api.Get("/rest/hello", func(c *XXXXX.Context) {
	sleepTime, _ := strconv.Atoi(os.Args[1]) // e.g. 10ms
	if sleepTime > 0 {
		time.Sleep(time.Duration(sleepTime) * time.Millisecond)
	}

	c.Text("Hello world")
})

When I used the above code to test those web frameworks, the time taken by route selection turned out to be a small part of whole HTTP request processing, even though route-selection performance differs greatly between frameworks.

So I created this project to compare the performance of web frameworks across the whole request path: connection handling, route selection, and handler processing. It mocks business logic and lets you set a specific processing time.

You can get some interesting results if you use it to test.

Implementation

When you test a web framework, this test suite starts a simple HTTP server implemented with that framework. It is a real HTTP server, and it serves only one GET URL: "/hello".

While serving this URL, the handler sleeps for n milliseconds. This mocks business logic such as:

  • read data from sockets
  • write data to disk
  • access databases
  • access cache servers
  • invoke other microservices
  • ……

The repository contains a test.sh script that runs these tests automatically.

It uses wrk as the load generator.

Basic Test

The first test case mocks 0 ms, 10 ms, 100 ms, and 500 ms of processing time in handlers.

Benchmark (Round 3): the test runs with 5000 concurrent clients.

Latency (Round 3): latency is the real processing time measured at the web server. Smaller is better.

Allocs (Round 3): allocs is the heap memory allocated by the web server while the test runs, in MB. Smaller is better.

With HTTP pipelining enabled, the results are as follows:

benchmark pipelining (Round 2)

Concurrency Test

With 30 ms processing time, the results for 100, 1000, and 5000 clients are:

concurrency (Round 3)

Latency (Round 3)

Latency (Round 3)

With HTTP pipelining enabled, the results are as follows:

concurrency pipelining(Round 2)

CPU-bound Test

cpu-bound (5000 concurrent clients)

Usage

You should install this package first if you want to run this test.

go get github.com/smallnest/go-web-framework-benchmark

Installation takes a while because a large number of dependencies must be downloaded. Once the command completes, you can run:

cd $GOPATH/src/github.com/smallnest/go-web-framework-benchmark
go build -o gowebbenchmark .
./test.sh

It will generate test results in processtime.csv and concurrency.csv. You can modify test.sh to execute your customized test cases.

  • If you also want to generate latency data and allocation data, you can run the script:
./test-latency.sh
  • If you don't want to use keepalive, you can run:
./test-latency-nonkeepalive.sh
  • If you want to test http pipelining, you can run:
./test-pipelining.sh
  • If you want to test only a subset of the web frameworks, modify the test script and keep only your selected frameworks:
……
web_frameworks=("default" "atreugo" "beego" "bone" "chi" "denco" "don" "echo" "fasthttp" "fasthttp-routing" "fasthttp/router" "fiber" "gear" "gearbox" "gin" "goframe" "goji" "gorestful" "gorilla" "gorouter" "gorouterfasthttp" "go-ozzo" "goyave" "httprouter" "httptreemux" "indigo" "lars" "lion" "muxie" "negroni" "pat" "pulse" "pure" "r2router" "tango" "tiger" "tinyrouter" "violetear" "vulcan" "webgo")
……
  • If you want to test all cases, you can run:
./test-all.sh

NOTE: comparing 2 web frameworks takes approximately 11-13 minutes (regardless of the machine). Running test.sh with all web frameworks enabled takes a couple of hours.

Plot

All graphs are generated automatically when ./test.sh finishes. However, if the run was interrupted, you can generate them manually from partial data by executing plot.sh in the testresults directory.

Add new web framework

New Go web frameworks are welcome. Follow the steps below and send a pull request.

  1. add your web framework link in the README
  2. add a hello implementation in server.go
  3. add your web framework in libs.sh

Please add your web framework alphabetically.

go-web-framework-benchmark's People

Contributors

2hmad, abahmed, abemedia, ajf-sa, aldas, billcoding, bnkamalesh, buaazp, ceriath, claygod, dependabot[bot], dlsniper, fenny, flrdv, frederikhors, gqcn, kataras, kevwan, kirilldanshin, machinly, nbari, panjf2000, razonyang, savsgio, siyul-park, smallnest, system-glitch, vardius, vmihailenco, zensh


go-web-framework-benchmark's Issues

Question about pipelining test.

Hi, I found that the pipelining test uses URL / instead of /hello for testing. Is this the correct behavior?

I wrote a simple middleware for verifying this issue:

func Logging(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		resp := &response{ResponseWriter: w}
		defer func() {
			log.Printf("URI: %s, code: %d", r.RequestURI, resp.code)
		}()
		next.ServeHTTP(resp, r)
	})
}

type response struct {
	http.ResponseWriter
	code int
}

func (r *response) WriteHeader(code int) {
	r.code = code
	r.ResponseWriter.WriteHeader(code)
}

And then

$ go build -o gowebbenchmark

$ ./test.sh
2020/05/05 15:50:26 URI: /hello, code: 0
2020/05/05 15:50:26 URI: /hello, code: 0
2020/05/05 15:50:26 URI: /hello, code: 0
......

$ ./test-pipelining.sh
2020/05/05 15:48:25 URI: /, code: 404
2020/05/05 15:48:25 URI: /, code: 404
2020/05/05 15:48:25 URI: /, code: 404
......

throughput=`wrk -t$cpu_cores -c$4 -d30s http://127.0.0.1:8080/hello -s pipeline.lua --latency -- / 16| grep Requests/sec | awk '{print $2}'`

BTW, what does -- / 16 mean here? I changed it to -- /hello 16, and it works as expected.

-- / 16 looks like parameters passed to the Lua script: / is the request URI and 16 is the pipeline depth.

Processing time and Latency

According to the benchmark of different processing times (latency), in all frameworks except fasthttp, 0 ms processing time shows higher latency than 10 ms! That's not clear to me. Maybe it's because of the sleeps in the processing code. What do you think?

Thanks for your useful benchmarks.

docker image not creating images with labels

Hi, first of all, many thanks for sharing this project; it has helped me improve and detect some issues within my code, and therefore learn more about how to profile/test in general.

Currently, I am giving the docker image a try, but it is not creating the images when I just run this:

docker run  -v /opt/data:/data smallnest/go-web-framework-benchmark

To try to generate them, I execute plot.sh, something like:

docker run -v /tmp/data:/data -t -i --entrypoint /bin/bash smallnest/go-web-framework-benchmark

then within the container, I run:

docker-test.sh

When it finishes, I run plot.sh within testresults. This creates the images, but without labels; an example of the output:

benchmark_alloc

Any idea how to fix this? And, if it's not too much to ask, could the benchmark results be updated in the meantime?

thanks in advance.

New Benchmark?

Hello, can you please provide a new benchmark? I can't get it to run on my Alpine Linux server, because wrk is not available for ARM.

Thank you!

Import issue

goimports may break imports like this if it does not find the package in $GOPATH.
Maybe we can use named imports to avoid this?

Unable to run

Hello, I tried installing this package using go get, but it threw this error:

$ go build -o  gowebbenchmark server.go
# command-line-arguments
./server.go:412:12: f.Config().SetPort undefined (type *fresh.Config has no field or method SetPort)
./server.go:414:3: f.Run undefined (type fresh.Fresh has no field or method Run)

My test results on a 16-core VM differ greatly from yours

Testing default with 10 ms:

./wrk -t16 -c500 -d30s http://127.0.0.1:9000/hello
Running 30s test @ http://127.0.0.1:9000/hello
  16 threads and 500 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.48ms  593.22us  20.03ms   94.04%
    Req/Sec     2.97k    71.19     3.13k    71.33%
  1419118 requests in 30.04s, 173.23MB read
Requests/sec:  47247.64
Transfer/sec:      5.77MB

Looking at the results on your homepage, the result for 500 connections is around 400k requests/sec. The gap is too large. Did you tune any network-related parameters?

Also, I found that with 16 cores the average CPU usage per core is only around 5%, so the load can't be pushed any higher.

Macaron incorrect import path

Hello, you removed macaron from your benchmark in this commit.

But the reason it wasn't building was probably that you used the wrong import path.

In your code you used github.com/Unknwon/macaron, but it should in fact be gopkg.in/macaron.v1.
I'll open a pull request to fix this as soon as I can.

What happened?

Previously iris was first, side-by-side with fasthttp-raw, until 18 days ago, as this commit also shows:
0874ce6.

Also, I didn't change anything performance-related in SetBody or the router; I just tested with a version we had a month ago, and the benchmark results are the same as with the latest iris version.

Can you tell me when you upgraded iris? What version did you have before (or a date)? You ran several tests within this period, and this is the first time iris is down.

And if you didn't change anything iris-related since the previous week, could you please re-run the tests?

Edit 2:

I just ran the benchmarks from docker with docker pull smallnest/go-web-framework-benchmark followed by docker run -v /opt/data:/data smallnest/go-web-framework-benchmark.

Iris is still side-by-side with fasthttp. Some other frameworks that appear to top iris in this repo's README in fact have the lowest results...

For now, only the basic tests (0 ms, 5000 clients) are finished:

fasthttp and others

iris

I ran it from a clean/fresh docker.

but the README's basic tests show:

README page

How is that even possible?

Edit 3: 10 ms, 5k clients finished as well:

fasthttp and others 10ms

iris 10ms

I will leave the machine running until the graphical results finish too... I really need to sleep now.

But you get the idea... something happened during your last machine's benchmark run.

Thanks for everything!!

Remove some projects from benchmarks

test.sh for only 2 frameworks already runs for about 11-13 minutes, so the whole test suite will last many more hours. However, we can optimize this by getting rid of some abandoned projects. I've checked some, so here's my report:

  • echo-slim: repository is deleted
  • fastrouter: abandoned, latest commit was 7 years ago
  • gojsonrest: abandoned
  • gongular: abandoned, no PRs reviewed since 2017
  • gowww: the author had big plans, but the latest commit was 4 years ago; the package itself has no tests
  • martini: unmaintained, as stated in its README
  • neo: seems abandoned; issues from the past 2-3 years have no answers
  • traffic: unmaintained, as stated in the bio

You can check each of the above yourself. Once we reach consensus, I can open a PR.

Could you describe the machine configuration in more detail?

The E5-2630 v3 has 8 physical cores and 16 hyperthreads. Does the "32 cores" in the configuration table mean 32 physical cores or 32 hyperthreads?
In other words, did you use two or four E5 CPUs?
CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 32 cores

Are frameworks with version below 1.0.0 forbidden?

The README states: Only test those web frameworks which are stable. However, clevergo is still present at version 0.6.0, and a couple more have no explicit version at all.

I would like to add my own framework to the comparison (powered by my own HTTP engine, so in local runs it visibly outperforms fasthttp). It is covered by tests (unlike some already-added projects), and my own website has run on it for a while, yet the version is still under 1.0.0 (semver is used for versioning). How acceptable would that be?

Just in case, link

Add pipeline to tests

init = function(args)
   request_uri = args[1]
   depth = tonumber(args[2]) or 1

   local r = {}
   for i=1,depth do
     r[i] = wrk.format(nil, request_uri)
   end
   req = table.concat(r)
end

request = function()
   return req
end
$ wrk .... -s pipeline.lua 

Taken from here

remove low-performance web frameworks

In the next test round, I plan to remove those web frameworks whose performance is much lower than the others; for example, possum, guava studio/web, and gorilla.

Update results?

Maybe it's time to update the benchmark results with Go 1.8?

create a docker image

Create a docker image from this project so everyone can run this benchmark more conveniently.

missing dependencies

I forked this repository and tried running it. The following is the output:

../go-web-framework-benchmark$ go build -o gowebfmbench
# github.com/cloudfoundry/gosigar
../../../../pkg/mod/github.com/cloudfoundry/[email protected]/concrete_sigar.go:20:11: cpuUsage.Get undefined (type Cpu has no field or method Get)
../../../../pkg/mod/github.com/cloudfoundry/[email protected]/concrete_sigar.go:30:13: cpuUsage.Get undefined (type Cpu has no field or method Get)
../../../../pkg/mod/github.com/cloudfoundry/[email protected]/concrete_sigar.go:49:10: l.Get undefined (type LoadAverage has no field or method Get)
../../../../pkg/mod/github.com/cloudfoundry/[email protected]/concrete_sigar.go:55:10: m.Get undefined (type Mem has no field or method Get)
../../../../pkg/mod/github.com/cloudfoundry/[email protected]/concrete_sigar.go:61:10: s.Get undefined (type Swap has no field or method Get)
../../../../pkg/mod/github.com/cloudfoundry/[email protected]/sigar_shared.go:12:20: procTime.Get undefined (type *ProcTime has no field or method Get)

Go env

$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/kamaleshwar/Library/Caches/go-build"
GOENV="/Users/kamaleshwar/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GOOS="darwin"
GOPATH="/Users/kamaleshwar/go"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="0"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/lv/0wsxrb6123s8slgb2lp_6_3m0000gp/T/go-build461564637=/tmp/go-build -gno-record-gcc-switches -fno-common"

http: Accept error: accept tcp [::]:8080: accept4: too many open files

Hello,

I made a fresh setup today and started to get these errors. Any ideas?

Host machine is Ubuntu 20.04 with go 1.15.

2020/08/26 14:31:24 - atreugo - Temporary error when accepting new connections: accept tcp4 0.0.0.0:8080: accept4: too many open files
2020/08/26 14:31:25 - atreugo - Temporary error when accepting new connections: accept tcp4 0.0.0.0:8080: accept4: too many open files
2020/08/26 14:31:26 - atreugo - Temporary error when accepting new connections: accept tcp4 0.0.0.0:8080: accept4: too many open files
throughput: 118162.25 requests/second
./test.sh: line 18: 3216 Killed ./$server_bin_name $2 $3
finsihed testing atreugo

testing web framework: beego
• Initialization package=gramework version=1.7.0-rc3
• node info cputicks=2.53G package=gramework ram=911.18M used / 7.51G total swap=0.00 used / 2.00G total
• load average is good fifteen=0.600 five=1.670 one=4.920 package=gramework
• node uptime package=gramework uptime= 0:01
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 5ms
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 10ms
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 20ms
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 40ms
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 80ms
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 160ms
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 320ms
2020/08/26 14:31:30 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 640ms
2020/08/26 14:31:31 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 1s
2020/08/26 14:31:32 http: Accept error: accept tcp [::]:8080: accept4: too many open files; retrying in 1s

Best Regards,

Local test results differ greatly from the published data

Test environment:
go1.7, latest beego (the default route returns no body)
Hardware:
40 × Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
Mem: 32765096
Test result:
[root@test08 wrk]# ./wrk -t10 -c5000 -d60s http://127.0.0.1:8080/
Running 1m test @ http://127.0.0.1:8080/
  10 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   402.30ms  565.62ms   2.00s    81.05%
    Req/Sec     0.86k   639.03    13.29k    78.62%
  476512 requests in 1.00m, 70.44MB read
  Socket errors: connect 0, read 0, write 0, timeout 40477
Requests/sec:   7932.59
Transfer/sec:      1.17MB

The requests/sec is far below the published values for the 0 ms, 100 ms, etc. cases in your first test; I don't know where the problem lies.

dependency issue in aurora framework

I think there is an issue with the aurora web framework. After taking aurora out, the build works.

go: downloading github.com/aurora-go/minilog v0.0.7
../../../go/pkg/mod/github.com/aurora-go/[email protected]/aurora/level/level.go:3:8: reading github.com/aurora-go/minilog/go.mod at revision v0.0.7: git ls-remote -q origin in /home/dennis/go/pkg/mod/cache/vcs/7b82371cb7ac369cfc0fb0e4f4171ae244409e04b5edd64ecf7a989735e343d8: exit status 128:
        remote: Repository not found.
        fatal: repository 'https://github.com/aurora-go/minilog/' not found

why remove iris

I was wondering why iris was removed; it is said that iris is the fastest Go web framework in the world.

Potential case-insensitive import collision

Due to a GitHub handle change (to lowercase) for long-term purposes, go get may fail to fetch github.com/Unknwon/com.
Please consider taking some time to update it to github.com/unknwon/com in the go.mod file.
I truly apologize for the inconvenience and unintended trouble caused.
