tailcallhq / graphql-benchmarks

Setup to compare GraphQL frameworks
Hi, it seems that since #28 the Nginx proxy hasn't been working as expected, which means the GraphQL endpoints are returning errors. This largely explains why both Tailcall and Gqlgen saw a massive increase in throughput and RPS after the PR was merged, since their underlying server/client is more performant than Caliban's. It also likely explains why DGS is crashing (it probably doesn't handle the upstream client failures well).
Steps to reproduce:

```sh
./nginx/run.sh
./graphql/tailcall/run.sh &
sleep 2
./test_query.sh
```
Running the above returns:

```json
{"data":null,"errors":[{"message":"IOException: Request error: error sending request for url (http://jsonplaceholder.typicode.com/posts): error trying to connect: tcp connect error: Connection refused (os error 111)","locations":[{"line":1,"column":2}]}]}
```
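This class of failure can be caught mechanically in CI before throughput numbers are recorded. A minimal sketch (the helper name is hypothetical, not part of the repo):

```javascript
// Detect upstream connection failures in a GraphQL response body.
// A broken proxy produces data: null plus an errors array whose
// messages mention the refused TCP connection, as in the log above.
function isUpstreamConnectError(body) {
  const resp = JSON.parse(body);
  if (resp.data !== null || !Array.isArray(resp.errors)) return false;
  return resp.errors.some((e) =>
    /Connection refused|tcp connect error/i.test(e.message || "")
  );
}

const sample =
  '{"data":null,"errors":[{"message":"IOException: Request error: tcp connect error: Connection refused (os error 111)"}]}';
console.log(isUpstreamConnectError(sample)); // true
```

Running such a check against each server's first response would fail the benchmark job loudly instead of silently rewarding servers that error out quickly.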
Running the build on PRs constantly causes merge conflicts, because every PR updates the README and the image files.
Technical Requirements
Update the benchmark results in the README.md file to the following format:
| Query | Server | Requests/sec | Latency (ms) | Relative |
|---|---|---|---|---|
| 1. `{posts {title body user {name}}}` | | | | |
| | Tailcall | 28,389.70 | 3.52 | 83.28x |
| | async-graphql | 2,411.80 | 41.48 | 7.08x |
| | Caliban | 1,416.62 | 70.59 | 4.16x |
| | Gqlgen | 1,343.30 | 77.00 | 3.94x |
| | Apollo GraphQL | 791.84 | 126.09 | 2.33x |
| | Netflix DGS | 340.42 | 219.50 | 1.00x |
| 2. `{posts {title body}}` | | | | |
| | Tailcall | 56,655.60 | 1.75 | 37.59x |
| | Caliban | 8,833.37 | 11.79 | 5.86x |
| | async-graphql | 7,027.24 | 14.34 | 4.66x |
| | Gqlgen | 1,969.06 | 51.76 | 1.31x |
| | Apollo GraphQL | 1,724.77 | 57.78 | 1.14x |
| | Netflix DGS | 1,507.36 | 70.39 | 1.00x |
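The Relative column is each server's throughput divided by the slowest server's (Netflix DGS in both tables). A sketch of how it can be derived when regenerating the README (helper name hypothetical; values in the table come from a specific run, so recomputed ratios may differ slightly):

```javascript
// Relative = server RPS / baseline (slowest) RPS, rounded to two
// decimals and suffixed with "x", matching the README's table format.
function relative(rps, baselineRps) {
  return (rps / baselineRps).toFixed(2) + "x";
}

console.log(relative(56655.6, 1507.36)); // "37.59x"
```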
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
graphql/async_graphql/Cargo.toml
anyhow 1.0.82
async-graphql 7.0.3
async-graphql-axum 7.0.3
axum 0.7.5
tokio 1.37.0
reqwest 0.12.4
serde 1.0.200
serde_json 1.0.116
futures 0.3.30
mimalloc 0.1.41
.devcontainer/Dockerfile
.github/workflows/bench.yml
actions/checkout v4
devcontainers/ci v0.3
peter-evans/commit-comment v3
stefanzweifel/git-auto-commit-action v5
graphql/gqlgen/go.mod
go 1.22.2
github.com/99designs/gqlgen v0.17.49
github.com/vektah/gqlparser/v2 v2.5.16
github.com/vikstrous/dataloadgen v0.0.6
graphql/netflix_dgs/settings.gradle
graphql/netflix_dgs/build.gradle
org.springframework.boot 3.3.1
io.spring.dependency-management 1.1.6
com.netflix.graphql.dgs:graphql-dgs-platform-dependencies 9.0.4
io.projectreactor:reactor-bom 2023.0.8
org.apache.httpcomponents.core5:httpcore5 5.2.5
org.apache.httpcomponents.client5:httpclient5 5.3.1
graphql/netflix_dgs/gradle/wrapper/gradle-wrapper.properties
gradle 8.9
graphql/apollo_server/package.json
@apollo/server ^4.9.3
axios ^1.5.0
dataloader ^2.2.2
graphql ^16.8.0
http-proxy-agent ^7.0.0
pm2 ^5.3.0
graphql/graphql_jit/package.json
@graphql-tools/schema ^10.0.4
axios ^1.7.2
dataloader ^2.2.2
express ^4.19.2
graphql ^15.9.0
graphql-jit ^0.8.6
graphql/hasura/package.json
express ^4.19.2
graphql/tailcall/package.json
@tailcallhq/tailcall 0.96.11
graphql/caliban/build.sbt
scala 3.4.2
com.github.ghostdogpr:caliban-quick 2.8.1
com.github.plokhotnyuk.jsoniter-scala:jsoniter-scala-core 2.30.7
com.github.plokhotnyuk.jsoniter-scala:jsoniter-scala-macros 2.30.7
org.apache.httpcomponents.client5:httpclient5 5.3.1
dev.zio:zio 2.1.6
graphql/caliban/project/build.properties
sbt/sbt 1.10.1
Like the new N + 1 benchmark we recently added, add a simple "Hello World" benchmark:

```graphql
query { greet }
```

Expected response:

```json
{
  "data": { "greet": "Hello World!" }
}
```
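The resolver side of this benchmark is deliberately trivial, so it measures pure framework overhead with no upstream I/O. A sketch in graphql-js root-resolver style (field name taken from the query above):

```javascript
// Root resolver for the proposed "Hello World" benchmark: a constant
// string, so no network or data-loading cost is involved.
const root = {
  greet: () => "Hello World!",
};

console.log(root.greet()); // "Hello World!"
```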
Technical Requirements
Add Hasura to the benchmarks to compare GraphQL performance
Currently the benchmarks run in sequence, one after the other. We want to change that so that all the benchmarks can run in parallel. For each server, start with the "greet" query, then move on to the list of posts, and then to posts with users.
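One way to keep the fixed per-server query order while fanning servers out concurrently is to build an ordered plan per server and run the servers with `Promise.all`. A sketch (all names hypothetical):

```javascript
// Fixed query order for every server: greet, then posts, then posts
// with users, matching the requirement above.
const QUERIES = [
  "{ greet }",
  "{ posts { title body } }",
  "{ posts { title body user { name } } }",
];

// Build the ordered (server, query) plan for one server.
function planFor(server) {
  return QUERIES.map((query) => ({ server, query }));
}

// Servers are independent, so they can be benchmarked in parallel:
//   await Promise.all(servers.map((s) => runPlan(planFor(s))));
console.log(planFor("tailcall").map((p) => p.query));
```

Note that running all servers on one machine at once will make them contend for CPU, so per-server results may need dedicated runners to stay comparable.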
Technical Requirements
A GH action was recently introduced to run when a PR is raised, but it fails at the checkout step when the PR comes from a forked repo.
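If the failure really is at checkout, explicitly checking out the PR head from the fork usually resolves it; `repository` and `ref` are standard `actions/checkout` inputs. A hedged sketch (whether this fits depends on what bench.yml actually does, and any step that later pushes commits will still lack write access to the fork):

```yaml
# Check out the PR head even when the PR comes from a fork.
- uses: actions/checkout@v4
  with:
    repository: ${{ github.event.pull_request.head.repo.full_name }}
    ref: ${{ github.event.pull_request.head.ref }}
```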
The CI workflow requires enhancements to handle benchmark execution and updating the README file correctly. We need to address the following scenarios:
On Main Branch:
On Pull Requests:
It seems that Tailcall went from ~7k req/s to ~60k req/s even for the baseline query, which doesn't contain N+1 queries.
Is this an issue with the benchmark, or shall I add global request batching to Caliban as well?
Also, what's the reason for not using the batching endpoint for users, such as /users?id=1&id=2, etc.? I ran the benchmark locally and more than 50-60% of the CPU was being used by the nginx proxy, meaning that we're no longer benchmarking GraphQL performance.
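The batching endpoint mentioned above would let a dataloader collapse N user lookups into a single upstream request instead of N proxied ones. A sketch of building such a URL (assuming jsonplaceholder-style `/users?id=1&id=2` filtering; helper name hypothetical):

```javascript
// Build a batched users URL from a list of ids, e.g. [1, 2] ->
// "/users?id=1&id=2", so one HTTP call serves a whole dataloader batch.
function batchUsersUrl(ids) {
  const qs = ids.map((id) => `id=${encodeURIComponent(id)}`).join("&");
  return `/users?${qs}`;
}

console.log(batchUsersUrl([1, 2])); // "/users?id=1&id=2"
```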
Technical Requirements
Here is the log from the benchmark job:

```
[2023-11-17T14:55:22.168Z] #8 0.296
#8 0.296 The NodeSource Node.js Linux distributions GitHub repository contains
#8 0.296 information about which versions of Node.js and which Linux distributions
#8 0.296 are supported and how to install it.
#8 0.296 https://github.com/nodesource/distributions
#8 0.296
#8 0.296
#8 0.296 SCRIPT DEPRECATION WARNING
#8 0.296
#8 0.296 ================================================================================
#8 0.297 ================================================================================
#8 0.297
#8 0.297 TO AVOID THIS WAIT MIGRATE THE SCRIPT
```

This slows down the benchmark, as the script sleeps for 60 seconds before continuing.
Add benchmarks for GraphQL JIT
Add all the existing benchmarks
Shell files are hard to manage, test and maintain. Re-write the complete logic in JS.
The user fetched for posts seems to be incorrect in gqlgen: in the response below the post has `userId: 1`, but the resolved user has `id: 6`. I noticed this inconsistency in the CI.

```json
{
  "data": {
    "posts": [
      {
        "id": 1,
        "userId": 1,
        "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
        "user": {
          "id": 6,
          "name": "Mrs. Dennis Schulist",
          "email": "[email protected]"
        }
      }, ...
    ]
  }
}
```
Technical Requirements
```graphql
{ posts { id userId title user { id name email } } }
```

`/user/:id`
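A wrong user attached to a post is the classic symptom of a batch-load function returning results in a different order than the requested keys. A sketch of re-aligning batch results by id (an illustration of the invariant, not gqlgen's actual Go code):

```javascript
// Dataloader batch functions must return results in the same order as
// the requested keys; re-align an unordered batch response by id.
function alignByKey(keys, users) {
  const byId = new Map(users.map((u) => [u.id, u]));
  return keys.map((k) => byId.get(k) ?? null);
}

const users = [
  { id: 6, name: "Mrs. Dennis Schulist" },
  { id: 1, name: "Leanne Graham" },
];
console.log(alignByKey([1, 6], users).map((u) => u.id)); // [ 1, 6 ]
```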
Currently all servers start on different ports:

.devcontainer/Dockerfile

```dockerfile
EXPOSE 8080 8081 8082 8083 8084 3000
```

Ideally we would like to expose every service on a single port, 8000. While running the tests, we will start the service, run the benchmark, and stop the service completely, only after which we move on to the next service.
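Since services run one at a time, a single shared port is sufficient. Assuming each server can be configured to bind it, the Dockerfile change might look like this (sketch):

```dockerfile
# One shared port; each service binds 8000 only while it is being benchmarked.
EXPOSE 8000
```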
Technical Requirements
Rename the `benchmarking-runner` job to `runner`. Update all the links of the GraphQL servers to point to their open-source GitHub repositories.