
graphql-benchmarks's Issues

Nginx proxy is not working

Hi, it seems that since #28 the Nginx proxy hasn't been working as expected, which means the GraphQL endpoints are returning errors. This largely explains why both Tailcall and Gqlgen saw a massive increase in throughput and RPS after the PR was merged, since their underlying server / client is more performant than Caliban's. It also likely explains why DGS is crashing (it probably doesn't handle the client failures well).

Steps to reproduce:

./nginx/run.sh
./graphql/tailcall/run.sh &
sleep 2
./test_query.sh

Running the above returns:

{"data":null,"errors":[{"message":"IOException: Request error: error sending request for url (http://jsonplaceholder.typicode.com/posts): error trying to connect: tcp connect error: Connection refused (os error 111)","locations":[{"line":1,"column":2}]}]}

Disable file updates on PRs

Running the build on PRs is constantly causing merge conflicts because all of them keep updating the README and the image files.

Technical Requirements

  • Run the benchmarks as they are, without committing anything to git.
  • Once the results are ready, print them in the CI logs as a separate step.
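A minimal sketch of such a CI step, assuming hypothetical script and file names (`run_benchmarks.sh`, `results.txt`) in place of whatever the repo actually uses:

```shell
#!/usr/bin/env sh
# Sketch: run the benchmarks and print the results in the CI log
# without committing anything. run_benchmarks.sh and results.txt are
# hypothetical names, not the repo's real files.
set -e
RESULTS_FILE="results.txt"
# Stand-in for ./run_benchmarks.sh writing its summary:
echo "Tailcall 28389.70 req/s" > "$RESULTS_FILE"
echo "=== Benchmark results (printed only, never committed) ==="
cat "$RESULTS_FILE"
```

Because nothing touches git, PR builds can run this step without generating merge conflicts.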

reformat: Benchmark Results

Update the benchmark results in the README.md file to the following format:

Query 1: {posts {title body user {name}}}

  Server           Requests/sec   Latency (ms)   Relative
  Tailcall            28,389.70           3.52     83.28x
  async-graphql        2,411.80          41.48      7.08x
  Caliban              1,416.62          70.59      4.16x
  Gqlgen               1,343.30          77.00      3.94x
  Apollo GraphQL         791.84         126.09      2.33x
  Netflix DGS            340.42         219.50      1.00x

Query 2: {posts {title body}}

  Server           Requests/sec   Latency (ms)   Relative
  Tailcall            56,655.60           1.75     37.59x
  Caliban              8,833.37          11.79      5.86x
  async-graphql        7,027.24          14.34      4.66x
  Gqlgen               1,969.06          51.76      1.31x
  Apollo GraphQL       1,724.77          57.78      1.14x
  Netflix DGS          1,507.36          70.39      1.00x

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

cargo
graphql/async_graphql/Cargo.toml
  • anyhow 1.0.82
  • async-graphql 7.0.3
  • async-graphql-axum 7.0.3
  • axum 0.7.5
  • tokio 1.37.0
  • reqwest 0.12.4
  • serde 1.0.200
  • serde_json 1.0.116
  • futures 0.3.30
  • mimalloc 0.1.41
dockerfile
.devcontainer/Dockerfile
github-actions
.github/workflows/bench.yml
  • actions/checkout v4
  • devcontainers/ci v0.3
  • peter-evans/commit-comment v3
  • stefanzweifel/git-auto-commit-action v5
gomod
graphql/gqlgen/go.mod
  • go 1.22.2
  • github.com/99designs/gqlgen v0.17.49
  • github.com/vektah/gqlparser/v2 v2.5.16
  • github.com/vikstrous/dataloadgen v0.0.6
gradle
graphql/netflix_dgs/settings.gradle
graphql/netflix_dgs/build.gradle
  • org.springframework.boot 3.3.1
  • io.spring.dependency-management 1.1.6
  • com.netflix.graphql.dgs:graphql-dgs-platform-dependencies 9.0.4
  • io.projectreactor:reactor-bom 2023.0.8
  • org.apache.httpcomponents.core5:httpcore5 5.2.5
  • org.apache.httpcomponents.client5:httpclient5 5.3.1
gradle-wrapper
graphql/netflix_dgs/gradle/wrapper/gradle-wrapper.properties
  • gradle 8.9
npm
graphql/apollo_server/package.json
  • @apollo/server ^4.9.3
  • axios ^1.5.0
  • dataloader ^2.2.2
  • graphql ^16.8.0
  • http-proxy-agent ^7.0.0
  • pm2 ^5.3.0
graphql/graphql_jit/package.json
  • @graphql-tools/schema ^10.0.4
  • axios ^1.7.2
  • dataloader ^2.2.2
  • express ^4.19.2
  • graphql ^15.9.0
  • graphql-jit ^0.8.6
graphql/hasura/package.json
  • express ^4.19.2
graphql/tailcall/package.json
  • @tailcallhq/tailcall 0.96.11
sbt
graphql/caliban/build.sbt
  • scala 3.4.2
  • com.github.ghostdogpr:caliban-quick 2.8.1
  • com.github.plokhotnyuk.jsoniter-scala:jsoniter-scala-core 2.30.7
  • com.github.plokhotnyuk.jsoniter-scala:jsoniter-scala-macros 2.30.7
  • org.apache.httpcomponents.client5:httpclient5 5.3.1
  • dev.zio:zio 2.1.6
graphql/caliban/project/build.properties
  • sbt/sbt 1.10.1

  • Check this box to trigger a request for Renovate to run again on this repository

Add a "Hello World" benchmark

Like the new N + 1 benchmark, add a simple "Hello World" benchmark:

query { greet }
{
  "data": { "greet": "Hello World!" }
}

Technical Requirements

  • Ensure that there is no duplication in this PR; anything that can be abstracted into a common utility should be.
  • Results should be embedded into the README file.
  • Update all existing implementations for the "Hello World" benchmark.
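For reference, the request a benchmark client would send for this query might look like the following. The shared port 8000 and the /graphql path are assumptions, and no server is started here, so the actual curl call is left commented out:

```shell
#!/usr/bin/env sh
# Build the "Hello World" request body. The endpoint below is an
# assumption (single shared port, /graphql path), not a confirmed URL.
QUERY='{"query":"{ greet }"}'
echo "POST /graphql body: $QUERY"
# Against a running server this would be:
# curl -s -H 'Content-Type: application/json' -d "$QUERY" http://localhost:8000/graphql
```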

Improve project quality

  • Use a consistent naming convention for files. Stick to kebab-case.
  • Move images to a separate folder
  • Display images on the README
  • Update the README with steps to run it on Codespaces
  • Add a link to set up and run on Codespaces
  • Double the resolution of the images
  • Add the actual data in tabular format on the README as well
  • Fix the legend getting clipped
  • Add website links instead of GitHub links
  • Need a way to configure load externally
  • Add an architecture diagram
  • Add a benchmark description

Run all the benchmarks in parallel

Currently the benchmarks run sequentially, one after the other. We want to change that so that all the benchmarks run in parallel. For each server, start with the "greet" query, then move on to a list of posts, and then to posts with users.

Technical Requirements

  • Ensure each GraphQL server runs independently of the others on a single runner.
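One way the fan-out could look, as a sketch: background each server's suite and fail the job if any of them fails. The per-server `bench.sh` scripts are hypothetical stand-ins for the repo's real scripts.

```shell
#!/usr/bin/env sh
# Sketch: run each server's benchmark suite in parallel and propagate
# any failure. The echo is a stand-in for ./graphql/$server/bench.sh,
# which would run greet, then posts, then posts-with-users in order.
pids=""
for server in tailcall caliban gqlgen; do
  ( echo "benchmarking $server" ) &
  pids="$pids $!"
done
status=0
for pid in $pids; do
  wait "$pid" || status=1   # remember any suite that failed
done
[ "$status" -eq 0 ] && echo "all benchmarks passed"
```

Note that truly parallel runs on one machine will contend for CPU, so per-server numbers may need a dedicated runner each to stay comparable.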

Fix CI Workflow for Benchmark Execution and Result Handling

Description

The CI workflow needs enhancements to handle benchmark execution and README updates correctly. We need to address the following scenarios:

  1. Main Branch: When changes are pushed to the main branch, the CI should run benchmarks and update the README file with the results, committing these changes back to the main branch.
  2. Pull Requests: For pull requests, the CI should run benchmarks but not update the README or commit any changes. This step is to ensure benchmarks run successfully without altering the repository.
  3. Failure Handling: The CI should fail if it encounters any issues in generating or updating benchmark results, making these failures visible for prompt resolution.

Expected Behavior

  • On Main Branch:

    • Run benchmarks.
    • If benchmarks run successfully, update the README file with the results and commit these changes to the main branch.
    • If there is a failure in running benchmarks or updating the README, the CI should fail and report the error.
  • On Pull Requests:

    • Run benchmarks without updating the README or committing any changes.
    • CI should fail if there is an issue in generating benchmark results.

Acceptance Criteria

  1. CI updates to run benchmarks on the main branch and update the README file with the results, followed by committing these changes.
  2. CI updates to run benchmarks on pull requests without updating the README or committing any changes.
  3. Ensure CI fails and reports errors if benchmark generation or README update fails.
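The branch gating described above boils down to one decision on `GITHUB_REF` (a variable GitHub Actions really sets); a sketch, with everything else illustrative:

```shell
#!/usr/bin/env sh
# Decide the CI mode from the ref GitHub Actions provides.
decide_mode() {
  if [ "$1" = "refs/heads/main" ]; then
    echo "run-and-commit"   # main: update README and commit results
  else
    echo "run-only"         # PRs: run benchmarks, never commit
  fi
}
decide_mode "refs/heads/main"
decide_mode "refs/pull/123/merge"   # illustrative PR ref
```

With `set -e` in the surrounding job script, any failure in benchmark generation or the README update makes the step, and hence the CI run, fail visibly.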

Issue with the N+1 benchmark

It seems that Tailcall went from ~7k req/s to ~60k req/s, even for the baseline query that doesn't contain N+1 queries.

Is this an issue with the benchmark or shall I add global request batching to Caliban as well?

Also, what's the reason for not using the batching endpoint for users, such as /users?id=1&id=2, etc.? I ran the benchmark locally, and more than 50-60% of the CPU was being used by the nginx proxy, meaning that we're no longer benchmarking GraphQL performance.

Install node from a non-deprecated source

Here is the log from the benchmark job:

[2023-11-17T14:55:22.168Z] #8 0.296 
#8 0.296   The NodeSource Node.js Linux distributions GitHub repository contains
#8 0.296   information about which versions of Node.js and which Linux distributions
#8 0.296   are supported and how to install it.
#8 0.296   https://github.com/nodesource/distributions
#8 0.296 
#8 0.296 
#8 0.296                           SCRIPT DEPRECATION WARNING
#8 0.296 
#8 0.296 ================================================================================
#8 0.297 ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓
#8 0.297 ================================================================================
#8 0.297 
#8 0.297 TO AVOID THIS WAIT MIGRATE THE SCRIPT

This slows down the benchmark as the script sleeps for 60 seconds before continuing.

Wrong user fetched in gqlgen N+1 posts

The user fetched for the posts seems to be incorrect in the gqlgen implementation. I noticed this inconsistency in the CI.

{
  "data": {
    "posts": [
      {
        "id": 1,
        "userId": 1,
        "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
        "user": {
          "id": 6,
          "name": "Mrs. Dennis Schulist",
          "email": "[email protected]"
        }
      }, ...
    ]
  }
}

Setup benchmarks to perform an N + 1 query

Technical Requirements

  • Make a request to the respective servers with
     { posts { id userId title user { id name email } } }
  • Ensure the existing benchmarks don't break
  • Run the benchmark for 30 seconds
  • Use a data-loader to batch requests to /user/:id
  • NOTE: Do not make a bulk API call to load all users.
  • Test results should be combined with the previous tests and
    1. Comment on the commit like we do right now for each PR
    2. Update README with the new set of tests

Start all servers on port 8000

Currently all servers start on different ports.

.devcontainer/Dockerfile

EXPOSE 8080 8081 8082 8083 8084 3000

Ideally we would like to expose every service on a single port, 8000. While running the tests, we will start a service, run the benchmark, and stop the service completely, and only then move on to the next service.
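The proposed lifecycle could be sketched like this, with a placeholder process standing in for each server's real run.sh:

```shell
#!/usr/bin/env sh
# Sketch: start a server on the shared port, benchmark it, and stop it
# completely before moving on. 'sleep 30' is a placeholder for the real
# ./graphql/$server/run.sh, which would bind to :8000.
PORT=8000
count=0
for server in tailcall caliban gqlgen; do
  sleep 30 &
  pid=$!
  echo "benchmarking $server on port $PORT"
  kill "$pid" 2>/dev/null
  wait "$pid" 2>/dev/null   # ensure it is fully gone before the next one
  count=$((count + 1))
done
echo "ran $count servers sequentially on port $PORT"
```

Waiting on the killed process matters: without it, the next server can fail to bind port 8000 because the previous one has not fully released it.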

Run the benchmarks automatically on `benchmarking-runner`

Technical Requirements

  • Run benchmarks for all PRs on benchmarking-runner runner.
  • It should follow the steps we have documented and automatically generate
    1. The current README.md with a table sorted highest to lowest by RPS.
    2. A histogram of RPS
    3. A histogram of Latency
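Sorting the table rows highest-to-lowest by RPS is a one-liner; a sketch using sample numbers from the existing results:

```shell
#!/usr/bin/env sh
# Sort result rows highest-to-lowest by RPS (second column), as the
# README table requires. The data lines are sample numbers, not live
# benchmark output.
printf 'Caliban 1416.62\nTailcall 28389.70\nGqlgen 1343.30\n' \
  | sort -k2 -nr > sorted.txt
cat sorted.txt
```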
