wundergraph / graphql-go-tools

GraphQL Router / API Gateway framework written in Golang, focusing on correctness, extensibility, and high performance. Supports Federation v1 & v2, Subscriptions & more.

Home Page: https://graphql-api-gateway.com

License: MIT License


graphql-go-tools's Introduction


GraphQL Router / API Gateway Framework written in Golang

We're hiring!

Are you interested in working on graphql-go-tools? We're looking for experienced Go developers and DevOps or Platform Engineering specialists to help us run Cosmo Cloud. If you're more interested in working with Customers on their GraphQL Strategy, we also offer Solution Architect positions.

Check out the currently open positions.

Replacement for Apollo Router

If you're looking for a complete ready-to-use Open Source Router for Federation, have a look at the Cosmo Router which is based on this library.

Cosmo Router wraps this library and provides a complete solution for Federated GraphQL including the following features:

  • Federation Gateway
  • OpenTelemetry Metrics & Distributed Tracing
  • Prometheus Metrics
  • GraphQL Schema Usage Exporter
  • Health Checks
  • GraphQL Playground
  • Execution Tracing Exporter & UI in the Playground
  • Federated Subscriptions over WebSockets (graphql-ws & graphql-transport-ws protocol support) and SSE
  • Authentication using JWKS & JWT
  • Highly available & scalable using S3 as a backend for the Router Config
  • Persisted Operations / Trusted Documents
  • Traffic Shaping (Timeouts, Retries, Header & Body Size Limits, Subgraph Header forwarding)
  • Custom Modules & Middleware

State of the packages

This repository contains multiple packages joined via a Go workspace.

Package overview (name, description, dependencies, maintenance state):

  • graphql-go-tools v2 — GraphQL engine implementation consisting of lexer, parser, AST, AST validation, AST normalization, datasources, query planner, and resolver. Supports GraphQL Federation and has built-in support for batching federation entity calls. No package dependencies. Current version, under active development.
  • execution — Execution helpers for request handling and the engine configuration builder. Depends on graphql-go-tools v2 and composition. Current version.
  • examples/federation — Example implementation of a GraphQL federation gateway. This example is not production-ready; for a production-ready solution, please consider the Cosmo Router. Depends on the execution package. Current federation gateway example.
  • graphql-go-tools v1 — Legacy GraphQL engine implementation. This package is in maintenance mode and accepts only pull requests with critical bug fixes; all new features will be implemented in the v2 package only. Deprecated, maintenance mode.

Notes

This library is used in production at WunderGraph. We've recently introduced a v2 module that is not completely backwards compatible with v1, hence the major version bump. The v2 module contains big rewrites in the engine package, mainly to better support GraphQL Federation. Please consider the v1 module as deprecated and move to v2 as soon as possible.

We have customers who pay us to maintain this library and steer the direction of the project. Contact us if you're looking for commercial support, features or consulting.

Performance

The architecture of this library is designed for performance, high throughput, and low garbage-collection overhead. The following benchmark measures the "overhead" of loading and resolving a GraphQL response from four static in-memory Subgraphs at 0.007459 ms/op. In more complete end-to-end benchmarks, we've measured up to 8x more requests per second and 8x lower p99 latency compared to Apollo Router, which is written in Rust.

cd v2/pkg/engine
go test -run=nothing -bench=Benchmark_NestedBatchingWithoutChecks -memprofile memprofile.out -benchtime 3s && go tool pprof memprofile.out
goos: darwin
goarch: arm64
pkg: github.com/wundergraph/graphql-go-tools/v2/pkg/engine/resolve
Benchmark_NestedBatchingWithoutChecks-10          473186              7134 ns/op          52.00 MB/s        2086 B/op         36 allocs/op

Tutorial

If you're here to learn how to use this library to build your own custom GraphQL Router or API Gateway, here's a speed run tutorial for you, based on how we use this library in Cosmo Router.

package main

import (
  "bytes"
  "context"
  "fmt"

  "github.com/cespare/xxhash/v2"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/ast"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/astnormalization"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/astparser"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/astprinter"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/asttransform"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/astvalidation"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/astvisitor"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/datasource/staticdatasource"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/plan"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/engine/resolve"
  "github.com/wundergraph/graphql-go-tools/v2/pkg/operationreport"
)

/*
ExampleParsePrintDocument shows you the most basic usage of the library.
It parses a GraphQL document and prints it back to a writer.
*/
func ExampleParsePrintDocument() {

	input := []byte(`query { hello }`)

	report := &operationreport.Report{}
	document := ast.NewSmallDocument()
	parser := astparser.NewParser()
	printer := &astprinter.Printer{}

	document.Input.ResetInputBytes(input)
	parser.Parse(document, report)

	if report.HasErrors() {
		panic(report.Error())
	}

	out := &bytes.Buffer{}
	err := printer.Print(document, nil, out)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.String()) // Output: query { hello }
}

/*
Okay, that was easy, but also not very useful.
Let's try to parse a more complex document and print it back to a writer.
*/

// ExampleParseComplexDocument shows a special feature of the printer
func ExampleParseComplexDocument() {

	input := []byte(`
		query {
			hello
			foo {
				bar
			}
		}
	`)

	report := &operationreport.Report{}
	document := ast.NewSmallDocument()
	parser := astparser.NewParser()
	printer := &astprinter.Printer{}

	document.Input.ResetInputBytes(input)
	parser.Parse(document, report)

	if report.HasErrors() {
		panic(report.Error())
	}

	out := &bytes.Buffer{}
	err := printer.Print(document, nil, out)
	if err != nil {
		panic(err)
	}
	fmt.Println(out.String()) // Output: query { hello foo { bar } }
}

/*
You'll notice that the printer removes all whitespace and newlines.
But what if we wanted to print the document with indentation?
*/

func ExamplePrintWithIndentation() {

	input := []byte(`
		query {
			hello
			foo {
				bar
			}
		}
	`)

	report := &operationreport.Report{}
	document := ast.NewSmallDocument()
	parser := astparser.NewParser()

	document.Input.ResetInputBytes(input)
	parser.Parse(document, report)

	if report.HasErrors() {
		panic(report.Error())
	}

	out, err := astprinter.PrintStringIndent(document, nil, "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// Output: query {
	//   hello
	//   foo {
	//     bar
	//   }
	// }
}

/*
Okay, fantastic. We can parse and print GraphQL documents.
As a next step, we could analyze the document and extract some information from it.
What if we wanted to know the name of the operation in the document, if any?
And what if we wanted to know about the Operation type?
*/

func ExampleParseOperationNameAndType() {

	input := []byte(`
		query MyQuery {
			hello
			foo {
				bar
			}
		}
	`)

	report := &operationreport.Report{}
	document := ast.NewSmallDocument()
	parser := astparser.NewParser()

	document.Input.ResetInputBytes(input)
	parser.Parse(document, report)

	if report.HasErrors() {
		panic(report.Error())
	}

	operationCount := 0
	var (
		operationNames []string
		operationTypes []ast.OperationType
	)

	for _, node := range document.RootNodes {
		if node.Kind != ast.NodeKindOperationDefinition {
			continue
		}
		operationCount++
		name := document.OperationDefinitionNameString(node.Ref)
		operationNames = append(operationNames, name)
		operationType := document.OperationDefinitions[node.Ref].OperationType
		operationTypes = append(operationTypes, operationType)
	}

	fmt.Println(operationCount) // Output: 1
	fmt.Println(operationNames) // Output: [MyQuery]
}

/*
We've now seen how to analyze the document and learn a bit about it.
We could now add some validation to our application,
e.g. we could check for the number of operations in the document,
and return an error if there are multiple anonymous operations.

We could also validate the Operation content against a schema.
But before we do this, we need to normalize the document.
This is important because validation relies on the document being normalized.
It was much easier to build the validation and many other features on top of a normalized document.

Normalization is the process of transforming the document into a canonical form.
This means that the document is transformed in a way that makes it easier to reason about it.
We inline fragments, we remove unused fragments,
we remove duplicate fields, we remove unused variables,
we remove unused operations etc...

So, let's normalize the document!
*/

func ExampleNormalizeDocument() {

	input := []byte(`
		query MyQuery {
			hello
			hello
			foo {
				bar
				bar
			}
			...MyFragment
		}

		fragment MyFragment on Query {
			hello
			foo {
				bar
			}
		}
	`)

	schema := []byte(`
		type Query {
			hello: String
			foo: Foo
		}
	
		type Foo {
			bar: String
		}
	`)

	report := &operationreport.Report{}
	document := ast.NewSmallDocument()
	parser := astparser.NewParser()

	document.Input.ResetInputBytes(input)
	parser.Parse(document, report)

	if report.HasErrors() {
		panic(report.Error())
	}

	schemaDocument := ast.NewSmallDocument()
	schemaParser := astparser.NewParser()
	schemaDocument.Input.ResetInputBytes(schema)
	schemaParser.Parse(schemaDocument, report)

	if report.HasErrors() {
		panic(report.Error())
	}

	// graphql-go-tools is very strict about the schema
	// the above GraphQL Schema is not fully valid, e.g. the `schema { query: Query }` part is missing
	// we can fix this automatically by merging the schema with a base schema
	err := asttransform.MergeDefinitionWithBaseSchema(schemaDocument)
	if err != nil {
		panic(err)
	}

	// you can customize what rules the normalizer should apply
	normalizer := astnormalization.NewWithOpts(
		astnormalization.WithExtractVariables(),
		astnormalization.WithInlineFragmentSpreads(),
		astnormalization.WithRemoveFragmentDefinitions(),
		astnormalization.WithRemoveNotMatchingOperationDefinitions(),
	)

	// It's generally recommended to always give your operation a name
	// If it doesn't have a name, just add one to the AST before normalizing it
	// This is not strictly necessary, but ensures that all normalization rules work as expected
	normalizer.NormalizeNamedOperation(document, schemaDocument, []byte("MyQuery"), report)

	if report.HasErrors() {
		panic(report.Error())
	}

	out, err := astprinter.PrintStringIndent(document, nil, "  ")
	if err != nil {
		panic(err)
	}

	fmt.Println(out)
	// Output: query MyQuery {
	//   hello
	//   foo {
	//     bar
	//   }
	// }
}

/*
Okay, that was a lot of work, but now we have a normalized document.
As you can see, all the duplicate fields have been removed and the fragment has been inlined.

What can we do with it?
Well, the possibilities are endless,
but why don't we start with validating the document against a schema?
Alright. Let's do it!
*/

func ExampleValidateDocument() {
	schemaDocument := ast.NewSmallDocument()
	operationDocument := ast.NewSmallDocument()
	report := &operationreport.Report{}
	validator := astvalidation.DefaultOperationValidator()
	validator.Validate(operationDocument, schemaDocument, report)
	if report.HasErrors() {
		panic(report.Error())
	}
}

/*
Fantastic, we've now got a GraphQL document that is valid against a schema.

As a next step, we could generate a cache key for the document.
This is very useful if we want to start doing expensive operations afterward that could be de-duplicated or cached.
At the same time, generating a cache key from a normalized document is not as trivial as it sounds.
Let's take a look!
*/

func ExampleGenerateCacheKey() {
	operationDocument := ast.NewSmallDocument()
	schemaDocument := ast.NewSmallDocument()
	report := &operationreport.Report{}

	normalizer := astnormalization.NewWithOpts(
		astnormalization.WithExtractVariables(),
		astnormalization.WithInlineFragmentSpreads(),
		astnormalization.WithRemoveFragmentDefinitions(),
		astnormalization.WithRemoveNotMatchingOperationDefinitions(),
	)

	normalizer.NormalizeNamedOperation(operationDocument, schemaDocument, []byte("MyQuery"), report)
	printer := &astprinter.Printer{}
	keyGen := xxhash.New()
	err := printer.Print(operationDocument, schemaDocument, keyGen)
	if err != nil {
		panic(err)
	}

	// you might be thinking that we're done now, but we're not
	// we've extracted the variables, so we need to add them to the cache key

	_, err = keyGen.Write(operationDocument.Input.Variables)
	if err != nil {
		panic(err)
	}

	key := keyGen.Sum64()
	fmt.Printf("%x", key) // Output: {cache key}
}

/*
Good job! We now have a correct cache key for the document.
We're using this ourselves in production to de-duplicate e.g. planning the execution of a GraphQL Operation.

There's just one problem with the above code.
An attacker could easily send the same document with a different Operation name and get a different cache key.
This could quite easily fill up our cache with duplicate entries.
To prevent this, we can make the operation name static.
Let's change our code to account for this.
*/

func ExampleGenerateCacheKeyWithStaticOperationName() {

	staticOperationName := []byte("O")

	operationDocument := ast.NewSmallDocument()
	schemaDocument := ast.NewSmallDocument()
	report := &operationreport.Report{}

	normalizer := astnormalization.NewWithOpts(
		astnormalization.WithExtractVariables(),
		astnormalization.WithInlineFragmentSpreads(),
		astnormalization.WithRemoveFragmentDefinitions(),
		astnormalization.WithRemoveNotMatchingOperationDefinitions(),
	)

	// First, we add the static operation name to the document and get an "address" to the byte slice (string) in the document
	// We cannot just add a string to an AST because the AST only stores references to byte slices
	// Storing strings in AST nodes would be very inefficient and would require a lot of allocations
	nameRef := operationDocument.Input.AppendInputBytes(staticOperationName)

	for _, node := range operationDocument.RootNodes {
		if node.Kind != ast.NodeKindOperationDefinition {
			continue
		}
		name := operationDocument.OperationDefinitionNameString(node.Ref)
		if name != "MyQuery" {
			continue
		}
		// Then we set the name of the operation to the address of the static operation name
		// Now we have renamed MyQuery to O
		operationDocument.OperationDefinitions[node.Ref].Name = nameRef
	}

	// Now we can normalize the modified document
	// All Operations that don't have the name O will be removed
	normalizer.NormalizeNamedOperation(operationDocument, schemaDocument, staticOperationName, report)

	printer := &astprinter.Printer{}
	keyGen := xxhash.New()
	err := printer.Print(operationDocument, schemaDocument, keyGen)
	if err != nil {
		panic(err)
	}

	_, err = keyGen.Write(operationDocument.Input.Variables)
	if err != nil {
		panic(err)
	}

	key := keyGen.Sum64()
	fmt.Printf("%x", key) // Output: {cache key}
}

/*
With these changes, the name of the operation doesn't matter anymore.
Independent of the name, the cache key will always be the same.

As a next step, we could start planning the execution of the operation.
This is a very complex topic, so we'll just show you how to plan the operation.
Going into detail would be beyond the scope of this example.
It took us years to get this right, so we won't be able to explain it in a few lines of code.

graphql-go-tools is not a GraphQL server by itself.
It's a library that you can use to build Routers, Gateways, or even GraphQL Server frameworks on top of it.
What this means is that there's no built-in support to define "resolvers".
Instead, you have to define DataSources that are used to resolve fields.

A DataSource can be anything, e.g. a static value, a HTTP JSON API, a GraphQL API, a WASM Lambda, a Database etc.
It's up to you to implement the DataSource interface.

The simplest DataSource is the StaticDataSource.
It's a DataSource that returns a static value for a field.
Let's see how to use it!

You have to attach the DataSource to one or more fields in the schema,
and you have to provide a config and a factory for the DataSource,
so that the planner knows how to create an execution plan for the DataSource and an "instance" of the DataSource.
*/

func ExamplePlanOperation() {
	staticDataSource, err := plan.NewDataSourceConfiguration[staticdatasource.Configuration](
		"StaticDataSource",
		&staticdatasource.Factory[staticdatasource.Configuration]{},
		&plan.DataSourceMetadata{
			RootNodes: []plan.TypeField{
				{
					TypeName:   "Query",
					FieldNames: []string{"hello"},
				},
			},
		},
		staticdatasource.Configuration{
			Data: `{"hello":"world"}`,
		},
	)
	if err != nil {
		panic(err)
	}

	config := plan.Configuration{
		DataSources: []plan.DataSource{
			staticDataSource,
		},
		Fields: []plan.FieldConfiguration{
			{
				TypeName:              "Query", // attach this config to the Query type and the field hello
				FieldName:             "hello",
				DisableDefaultMapping: true,              // disable the default mapping for this field, which only applies to GraphQL APIs
				Path:                  []string{"hello"}, // return the value of the field "hello" from the JSON data
			},
		},
		IncludeInfo: true,
	}

	operationDocument := ast.NewSmallDocument() // containing the following query: query O { hello }
	schemaDocument := ast.NewSmallDocument()
	report := &operationreport.Report{}
	operationName := "O"

	planner := plan.NewPlanner(context.Background(), config)
	executionPlan := planner.Plan(operationDocument, schemaDocument, operationName, report)
	if report.HasErrors() {
		panic(report.Error())
	}
	fmt.Printf("%+v", executionPlan) // Output: Plan...
}

/*
As you can see, the planner has created a plan for us.
This plan can now be executed by using the Resolver.
*/

func ExampleExecuteOperation() {
	var preparedPlan plan.Plan
	resolver := resolve.New(context.Background(), true)

	ctx := resolve.NewContext(context.Background())

	switch p := preparedPlan.(type) {
	case *plan.SynchronousResponsePlan:
		out := &bytes.Buffer{}
		err := resolver.ResolveGraphQLResponse(ctx, p.Response, nil, out)
		if err != nil {
			panic(err)
		}
		fmt.Println(out.String()) // Output: {"data":{"hello":"world"}}
	case *plan.SubscriptionResponsePlan:
		// this is a Query, so we ignore Subscriptions for now, but they are supported
	}
}

/*
Well done! You've now seen how to parse, print, validate, normalize, plan and execute a GraphQL document.
You've built a complete GraphQL API Gateway from scratch.
That said, this was really just the tip of the iceberg.

When you look under the hood of graphql-go-tools, you'll notice that a lot of its functionality is built on top of the AST,
more specifically on top of the "astvisitor" package.
It comes with a lot of useful bells and whistles that help you to solve complex problems.

You'll notice that almost everything, from normalization to printing, planning, validation, etc.
is built on top of the AST and the astvisitor package.

Let's take a look at a basic example of how to use the astvisitor package to build higher level functionality.
Here's a simple use case:

Let's walk through the AST of a GraphQL document and extract all tuples of (TypeName, FieldName).
This is useful, e.g. when you want to extract information about the fields that are used in a document.
*/

type visitor struct {
	walker                *astvisitor.Walker
	operation, definition *ast.Document
	typeFields            [][]string
}

func (v *visitor) EnterField(ref int) {
	// get the name of the enclosing type (Query)
	enclosingTypeName := v.walker.EnclosingTypeDefinition.NameString(v.definition)
	// get the name of the field (hello)
	fieldName := v.operation.FieldNameString(ref)
	// get the type definition of the field (String)
	definitionRef, exists := v.walker.FieldDefinition(ref)
	if !exists {
		return
	}
	// get the name of the field type (String)
	fieldTypeName := v.definition.FieldDefinitionTypeNameString(definitionRef)
	v.typeFields = append(v.typeFields, []string{enclosingTypeName, fieldName, fieldTypeName})
}

func ExampleWalkAST() {

	operationDocument := ast.NewSmallDocument() // containing the following query: query O { hello }
	schemaDocument := ast.NewSmallDocument()    // containing the following schema: type Query { hello: String }
	report := &operationreport.Report{}

	walker := astvisitor.NewWalker(24)

	vis := &visitor{
		walker:     &walker,
		operation:  operationDocument,
		definition: schemaDocument,
	}

	walker.RegisterEnterFieldVisitor(vis)
	walker.Walk(operationDocument, schemaDocument, report)
	if report.HasErrors() {
		panic(report.Error())
	}
	fmt.Printf("%+v", vis.typeFields) // Output: [[Query hello String]]
}

/*
This is just a very basic example of what you can do with the astvisitor package,
but you can see that it's very powerful and flexible.

You can register callbacks for every AST node and do whatever you want with it.
In addition, the walker helps you to keep track of the current position in the AST,
and it can help you figure out the enclosing type of a field, or the ancestors of a node.
*/

I hope this tutorial gave you a good overview of what you can do with this library. If you have any questions, feel free to open an issue. Below is a list of the important packages in this library and the problems they solve.

  • ast: the GraphQL AST and all the logic to work with it.
  • astimport: import GraphQL documents from one AST into another
  • astnormalization: normalize a GraphQL document
  • astparser: parse a string into a GraphQL AST
  • astprinter: print a GraphQL AST into a string
  • asttransform: transform a GraphQL AST, e.g. merge it with a base schema
  • astvalidation: validate a GraphQL AST against a schema
  • astvisitor: walk through a GraphQL AST and execute callbacks for every node
  • engine/datasource: the DataSource interface and some implementations
  • engine/datasource/graphql_datasource: the GraphQL DataSource implementation, including support for Federation
  • engine/plan: plan the execution of a GraphQL document
  • engine/resolve: execute the plan
  • introspection: convert a GraphQL Schema into an introspection JSON document
  • lexer: turn a string containing a GraphQL document into a list of tokens
  • playground: add a GraphQL Playground to your Go HTTP server
  • subscription: implements GraphQL Subscriptions over WebSockets and SSE

Contributors

  • Jens Neuse (Project Lead & Active Maintainer)
    • Initial version of graphql-go-tools
    • Currently responsible for the loader and resolver implementation
  • Sergiy Petrunin ๐Ÿ‡บ๐Ÿ‡ฆ (Active Maintainer)
    • Helped cleaning up the API of the pipeline package
    • Refactored the ast package into multiple files
    • Author of the introspection converter (introspection JSON -> AST)
    • Fixed various bugs in the parser & visitor & printer
    • Refactored and enhanced the astimport package
    • Current maintainer of the plan package
  • Patric Vormstein (Active Maintainer)
    • Fixed lexer on windows
    • Author of the graphql package to simplify the usage of the library
    • Refactored the http package to simplify usage with http servers
    • Author of the starwars package to enhance testing
    • Refactor of the Subscriptions Implementation
  • Mantas Vidutis (Inactive)
    • Contributions to the http proxy & the Context Middleware
  • Jonas Bergner (Inactive)
    • Contributions to the initial version of the parser, contributions to the tests
    • Implemented Type Extension merging (deprecated)
  • Vasyl Domanchuk (Inactive)
    • Implemented the logic to generate a federation configuration
    • Added federation example
    • Added the initial version of the batching implementation

Contributions

Feel free to file an issue in case of bugs. We're open to your ideas to enhance the repository.

You are welcome to contribute via PRs. Please open an issue to discuss your idea before you implement it. Make sure to comply with the linting rules, and do not add untested code.


graphql-go-tools's Issues

Undefined types in middlewares

When using graphql defaults in schema definitions, such as String, the ValidationMiddleware throws an error:
ValidationMiddleware: Invalid Request: RuleName: RequiredArguments, Description: TypeNotDefined

This is because the scalars are undefined in the query provided. Users won't expect to need to provide GraphQL defaults in their schema, and this behavior will be a surprise.
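One possible fix, sketched below under the assumption that the validator only needs the built-in scalars to be declared somewhere in the schema: prepend the GraphQL built-in scalar declarations to the user's schema before validation. The `withBuiltInScalars` helper is hypothetical, not part of the library's API.

```go
package main

import "fmt"

// The five built-in scalars from the GraphQL specification that users
// typically don't declare themselves.
const builtInScalars = `scalar String
scalar Int
scalar Float
scalar Boolean
scalar ID
`

// withBuiltInScalars prepends the built-in scalar declarations so that
// validation rules like RequiredArguments can resolve them.
func withBuiltInScalars(schema string) string {
	return builtInScalars + schema
}

func main() {
	schema := "type Query { hello: String }"
	fmt.Println(withBuiltInScalars(schema))
}
```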

refactor JSON decoding & avoid string conversions in the proxy

The current implementation of the proxy uses default JSON handling. That is, the query string is parsed into a string, which is cast to a byte slice for the middlewares, as the middlewares work with byte slices only. As an optimization, we should implement JSON parsing without strings, which is possible with low-level third-party libraries.

parser.ParseTypeSystemDefinition is O(n) where n is input size

In the current implementation, far too many objects are created when parsing large schemas. This all comes down to the decision to store the sub-fields of a field in a reference slice. As I found out, the common access path is to iterate through all child nodes. I came to the conclusion that a small change could greatly improve performance: I'll try to replace reference slices with a linked list of references where the actual content lives in the AST object cache (parser.ParsedDefinitions) itself. This should eliminate a lot of slices.
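The linked-list idea above can be sketched as follows. All type and field names here are illustrative, not the library's actual API: nodes live in a shared arena, and each parent chains its children through `FirstChild`/`NextSibling` indices instead of holding its own reference slice, so parsing allocates no per-field slices.

```go
package main

import "fmt"

// fieldNode is one AST node; children are linked through arena indices.
// -1 means "no child" / "no sibling".
type fieldNode struct {
	Name        string
	FirstChild  int
	NextSibling int
}

// arena is the shared object cache holding all nodes contiguously.
type arena struct {
	nodes []fieldNode
}

// add appends a node and links it into the parent's child chain.
func (a *arena) add(name string, parent int) int {
	ref := len(a.nodes)
	a.nodes = append(a.nodes, fieldNode{Name: name, FirstChild: -1, NextSibling: -1})
	if parent >= 0 {
		if a.nodes[parent].FirstChild == -1 {
			a.nodes[parent].FirstChild = ref
		} else {
			last := a.nodes[parent].FirstChild
			for a.nodes[last].NextSibling != -1 {
				last = a.nodes[last].NextSibling
			}
			a.nodes[last].NextSibling = ref
		}
	}
	return ref
}

// children walks the chain — the common access path the issue mentions.
func (a *arena) children(parent int) []string {
	var names []string
	for ref := a.nodes[parent].FirstChild; ref != -1; ref = a.nodes[ref].NextSibling {
		names = append(names, a.nodes[ref].Name)
	}
	return names
}

func main() {
	a := &arena{}
	root := a.add("Query", -1)
	a.add("hello", root)
	a.add("foo", root)
	fmt.Println(a.children(root)) // [hello foo]
}
```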

discussion: datasource definition via configuration file instead of directives from within the schema

Currently datasources are attached to the schema via directives applied to fields within the schema itself. This is very convenient for manual configuration using the sdl.
However an api driven approach might be easier to support using a separate configuration file, e.g. json or yaml.
This approach would be less convenient for configurations using the sdl, but creating APIs on top of it would be a lot easier.
Maybe there's also a way to have both, with a translation layer between the two?
One tradeoff regarding the directive approach is that directives and data source capabilities need to stay in sync and input validation needs to be put in place. For json there's already json schema which solves this problem.
Additionally I think it's not that easy to wrap the existing functionality (directives) with an api and change the sdl programmatically.

This issue should be used to discuss options.

simplify proxy tests

Currently the tests in "pkg/proxy/http/proxy_test.go" use a repetitive pattern. We should refactor these to use simple check based tests with sub tests. This way it's easier to add additional tests and evolve the proxy with future features.

add a second level of classification for ident tokens

Besides the current keyword a token should also have a second level keyword to give further information. This is useful e.g. to parse fields named "type". While the keyword is TYPE in the current implementation it should be IDENT having keyword TYPE as second level keyword. This way we could easily parse all types as IDENT and then look into the second level type if needed.
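A minimal sketch of the two-level classification, with illustrative types rather than the library's actual lexer API: every identifier lexes as IDENT, and a secondary keyword records whether the spelling happens to be a reserved word such as "type". The parser can then accept fields named "type" without special-casing.

```go
package main

import "fmt"

type keyword int

const (
	UNDEFINED keyword = iota
	IDENT
	TYPE
	QUERY
)

// token carries both classification levels: Keyword is always IDENT for
// identifiers; SecondaryKeyword holds TYPE, QUERY, etc. when applicable.
type token struct {
	Literal          string
	Keyword          keyword
	SecondaryKeyword keyword
}

// lexIdent classifies an identifier, attaching the second-level keyword
// only when the literal matches a reserved word.
func lexIdent(lit string) token {
	t := token{Literal: lit, Keyword: IDENT, SecondaryKeyword: UNDEFINED}
	switch lit {
	case "type":
		t.SecondaryKeyword = TYPE
	case "query":
		t.SecondaryKeyword = QUERY
	}
	return t
}

func main() {
	// a field named "type" is just an IDENT to the parser...
	tok := lexIdent("type")
	fmt.Println(tok.Keyword == IDENT) // true
	// ...but the second level is still available where it matters
	fmt.Println(tok.SecondaryKeyword == TYPE) // true
}
```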

Proxy does not support introspection

This is because the introspection query fails validation.
ValidationMiddleware: Invalid Request: RuleName: RequiredArguments, Description: TypeNotDefined

implements interface implementation wrong

If field types are interfaces, object type definitions should be able to implement the interface using derived field types too, e.g. using another object type definition as return type that implements the interface.

schema validation

We should add a schema validation step to find issues like:

  • schema definition missing in schema
  • to be continued...

Also when validating a Query there should be a meaningful message in case the validation errs due to missing schema declaration. A possible solution might be to initialize the operation definition values with "-1".

schema validation should include custom directives

When generating the final schema, the schema definitions from the execution package need to be included automatically. Next, a user might use these directives, or create additional ones, to extend the schema. Finally, there needs to be a validation step driven by the data sources themselves. That is, each data source should expose a validation function that validates against the schema whether all directive usages are correct. E.g. the user might have changed the shape of a directive, which could lead to a runtime error due to a misconfiguration. So this final validation step should check that all directives look as expected and that all usages are correct. The latter might already be caught by general schema validation, e.g. checking whether the directive is in the correct place and whether all fields are of the correct type and value.

#import comments to combine multiple schema files

I found it annoying to copy schema definitions around in order to make parsers happy.
A comment at the top of a file declaring imports will solve the problem.
Nesting should also be possible.
This should be straightforward as we only have to scan for all files declared in input statements and merge them all into one big file.
Keep in mind that paths need to be relative to the file declaring the input statement.

Example:

main.graphql

#import "../common/schema.graphql"

../common/schema.graphql

#import "./specific/schema.graphql"

correct imports:
../common/schema.graphql
../common/specific/schema.graphql

wrong imports:
../common/schema.graphql
./specific/schema.graphql

paths should also allow glob patterns:
https://golang.org/pkg/path/filepath/#Match

Possible features

This issue is intended to track ideas and URLs on features that could possibly be implemented.

add api to merge AST's with extensions

A GraphQL document might contain multiple schema/type/etc. declarations, including extensions via the "extend" keyword.

As both parser & lexer are now able to deal with "extend" (#81) we should now add an API which makes it possible to merge an extension into the original declaration within the AST.

Example:

before merging->

schema {
  query: Query
}

extend schema {
  mutation: Mutation
}

after merging ->

schema {
  query: Query
  mutation: Mutation
}

The same should apply to object type definitions etc.
Possible use case:
Having this feature implemented opens the way to build schema stitching on top.
That is, parse two schemas and stitch them together via extensions.
This way one could build a proxy that automagically stitches together multiple GraphQL endpoints.
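The merge semantics above can be sketched with plain maps standing in for the real ast package types. MergeExtension is an assumed name, not existing API; a real implementation would operate on AST node indices.

```go
package main

import "fmt"

// TypeDefinition is a simplified stand-in for an AST node:
// a named type/schema declaration and its fields.
type TypeDefinition struct {
	Name   string
	Fields map[string]string // field name -> field type
}

// MergeExtension copies the fields of an "extend" declaration into the
// base definition; a conflicting field name is a validation error.
func MergeExtension(base, extension *TypeDefinition) error {
	for name, typ := range extension.Fields {
		if _, exists := base.Fields[name]; exists {
			return fmt.Errorf("field %q already defined on %s", name, base.Name)
		}
		base.Fields[name] = typ
	}
	return nil
}

func main() {
	// schema { query: Query } + extend schema { mutation: Mutation }
	schema := &TypeDefinition{Name: "schema", Fields: map[string]string{"query": "Query"}}
	ext := &TypeDefinition{Name: "schema", Fields: map[string]string{"mutation": "Mutation"}}
	if err := MergeExtension(schema, ext); err != nil {
		panic(err)
	}
	fmt.Println(schema.Fields) // now contains both query and mutation
}
```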

extract all http data source planning logic into a base http data source

Currently, body and headers are only implemented for HTTPJSONDataSource. Most of the logic also applies to the GraphQL and HTTP Polling DataSources; they only differ slightly when it comes to creating the payload. Refactoring the logic into a BaseHTTPDataSource would make these features instantly available to all HTTP based DataSources and reduce code duplication. Overrides should be possible by design, e.g. the GraphQL DataSource will override the body behaviour.
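The override-by-design idea maps naturally onto Go struct embedding. A minimal sketch, with all type and method names assumed for illustration:

```go
package main

import "fmt"

// BaseHTTPDataSource holds behaviour shared by all HTTP based data
// sources, e.g. header handling and default body rendering.
type BaseHTTPDataSource struct {
	Headers map[string]string
}

// RenderBody is the default payload; an HTTP JSON data source would
// use it as-is.
func (b *BaseHTTPDataSource) RenderBody(query string) string {
	return query
}

// GraphQLDataSource embeds the base and overrides only the body
// rendering, wrapping the query into a GraphQL request payload.
type GraphQLDataSource struct {
	BaseHTTPDataSource
}

func (g *GraphQLDataSource) RenderBody(query string) string {
	return fmt.Sprintf(`{"query":%q}`, query)
}

func main() {
	base := &BaseHTTPDataSource{}
	gql := &GraphQLDataSource{}
	fmt.Println(base.RenderBody("{hello}")) // {hello}
	fmt.Println(gql.RenderBody("{hello}"))  // {"query":"{hello}"}
}
```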

the request AST should be merged before invoking middlewares

Nested inline fragments and fragment spreads should be resolved/merged before invoking the first middleware. This should simplify the middleware development because middlewares don't need to traverse complex nested structures but only work on a "clean" graph.

Request Config

We need to set some information on a per request basis because we handle many different schemas with the same proxy.

  • Schema
  • Backend Addr
  • Backend Host
  • Headers for the request that is made to the backend

All of the info is derived on a per request basis from data that comes from the request.
Currently, this is only the URL but it could be based on request headers in the future too.

It might be helpful in the future to be able to pass down data into middlewares via the context that can be derived from the request information.

Simplify lexer and parser structs by extracting the AST into its own struct and dividing between schema definition and operation

In the current implementation the lexer is the keeper of the input for both the schema definitions and the operation definitions.
These don't necessarily have to be tied to one lexer.
Also, the parser's ParsedDefinitions struct is the keeper of the raw AST objects. With the help of the lookup and the walker one can make good use of the AST.

While this implementation works well, it might make sense to extract the AST into its own struct. Lexer, parser, lookup, walker and manual AST modification could then just interact with the AST struct. This should simplify further development, as it clearly divides responsibilities.

add request headers to the execution context

Request headers from the client request should be made available to the request context/templating engine for the upstream request. That way we could simply forward client headers, e.g. Authorization, using templating syntax like: "key": "Authorization", "value": "{{ .requestHeaders.Authorization }}"

Validation Middleware should return better error messages

It's a common issue that the validation middleware returns the following error:

ValidationMiddleware: Invalid Request: RuleName: RequiredArguments, Description: TypeNotDefined

This could be greatly improved to help the developer find the undefined type. Other validation errors should be reviewed for similar improvements.

lexer & parser should support the "extends" keyword

Support for the "extends" keyword should enable easy type extensions and make validation much easier, as base types like Query can easily be extended with the system internal fields (e.g. "__schema") to make validation pass.

Implementing this feature makes solving #64 easy.
Additionally, support of the extends keyword will prepare the library for schema stitching, which is an essential part of writing a proxy.

New Use Case: Chaos Proxy

In the vein of New Relic's article about chaos testing with GraphQL, we had the idea to use the proxy to add this feature to any GraphQL API.
Current ideas:

@chaos directive on field or type level to replace original response with error message

@delay directive on field or type level to slow down the whole response as soon as a field/type with this directive is used in the query
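A sketch of how the two directives could be declared and used; the definitions and argument names are assumptions, not implemented API:

```graphql
directive @chaos(errorMessage: String = "chaos!") on FIELD_DEFINITION | OBJECT
directive @delay(milliseconds: Int!) on FIELD_DEFINITION | OBJECT

type Query {
  # any query selecting this field gets the whole response delayed
  slowField: String @delay(milliseconds: 500)
  # the original response for this field is replaced with an error
  flakyField: String @chaos(errorMessage: "injected failure")
}
```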

This is a placeholder issue and needs to be filled with more info!

add planning hooks

Currently directives like "@ListFilterFirstN" are implemented directly within the planner.
The planner will therefore be convoluted by adding more directives.

A better approach would be to introduce the concept of planning hooks.
E.g. for each field or node that gets added to the planning tree we could introduce a generic hook interface.
A middleware could define the hook itself: e.g. whenever a directive is attached to a field definition and the planning visitor visits that field within a query, the hook gets invoked to override/alter the currently processed node.
This way the planner stays extensible without constant modification and will not get convoluted by all the various middlewares.
Testing also becomes easier, because one only has to test the hook itself and how it alters a node, not the whole planner.

Currently existing directives like @ListFilterFirstN should be extracted using this newly introduced feature once it's implemented.
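A minimal sketch of the hook concept; PlanningHook, FieldNode and the directive dispatch are illustrative names, not existing API:

```go
package main

import "fmt"

// FieldNode is a simplified stand-in for a node in the planning tree.
type FieldNode struct {
	Name       string
	Directives []string
}

// PlanningHook is invoked for every field added to the planning tree;
// it may alter or replace the node.
type PlanningHook interface {
	OnField(node *FieldNode)
}

// listFilterFirstNHook reacts only to fields carrying the
// @ListFilterFirstN directive, keeping the planner itself generic.
type listFilterFirstNHook struct{}

func (listFilterFirstNHook) OnField(node *FieldNode) {
	for _, d := range node.Directives {
		if d == "ListFilterFirstN" {
			node.Name = node.Name + " (filtered)"
		}
	}
}

// planField is what the planner would do for each visited field:
// invoke all registered hooks instead of hard-coding directive logic.
func planField(node *FieldNode, hooks []PlanningHook) {
	for _, h := range hooks {
		h.OnField(node)
	}
}

func main() {
	node := &FieldNode{Name: "friends", Directives: []string{"ListFilterFirstN"}}
	planField(node, []PlanningHook{listFilterFirstNHook{}})
	fmt.Println(node.Name) // friends (filtered)
}
```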

Extract standalone proxy into own repository

Currently this repo is a mix between GraphQL go tools and a standalone proxy.
After getting the current backlog done I'd like to extract the standalone proxy into its own repository to have a clean separation between a low level library and a ready-to-use high level tool for the GraphQL community.

transform introspection json into ast.Document

Often you're able to use introspection to get information about an existing GraphQL service. The schema information in this case is in JSON format. If you wanted to work with the GraphQL SDL instead, you'd need a transformation step. This is currently not implemented but would be highly useful.

improve documentation

This repo should improve the documentation of its inner workings, like parser, lexer, lookup, walker, validation and printer. There should be both inline explanations as well as external docs to explain how all the parts play together. Debugging should also be addressed, as it's a bit unusual because of the heavy use of references.

planning/execution should be operation aware

Currently, planning & execution are not aware of the possibility of having multiple operations within a document. Planning should be extended to plan for multiple operations. Execution should take the operationName from the request and only execute that particular operation.
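The operation selection rule can be sketched as follows (Operation and selectOperation are illustrative names): execute by name if one is given, fall back to the single operation otherwise, and error on ambiguity.

```go
package main

import (
	"errors"
	"fmt"
)

// Operation is a simplified stand-in for an operation definition.
type Operation struct {
	Name string
}

// selectOperation picks the operation to execute: by name if given,
// or the single operation if the document contains exactly one.
func selectOperation(ops []Operation, operationName string) (*Operation, error) {
	if operationName == "" {
		if len(ops) == 1 {
			return &ops[0], nil
		}
		return nil, errors.New("operationName required when document defines multiple operations")
	}
	for i := range ops {
		if ops[i].Name == operationName {
			return &ops[i], nil
		}
	}
	return nil, fmt.Errorf("operation %q not found", operationName)
}

func main() {
	ops := []Operation{{Name: "GetUser"}, {Name: "GetPosts"}}
	op, err := selectOperation(ops, "GetPosts")
	if err != nil {
		panic(err)
	}
	fmt.Println(op.Name) // GetPosts
}
```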

Suggest ways to dynamically update the test data in a ginkgo _test.go file

I am creating a test package to test a smart contract using mocks and stubs. The generated file should take some input from the user regarding use cases, parameters, etc., and be created based on that, i.e. I need to create a template for the _test.go file.

Parser is dropping query function variable signature

Currently, if you have a query with variables defined on it, for instance:
query Item($id:String!) {Item(id:$id) {name, description}}
then the proxy will output this as the proxied query:
query Item {Item(id:$id) {name, description}}
which obviously causes an issue with the variable definitions.
