gramps--legacy's Issues

[1.0] Feature List

These are the features we want to have for 1.0.

- [ ] Refactor: Move to TypeScript #27 <-- will revisit
- [ ] Feature: Update export format to be compatible with external frameworks #26
- [ ] Docs: Write documentation #28
- [ ] Docs: Update README #29

Create helper function to convert data source to executable schema

In order to allow data sources to be used in frameworks that require individual data sources to be executable schemas (e.g. qewl), we should add a helper that converts a GrAMPS data source to an executable schema + context + namespace.

See this comment for a potential solution.

Question: should this be a named export from @gramps/gramps?
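As a sketch of what such a helper could look like (the name dataSourceToExecutable and the injected makeSchema factory are illustrative assumptions, not the actual GrAMPS API; in practice the factory would be graphql-tools' makeExecutableSchema):

```javascript
// Hypothetical helper -- not the actual GrAMPS API. Converts a GrAMPS data
// source into { namespace, schema, context } so frameworks that expect an
// executable schema per source (e.g. qewl) can consume it. The schema
// factory is injected to keep this sketch dependency-free; in practice it
// would be graphql-tools' makeExecutableSchema.
function dataSourceToExecutable(dataSource, makeSchema) {
  const { namespace, typeDefs, resolvers, context } = dataSource;
  return {
    namespace,
    schema: makeSchema({ typeDefs, resolvers }),
    // Normalize context to a function of the request, since GrAMPS accepts
    // either a plain object or a factory function.
    context: req => (typeof context === 'function' ? context(req) : context),
  };
}
```

If this shape works, exporting it as a named export from @gramps/gramps would keep it discoverable next to gramps() itself.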

Add a build step before running gramps CLI command

The gramps command points to the dist/dev/server.js file but doesn't force a build, which is confusing. It should either run the build before starting GrAMPS, or point at the source files and use nodemon.

Using nodemon may be preferable, because it would solve the problem of watching for changes.

Async context

Hey guys,
it seems like GrAMPS doesn't allow the context callback to be an async function.
Is that correct? Wouldn't it be advantageous to have that ability?

Cheers.
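For illustration, supporting this could be as small as resolving the context value before the request is handled. This is a sketch only; resolveContext is a hypothetical internal helper, not current GrAMPS API:

```javascript
// Sketch only: how GrAMPS could support an async context callback.
// resolveContext is a hypothetical helper name. Promise.resolve() makes
// plain objects, sync factories, and async factories all resolve to a
// plain context object.
async function resolveContext(contextOrFn, req) {
  const value =
    typeof contextOrFn === 'function' ? contextOrFn(req) : contextOrFn;
  return Promise.resolve(value); // no-op for plain objects, awaits Promises
}
```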

Cannot setup Subscription resolvers in top level resolvers map

I have the following source code:

import pubsub from 'services/pubsub'

const identity = i => i

const FailingSubscriptionDataSource = {
    namespace: 'Subscription',
    resolvers: {
        Subscription: {
            test: {
                resolve: identity,
                subscribe: () => pubsub.asyncIterator('test')
            },
        }
    },
    typeDefs: `
        type Query {
            test: Float
        }

        type Subscription {
            test: Float
        }
    `
}

const WorkingSubscriptionDataSource = {
    namespace: 'Subscription',
    stitching: {
        resolvers: (mergeInfo) => ({
            Subscription: {
                test: {
                    resolve: identity,
                    subscribe: () => pubsub.asyncIterator('test')
                },
            }
        })
    },
    typeDefs: `
        type Query {
            test: Float
        }

        type Subscription {
            test: Float
        }
    `
}

setInterval(() => {
    const value = Math.random()

    pubsub.publish('test', value)
}, 250)

I am able to get this subscription working using the stitching method, but when I set the top-level resolvers, I get this error in mapResolvers: Error: Expected Function for test resolver but received object. Any ideas on how to support subscription resolvers at the top level of a data source?

I'm on @gramps/gramps beta 9.
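One possible direction for a fix, sketched here (wrapField is a hypothetical name, modeled on the wrapFn/checkFn behavior described in this thread, not existing GrAMPS code):

```javascript
// Sketch of loosening the resolver check so that subscription resolver
// objects ({ resolve, subscribe }) pass through mapResolvers instead of
// triggering "Expected Function ... but received object".
const wrapField = (wrapFn, field, fieldName) => {
  if (typeof field === 'function') return wrapFn(field, fieldName);
  if (field && typeof field.subscribe === 'function') {
    return {
      resolve: field.resolve ? wrapFn(field.resolve, fieldName) : undefined,
      subscribe: wrapFn(field.subscribe, fieldName),
    };
  }
  throw new TypeError(
    `Expected Function or subscription object for ${fieldName} resolver`
  );
};
```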

Use with Schema Stitching

Hi,

I'd like to get your opinion on using GrAMPS such that each resulting GraphQL server is limited to one data source (in our case, each REST API is maintained by a separate team/contractor), with Schema Stitching used to create the unified schema/server. That way, separate teams can expose their own GraphQL endpoints (for use by customers) while still allowing us to integrate all REST APIs under one GraphQL server for another use case. Things stay more modular this way.

What issues do you foresee if any?

Thanks & great to see momentum building up for the GrAMPS approach

Question about handling actions on a per-request basis

We've been struggling with how to do things on a "per request" basis. For example, we want to have a child logger created per request, forward headers found in a request, or setup a dataloader that is created for every request.

We're still on the old GrAMPS, not on 1.0 yet, but have always found performing these actions with GrAMPS to be awkward.

  • logging: we create a logger in a middleware piece, but need "hacks" to get at the logger in formatError or to swap the logger used in the base connector. I'd really like to create another child logger for each data source with the context name.
  • header forwarding: pass context.req.headers from resolver, through model, to the connector to pass headers through request-promise
  • dataloader: hackery black magic

Do you have any insights into these use cases especially around forwarding headers and constructing dataloaders? It's been doable through hacks, but I'm wondering if the newer GrAMPS 1.0 makes these tasks any easier (less hacky).

Cannot extend types in data sources

In my base data source, I have created a Viewer type that other data sources can choose to extend. My base Viewer type is simply as follows:

type Viewer {
    id: ID!
}

In a different data source, I would like to be able to extend this type using the following syntax:

extend type Viewer {
    test: Float
}

This does not work, however, because each data source is converted into a schema and the schemas are then merged. If I just add my additional fields without the extend syntax, the first encountered instance of a type with a given name is used. mergeSchemas accepts an onTypeConflict arg for resolving issues like this, but as far as I can tell, there is no way to pass it in via the gramps config. Perhaps this could be added to the config somewhere?
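For reference, the first-one-wins behavior and what an onTypeConflict handler changes can be illustrated without graphql-tools (mergeTypeMaps below is an illustrative stand-in for the merge graphql-tools performs, not real GrAMPS code):

```javascript
// Illustration only: how an onTypeConflict-style handler decides which
// definition survives when two sources declare the same type name.
function mergeTypeMaps(typeMaps, onTypeConflict = (left, right) => left) {
  return typeMaps.reduce((merged, map) => {
    for (const [name, type] of Object.entries(map)) {
      merged[name] = name in merged ? onTypeConflict(merged[name], type) : type;
    }
    return merged;
  }, {});
}
```

With the default handler the first Viewer wins, matching the behavior described above; passing (left, right) => right would prefer the later source's definition instead.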

Export a helper function to make data sources executable

UPDATE 2017-12-19: After a lot of discussion we determined that it's a better idea to export a helper to make data sources executable rather than make each data source self-contained. See gramps-graphql/gramps#26 (comment) for the proposed helper function.


@jlengstorf commented on Mon Nov 20 2017

After a discussion today with @schickling, @kbrandwijk, and @nikolasburk about how GrAMPS data sources will fit with external tools (such as Graphcool and qewl), a couple things came up:

  1. GrAMPS data sources should be exporting an executable schema, and not the typeDefs/resolvers/mocks as I was previously pushing for
  2. GrAMPS data sources should not require the context to be attached externally (meaning there should be no intermediate step to attach a data source context to its resolvers)

(@ecwyne and @corycook, you're going to say, "I told you so.") πŸ˜„

Should we add a way to pass in external executable schemas?

Porting the discussion on external executable schemas to its own issue so we don't lose track of it. Original discussion is copy-pasted below:

@mfix22 gramps-graphql/gramps#26 (comment)

@jlengstorf I think this is a very neat way to merge data sources. Unless I am reading this wrong, the only use case we have that this does not cover is for linking in a remote executable schema:

makeRemoteExecutableSchema({
  schema: builtSchema,
  link: HttpLink, // which points at my remote schema
});

Technically we could stitch this remote schema in after gramps() has been called but that seems messy.

@jlengstorf gramps-graphql/gramps#26 (comment)

@mfix22 Yeah, that's a limitation of GrAMPS β€” in order to keep the data sources simple, we need to run all data sources through GrAMPS to get an executable schema. I suppose it would be possible to add an extra option to allow additional executable schemas to be passed in.

Right now, we create an executable schema right before returning.

We could modify this so you'd call something like:

const external = makeRemoteExecutableSchema({
  schema: builtSchema,
  link: HttpLink, // which points at my remote schema
});

const GraphQLOptions = gramps({
  dataSources: [XKCD],
  externalSchemas: [external],
});

And then we'd modify the gramps() function to spread that new prop as well:

  const schema = mergeSchemas({
    schemas: [...schemas, ...externalSchemas, ...linkTypeDefs],
    resolvers,
  });

Theoretically this would Just Workβ„’ and would even allow schema stitching between GrAMPS data sources and remote executable schemas.

However, I haven't worked with remote schemas yet, so I'm not sure if this would do what I think it'll do. Let me know if this looks like it'll work and I'll push a new beta package so you can try it out.

@mfix22 gramps-graphql/gramps#26 (comment)

@jlengstorf This is exactly how I am currently handling the use case! I am pretty sure it will Just Workβ„’ πŸ˜„ I think that option would fill in the gap perfectly while keeping data sources simple.

@kbrandwijk gramps-graphql/gramps#26 (comment)

@mfix22 @jlengstorf An alternative approach would be to keep the gramps part contained to a single schema, and leave the stitching of other schemas to other emerging tooling (separation of concerns). I don't think this is messy at all.

@jlengstorf gramps-graphql/gramps#26 (comment)

@kbrandwijk I agree, but I wonder if it's worth including this for simple use cases since it's really just two lines of code. I may end up eating these words later, but I can't really see how this could cause problems later on.

I do agree that advanced use cases should probably be handled outside of GrAMPS, but I see a good use case in something like this:

  • I have a gateway that's built of my own custom data sources
  • I'm stitching them together with GrAMPS
  • I have a new requirement that means I need to include GitHub repos for users
  • I add the GitHub schema as a remote schema and stitch it together with my User type via GrAMPS

If it went too far beyond that, other tools would be easier/recommended, but I can see this being valuable.

Any objections to including it?

@kbrandwijk gramps-graphql/gramps#26 (comment)

@jlengstorf Well, I guess it wouldn't hurt anyone, but I would strongly advise against making this a best practice. Mainly because in most use cases, it will involve more than just calling makeRemoteExecutableSchema. For most endpoints, there's authentication, there's .graphqlconfig.yml for defining your endpoints, there's not wanting the entire remote GraphQL schema to become part of your final schema, etc.

@jlengstorf gramps-graphql/gramps#26 (comment)

Hmmm... that's a pretty solid argument for not including it. I haven't worked on remote schemas, so I'm pretty much going with the experience of the people using it.

@mfix22, do you have a counterargument here?

If @kbrandwijk's point holds true for all but the most trivial use cases, it does seem smart to leave out the feature and instead add an example of how to do this outside of GrAMPS. Otherwise I'd worry we're inadvertently introducing something we'd have to either deprecate later or add a big warning recommending people don't use it. (And if either of those is true, we may as well support the proper path out of the gate.)

Thoughts?

@mfix22 gramps-graphql/gramps#26 (comment)

I do see @kbrandwijk's point about not wanting to expose the entire 3rd party's schema as a definite concern here. That's a question I still want to ask one of the Apollo folks: what is the best way to expose part of a schema, especially a remote one?

As for the other concerns, GrAMPS already supports adding context, and you can still add headers to handle authorization use cases (more Express middleware, for example).

For the initial release, I don't think GrAMPS needs to support remote executable schemas. The escape hatch to include them might help some users, though. What's nice is that including them doesn't hurt the users who don't.

Maybe the best argument for not including them at first is that once they are included, removing them would be a breaking change, but the reverse is not true (once we know more about exposing a subset of a schema).

@jlengstorf gramps-graphql/gramps#26 (comment)

@mfix22 The Graphcool team just released a pretty interesting article on exposing subsets of underlying GraphQL schemas. There's a video that shows an approach to controlling data access. Check it out: https://blog.graph.cool/graphql-databases-a-preview-into-the-future-of-graphcool-c1d4981383d9

Maybe we can pull in some Apollo folks to weigh in? @stubailo, @peggyrayzis, or @jbaxleyiii β€” any thoughts on best practices for managing remote schemas as laid out in @kbrandwijk and @mfix22's comments above? (Also, πŸ‘‹.)

Version 10 of node.js has been released

Version 10 of Node.js (code name Dubnium) has been released! 🎊

To see what happens to your code in Node.js 10, Greenkeeper has created a branch with the following changes:

  • Added the new Node.js version to your .travis.yml

If you’re interested in upgrading this repo to Node.js 10, you can open a PR with these changes. Please note that this issue is just intended as a friendly reminder and the PR as a possible starting point for getting your code running on Node.js 10.

More information on this issue

Greenkeeper has checked the engines key in any package.json file, the .nvmrc file, and the .travis.yml file, if present.

  • engines was only updated if it defined a single version, not a range.
  • .nvmrc was updated to Node.js 10
  • .travis.yml was only changed if there was a root-level node_js that didn’t already include Node.js 10, such as node or lts/*. In this case, the new version was appended to the list. We didn’t touch job or matrix configurations because these tend to be quite specific and complex, and it’s difficult to infer what the intentions were.

For many simpler .travis.yml configurations, this PR should suffice as-is, but depending on what you’re doing it may require additional work or may not be applicable at all. We’re also aware that you may have good reasons to not update to Node.js 10, which is why this was sent as an issue and not a pull request. Feel free to delete it without comment, I’m a humble robot and won’t feel rejected πŸ€–


FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Conditional stitching to specific datasources

I have been thinking about scenarios to use stitching on the GrAMPS-level, and the only use case I can come up with is when I want to stitch to another datasource. However, to keep datasources as independent as possible, I would like to add the feature to:

  • Specify which datasource I want to stitch to (based on its unique namespace)
  • Only apply that stitching if the datasource is part of my collection of datasources

To achieve this, I propose adding a key to the stitching object like this:

context: 'myDatasource',
model: ...,
schema: ...,
resolvers: ...,
stitching: {
  xckd: {
    typeDefs: ...,
    resolvers: ...
  },
  myOtherDatasource: {
    typeDefs: ...,
    resolvers: ...
  }
}

So if I set up my GrAMPS config like this:

const config = gramps({ dataSources: [myDatasource, xckd] })

then only the stitching from myDatasource to xckd is applied, and not the stitching to myOtherDatasource, because those types, and that data, are not available.
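The proposed check could be as simple as this sketch (applicableStitching is a hypothetical helper name; the stitching-keyed-by-namespace shape is the one proposed above):

```javascript
// Sketch of the proposed conditional stitching: keep only the stitching
// entries whose target namespace is actually among the loaded data sources.
function applicableStitching(dataSource, allDataSources) {
  const loaded = new Set(allDataSources.map(ds => ds.namespace));
  return Object.entries(dataSource.stitching || {})
    .filter(([targetNamespace]) => loaded.has(targetNamespace))
    .map(([, entry]) => entry);
}
```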

Type namespace prefixes

One of the main reasons that different datasources play well together is that they use different prefixes for their types. However, this is neither enforced nor validated. I would like GrAMPS to apply the namespace of the datasource to my types itself, instead of having to do this myself for every type. I think this would make individual datasources a lot more readable, and given that they are always supposed to be processed through GrAMPS, it seems logical that GrAMPS would be in charge of managing those namespaces.

I don't know if there are cases where you explicitly don't want types to be prefixed. If there are, this proposal wouldn't work; in that case, I would want some form of validation instead of depending on mergeSchemas to throw an error.
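A rough illustration of automatic prefixing (regex-based for brevity; a real implementation should walk graphql's parsed AST, and this only rewrites declarations, not the field types that reference them):

```javascript
// Regex-based sketch of automatic namespace prefixing. Prefixes every
// `type Foo` declaration except the root operation types. A production
// version would use an AST visitor and also rewrite type references.
function prefixTypeDefs(namespace, typeDefs) {
  const reserved = new Set(['Query', 'Mutation', 'Subscription']);
  return typeDefs.replace(/\btype\s+(\w+)/g, (match, name) =>
    reserved.has(name) ? match : `type ${namespace}_${name}`
  );
}
```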

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on all branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please delete the greenkeeper/initial branch in this repository, and then remove and re-add this repository to the Greenkeeper integration’s white list on Github. You'll find this list on your repo or organization’s settings page, under Installed GitHub Apps.

Using a GraphQL datasource

Hey,
I have a quick question. You've described very well how to use GrAMPS data sources to wrap non-GraphQL backends, e.g. REST endpoints, in GraphQL.
But I already have a bunch of GraphQL microservices which I would like to merge together. How would you achieve that?
I tried writing data sources with graphql-tools' makeRemoteExecutableSchema, but that puts me in an async flow where I would have to export the gramps object inside a then callback, which I don't like.
Since I didn't find anything useful in the docs, do you have any suggestions for merging my endpoints together?
Thank you!

Gramps does not seem to support custom scalar types

I have created a data source to support a custom JSON scalar, based on this tutorial: https://www.apollographql.com/docs/graphql-tools/scalars.html

My data source is defined as follows:

import GraphQLJSON from 'graphql-type-json'

const resolvers = {
    Query: {
        json: () => JSON.stringify({ test: true })
    },
    JSON: GraphQLJSON,
}

const JSONDataSource = {
    namespace: 'JSON',
    context: {},
    resolvers,
    typeDefs: `
        scalar JSON

        type Query {
            json: JSON
        }
    `
}

export default JSONDataSource

This causes the following issue in my build:

 Error: Expected Function for name resolver but received string
   at checkFn (/path/to/project/node_modules/@gramps/gramps/dist/lib/mapResolvers.js:13:11)

I think the problem is that we map over all of the keys in mapResolvers without checking for built-in GraphQL types.

My proposed solution: if we encounter any built-in GraphQL types that cause this issue (at least scalars, possibly enums), we simply pass them through as-is to the final resolver map. Thoughts?

Make docs layout responsive

The current docs layout on mobile phones is... rough.

[screenshot by @jlengstorf: current docs layout on a mobile phone]

Simple solution would be to collapse the left nav into a top hamburger. I'm not crazy about that, but don't really have the bandwidth to design a better solution.

Another option would be to drop the docs into a different docs framework altogether.

Add support for schema stitching

(Splitting the schema stitching conversation off from the larger discussion in gramps-graphql/gramps-express#39 to help concentrate individual discussions.)

From @ecwyne:

But my sense is that staying as close to the Apollo api will serve GrAMPS well in the long run.

I agree with this 100%. My intention wasn't to suggest that GrAMPS define its own schema stitching API, but rather to suggest that we need to think through how to define the two-way bindings described in the schema stitching docs.

My gut says that the most sustainable way to do this β€” if we use the User/Chirp example β€” would be to have the User data source define something like this:

const dataSource = {
  namespace: 'User',
  schema: /* ... */,
  resolvers: /* ... */,
  mocks: /* ... */,
  model: /* ... */,
  stitching: {
    requires: ['Chirp'],
    linkTypeDefs: `
        extend type User {
          chirps: [Chirp]
        }
    `,
    resolvers: mergeInfo => ({
      User: {
        chirps: {
          fragment: `fragment UserFragment on User { id }`,
          resolve(parent, args, context, info) {
            const authorId = parent.id;
            return mergeInfo.delegate(
              'query',
              'chirpsByAuthorId',
              {
                authorId,
              },
              context,
              info,
            );
          },
        },
      },
    }),
  }
};

The Chirp schema would be required to define its own stitching object to enable the author field within the Chirp schema.

The requires field would allow us to check for the existence of required data source(s) before adding the schema stitching pieces to our complete executable schema. (And again, this would be only for local data sources β€” if stitching with an external data source, that would happen outside of GrAMPS.)

The API above is not necessarily the best way to do this β€” it's just the first one I thought of. But the main point I'm getting at is the underlying concept: each data source is responsible for stitching its half of the equation.

[Discussion] Resolver simplification

@jlengstorf @corycook
I wanted to see if anyone else thought this would be helpful.

// Optionally turn this
const resolvers = {
  Query: {
    getLatestComic: (_, __, context) => context.getLatestComic(),
    getComicById: (_, { id }, context) => context.getComicById(id),
  },
  XKCD_Comic: {
    link: data => data.link || `https://xkcd.com/${data.num}/`,
  },
};

// Into this
const resolvers = {
  Query: {
    getLatestComic: ctx => ctx.getLatestComic(),
    getComicById: (ctx, args) => ctx.getComicById(args.id),
  },
  XKCD_Comic: {
    link: p => p.link || `https://xkcd.com/${p.num}/`,
  }
};

To do so, all you need to do is wrap each resolver function with this.

import reflector from 'js-function-reflector'; // https://www.npmjs.com/package/js-function-reflector

const reflect = fn => {
  const { args: names } = reflector(fn);
  // NOTE: the wrapper must list all four positional parameters explicitly;
  // `arguments` is not available inside an arrow function.
  return (parent, args, context, info) => {
    const positional = [parent, args, context, info];
    const byName = {
      p: parent, parent, r: parent, root: parent,
      a: args, args,
      c: context, ctx: context, context,
      i: info, info,
    };
    // Use the named argument if recognized; otherwise fall back to position.
    return fn.apply(null, names.map((name, i) => byName[name] || positional[i]));
  };
};

This checks the argument names of the provided function fn and replaces:

  • p, parent, r, or root with the 1st argument to the resolver function
  • a or args with the 2nd argument
  • c, ctx, or context with the 3rd argument
  • i or info with the 4th argument
  • anything else defaults to the nth argument

Mainly, my question is whether or not everyone else is tired of writing resolver functions with all 4 arguments while only using one or two of them.

Plugin API for data sources

A plugin API for data sources would let developers plug data sources into modular apps without much effort. This would be very useful in some situations.

Compile-time code generation

I have recently started experimenting with compile-time code generation for the bindings that graphql-binding uses, and I thought that might also be applied to GrAMPS at some point.

Do you have any existing plans to incorporate compile-time code generation?

Integration issues

After having a good look at the getting started and the sources, I'd like to share a few integration issues that I've come across. Basically the biggest issue is the fact that the output from gramps() goes directly into the apollo-server middleware. I thought I could get around it by accessing the individual properties of that object (for example, the schema), but that's also not possible, because the output is actually a function of req. Of course, I can work around that as well, by running gramps()(null).schema to get to the schema, but that doesn't really feel right.

This means that I end up with an all-or-nothing solution, which is not really what I would like. I'd like the ability to compose these components into my own server.

To be able to do this I need the following:

  • Access to the resulting schema. That makes it easier to use that and apply my own stitching, or add additional schemas to my server outside of GrAMPS.
  • Access to a middleware function that constructs my context. Currently, the context is only passed in to apollo-server. Ideally, the context would be constructed in an Express middleware function, so I have the possibility to add my own required bits and pieces to the context as well, before it is passed in to apollo-server.

This might be as easy as:

const grampsConfig = gramps()
const schema = grampsConfig.schema

// do my own stuff with my schemas

app.use('/graphql', grampsConfig.setContext())

// do my own stuff in the context

app.use('/graphql', graphqlExpress({ schema: finalSchema, context: req => req /* additional options */ }))

Now, if I didn't need all of that, I would still be able to do:

const grampsConfig = gramps()
app.use('/graphql', express.json(), graphqlExpress(grampsConfig.serverOptions));

So it wouldn't impact the getting-started experience much, but would make it a lot easier to either use it in a more mixed environment, or integrate it with other tools (like graphql-yoga and my upcoming supergraph framework).

remove bin directory and clean up dependencies for 1.0

For a 1.0 release, @gramps/gramps needs to remove all /bin features and clean up dependencies accordingly (in favor of @gramps/gramps-cli).

From what I see, a production @gramps/gramps should only have:

dependencies: {
  graphql-tools,
  lodash.merge
},
peerDependencies: {
  graphql
}

Add extraContext to each data source context

Developers need to be able to add extra context to each data source via the extraContext prop:

const GraphQLOptions = gramps({
  dataSources: [/* ... */],
  extraContext: req => ({
    headers: req.headers,
  }),
});

However, when we scoped context to data source namespaces, we made it impossible to access that extra context:

const wrapFn = namespace => (fn, fieldName) => {
  checkFn(fn, fieldName);
  return (root, args, context, info) =>
    fn(root, args, context[namespace], info);
};

One solution could be to put the extra context under an extraContext prop, then merge it in wrapFn:

  const wrapFn = namespace => (fn, fieldName) => {
    checkFn(fn, fieldName);
    return (root, args, context, info) =>
-     fn(root, args, context[namespace], info);
+     fn(root, args, { ...context.extraContext, ...context[namespace] }, info);
  };

I'm not married to that solution, but it seems safe and fast. Anyone want to add the PR and tests for it?

An in-range update of nodemon is breaking the build 🚨

The devDependency nodemon was updated from 1.18.5 to 1.18.6.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

nodemon is a devDependency of this project. It might not break your production code or affect downstream projects, but probably breaks your build or test tools, which may prevent deploying or publishing.

Status Details
  • ❌ continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Release Notes for v1.18.6

1.18.6 (2018-11-05)

Bug Fixes

Commits

The new version differs by 1 commits.

  • 521eb1e fix: restart on change for non-default signals (#1409) (#1430)

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Stitching causes errors when using [email protected]

I noticed an issue with graphql-tools when we tried to upgrade to [email protected]: ardatan/graphql-tools#537

The errors show up in our tests, and after a little digging I figured out it's happening in our stitching handler.

Here's one of the two offending tests:

    it('warns for use of schema', () => {
      console.warn = jest.genMockFn();
      const dataSources = [
        {
          namespace: 'Baz',
          schema: 'type User { name: String } type Query { me: User }',
          context: req => ({ getUser: () => ({ name: 'Test user' }) }),
          resolvers: { Query: { me: (_, __, context) => context.getUser() } },
          stitching: {
            linkTypeDefs: 'extend type User { age: Int }',
            resolvers: mergeInfo => ({
              User: {
                age: () => 40,
              },
            }),
          },
        },
      ];

      gramps({ dataSources });

      return expect(console.warn).toBeCalled();
    });

Run as-is, we get the following error:

$ npm run lint --silent && npm run test:unit --silent
 FAIL  test/gramps.test.js
  ● GrAMPS β€Ί gramps() β€Ί warns for use of schema

    TypeError: Cannot read property 'getFields' of null

      143 |         typeof source.context === 'function'
      144 |           ? source.context(req)
    > 145 |           : source.context;
      146 |
      147 |       return {
      148 |         ...allContext,

      at node_modules/graphql-tools/src/stitching/mergeSchemas.ts:114:27
          at Array.forEach (<anonymous>)
      at mergeSchemas (node_modules/graphql-tools/src/stitching/mergeSchemas.ts:85:17)
      at gramps (src/gramps.js:145:1430)
      at Object.it.only (test/gramps.test.js:41:25)

But if I remove the stitching prop and run the test like this:

    it.only('warns for use of schema', () => {
      console.warn = jest.genMockFn();
      const dataSources = [
        {
          namespace: 'Baz',
          schema: 'type User { name: String } type Query { me: User }',
          context: req => ({ getUser: () => ({ name: 'Test user' }) }),
          resolvers: { Query: { me: (_, __, context) => context.getUser() } },
        },
      ];

      gramps({ dataSources });

      return expect(console.warn).toBeCalled();
    });

It passes as expected.

Nothing looks out-of-place in https://github.com/gramps-graphql/gramps/blob/master/src/lib/combineStitchingResolvers.js

I don't see anything at first glance that looks obviously weird in the execution, either:

https://github.com/gramps-graphql/gramps/blob/master/src/gramps.js#L129-L138

We need to dig into this a little deeper and figure out what's going wrong.

Context differences between top-level and stitching resolvers

I have the following test data source:

// context keys resolver
const context = (root, args, context) => JSON.stringify(Object.keys(context))

const TestDataSource = {
    context: {
        test1: true,
        test2: true,
    },
    namespace: 'test',
    resolvers: {
        Query: {
            context,
        },
    },
    stitching: {
        linkTypeDefs: `
            extend type Query {
                test: Test
            }
        `,
        resolvers: () => ({
            Query: {
                test: () => ({}),
            },
            Test: {
                context,
            },
        })
    },
    typeDefs: `
        type Query {
            context: String
        }

        type Test {
            context: String
        }
    `
}

export default TestDataSource

My GrAMPS config is as follows:

gramps({
    dataSources: [TestDataSource],
    extraContext: (req) => ({
        req
    })
})

When I run the following query:

query {
  context
  test {
    context
  }
}

...I get the following output:

{
  "data": {
    "context": "[\"test1\",\"test2\"]",
    "test": {
      "context": "[\"req\",\"test\",\"_extensionStack\"]"
    }
  }
}

Is it intentional that the GraphQL context differs between top-level and stitching resolvers? Top-level resolvers get the current data source's context merged into the main context object, while stitching resolvers get the namespaced context at context.<namespace>.

Another thing to note is that extraContext only seems to add my req key for stitching resolvers, not for top-level resolvers. Is this a known issue or by design?
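The observed difference can be reproduced with plain object spreads. This is an assumption based on the query output above, not a reading of the actual GrAMPS implementation (the `_extensionStack` key, added by Apollo Server, is omitted here):

```javascript
// Two context-building strategies that would produce the observed keys.
const dataSourceContext = { test1: true, test2: true };
const extra = { req: {} };

// Top-level resolvers: the data source context is merged into the root
// context (extraContext apparently not included).
const topLevelContext = { ...dataSourceContext };

// Stitching resolvers: extraContext is included, and the data source
// context is nested under the data source's namespace ('test').
const stitchingContext = { ...extra, test: dataSourceContext };

console.log(Object.keys(topLevelContext));  // [ 'test1', 'test2' ]
console.log(Object.keys(stitchingContext)); // [ 'req', 'test' ]
```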

Class methods are lost when spreading over context

query {
  getLatestComic {
    transcript
    year
    month
    day
    link
    news
  }
}
{
  "data": {
    "getLatestComic": null
  },
  "errors": [
    {
      "message": "context.getLatestComic is not a function",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": [
        "getLatestComic"
      ]
    }
  ]
}

Server log:

Error: context.getComicById is not a function
    at Object.checkResultAndHandleErrors xxx/packages/xxx-graphql/node_modules/.registry.npmjs.org/graphql-tools/2.18.0/node_modules/graphql-tools/src/stitching/errors.ts:84:7)
    at Object.<anonymous> xxx/packages/xxx-graphql/node_modules/.registry.npmjs.org/graphql-tools/2.18.0/node_modules/graphql-tools/src/stitching/delegateToSchema.ts:97:14)
    at step xxx/packages/xxx-graphql/node_modules/.registry.npmjs.org/graphql-tools/2.18.0/node_modules/graphql-tools/dist/stitching/delegateToSchema.js:40:23)
    at Object.next xxx/packages/xxx-graphql/node_modules/.registry.npmjs.org/graphql-tools/2.18.0/node_modules/graphql-tools/dist/stitching/delegateToSchema.js:21:53)
    at fulfilled xxx/packages/xxx-graphql/node_modules/.registry.npmjs.org/graphql-tools/2.18.0/node_modules/graphql-tools/dist/stitching/delegateToSchema.js:12:58)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:188:7)

I tried this:

  const XKCD = require('@gramps/data-source-xkcd').default

  const gramps = require('@gramps/gramps').default
  const GraphQLOptions = gramps({
    dataSources: [XKCD],
  })

  router.post(
    '/graphql-gramps',
    koaBody(),
    graphqlKoa(ctx => {
      console.log({ctx})
      return GraphQLOptions(ctx.req)
    }),
  )

I can see all the functions in graphiql, but running them causes this error.
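The error is consistent with the context being built by spreading a class instance: object spread copies only own enumerable properties, and class methods live on the prototype. A minimal, self-contained demonstration (the class and method names are illustrative, not the actual data source code):

```javascript
class Connector {
  constructor() {
    this.baseURL = 'https://xkcd.com'; // own property — survives a spread
  }
  getLatestComic() {                   // prototype method — lost in a spread
    return { num: 1 };
  }
}

const model = new Connector();
const context = { ...model }; // copies own enumerable properties only

console.log(typeof model.getLatestComic);   // 'function'
console.log(typeof context.getLatestComic); // 'undefined'
```

If GrAMPS spreads each data source's context into the merged context object, any data source that exposes a class instance there would lose its methods in exactly this way.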

An in-range update of graphql-upload is breaking the build 🚨

The dependency graphql-upload was updated from 8.0.6 to 8.0.7.

🚨 View failing branch.

This version is covered by your current version range and after updating it in your project the build failed.

graphql-upload is a direct dependency of this project, and it is very likely causing it to break. If other packages depend on yours, this update is probably also breaking those in turn.

Status Details
  • ❌ continuous-integration/travis-ci/push: The Travis CI build could not complete due to an error (Details).

Release Notes for Version 8.0.7

Patch

  • Updated dependencies.
  • Handle invalid object paths in map multipart field entries, fixing #154.
  • Import WriteStream from fs-capacitor as a named rather than default import.
Commits

The new version differs by 6 commits.

See the full diff

FAQ and help

There is a collection of frequently asked questions. If those don’t help, you can always ask the humans behind Greenkeeper.


Your Greenkeeper Bot 🌴

Using prepare and extraContext is confusing

Thanks for a great library!

While checking how to extract the schema from GrAMPS, I found a great example here: https://gramps.js.org/api/gramps/

I had an issue with this part: it seems the context doesn't receive the request object in this setup:

app.use('/graphql',
  bodyParser.json(),
  gramps.addContext,         // Add the extra context
  graphqlExpress({
    schema,                  // Use the merged schema...
    context: gramps.context, // ...and the GrAMPS context object
  }),
);

So I had to rewrite it to:

app.use('/graphql',
  bodyParser.json(),
  gramps.addContext,         // Add the extra context
  graphqlExpress(req => ({
    schema,                  // Use the merged schema...
    context: gramps.context(req), // ...and the GrAMPS context object
  })),
);

Also, I've noticed that the extraContext callback is called twice for a single query.

addContext: (req, res, next) => {
  req.gramps = getContext(req);
  next();
}

Maybe the idea is to pass req.gramps to the Apollo Express middleware as the config object?

It would be great to get some clarification on the above so I can work on a PR to improve this.
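One way to avoid building the context twice per request is to memoize the builder per request object. This is a self-contained sketch of the idea, not the actual GrAMPS API:

```javascript
// Cache the built context per request object so the builder runs only once,
// no matter how many middlewares ask for it.
function memoizePerRequest(buildContext) {
  const cache = new WeakMap();
  return req => {
    if (!cache.has(req)) cache.set(req, buildContext(req));
    return cache.get(req);
  };
}

let builds = 0;
const getContext = memoizePerRequest(req => {
  builds += 1;
  return { req };
});

const req = {};
getContext(req); // builds the context
getContext(req); // returns the cached value
console.log(builds); // 1
```

With something like this, both `addContext` and the `graphqlExpress` config could call `getContext(req)` freely without running the extraContext callback twice.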

Newbie questions

Hello guys! About a month ago I saw a YouTube video where this project was announced (along with the fancy IBM stuff). I got very excited, because GraphQL microservices plus schema stitching sounds very complicated to me, but it's something I really want to do. When I came here, though, I realized that some parts of the docs weren't done yet (like the getting started guide), so before making any early decisions I kept checking back for updates every couple of days. Today I see the page has changed, new commits have been made, and the getting started guide now works, so this project looks alive and well. Aside from saying I'm really excited about it, I have a few questions before getting deep into the docs.

  • What happened to the old docs where the structure of a data source (model/connector/etc.) was explained?
  • Can I consider this production ready, or close enough?
  • Will it be possible to use something like micro (or a similar library) instead of Express?

I haven't had time to read the entire new docs yet, but I will, because I want to build something like this. Thank you very much, guys πŸ’›

The addResolveFunctionsToSchema function takes named options now

Going through the quickstart gives this error:

    The addResolveFunctionsToSchema function takes named options now; see IAddResolveFunctionsToSchemaOptions
    /gramps/dist/index.js:35
    app.use('/graphql', (0, _apolloServerExpress.graphqlExpress)(GraphQLOptions));
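The error comes from a graphql-tools API change: addResolveFunctionsToSchema switched from positional arguments to a single options object. The stub below only illustrates the two call shapes; it is not the real graphql-tools function.

```javascript
// Illustrative stub mimicking the named-options check.
function addResolveFunctionsToSchema(options) {
  if (arguments.length > 1 || !options || !options.schema) {
    throw new Error(
      'The addResolveFunctionsToSchema function takes named options now'
    );
  }
  return options.schema;
}

const schema = { placeholder: true }; // stand-in for a GraphQLSchema
const resolvers = { Query: {} };

// New call shape — accepted:
addResolveFunctionsToSchema({ schema, resolvers });

// Old positional call shape — rejected with the quickstart error:
try {
  addResolveFunctionsToSchema(schema, resolvers);
} catch (e) {
  console.log(e.message);
}
```

So the fix on the GrAMPS side is to call the function with `{ schema, resolvers }` instead of `(schema, resolvers)`.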
