
Comments (19)

hlship commented on May 22, 2024

The internal structure is just promises. So in query {R {A B C}}, R executes first, when its promise resolves, we trigger A, B, and C in that order, then wait for them to resolve (in that order). So at each level, the resolvers can operate in parallel by returning a ResolverResultPromise. But since it is promise-based, the exact ordering is determined by any number of factors, and entirely different branches of a more complex query may be fully interleaved.

So as it currently stands, the best place to do work that affects A, B, and C, is in the field resolver for R.

Currently, the only mechanism to pass data between R and A, B, and C is the resolved value returned from R, because a field resolver cannot modify the application context. However, if an Atom were stored in the application context, that could serve as a second channel of information.

Again, the intent of the preview API is that R could peek ahead to see that A, B, and C, are coming, and start a single query to provide that information, storing it (perhaps as some kind of promise) inside the resolved value.

I am enamored of the Action concept. Alternatively, I could imagine a special API that allows the resolver function to "suggest" new key/values for the context before it returns. These changes would be merged into the context before invoking field resolvers A, B, and C. At the core of that would be an Atom ... but then we have the overhead of creating that Atom before invoking every field resolver, and then checking the Atom and merging its content into the context before invoking any nested field resolvers.
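
As a rough sketch of that second channel, an Atom placed in the application context lets R pass data sideways to A, B, and C. The names (`resolve-r`, `resolve-a`, `:shared`) are illustrative, not Lacinia API:

```clojure
;; Hypothetical sketch: an Atom in the application context acts as a
;; side channel between a parent resolver (R) and its children (A, B, C).

(def context {:shared (atom {})})

(defn resolve-r [context args value]
  ;; R "suggests" data for its children by swapping it into the Atom.
  (swap! (:shared context) assoc :prefetched {:a 1 :b 2 :c 3})
  {:id 42})

(defn resolve-a [context args value]
  ;; A reads the suggestion back out of the shared Atom.
  (get-in @(:shared context) [:prefetched :a]))

(resolve-r context nil nil)
(resolve-a context nil nil) ; => 1
```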

I'm in a bit of a quandary, about how to balance this out without things falling into a twisted mass of special cases. Maybe there's somewhere in the office I can set up a hammock.

from lacinia.

hlship commented on May 22, 2024

So far, we've hinted at an API that would allow a parent node to "preview" what kind of selections are in the tree below it; what you are asking for is likely an offshoot of that. Essentially, the parent node can execute the queries and put the result (or some kind of result promise) into the parent node's resolved value, such that child nodes can just extract that query. But we're continuing to experiment with this in our production code before providing an incomplete solution.

One of the things I'm concerned about is how to deal with "grandfather" cases, where there are intermediate fields between the one that should do the query, and the field that should use the pre-cached value. Unfortunately, there isn't a mechanism where field resolvers can modify the application context. The application could place a mutable atom into the context, but that feels very unsatisfying when the rest of Lacinia is truly functional.
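
The preview idea above might look something like the following sketch, where the parent runs one batched query and stores a lazy result inside its resolved value for children to extract. `batched-fetch` and the `:prefetched` key are hypothetical, and a `delay` stands in for "some kind of result promise":

```clojure
;; Hypothetical batched data access: one round trip for many ids.
(defn batched-fetch [ids]
  (zipmap ids (map #(str "row-" %) ids)))

(defn resolve-parent [context args value]
  (let [child-ids [1 2 3]]
    {:child-ids  child-ids
     ;; the delay defers the query until a child actually needs it
     :prefetched (delay (batched-fetch child-ids))}))

(defn resolve-child [context args parent-value]
  ;; the child extracts its row from the parent's pre-cached result
  (get @(:prefetched parent-value) 2))

(resolve-child nil nil (resolve-parent nil nil nil)) ; => "row-2"
```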


AndrewIngram commented on May 22, 2024

I'd look at how Absinthe has tackled this, they're probably the closest to you guys in terms of implementation constraints (functional, immutable, macros).

It's worth mentioning that most implementations of batching I've seen present it as an orthogonal concern to looking down child selections, but the solutions have tended to be language-specific.

Excited to see what API you guys come up with :)


hlship commented on May 22, 2024

Our focus is on using Cassandra as a backing store, so in some ways it's better to barrage Cassandra with N requests (each handled by a different cluster node) than to do it as a single request (using an IN clause). For traditional SQL databases it's the reverse.


hlship commented on May 22, 2024

The new selections API is the main tool for handling this kind of scenario, but it's tricky to document, especially because our applications don't actually need this functionality, for the most part.


lilactown commented on May 22, 2024

Unfortunately, there isn't a mechanism where field resolvers can modify the application context.

Resolvers could return a tuple, e.g. [ctx, resolved-value], to allow modifying the context.
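
A toy sketch of that tuple idea, with a hypothetical `invoke-resolver` standing in for the executor that would thread the updated context into child resolvers:

```clojure
;; A resolver returns [ctx' resolved-value] instead of a bare value.
(defn resolve-r [ctx args value]
  [(assoc ctx :prefetched {:a 1}) {:id 42}])

;; The (hypothetical) executor destructures the tuple and would pass
;; :ctx on to child field resolvers.
(defn invoke-resolver [resolver ctx args value]
  (let [[ctx' resolved] (resolver ctx args value)]
    {:ctx ctx' :value resolved}))

(invoke-resolver resolve-r {} nil nil)
;; => {:ctx {:prefetched {:a 1}}, :value {:id 42}}
```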


hlship commented on May 22, 2024

Return values from resolver functions are already as complex as I'd like them. We already have the simple value, the ResolverResponse wrapper, and the potential for further wrapping via decorators. Potentially, this would involve a three-arity ResolverResponse, at which point I'd want to introduce a simpler Promise (just a value) to be used internally.


AndrewIngram commented on May 22, 2024

I believe that with Sangria, resolvers return actions, of which a simple value is a special case. Perhaps @OlegIlyenko can offer some insights here. There were similar concerns about bloating the resolver API when trying to implement more advanced resolution requirements (batching, context updates) in Absinthe as well; it has similar constraints, so it might be worth involving @benwilson512.


lilactown commented on May 22, 2024

The (perhaps naive) solution I used when playing with Absinthe in Elixir was to spawn an agent or genserver for each request that handled batching and caching; this is idiomatic Erlang/Elixir AFAICT, but I'm not sure how that would translate to Clojure idioms.


lilactown commented on May 22, 2024

A Plug process injected the agents into the application context, and then each resolver could call:

DataLoader.request(context.data_agent, user_id, other_parameters)

And DataLoader.request would ask the Agent for the data; the Agent would look up whether that particular data had already been requested and, if not, set it up for batching and then cache the data in memory in case it was asked for again later in the GraphQL query.
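
A rough Clojure analog of that per-request cache, using an Atom (a Clojure agent would also work, but an Atom is the more idiomatic choice for synchronous read-through caching). `fetch-user` stands in for a hypothetical expensive backend call:

```clojure
;; Hypothetical data fetch (e.g., a database or HTTP call).
(defn fetch-user [id]
  {:id id :name (str "user-" id)})

;; One cache per request, placed in the application context.
(def data-cache (atom {}))

(defn request [cache id]
  ;; serve from cache when present; otherwise fetch and remember
  (or (get @cache id)
      (let [row (fetch-user id)]
        (swap! cache assoc id row)
        row)))

(request data-cache 7) ; fetches and caches
(request data-cache 7) ; served from the cache
```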


OlegIlyenko commented on May 22, 2024

@AndrewIngram yeah, in Sangria the result of a resolve function is not a simple value but an Action. It can contain simple values like Value, Try, Future or more complex commands like DeferredValue or UpdateCtx.

So Action represents not only the value but also an instruction to the execution engine on how to process this value. Deferred values and the deferred value resolution mechanism provide an alternative to dataloader. It was also necessary to make it part of the execution engine because otherwise (on the JVM at least) I would need to rely on threshold-based batching (based on either a batch size or a timeout). By coupling it with the execution engine I can be very precise about when to actually fetch the batched data. If people need a decoupled solution, then Fetch provides a good alternative (I know some projects that decided to use it instead in order to keep fetching/batching logic more decoupled from Sangria).

I also can't stress enough the importance of UpdateCtx. It provides an immutable way to update the context value for all child fields (and subsequent sibling fields in a mutation) during execution. This helps to avoid race conditions or mutexes (if a mutable data structure is involved). Over time I have seen quite a few users with real-world use cases for context updates (and have some use cases of my own). I find it important to provide an immutable way to achieve this.


brandonbloom commented on May 22, 2024

I don't think "preview" is a good solution because it requires brittle logic that couples parent/child field resolvers. I definitely recommend reading the README for https://github.com/facebook/dataloader


brandonbloom commented on May 22, 2024

OK, so I've tinkered with this for a few hours now and have the outline of a solution.

First: I'm not at all interested in an "immutable" context manipulation strategy. Clojure has never been pure, so there's no reason to tie our hands now. Instead, we should embrace sane reference semantics when available. Facebook's dataloader has sensible per-request, accrete-only caching logic. This is not too different from lazy evaluation.

Second: The trick to making dataloader work in JavaScript is to treat the JavaScript event loop as a batch delimiter. In effect, each "load" call adds something to the batch and then an idempotent setTimeout makes sure that on the next event "tick" of the JavaScript engine, the batch gets executed. The JVM (happily!) has no such event loop delimiter, so we need to find another way to delimit batches.

My strategy was to rely on the level-by-level execution of GraphQL queries. Object resolvers return IDs that act as symbolic promises. When they do this, they effectfully enqueue the ids in the current batch. When the field resolvers run, which (as best I can tell) is spec'd to happen after a full "layer" of the query result is resolved, then the batch is flushed on demand.

Code here: https://gist.github.com/brandonbloom/5bc8375a25eb733c41ed98f0270786e5
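
The gist linked above has the full version; a minimal, self-contained sketch of the batch/flush idea (with illustrative names, not taken from the gist) looks like this: object resolvers enqueue ids into a per-request batch, and the first field resolver that needs a value flushes the whole batch in one backend round trip.

```clojure
;; Per-request batch state: pending ids, plus results once flushed.
(def batch (atom {:pending #{} :results nil}))

(defn enqueue! [id]
  ;; the id itself acts as a symbolic promise for the eventual value
  (swap! batch update :pending conj id)
  id)

;; Hypothetical backend call: one round trip for all pending ids.
(defn run-batch [ids]
  (zipmap ids (map #(str "val-" %) ids)))

(defn flush! []
  ;; idempotent: only the first caller actually runs the batch
  (swap! batch
         (fn [{:keys [pending results] :as b}]
           (if results b (assoc b :results (run-batch pending))))))

(defn load-value [id]
  (flush!)
  (get-in @batch [:results id]))

(enqueue! 1)
(enqueue! 2)
(load-value 1) ; => "val-1"
```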

Some thoughts/questions:

  1. Is there some builtin way to get the object-name/field-name? Or is my decorator the recommended strategy? IIRC, graphql.js has an optional "info" parameter that provides that information.

  2. My gut reaction is that it would be better still if there was a way to get a fan-out callback, so that I don't need to rely on the implicit layer-by-layer execution in order to flush batches.


benwilson512 commented on May 22, 2024

My strategy was to rely on the level-by-level execution of GraphQL queries.

FWIW this is the batching strategy chosen by the Elixir implementation. We do a walk through the document, executing whatever happens eagerly and accumulating batches for fields that use batching. After a given pass through the document we run those batches, then re-walk the document to place results in the right spots and continue evaluation.


hlship commented on May 22, 2024

I remember my Tapestry days, where I had put some very clever things into the framework ... but when it came time to describe what the framework did and how, things fell apart, especially for newbies. Even in a classroom situation, you simply needed lots of experience with the framework in order to understand how all those clever things operated, especially in combination. That's something I personally want to avoid (and remember: for Lacinia I'm involved, but not BDFL). So I'm more interested in finding ways to add hooks to Lacinia to facilitate these kinds of solutions, rather than baking a specific solution into Lacinia.

The internals are already promise-based, which should provide a lot of freedom to affect the order of execution, but I'm not quite sure what the definition of a "layer" of query evaluation looks like, which is to say, where to set the boundaries for an automatic aggregator of queries.

And, of course, our internal experience, where we've worn the deepest path, is with Cassandra, so we don't think in terms of joins or other things that make SQL database queries fast and efficient.


brandonbloom commented on May 22, 2024

I'm more interested in finding ways to add hooks to Lacinia to facilitate these kinds of solutions, rather than baking a specific solution into Lacinia.

I totally agree. It should be possible to achieve proper data batching in client code without significant framework support.

I don't even necessarily think anything needs to change. This ticket title is "Document a solution", after all.

I'm not quite sure what the definition of a "layer" of query evaluation looks like, which is to say, where to set the boundaries for an automatic aggregator of queries.

By "layer", I meant each depth level in the query. That is to say, if a field is at depth D, it is part of the same batching process as all other fields at depth D, regardless of which path from the root was taken to get there. This works in the general case, giving you request fanout equal to the height of the query's tree. You could, in theory, do a little better than this, but in practice most queries are mostly shallow and batching is a bigger win than reducing fanout from say 7 to 6.

This however assumes a breadth-first traversal strategy for execution. I'm not quite sure if that matches reality. Does it?

Cassandra, so we don't think in terms of joins or other things

Unfortunately, not every backend provides good query pipelining support. Even if your primary backing store is Cassandra, you might need to combine that data with another simple HTTP service that offers a batched fetch endpoint.


hlship commented on May 22, 2024

I'd like to close this issue. I think a number of things we've put in place over the last few weeks should address the concerns in this issue:

  • You can now preview what fields are to be selected from within a field resolver
  • You can now write updates to the application context that will only be visible to sub-selections of the current field

Please re-open if you have any further comments or concerns.


mmmdreg commented on May 22, 2024

I think previewing children is useful for the problem where it is preferable to perform a single query with a join rather than multiple separate queries.

Dataloader in JS land is more concerned with batching N+1 queries, so loading a list of 10 items where the resolver is defined at the item (rather than collection) level will do a single DB query instead of 10.

There are a few options in Clojure land, which are somewhat inspired by Haxl in Haskell land:
https://github.com/kachayev/muse
https://github.com/funcool/urania
https://github.com/xsc/claro


wbeard commented on May 22, 2024

@hlship Wondering if you are open to any PR or contribution toward solving this problem?

Taking a look through com.walmartlabs.lacinia.executor/execute-query, this seems like a good place to let userland inject some sort of execution strategy, which could fit the desire to keep Lacinia from being opinionated about a specific solution. I see there's already code that varies the execution strategy based on the type of operation being performed. The execute/combine pattern seems like it would match the dataloader queue-and-dispatch pattern well.

Before trying to prove that out, I wanted to see what you thought of that direction and if there are any dragons to be aware of in that area of code.

