
sieppari's Introduction


Small, fast, and complete interceptor library for Clojure/Script with built-in support for common async libraries.

Noun: siepata (to intercept)

sieppari, someone or something that intercepts

What it does

Interceptors, like in Pedestal, but with minimal implementation and optimal performance.

The core Sieppari depends on Clojure and nothing else.

If you are new to interceptors, check the Pedestal Interceptors documentation. Sieppari's sieppari.core/execute follows a :request / :response pattern. For Pedestal-like behavior, use sieppari.core/execute-context.

First example

(ns example.simple
  (:require [sieppari.core :as s]))

;; interceptor: on :enter, update the value at `[:request :x]` with `inc`
(def inc-x-interceptor
  {:enter (fn [ctx] (update-in ctx [:request :x] inc))})

;; handler: take `:x` from the request, apply `inc`, and return a map with `:y`
(defn handler [request]
  {:y (inc (:x request))})

(s/execute
  [inc-x-interceptor handler]
  {:x 40})
;=> {:y 42}

Async

Any step in the execution pipeline (:enter, :leave, :error) can return either a context map (synchronous execution) or an instance of AsyncContext - indicating asynchronous execution.

By default, Clojure deferrables (e.g. future, promise, delay), java.util.concurrent.CompletionStage and js/Promise satisfy the AsyncContext protocol.

Using s/execute with async steps will block:

;; async interceptor: on :leave, double the value at `[:response :y]`:
(def multiply-y-interceptor
  {:leave (fn [ctx]
            (future
              (Thread/sleep 1000)
              (update-in ctx [:response :y] * 2)))})


(s/execute
  [inc-x-interceptor multiply-y-interceptor handler]
  {:x 40})
; ... 1 second later:
;=> {:y 84}

Using the non-blocking arity of s/execute:

(s/execute
  [inc-x-interceptor multiply-y-interceptor handler]
  {:x 40}
  (partial println "SUCCESS:")
  (partial println "FAILURE:"))
; => nil
; prints "SUCCESS: {:y 84}" one second later

Blocking on async computation:

(let [respond (promise)
      raise (promise)]
  (s/execute
    [inc-x-interceptor multiply-y-interceptor handler]
    {:x 40}
    respond
    raise) ; returns nil immediately

  (deref respond 2000 :timeout))
; ... 1 second later:
;=> {:y 84}

Any step can return a java.util.concurrent.CompletionStage or js/Promise, so Sieppari works out of the box with libraries like Promesa:

;; [funcool/promesa "5.1.0"]
(require '[promesa.core :as p])

(def chain
  [{:enter #(update-in % [:request :x] inc)}               ;; 1
   {:leave #(p/promise (update-in % [:response :x] / 10))} ;; 4
   {:enter #(p/delay 1000 %)}                              ;; 2
   identity])                                              ;; 3

;; blocking
(s/execute chain {:x 40})
; => {:x 41/10} after 1 sec

;; non-blocking
(s/execute
  chain
  {:x 40}
  (partial println "SUCCESS:")
  (partial println "FAILURE:"))
; => nil
;; prints "SUCCESS: {:x 41/10}" after 1sec
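Since java.util.concurrent.CompletionStage satisfies AsyncContext by default, a step may also return a plain CompletableFuture. A sketch (the name plus-y-interceptor is our own; inc-x-interceptor and handler are the ones from the first example):

```clojure
(require '[sieppari.core :as s])
(import 'java.util.concurrent.CompletableFuture
        'java.util.function.Supplier)

;; :leave step returning a CompletableFuture that adds 100 to `[:response :y]`
(def plus-y-interceptor
  {:leave (fn [ctx]
            (CompletableFuture/supplyAsync
              (reify Supplier
                (get [_] (update-in ctx [:response :y] + 100)))))})

(s/execute
  [inc-x-interceptor plus-y-interceptor handler]
  {:x 40})
;=> {:y 142}
```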

External Async Libraries

To add support for one of the supported external async libraries, add a dependency on it and require the respective Sieppari namespace. Currently supported async libraries are:

  • core.async - sieppari.async.core-async (clj & cljs)
  • Manifold - sieppari.async.manifold (clj)

To extend Sieppari async support to other libraries, just extend the AsyncContext protocol.
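As a hedged sketch of what such an extension might look like: the protocol lives in sieppari.async, and the method names used here (async?, continue, await) are assumptions based on the issues quoted below, so verify them against the Sieppari version you use. The Later wrapper type is hypothetical.

```clojure
(require '[sieppari.async :as sa])

;; hypothetical wrapper around a clojure.core promise
(defrecord Later [p])

(extend-protocol sa/AsyncContext
  Later
  (async? [_] true)
  ;; apply the continuation when the value is available; blocking inside
  ;; a future keeps the sketch short, a real implementation would
  ;; register a callback instead
  (continue [t f] (->Later (future (f @(:p t)))))
  (await [t] @(:p t)))
```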

core.async

Requires a dependency on [org.clojure/core.async "0.4.474"] or higher.

(require '[clojure.core.async :as a])

(defn multiply-x-interceptor [n]
  {:enter (fn [ctx]
            (a/go (update-in ctx [:request :x] * n)))})

(s/execute
  [inc-x-interceptor (multiply-x-interceptor 10) handler]
  {:x 40})
;=> {:y 411}

manifold

Requires a dependency on [manifold "0.1.8"] or higher.

(require '[manifold.deferred :as d])

(defn minus-x-interceptor [n]
  {:enter (fn [ctx]
            (d/success-deferred (update-in ctx [:request :x] - n)))})

(s/execute
  [inc-x-interceptor (minus-x-interceptor 10) handler]
  {:x 40})
;=> {:y 32}

Performance

Sieppari aims for minimal functionality and can therefore be quite fast. A complete example to test performance is included.

Silly numbers

Executing a chain of 10 interceptors, each with an :enter of clojure.core/identity.

  • sync: all steps return the ctx
  • promesa: all steps return the ctx in a promesa.core/promise
  • core.async: all steps return the ctx in a core.async channel
  • manifold: all steps return the ctx in a manifold.deferred.Deferred

All numbers are the execution-time lower quantile (not testing the goodness of the async libraries, just the execution overhead the Sieppari interceptors add).

Executor          | sync  | promesa | core.async | manifold
------------------|-------|---------|------------|---------
Pedestal          | 8.2µs | -       | 92µs       | -
Sieppari          | 1.2µs | 4.0µs   | 70µs       | 110µs
Middleware (comp) | 0.1µs | -       | -          | -
  • MacBook Pro (Retina, 15-inch, Mid 2015), 2.5 GHz Intel Core i7, 16 GB RAM
  • Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
  • Clojure 1.9.0

NOTE: running async flows without interceptors is still much faster, e.g. a synchronous manifold chain is much faster than going via interceptors.

NOTE: The goal is to add a Java-backed, optimized chain compiler to Sieppari; initial tests show it will be near the performance of a middleware chain (comp).

Differences to Pedestal

Execution

  • io.pedestal.interceptor.chain/execute executes Contexts
  • sieppari.core/execute executes Requests (which are internally wrapped inside a Context for interceptors)

Errors

  • In Pedestal the error handler takes two arguments, the ctx and the exception.
  • In Sieppari the error handler takes just one argument, the ctx, and the exception is in the ctx under the key :error.
  • In Pedestal the error handler resolves the exception by returning the ctx, and continues the error stage by re-throwing the exception.
  • In Sieppari the error handler resolves the exception by returning the ctx with the :error removed. To continue in the error stage, just return the ctx with the exception still at :error.
  • In Pedestal exceptions are wrapped in other exceptions.
  • In Sieppari exceptions are not wrapped.
  • Pedestal interceptor execution catches java.lang.Throwable for error processing; Sieppari catches java.lang.Exception. This means that things like out-of-memory or class-loader failures are not captured by Sieppari.
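As an illustration of the Sieppari convention, here is a hedged sketch of an :error handler that either resolves the exception or lets the error stage continue. The interceptor names are our own; handler is the one from the first example, and the recovery logic is hypothetical.

```clojure
(require '[sieppari.core :as s])

;; :enter step that throws ArithmeticException (divide by zero)
(def divide-interceptor
  {:enter (fn [ctx] (update-in ctx [:request :x] / 0))})

(def recover-interceptor
  {:error (fn [ctx]
            (if (instance? ArithmeticException (:error ctx))
              ;; resolve: remove :error and supply a fallback response
              (-> ctx
                  (dissoc :error)
                  (assoc :response {:y :fallback}))
              ;; anything else: keep :error to continue the error stage
              ctx))})

(s/execute
  [recover-interceptor divide-interceptor handler]
  {:x 40})
;; expected to return {:y :fallback} once the error is resolved
```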

Async

  • Pedestal transfers thread local bindings from call-site into async interceptors.
  • Sieppari does not support this.

Thanks

License

Copyright © 2018-2020 Metosin Oy

Distributed under the Eclipse Public License 2.0.

sieppari's People

Contributors

arichiardi, den1k, dosbol, ericnormand, ikitommi, jarppe, miikka, nilern, niwinz, ottonascarella, tvaisanen, zelark


sieppari's Issues

Consider including `with-bindings` support

Adding support for with-bindings appears to be straightforward:

bd54b96

All one has to do is set :bindings on the context within any :enter interceptor:

(def ^:dynamic *transacting-user-id* nil)

(defn transacting-user-interceptor []
  {:name  ::transacting-user
   :enter (fn [ctx]
            (let [user-id (get-in ctx [:request :user :db/id])]
              (update ctx :bindings merge {#'*transacting-user-id* user-id})))})

I don't know enough about the async mode to know whether this is sufficient for that use-case.

Sugar around async consumption

Hello folks!

Today we had a very interesting conversation about consuming from the AsyncContext, and the fact that the library could implement a bit more in the protocol and take care of quite a bit of boilerplate that always turns out to be necessary.

For instance one very common thing that could be abstracted is:

(-> (fn-returning-a-promise)
    (.then #(assoc :result %)))

This could be hidden behind the protocol, giving something like:

(-> (fn-returning-a-promise)
  (assoc :result))

What do you think?

It would be nice to experiment with this.

Context, Stack and Queue as a Protocols

Using protocols instead of Context records/maps would allow a) much faster dispatching and b) sieppari to be used with plain objects (with static queues) too:

(let [interceptors (mapv interceptor [{:enter inc
                                       :leave inc}
                                      {:enter (partial * 2)
                                       :leave (partial * 2)}
                                      {:enter inc}])]
  ;; 100ns
  (cc/quick-bench
    (run (queue interceptors) 0)))

(let [app (comp inc (partial * 2) inc (partial * 2) inc)]
  ;; 88ns
  (cc/quick-bench
    (app 0)))

Consolidate build tools

After merging #10, both deps.edn and Leiningen are used. I think we should pick one.

  • If we go with deps.edn, we need to figure out how to run Clojure tests and deploy releases.
  • If we go with Leiningen, we need to figure out how to use Figwheel to run tests with it.
  • We can also use https://github.com/RickMoynihan/lein-tools-deps to deduplicate information between the two.

Going faster on the JVM

I looked into sieppari's performance on the JVM and ran some profiling to get a sense of what exactly needs to be optimized to get on par with middleware.
The measurements and results can be found here.
Generally, CPU utilization breaks down as follows:

  • enter: ~66%
    • assoc-ing to the context: 25%
    • peeking/popping the queue: 20%
  • leave: ~30%
    • iterating over the stack: ~50%
    • invoking keywords on interceptors: ~40%

With a native queue and stack and eliminating all keyword access (very ugly code) I was able to get a speed-up of 2-3x. The remaining CPU went to the queue and stack methods, MethodImplCache lookup, Context instantiation (immutable record) and -try. Still very far from just composing 10 functions.

To get that performance we need a "compiled" model which, among other things, gets rid of the queue and stack and perhaps creates a mutable context for each execution.

If I understand the execution model correctly (sans the reified queue) the execution flow for three interceptors would look something like:

(flow diagram: flow.dot)

This can be implemented in two different ways, besides the queue/stack implementation:

  • flow graph. Execution is handled by the interceptors. Each interceptor knows what the next stage is and "routes" to it based on success or failure.
  • composition. Can be a bit involved, but by doing things in the correct order we can compose all the functions we need backwards, starting from the error handlers, then leave, finally enter. A very rough sketch of this idea, putting async aside for a moment:
(defn -comp-error
  [g f]
  (fn [ctx]
    (g
     (try
       (f ctx)
       (catch Exception e
         (assoc ctx :error e))))))

(def fs [:errf1 :errf2 :errf3])
(def escapes (into [] (reductions -comp-error fs)))

(defn -comp-leave
  [[g-l g-e] [f-l f-e]]
  [(fn leave [ctx]
     (try
       (g-l (f-l ctx))
       (catch Exception e
         (g-e (assoc ctx :error e)))))
   f-e])

(def fs [:leave1 :leave2 :leave3])
(def leaves (into [] (map first) (reductions -comp-leave (map vector fs escapes))))

(defn -comp-enter
  [tot [f err]]
  (fn [ctx]
    (try
      (f (tot ctx))
      (catch Exception e
        (err (assoc ctx :error e))))))

(def es [:enter1 :enter2 :enter3])
(def doors (map vector es escapes))
(def enters (reduce -comp-enter identity doors))

(let [farewell (last leaves)]
  (def fin
    (fn [ctx]
      (farewell (enters ctx)))))

Regarding async, #9 suggests it might be handled incorrectly at the moment. An option to consider is choosing a unified execution model and doing everything in it. Or should it be determined by the executor?

We should also figure out which parts of the context are the (possibly mutable) execution environment and which are data. If we forgo the option of exposing the runtime environment (queue and stack) we can go much faster.
There could be a hierarchy of Runner -> ExecutionContext -> Context.

There are a lot of considerations and degrees of freedom which can lead to widely different solutions and I'd like to open them for discussion and hopefully experiment with different solutions.

Would love to hear your thoughts and to take a swing at it myself.

Remove automatic support for 3rd party async libs

It takes a whopping 4-5 seconds to load sieppari.core if all the async libs are on the classpath. Rough load times from async support:

  • core.async: 3sec
  • manifold: 500ms
  • promesa: 100ms

Client-side aot cache on libs would solve this, but might not be solved any time soon: https://clojureverse.org/t/deploying-aot-compiled-libraries/2545/13

Also, wanting to use multiple async libs in the same app is hardly the default need, so the async support could instead be explicit, e.g. the user has to require the support namespace themselves.

Sieppari 1.0.0 EPIC

There are some fundamental design decisions that need to be revisited. The library is still an alpha, but as it's a core dependency in reitit-http, any breaking change should be well thought out.

Access to context

We use Pedestal in the context of Kafka queues. We use the context extensively for storing connections, database names and other configuration. How would we manage that in Sieppari? Would we need to add it to the request? In Pedestal we set up a default context once and then just assoc the :request for each Kafka message.

[CLJS] Async :leave not working

I've only tested this in CLJS with native promises. Currently, an async value returned by the :leave function on an interceptor is not awaited.

Example:

(sieppari/execute
 [{:enter (fn inc-enter [ctx]
            (js/Promise.resolve (update-in ctx [:request :x] inc)))
   :leave (fn inc-leave [ctx]
            (js/Promise.resolve (update-in ctx [:request :x] inc)))}
  (fn handler [request] {:y (inc (:x request))})]
 {:x 40}
 prn)

Result: Prints 42

Expected result: Prints 43; 40 + 1 (for enter) + 1 (for leave) + 1 (for handler)

I have a case where I'm trying to do some HTTP request using js/fetch and then coerce the body. js/fetch returns a Promise fulfilled with a Response type, which you then call .json / .text / etc. These methods return a new promise that must be awaited.

Please advise! Is this expected behavior?

IDeref AsyncContext is causing problems.

Having the IDeref implementation of AsyncContext covered up quite a few errors. The general IDeref kept me from noticing the missing require in perf-testing.

Here is one option to resolve that.
#48

Too many async contexts

The current implementation adds a lot of async contexts.
The cost of these contexts can vary from a Deferred or go block to a full
thread for a future. Even the more lightweight cases involve a possible
cross-thread dispatch.

One source of extra context is having separate continue and catch in the
async protocol.
I provided a pr for consideration here.
#43

The most interesting improvements are for future and delay:

sieppari: sync (sync)         2.10µs   -> 1.91µs
sieppari: sync (async)        2.14µs   -> 1.99µs
sieppari: core.async (sync)   77.22µs  -> 73.18µs
sieppari: core.async (async)  79.91µs  -> 70.62µs
sieppari: future (async)      184.27µs -> 101.52µs
sieppari: delay (async)       122.85µs -> 86.01µs
sieppari: deferred (sync)     101.52µs -> 62.06µs
sieppari: deferred (async)    124.69µs -> 74.10µs
sieppari: promesa (sync)      5.44µs   -> 4.76µs
sieppari: promesa (async)     5.44µs   -> 4.74µs

General Questions

First of all: thanks for all the effort that went into this library, I'm a huge fan of interceptors and I've long wanted a .cljc implementation that is still somewhat close to the Pedestal implementation.

Some questions came up when taking a look at Sieppari. I was thinking of discussing those with @ikitommi but I figured I might also just open an issue.

1. Why no namespaced keys for :stack and :queue?

In Pedestal they're namespaced and that seems like a solid way to save users from accidentally overwriting the execution queue or stack.

2. Why the added request concept?

In Pedestal the model is very consistent: ctx in, ctx out. In Sieppari there is the additional concept of a request, which doesn't seem to have a clear meaning in the context of interceptor execution and, from my current perspective, adds complexity by introducing additional terminology/concepts.

Take, for instance, the example given in the Readme. The inc-x-interceptor receives the full context, whereas the handler just receives the value of the :request key. On a first reading I thought supplying a function would be shorthand for an :enter-only interceptor, but that's not the case. It seems like the idea behind this is to be closer to the Ring model, but I'm not sure if that's the only reason?

I found Pedestal's "context in, context out" model perfectly fine and never saw the need for additional nesting at the level of the interceptor library.

;; Example from the Readme
(def inc-x-interceptor
  {:enter (fn [ctx]
            (update-in ctx [:request :x] inc))})

;; handler, take `:x` from the request, apply `inc`, and return a map with `:y`
(defn handler [request]
  {:y (inc (:x request))})

(sieppari/execute
  [inc-x-interceptor handler]
  {:x 40})

With the :request concept removed the example above would look like this which, to me, is much simpler since the only thing that is ever relevant is the context.

(def inc-x-interceptor
  {:enter (fn [ctx]
            (update ctx :x inc))})

;; handler, take `:x` from the ctx, apply `inc`, and return a map with `:y`
(defn handler [ctx]
  {:y (inc (:x ctx))})

(sieppari/execute
  [inc-x-interceptor handler]
  {:x 40}) ; {:y 42}

Performance got significantly slower

Recently I noticed that it's not as fast as it says. The docs here say it should be 64µs for Pedestal sync and 9µs for Sieppari sync for a chain of 100 interceptors. First, the perf test only has a chain of 10 interceptors. Second, when I ran it, I was really surprised by the result I got: 11.87µs for Pedestal sync and 824.35µs for Sieppari sync. You can see the whole result here. I have a MacBook Air (mid 2013), which definitely isn't as fast as a Pro, but I didn't expect such a huge difference.

I started looking for the point where the issue was introduced and found #10. Along with other changes, it brought two that made performance significantly slower. The first, and most important, is that the async check now uses satisfies? instead of just returning true or false; because async? is called frequently, this results in a slowdown. The second is that the `Iterator` type hint in the leave phase is gone.

I fixed it, and got dramatically better result. You can find it here. It's just 1.63µs for sync case.

Should I create a PR?

@miikka @ikitommi

Handle JS async rejections

Hi again! As part of my port to cljs I realized what could be an issue... maybe.

It seems like at the moment the continue protocol function only accepts one callback, which makes sense most of the time; the only issue is js/Promise rejection.

One could add another callback for errors, but I am not myself convinced it is the right thing to do. It would certainly handle this kind of situation, which at the moment triggers an uncaught error warning.

Or maybe it is simply better to let it blow and fail fast.

Thoughts?

Support for SuccessDeferred is missing from the manifold AsyncContext,
but the AsyncContext implementation on IDeref prevents a missing-implementation error.
The IDeref implementation uses a future.
After adding support for SuccessDeferred to the manifold AsyncContext, the perf-test performance is close to promesa's:
around 7µs on my machine.
#46

You may want to include
#45
to fix missing imports of sieppari.async.manifold in the perf-testing.

Ideally we would add AsyncContext to the interface manifold.deferred.IDeferred so we do not need to support each manifold type.

Support git deps in tools.deps

Since sieppari has no production dependencies besides Clojure, adding a deps.edn enables git deps via tools.deps.

I've done so here and am currently using that branch of sieppari in a project.

Happy to make a PR adding a deps.edn if you're okay with it.

Map function into enter, not into handler

If functions were mapped to the interceptor :enter instead of the special handler case, it would be easier to write one-way chains:

Instead of:

(def inc-x-interceptor
  {:enter #(update % :request inc)})

(defn handler [request]
  (inc request))

(s/execute
  [inc-x-interceptor handler]
  40)
;=> 42

one could say:

(def inc-x-interceptor
  #(update % :request inc))

(def handler
  (s/handler
    #(inc %)))

(s/execute
  [inc-x-interceptor handler]
  40)
;=> 42

... inlined:

(s/execute
  [#(update % :request inc)
   p/promise
   (s/handler #(inc %))]
  40)
;=> 42

https://github.com/metosin/sieppari/blob/develop/src/sieppari/interceptor.cljc#L30-L35
