nebo15 / annon.api
Configurable API gateway that acts as a reverse proxy with a plugin system.
Home Page: http://docs.annon.apiary.io/
License: MIT License
Allow passing an X-Pretty-Print: true header so the gateway will add spaces to the JSON response, making it readable in browsers.
This will make our responses friendlier to developers.
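As a sketch of the idea (in Python for brevity; only the X-Pretty-Print header name comes from this proposal, everything else is illustrative), the gateway would branch on the header when encoding the response body:

```python
import json

def render_json(payload, headers):
    """Encode a response body, pretty-printing when the client asks for it.

    Hypothetical helper: only the X-Pretty-Print header name comes from
    the proposal; the function itself is illustrative.
    """
    if headers.get("x-pretty-print", "").lower() == "true":
        return json.dumps(payload, indent=2)               # readable in a browser
    return json.dumps(payload, separators=(",", ":"))      # compact default

compact = render_json({"ok": True}, {})
pretty = render_json({"ok": True}, {"x-pretty-print": "true"})
```

The pretty form stays valid JSON, so the header only costs bytes, never correctness.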
{"time":"2017-06-21T10:02:13.889Z","sourceLocation":{"moduleName":null,"line":null,"functionName":null,"file":null},"severity":"ERROR","metadata":{"error_logger":"format"}}, with logMessage:
#PID<0.31977.8> running Annon.ManagementAPI.Router terminated
Server: 35.187.188.208:8080 (http)
Request: PUT /apis/ca9e5956-1cb3-47a1-94ba-d5c489301edb/plugins/cors
** (exit) an exception was raised:
 ** (Ecto.ConstraintError) constraint error when attempting to insert struct:

 * unique: plugins_pkey

If you would like to convert this constraint into an error, please
call unique_constraint/3 in your changeset and define the proper
constraint name. The changeset defined the following constraints:

 * unique: plugins_api_id_name_index
 * foreign_key: plugins_api_id_fkey

 (ecto) lib/ecto/repo/schema.ex:493: anonymous fn/4 in Ecto.Repo.Schema.constraints_to_errors/3
 (elixir) lib/enum.ex:1229: Enum."-map/2-lists^map/1-0-"/2
 (ecto) lib/ecto/repo/schema.ex:479: Ecto.Repo.Schema.constraints_to_errors/3
 (ecto) lib/ecto/repo/schema.ex:213: anonymous fn/13 in Ecto.Repo.Schema.do_insert/4
 (ecto) lib/ecto/repo/schema.ex:684: anonymous fn/3 in Ecto.Repo.Schema.wrap_in_transaction/6
 (ecto) lib/ecto/adapters/sql.ex:620: anonymous fn/3 in Ecto.Adapters.SQL.do_transaction/3
 (db_connection) lib/db_connection.ex:1275: DBConnection.transaction_run/4
 (db_connection) lib/db_connection.ex:1199: DBConnection.run_begin/3
There is a new helpful standard that allows developers to track a stack trace across all system components; this issue should start a discussion on whether we should support it or not.
I think we should, but at the very least wait until a nice-looking related Elixir library is created by the community.
Nice dashboards example:
Possible apps structure:
- gateway (configurations, plugins, public http server, >configuration repo<)
- management_api (service and management endpoints)
- cluster (cluster communication, distributed counters, presence, service discovery)
- runtime_tools (logging, monitoring, tracing, >requests repo<)
To draw a nice system status dashboard.
party_id is an internal var name.
We need to standardize fallbacks (as Phoenix did) to reduce code complexity. This can also give plugins a simple API they can use to generate responses, dropping all of that logic from them into the fallbacks.
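The idea, sketched in Python (the names and the tuple shape are illustrative, not Annon's API): plugins return plain error values, and a single fallback table maps them to responses:

```python
# Map of error reasons to responses; a Phoenix-style fallback keeps this
# logic in one place instead of inside every plugin. All names here are
# hypothetical.
FALLBACKS = {
    "access_denied": (403, {"type": "access_denied"}),
    "not_found": (404, {"type": "not_found"}),
    "validation_failed": (422, {"type": "validation_failed"}),
}

def call_with_fallback(plugin, request):
    """Run a plugin; translate ("error", reason) results centrally."""
    result = plugin(request)
    if isinstance(result, tuple) and result[0] == "error":
        return FALLBACKS[result[1]]
    return 200, result

status, body = call_with_fallback(lambda req: ("error", "access_denied"), {})
```

Each plugin then only decides *what* went wrong; *how* that renders over HTTP lives in the fallback.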
Some data is missing for logs to be more practical in debugging:
Additionally, we need to get rid of Plug.Parsers when writing logs.
Right now we are using PostgreSQL-style pattern matching with % and _. Even though it works, there are some limits and features that we want to support:
We want to match the paths /some_api and /some_api/123 as different APIs. The current workaround is defining /some_api_ and /some_api/_% with different match_priority values. It looks ugly and will also match /some_api1, which will be unexpected for most users.
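A segment-aware matcher avoids both problems. A minimal Python sketch (the function and its nested flag are hypothetical, not a proposed API):

```python
def matches_api(api_path, request_path, nested=False):
    """Match whole path segments, so "/some_api1" never matches "/some_api".

    With nested=False the paths must be identical; with nested=True the
    request may continue past the API path (e.g. "/some_api/123").
    """
    api = [s for s in api_path.split("/") if s]
    req = [s for s in request_path.split("/") if s]
    if not nested:
        return api == req
    return req[:len(api)] == api

exact = matches_api("/some_api", "/some_api/123")           # different APIs
sub = matches_api("/some_api", "/some_api/123", nested=True)
leak = matches_api("/some_api", "/some_api1", nested=True)  # no partial segment
```

Because comparison happens per segment, no match_priority tricks are needed to keep /some_api1 out.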
It would be really awesome to take parts of the request path and substitute them into the upstream path. E.g.: /blog/:id/comments -> /comments/:id.
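A minimal sketch of that substitution in Python (the :name syntax is taken from the example above; the function itself is illustrative):

```python
def rewrite_path(pattern, upstream, path):
    """Bind :name segments from pattern against path, then substitute the
    bindings into the upstream template. Returns None on mismatch."""
    bindings = {}
    pattern_segs = [s for s in pattern.split("/") if s]
    path_segs = [s for s in path.split("/") if s]
    if len(pattern_segs) != len(path_segs):
        return None
    for pat, actual in zip(pattern_segs, path_segs):
        if pat.startswith(":"):
            bindings[pat[1:]] = actual   # capture, e.g. id -> "42"
        elif pat != actual:
            return None                  # literal segment mismatch
    out = [bindings[s[1:]] if s.startswith(":") else s
           for s in upstream.split("/") if s]
    return "/" + "/".join(out)

rewritten = rewrite_path("/blog/:id/comments", "/comments/:id", "/blog/42/comments")
```

Matching and rewriting share one pass over the segments, so this stays cheap compared to regex matching.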
Regex matching is expensive.
They are hardcoded; move them to ENV.
Logging has a few issues, and that is starting to worry me:
I guess it is time to reconsider this part of Annon as a global adapter-based tracing module, which can keep working as-is (which is okay for small deployments) and gain many more features for production systems: sending data to SaaS services, to other data stores (Elasticsearch or Cassandra), and to tracing systems (DataDog APM, NewRelic APM, AppSignal or Scout).
The adapter API would be large and demanding, so not every service would be able to be integrated, but that is okay, since most of the issues above would have an approach to solving them.
Validation responses are yet to be perfect:
We need to find a nice way to render validations that have both changeset and json schema errors.
If the pcm strategy is picked, url_template should be required.
Even with an empty DB, this request currently fails for me locally:
curl --silent --request PUT --header "Content-Type: application/json" --data '{"name":"\"Somename\"","request":"{\"host\":\"%\",\"path\":\"/apis_status\",\"port\":80,\"scheme\":\"http\",\"methods\":[\"GET\"]}"}' "http://localhost:4001/apis/9C556157-2BAC-4901-8E3C-4471F0302D70"
The error is this:
{
"meta": {
"url": "http://localhost:4001/apis/9C556157-2BAC-4901-8E3C-4471F0302D70",
"type": "object",
"request_id": "7avvvl3ahin30empcu9kg7s1rjj88c2i",
"code": 422
},
"error": {
"type": "validation_failed",
"message": "Validation failed. You can find validators description at our API Manifest: http://docs.apimanifest.apiary.io/#introduction/interacting-with-api/errors.",
"invalid": [
{
"rules": [
{
"rule": null,
"params": [],
"description": "is invalid"
}
],
"entry_type": "json_data_property",
"entry": "$.request"
}
]
}
}
Respond with static JSON content.
Use cases:
Configuration is becoming a little bit messy; group related things and split it into files.
If no token is provided, an error is raised:
** (FunctionClauseError) no function clause matching in anonymous fn/1 in Annon.Plugins.Scopes.get_scopes/2
(annon_api) lib/annon_api/plugins/scopes.ex:39: anonymous fn([]) in Annon.Plugins.Scopes.get_scopes/2
(annon_api) lib/annon_api/plugins/scopes.ex:39: Annon.Plugins.Scopes.get_scopes/2
(annon_api) lib/annon_api/public_api/router.ex:1: Annon.PublicRouter.plug_builder_call/2
(annon_api) lib/plug/error_handler.ex:64: Annon.PublicRouter.call/2
(plug) lib/plug/adapters/cowboy/handler.ex:15: Plug.Adapters.Cowboy.Handler.upgrade/4
(cowboy) /opt/app/deps/cowboy/src/cowboy_protocol.erl:442: :cowboy_protocol.execute/4
The first error is for the age_groups property of borrowers, but the entry is $.buckets.[0].criterias.borrowers rather than $.buckets.[0].criterias.borrowers.age_groups. And so on.
{
"meta":{
"url":"http://os-dev-gateway.nebo15.com/tr/portfolio_subscriptions",
"type":"object",
"request_id":"l6hjqeuvfcf27eu1ldp1pbo3qrrssdg6",
"code":422
},
"error":{
"type":"validation_failed",
"message":"Validation failed. You can find validators description at our API Manifest: http://docs.apimanifest.apiary.io/#introduction/interacting-with-api/errors.",
"invalid":[
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property age_groups was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0].criterias.borrowers"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property income_groups was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0].criterias.borrowers"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property currencies was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0].criterias.loans"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property outstanding_amount_principal was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0].criterias.loans"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property apr was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0].criterias.loans"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property is_prolonged was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0].criterias.loans"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property risk_class was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property currency was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property term_to_maturity was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property term_unit was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property loans_investment_min was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property assignment_schema was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property guarantee_type was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
},
{
"rules":[
{
"rule":"required",
"params":[
],
"description":"required property buy_back_available was not present"
}
],
"entry_type":"json_data_property",
"entry":"$.buckets.[0]"
}
]
}
}
/status
{
nodes: [],
cluster_size: N,
latencies: {...},
load: {}
}
The Scopes plugin became tech debt that we need to pay off. It started as a plugin that resolves user scopes by different strategies, but ended up as a mess, with tokens in the Plug.Conn and pattern matching on them in different places.
JWT and Scopes should be merged into an Auth plugin, that is:
The cache has two adapters that don't share a common behaviour; let's fix that.
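In Elixir this would be a shared @behaviour with common callbacks; the same idea sketched in Python with an abstract base class (all names are illustrative, not Annon's actual callbacks):

```python
from abc import ABC, abstractmethod

class CacheAdapter(ABC):
    """The common contract both cache adapters would implement."""

    @abstractmethod
    def get(self, key):
        ...

    @abstractmethod
    def put(self, key, value):
        ...

class InMemoryCache(CacheAdapter):
    """One concrete adapter; a DB-backed one would implement the same calls."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

cache = InMemoryCache()
cache.put("api:health", "ok")
```

With one contract, callers can swap adapters without pattern matching on which one is in use.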
The rate limit plugin requires us to follow one of these options:
First. Store a distributed CRDT counter which is merged via the clustering protocol on an event or timing basis.
Pros:
Cons:
The number of counters grows (=NODES*CONSUMERS) with each new API client.
--
Second. Build a hash ring out of all known Annon nodes and route counter increment calls (or the whole request) to the node which is responsible for limiting it (probably via RPC calls; Kademlia DHT?).
Pros:
Cons:
--
Third. Provide only very limited rate limiting which would work either with a load balancer that supports sticky sessions (routing requests from the same consumer to the same Annon instance) or when there is only one Annon instance at all.
This is similar to option one, except we don't need to sync CRDT's.
--
Fourth. Use persistent backends.
Pros:
Cons:
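For option two, the core idea is a deterministic consumer-to-node mapping, so every instance agrees who owns a counter. A minimal Python sketch (plain modulo hashing for brevity; a real deployment would want a proper hash ring with virtual nodes so membership changes move few keys):

```python
import hashlib

def node_for(consumer_id, nodes):
    """Deterministically pick the node that owns this consumer's counter.

    Every Annon instance computes the same owner for a given consumer,
    so increments can be routed (e.g. via RPC) to a single place.
    """
    digest = hashlib.sha256(consumer_id.encode("utf-8")).hexdigest()
    return sorted(nodes)[int(digest, 16) % len(nodes)]

nodes = ["annon@10.0.0.1", "annon@10.0.0.2", "annon@10.0.0.3"]
owner = node_for("consumer-42", nodes)
```

Sorting the node list first makes the result independent of the order in which nodes were discovered.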
- Add @doc to the most important methods; expand @moduledoc's to match the full module responsibility
- Allow updating resources (via a PUT request) partially, without the need to pass full objects
- Support // in paths and relative URIs in config resolving and proxying
- Rename the ip_blacklist and ip_whitelist settings to blacklist and whitelist
- Support the X-Host-Override header
- Strip the Authorization header, but proxy the rest of the headers
- Support *.example.com domain names
- Add a strip_request_path option that should tell if we should strip the API-related path from the upstream request URI
- Add strip_request_host and proxy the HOST header by default
- Rename strip_request_path to strip_api_path
- Set the X-Forwarded-Proto header (http or https)
- Add connection_timeout and receive_timeout settings
- Add retries_limit and set it to 0 by default
- Set X-Consumer-External-ID, X-Consumer-Scopes and X-Forwarded-For
Skycluster becomes legacy; it would be better to split it into two parts:
The limit param is optional, but pagination does not work when it is not present.
Right now, if no token is found, we resort to a "scopes are a blank list" scenario and return 403.
This is incorrect.
If no token is found, we must return 401 and halt.
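The distinction, written out as a tiny decision function (Python; all names are illustrative):

```python
def auth_status(token, scopes, required_scopes):
    """401 when no credentials were presented at all; 403 when the
    presented credentials lack the required scopes."""
    if token is None:
        return 401  # unauthenticated: halt immediately
    if not set(required_scopes).issubset(scopes):
        return 403  # authenticated, but not allowed
    return 200

no_token = auth_status(None, [], ["requests:read"])
no_scope = auth_status("some-token", [], ["requests:read"])
allowed = auth_status("some-token", ["requests:read"], ["requests:read"])
```

Halting on 401 also means scope resolution never runs on anonymous requests.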
When the auth plugin is enabled, during the auth phase, when the auth server is given a token to verify, it does not only pull the token from the DB and render it; it also performs some basic sanity checks:
If either of the two fails, the response for the token will contain an error key instead of a data key. In turn, Annon must render that error back to the requester, along with the response code provided by auth (403, aka the access_denied error).
Erlang distribution is limited to 45-60 nodes. There is academic research on Erlang SD which suggests two things:
We could provide an RPC protocol with service discovery on top of Erlang.
This is far-future work, since we don't have demand for clusters of that size.
In a microservice architecture it is common to have a single shared dictionary whose entries are used by a bunch of upstream services. Examples of these common dictionaries: a list of USA states, a list of item colors available on an eCommerce website, address types, date range units and all sorts of general configuration.
To address these cases I want to propose an approach similar to the one that we use when building libraries. It is bad to hardcode configuration within a library or to expose an API that allows configuring it (either via the Application environment or via the System environment). It is much more practical to accept all options as function call arguments, offloading configuration management to the caller app.
We can do the same in a microservice environment: a downstream service may take responsibility for fulfilling an API call with all the data that is required to handle the request and response, without the need for the upstream to store state.
The gateway can store an in-memory (ETS table) cache of well-known dictionaries that is either pulled from the configuration service (when the gateway starts or the cache is missing) or pushed from it (when a configuration is changed and needs to be propagated).
A new upstream metadata plugin needs to be developed, which can be added to an API and contain settings for multiple dictionaries and rules for how to fetch them.
A configuration update or cache drop API needs to be developed so that the responsible service can reload the cache when it changes.
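A minimal Python sketch of that cache lifecycle (pull on miss, push on change, explicit drop); the class and the fetch callback are hypothetical, and Annon itself would use an ETS table rather than a dict:

```python
class DictionaryCache:
    """In-memory dictionary cache fed by a configuration service."""

    def __init__(self, fetch):
        self._fetch = fetch  # callback that pulls from the config service
        self._store = {}

    def get(self, name):
        if name not in self._store:      # miss: pull from the service
            self._store[name] = self._fetch(name)
        return self._store[name]

    def push(self, name, entries):
        self._store[name] = entries      # config changed: pushed update

    def drop(self, name):
        self._store.pop(name, None)      # cache drop API

# Hypothetical configuration-service lookup, used only for illustration.
cache = DictionaryCache(lambda name: ["AL", "AK"] if name == "usa_states" else [])
states = cache.get("usa_states")   # pulled on first access
cache.push("usa_states", ["TX"])   # propagated configuration change
updated = cache.get("usa_states")
```

Pull covers cold starts, push covers propagation, and drop lets the responsible service force a reload.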
Failure detection implementation approaches:
For an implementation options of shared failure rate counter refer to #219.
Should provide basic metrics and a UI component to render a service status page (similar to statuspage.io or status.gndf.io).
Right now a halted connection is passed to the rest of the plugins (because the logger and monitoring should be called anyway).
:os_gateway is used in one place, while everywhere else it is :gateway. The project will not build.