elixir-tesla / tesla
The flexible HTTP client library for Elixir, with support for middleware and multiple adapters.
License: MIT License
Hey! I'm wrapping a Bad API™ that needs raw +s in its querystring.
Currently I can't prevent Tesla from doing the sane thing and building a valid URI in Tesla.build_url/2 that follows the URI specification, subbing my plusses out with %2B. Nor are there options to configure this at the level of the stdlib URI.encode_query/1 that Tesla calls out to. This is a just way to handle the uncertainty and darkness of a world gone mad, and I have no objection.
Rather than laboring to support stupid crap like this, though, would you be open to making it possible to override the query (or url) encoder for a completely bespoke one, so those of us that must deal with these aberrant APIs needn't write our own Tesla from scratch?
I'm happy to contribute to the implementation of this, just wanted to sound it out first.
(For now I've copy-pasted the hackney adapter wholesale that gsubs the plusses back in.)
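Until such an override exists, a lighter-weight workaround than forking the adapter might be a small middleware that encodes the query itself and bypasses Tesla's encoding entirely. This is only a sketch; RawPlusQuery is a hypothetical module name, and it assumes env.query holds a keyword list that Tesla would otherwise encode:

```elixir
defmodule RawPlusQuery do
  # Hypothetical middleware sketch: build the querystring by hand,
  # keeping raw plusses, then clear env.query so Tesla.build_url/2
  # has nothing left to escape.
  def call(env, next, _opts) do
    qs =
      env.query
      |> Enum.map(fn {k, v} -> "#{k}=#{v}" end)  # deliberately no escaping
      |> Enum.join("&")

    env = %{env | url: env.url <> "?" <> qs, query: []}
    Tesla.run(env, next)
  end
end
```

Of course this skips escaping for everything, so it only suits values you already know are URL-safe apart from the plusses.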
I've been playing around with Tesla and I really like it. Great Work!
I am really keen on implementing a caching middleware but I'm kind of lost. By looking at the current code I don't think it's trivial.
The only solution I came up with so far is something like this (very basic).
defmodule CachingMiddleware do
  @ttl 10

  def call(env, next, _options) do
    case Cache.read(env.url) do
      nil ->
        result = Tesla.run(env, next)
        Cache.write(env.url, result.body, @ttl)
        result

      cached_body ->
        %{env | body: cached_body}
    end
  end
end
The problem here is that, depending on the cache backend, you can't serialize arbitrary Elixir data structures, so you can only cache the body, headers and status code.
The next problem is that on a cache hit the result will differ from what a cache miss returns. The only way around this is knowing about the following middlewares (tuples, json, logger) and running them manually on a hit.
I have found no way in the code to set a flag that will essentially avoid calling the adapter.
Any ideas?
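One way to sidestep both problems might be to keep the cache middleware first in the stack and cache only the plain, serializable parts of the fully processed env, so a hit and a miss return the same shape. A sketch, assuming Cache.read/1 and Cache.write/3 exist as in the snippet above:

```elixir
defmodule CachingMiddleware do
  @ttl 10

  def call(env, next, _opts) do
    case Cache.read(env.url) do
      nil ->
        # Run the rest of the stack (JSON, logger, adapter...) first,
        # then cache only plain fields that any backend can serialize.
        result = Tesla.run(env, next)
        Cache.write(env.url, {result.status, result.headers, result.body}, @ttl)
        result

      {status, headers, body} ->
        # Rebuild the env so a hit looks exactly like a miss,
        # without calling the adapter or later middlewares.
        %{env | status: status, headers: headers, body: body}
    end
  end
end
```

Because the cached value is the already-decoded result, there is no need to re-run the following middlewares on a hit.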
I am trying to make a post request following the examples, but I am facing an error that I could not find documented anywhere, nor how to solve it.
Tesla Code:
defmodule MyModule do
use Tesla, only: ~w(get post)a
plug Tesla.Middleware.BaseUrl, "localhost:8080"
plug Tesla.Middleware.JSON
adapter Tesla.Adapter.Hackney
def post_request(props) do
post("/", props)
end
end
MyModule.post_request(%{a: 1})
Issue:
The post body is not being passed through the request
Please provide an example or basic documentation on how to use Tesla.Middleware.BaseUrlFromConfig.
Consider this:
defmodule MyClient do
use Tesla
adapter Tesla.Adapter.Hackney
def client(opts) do
Tesla.build_client([
{CustomMiddleware, opts}
])
end
end
This code will use :httpc (the default adapter) as the HTTP adapter and will not take the hackney configuration into consideration.
Either I'm using it wrong or the information is lost somewhere down building the client.
In order to remove any specific JSON library dependencies, I'd like to propose a refactoring of the current implementation.
Have a look at how Plug.Parsers works. I think something similar can be done for Tesla.
Some pseudo code:
with Tesla.Parser, parsers: [:json], accepts: ["application/json"], json_module: Poison
This would end up calling a callback module (Tesla.Parsers.JSON), passing in the options and using Poison as the module to decode the response body.
Something similar could be done to handle encoding. Both poison and exjsx provide encode!/1 functions.
Similar callback modules can be created to automatically encode/decode based on the Content-Type/Accept headers.
Just a thought, what do you think?
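For comparison, an alternative shape that avoids a separate parser stack would be making the engine configurable directly on the JSON middleware. The engine and engine_opts options below are a suggestion, not existing Tesla API at the time of writing:

```elixir
# Suggested usage: pick the JSON library per client instead of
# hard-coding a dependency inside Tesla itself.
plug Tesla.Middleware.JSON, engine: Poison, engine_opts: [keys: :atoms]

# Or with exjsx as the engine:
plug Tesla.Middleware.JSON, engine: JSX
```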
When using Tesla and doctests like this:
defmodule Foo do
use Tesla
end
defmodule FooTest do
use ExUnit.Case
doctest Foo
end
I ran into the following:
expected non-blank line to follow iex> prompt
Could we publish the elixir-1.5 warnings fix to hex?
Thanks for the library!
Hello, thanks for this great library.
It seems to me like there's a convention in Elixir for normal functions to return output as a tuple, {:ok, value} or {:error, error}, and for functions ending in ! to return only the value or raise an error.
Would it be difficult to make Tesla work like this?
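To illustrate, this is the behaviour the convention would imply. A sketch of the desired API, not what Tesla returns today (MyApi and the URLs are placeholders):

```elixir
# Desired: plain functions return tagged tuples...
{:ok, %Tesla.Env{status: 200}} = MyApi.get("/users")
{:error, reason} = MyApi.get("/unreachable")

# ...while bang variants return the env directly, or raise.
%Tesla.Env{status: 200} = MyApi.get!("/users")
```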
Hi,
I have a module that uses Tesla and its Tuples middleware.
use Tesla
plug Tesla.Middleware.Tuples
In that module, Tesla functions are used in the following way:
login_url
|> post(%{username: username, password: password, format: "json"})
|> extract_body
The spec of extract_body expects a tuple with the first element :ok or :error. However, when I run mix dialyzer, it gives the following error:
The pattern {'ok', _response@1} can never match the type
#{
'__client__':=fun(),
'__module__':=atom(),
'__struct__':='Elixir.Tesla.Env',
'body':=_,
'headers':=#{binary()=>binary()},
'method':='delete' | 'get' | 'head' | 'options' | 'patch' | 'post' | 'put' | 'trace',
'opts':=[any()],
'query':=[{_,_}],
'status':=integer(),
'url':=binary()
}
In order for dialyzer to detect that tuples will be returned, do I need to do something additional in the spec or code? What am I missing here?
This may be a feature and not a bug, but it is not what I was expecting. If I have time I will try to dig in and figure out the problem, if there is one. In the meantime I switched my app to just use the encode middleware and handle decoding manually.
By the way, this is an absolutely awesome elixir app 👍
defmodule MyApp.Mixfile do
use Mix.Project
def project do
[
app: :my_app,
version: "0.1.0",
elixir: "~> 1.5",
start_permanent: Mix.env == :prod,
deps: deps()
]
end
def application do
[
extra_applications: [:logger]
]
end
defp deps do
[{:tesla, "~> 0.9.0"},
{:poison, ">= 1.0.0"}]
end
end
use Mix.Config
config :tesla, adapter: :mock
defmodule MyApp do
use Tesla
plug Tesla.Middleware.BaseUrl, "http://example.com"
plug Tesla.Middleware.JSON
end
defmodule MyAppTest do
use ExUnit.Case
setup do
Tesla.Mock.mock fn
%{method: :get, url: "http://example.com/json"} ->
%Tesla.Env{status: 200, body: "[{}]"}
end
:ok
end
test "mock adapter calls middleware to decode json" do
assert %Tesla.Env{} = env = MyApp.get("/json")
assert env.status == 200
assert env.body == [{}]
end
end
1) test mock adapter calls middleware to decode json (MyAppTest)
test/my_app_test.exs:14
Assertion with == failed
code: assert env.body() == [{}]
left: "[{}]"
right: [{}]
stacktrace:
test/problem_test.exs:17: (test)
Finished in 0.05 seconds
1 test, 1 failure
I am trying to make GraphQL requests with Tesla, which requires a body on a GET request. I've tried reading the docs and adapters and calling code I believe should work, but sadly to no avail.
Could you please provide an example of this?
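In case it helps, the generic request function looked like the most promising route to me, since the per-method helpers don't take a body for GET. A sketch, assuming the request/1 function that use Tesla generates accepts method, url and body options together; whether the body is actually sent on a GET then depends on the adapter:

```elixir
# Hypothetical GraphQL query against a placeholder endpoint.
query = ~s({ viewer { login } })

MyClient.request(
  method: :get,
  url: "/graphql",
  body: query
)
```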
First, thanks for your great library ; it's a great pleasure to use 😄!
Today I tried to rely on the DecodeRels middleware while working on a paginated ETL, but it failed to decode the Link header.
I had a closer look, and I believe the culprit is that httpc always returns lower-case headers for me (e.g. link), while the server actually sends Link and the middleware expects Link (code).
Example with curl (with other headers):
$ curl -I http://httpbin.org/ip
# SNIP
Content-Type: application/json
Content-Length: 33
And with httpc:
iex(3)> :httpc.request(:get, {"http://httpbin.org/ip" |> to_char_list ,[]}, [], [])
{:ok,
{{'HTTP/1.1', 200, 'OK'},
[{'connection', 'keep-alive'}, {'date', 'Fri, 09 Sep 2016 09:18:42 GMT'},
{'server', 'nginx'}, {'content-length', '33'},
{'content-type', 'application/json'},
# SNIP
'{\n "origin": "80.215.228.205"\n}\n'}}
I haven't tested other adapters yet.
I presume that being case-insensitive on the Link look-up in the middleware could be the way to go as a first step. Another possibility would be to allow configuring the key name (but this feels more bloated).
What do you think?
Thanks!
Using HTTP persistent connections is a huge speed improvement when making a lot of API calls.
Hackney does support it, but Tesla currently does not. A possible solution would be to extend the hackney adapter so that an optional reference to a connection can be given; instead of making a new connection, the given connection would be reused.
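For reference, hackney's connection pools already provide keep-alive reuse. If the adapter passed the option through, usage could look like the sketch below; the pool API is real hackney, but the opts plumbing through Tesla is the missing piece, and the pool name and URL are placeholders:

```elixir
# Start a named pool of persistent connections (hackney API).
:ok = :hackney_pool.start_pool(:api_pool, timeout: 15_000, max_connections: 50)

# If Tesla forwarded adapter options, requests could then reuse
# connections from that pool instead of reconnecting each time:
Tesla.get("https://api.example.com/users", opts: [pool: :api_pool])
```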
I believe I have found an issue whereby Tesla.Middleware.Tuples does not work if a dynamic client is used. Does this possibly have something to do with the need for Tuples to be the first middleware listed?
In this instance, I had to prepend the dynamic part with Tuples for it to work.
I needed to do this to get it fully working:
plug Tesla.Middleware.Tuples
plug Tesla.Middleware.Headers, %{"User-Agent" => "MyConn", "Content-Type" => "application/json"}
def client(base_url) do
Tesla.build_client [
{Tesla.Middleware.Tuples, nil},
{Tesla.Middleware.BaseUrl, base_url}
]
end
The Hackney adapter allows options to be passed directly to the underlying request function. However, one option that will not work is with_body: true, which returns the body directly and is needed for the max_body option.
The reason it won't work is that Tesla always tries to read the body from a reference: passing with_body: true as an option to Tesla using Hackney will crash with a :req_not_found adapter error.
I believe it might be sufficient to use is_reference/1 as a guard to know whether the body needs to be read or not.
After upgrading my application from Tesla 0.7.2 to 0.9.0, I started seeing a warning when running Dialyzer.
Given the following code:
@spec client(String.t) :: Tesla.Env.client
def client(api_token) do
encoded_token = Base.encode64("#{api_token}:api_token")
Tesla.build_client [
{Tesla.Middleware.Headers, %{"Authorization" => "Basic #{encoded_token}"}}
]
end
I began seeing this warning:
Invalid type specification for function 'Elixir.InvoiceTracker.TimeTracker':client/1. The success typing is (_) -> #{'__struct__':='Elixir.Tesla.Client', 'fun':='nil', 'post':=[any()], 'pre':=[any()]}
I also tried with Tesla.Client.t instead of Tesla.Env.client, without success.
I was able to work around the error by changing my typespec to the less strict:
@spec client(String.t) :: %Tesla.Client{}
I'm not sure what the root cause of this issue is or how to fix it. If someone can give me some idea of what might be wrong here, I'd be happy to submit a PR with a fix.
They give a warning about undefined modules during compilation.
I'm having an issue where no middleware is ever used. It's probably something I'm doing wrong, but I can't figure out why. Here's the module where I'm using Tesla:
defmodule Trello do
use Tesla
plug Tesla.Middleware.BaseUrl, "https://api.trello.com/1"
plug Tesla.Middleware.Headers, %{"Content-Type" => "application/json"}
plug Tesla.Middleware.Tuples
plug Tesla.Middleware.JSON
adapter Tesla.Adapter.Httpc
alias Trello.Card
def create(:card, card = %Card{}) do
Tesla.post("/lists/#{card.list_id}/cards", Poison.encode!(%{name: card.name}))
|> IO.inspect
end
end
If I run this, I get ** (Tesla.Error) adapter error: :no_scheme because the BaseUrl middleware is ignored.
In trying to debug, I can see that the plug macro definition is being called and that the module attribute @__middleware__ is being populated correctly. However, when fetching the attribute in the __before_compile__ callback it is empty. I believe this is because the attribute is only populated after compilation (from what I could understand), but I'm not sure what to do about it.
I'm using elixir version 1.4.2
and erlang/OTP 19.
This works great on my local Ubuntu 16.04.2. It crashes on my server 16.04.
Here is what I see if I open remote_console on server.
iex([email protected])5> resp = Tesla.get "https://google.com"
** (UndefinedFunctionError) function :httpc.request/4 is undefined (module :httpc is not available)
:httpc.request(:get, {'https://google.com', []}, [autoredirect: false], [])
(tesla) lib/tesla/adapter/httpc.ex:29: Tesla.Adapter.Httpc.request/2
(tesla) lib/tesla/adapter/httpc.ex:16: Tesla.Adapter.Httpc.call/2
(tesla) lib/tesla/middleware/core.ex:5: Tesla.Middleware.Normalize.call/3
Example:
# a little change to the adapter
defmodule Tesla.Adapter.Hackney do
  def call(env, opts) do
    with {:ok, status, headers, body, location} <- request(env, opts || []) do
      %{env | status: status,
              headers: headers,
              body: body,
              private: %{final_location: location}}
    end
  end

  defp handle({:ok, status, headers, ref}) do
    # :hackney.location/1 returns the final URL after redirects.
    with location when is_binary(location) <- :hackney.location(ref),
         {:ok, body} <- :hackney.body(ref) do
      {:ok, status, headers, body, location}
    end
  end
end
#usage
iex(1)> Tesla.get("https://github.com/teamon/tesla", opts: [follow_redirect: true])
%Tesla.Env{
url: "https://github.com/teamon/tesla",
private: %{final_location: "https://github.com/teamon/tesla"}
}
iex(2)> Tesla.get("https://goo.gl/8hfJ7F", opts: [follow_redirect: true])
%Tesla.Env{
url: "https://goo.gl/8hfJ7F",
private: %{final_location: "https://github.com/teamon/tesla"}
}
When using tesla in another project and running with MIX_ENV=prod mix run, it causes:
** (UndefinedFunctionError) undefined function: Tesla.Adapter.Ibrowse.start/0 (module Tesla.Adapter.Ibrowse is not available)
The line causing Tesla.Adapter.Ibrowse to not be compiled is https://github.com/monterail/tesla/blob/master/lib/tesla/adapter/ibrowse.ex#L1
Quoting http://elixir-lang.org/docs/v1.0/elixir/Code.html#ensure_loaded/1:
Elixir also contains an ensure_compiled/1 function that is a superset of ensure_loaded/1.
Since Elixir’s compilation happens in parallel, in some situations you may need to use a module that was not yet compiled, therefore it can’t even be loaded.
Given that, I wonder if we really need this check at all. Compiling Tesla.Adapter.Ibrowse will not cause any error even when ibrowse is not specified as a dependency.
The Access docs instruct us that, unless we are creating a container type, we shouldn't implement the behaviour for structs. This is because structs are meant to define things with static properties and the Access protocol is intended to access things with dynamic properties.
However, since a Tesla.Env is often a gateway to a dynamic JSON response, it'd be kind of convenient to use Access on it, in particular the derived get_in stuff, i.e.:
response = Some.API.Client.build
|> Some.API.Call.get("/widgets.json")
value = get_in(response.body, ["json", "deep", "value"])
# could be
value = Some.API.Client.build
|> Some.API.Call.get("/")
|> get_in(["body", "json", "deep", "value"])
On the other hand, pretty much only a JSON body, and possibly headers, would benefit from being accessed this way, so we have to ask ourselves if this use-case is common enough, and the added convenience substantial enough, to merit implementing Access.
For example to make an additional HTTP request from within the middleware.
I'm working on a client talking to the Vimeo API. They are utilising Content-Type in a rather dynamic way, see https://developer.vimeo.com/api/spec#version
I.e. it might be application/vnd.vimeo.video+json, but could also become application/vnd.vimeo.error+json when requests are malformed, etc.
Currently the Tesla JSON middleware will refuse to parse response bodies as JSON when the content-type isn't application/json or text/javascript.
A means to pass the intended types to the middleware plug would be helpful in such a case.
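Something along these lines would cover the Vimeo case; decode_content_types is a proposed option name, not necessarily the final API:

```elixir
# Extend the set of content types the middleware treats as JSON.
plug Tesla.Middleware.JSON,
  decode_content_types: [
    "application/vnd.vimeo.video+json",
    "application/vnd.vimeo.error+json"
  ]
```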
I'd like to start a wishlist for 1.0 release. It should include any last breaking changes before moving into proper semver for future releases.
Please comment if you'd like tesla to have (or drop) something in the near future.
Here are mine:
- init/1 - compile-time callback for middlewares
- :ok/:error tuples by default - add get! versions
I noticed that httpc methods were raising undef errors in a release.
Consequently, I was forced to add :inets explicitly to the applications in my mix.exs, like so.
def application do
[mod: {MyModule, []},
applications: [:inets, :tesla]]
end
Unless I'm missing something, it looks like tesla should either include :httpc as a dependency so it is resolved when building releases, or note the issue during deployment.
Hi!
I am trying to use Tesla and so far so good, I can authenticate an api (Google DCM). The authentication part works with the FormUrlEncoded plug, but all of the actual API calls are json responses so I would like to use the JSON plug. However if I enable it, the initial auth call (which performs a JWT token request to Google) does not work when the JSON middleware is enabled. Is there something like
plug Tesla.Middleware.JSON, only: [:hello]
plug Tesla.Middleware.FormUrlencoded, only: [:auth]
That I can use? Sorry, I am really novice in all of this.
I have an API endpoint that requires TLS 1.2. Right now any request I make to that endpoint fails with:
** (Tesla.Error) adapter error: {:failed_connect, [{:to_address, {'test-api.tokenex.com', 443}}, {:inet, [:inet], {:tls_alert, 'handshake failure'}}]}
Elixir 1.3.4
Erlang/OTP 19
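For what it's worth, httpc itself can be told to use TLS 1.2 via its ssl option; the open question is how to pass that through Tesla. The raw call below uses real Erlang/OTP API (and requires the :inets and :ssl applications to be started), with the endpoint taken from the error above:

```elixir
# Plain httpc: force TLS 1.2 for the handshake.
:httpc.request(
  :get,
  {'https://test-api.tokenex.com/', []},
  [ssl: [versions: [:"tlsv1.2"]]],
  []
)
```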
Currently it is possible to provide raw Elixir data for the body parameter used by the mock adapter. This only works in some cases, and it is not feasible to try to support every edge case of this feature. It would be nice if this feature went away, with a policy requiring you to set the content type header for a mock put in its place. This would eliminate the complexity that comes with the raw Elixir body parameter and make using the mock adapter a lot less confusing.
Desired behavior:
data = [%{foo: :bar}]
Tesla.Mock.mock fn
  env -> %Tesla.Env{status: 200, body: Poison.encode!(data)}
end
The above would result in an exception or warning...
"No content type header provided to Tesla.Mock for request ..."
Likewise, passing in elixir data for the body parameter would result in exception or warning.
data = [%{foo: :bar}]
Tesla.Mock.mock fn
env -> %Tesla.Env{status: 200, body: data}
end
"body parameter for Tesla.Mock must be a bitstring ..."
And make it configurable via config
When updating to a new version of Tesla, it would be nice to have a CHANGELOG.md.
In my case, I would like to see if anything I use is broken or if there are new features I care about.
What do the maintainers think about having a changelog in the repo?
The only example from the documentation works:
Tesla.post("http://httpbin.org/post", "data", headers: %{"Content-Type" => "application/json"})
None of the following work:
Tesla.post("http://httpbin.org/post", %{key: "value"})
Tesla.post("http://httpbin.org/get", query: [a: 1, b: "foo"])
Isn't it kind of counterintuitive?
Matter of fact, I've looked through the code but couldn't find any other way to perform a post request with parameters.
error msg:
cannot import Tesla.Builder.with/1 because it conflicts with Elixir special forms
e.g. for hackney:
{:file, path, {"form-data", [{"name", "attachments[]"}, {"filename", filename}]}, []},
see multipart branch for more
At the moment it's a bit hard to update Tesla because the repository doesn't have a changelog. To upgrade, it's necessary to go and look at all the commits between releases, which is tedious, time consuming and error prone.
Would it be possible to add a changelog file with all the previous releases and update it with any new release?
I am building a client that has HEAD methods which return empty bodies. The "" body should not pass as decodable here.
When I change it to def decodable_body?(env), do: (is_binary(env.body) && env.body != "") || is_list(env.body), it works as expected.
I would propose a PR but it is really a small change...
Thanks in advance for your work! If possible, could you release a new version with this fix?
get() is a public method added by use Tesla. What's the best way to prevent users from directly calling MyApi.get, and force them to use only the methods I expose, like MyApi.widgets?
Example use cases: required API token, rate limiting middleware.
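One workaround that needs no Tesla changes is to keep the Tesla-using module out of the public surface and expose only a thin facade. A sketch with illustrative module names and URL:

```elixir
defmodule MyApi do
  # Public surface: only the calls you want users to make.
  def widgets, do: MyApi.Internal.get("/widgets")

  defmodule Internal do
    @moduledoc false
    # Internal Tesla client; nothing stops a determined caller,
    # but the generated get/post helpers are no longer part of
    # the documented public API.
    use Tesla
    plug Tesla.Middleware.BaseUrl, "https://api.example.com"
  end
end
```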
Version 1.7 is now released.
I am using Tesla to reach many servers some of which I expect to be down. My Tesla related code is generated by swagger codegen. I was expecting from the timeout middleware to return some sort of error such as {:error, :unavailable} or {:error, :timeout} or something similar. Instead, the calling process gets a kill signal which makes it very annoying to work around (especially so when generated from swagger).
The Erlang philosophy is "let it crash" not "make it crash". The middleware needs to let the caller decide whether he should crash or not. Typically you want to handle expected cases (that includes expected errors) and to let it crash when something unexpected happens.
I will submit a PR later. How far back does Tesla support Elixir? Is supporting Elixir >= 1.3 enough?
I'm currently implementing a wrapper for an external API which serves error objects describing any errors that occur on their part. I've written a middleware which checks for this errors and raises a custom error struct containing the error infos from the response.
Since I want to respond with tuples instead of raising errors, I considered using the Tuples middleware, but it only catches Tesla.Errors. It would be great if I could pass my own errors for the middleware to rescue from:
plug Tesla.Middleware.Tuples, rescue_errors: [My.Error]
Hi,
I really like the library so far, but I have one issue regarding multiple headers with the same key.
A lot of web servers/frameworks, when returning multiple cookies, send multiple Set-Cookie headers in the response, but because of this piece of code (a map is used instead of a keyword list) the headers get overridden.
Is there a workaround for this (I'm using the hackney adapter)?
I would be glad to fix this, just point me in the right direction :)
Hello again! You may remember me from such issues as #122. 😄
The project I'm working on wraps several APIs. This means that I'm __using__ Tesla in a few places to construct several API client modules with different behaviours.
Each API client is associated with several modules that actually perform the API calls––the surface of these APIs is very wide and benefits from being split up into conceptual buckets.
It would be nice to have my API-client-calling modules type-check that you are using the correct client for the given module. The cleanest and most idiomatic way to do this would be to pattern match on the client struct type, but all Tesla-built clients are Tesla.Client structs. It would be helpful if I could customize the struct for any given call to use Tesla.
I envision providing an extra option to the using call, i.e. use Tesla, client: My.API.Client. If this parameter is provided, it would define a new struct and thread it throughout all pattern matching on its type in Tesla macros. If it is not provided, it would thread the existing Tesla.Client module reference through instead, keeping everything backwards compatible.
This seems possible based on the current implementation of Tesla: all pattern matching on the module name of the client struct lives within macro calls that could be meta-programmed or parameterized. The few other places it is explicitly referenced could be adjusted for; I have some ideas on how to manage that. With your blessing I would be happy to pursue this further and submit a PR.
The places we are currently matching on it are within the builder:
defp generate_api(method, docs) when method in [:post, :put, :patch]
defp generate_api(method, docs) when method in [:head, :get, :delete, :trace, :options]
I just ran into a problem trying to connect to an https server using Tesla's default adapter.
When running my app normally from the command line, I'd get the response ** (Tesla.Error) adapter error: :econnrefused. If I loaded the application into iex and ran it from there, it worked fine.
After much digging, I found https://github.com/teamon/tesla/blob/master/lib/tesla/adapter/httpc.ex#L57, which is essentially hiding the true cause of the failure. After temporarily commenting out that line, I got the following error message: ** (Tesla.Error) adapter error: {:failed_connect, [{:to_address, {'www.toggl.com', 443}}, {:inet, [:inet], :ssl_not_started}]}.
After some Googling, I found that if I added :ssl to the extra_applications list in my mix.exs file, everything worked fine.
Is there a way for the httpc adaptor to take care of this automatically? Or is a doc PR in order? I'm happy to take a crack at a PR for this, but I'd need some direction as to how you'd like it solved.
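For anyone hitting the same thing, this is the change that fixed it for me, assuming an Elixir 1.4+ mix.exs (extra_applications replaced applications there):

```elixir
# mix.exs: make sure the ssl and inets OTP applications are
# started in releases and escripts, not just under iex/mix.
def application do
  [extra_applications: [:logger, :inets, :ssl]]
end
```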
Thanks for a great library! I'm really liking it so far!