Req v0.5 released

Req v0.5.0 brings testing enhancements, error standardization, %Req.Response.Async{}, and more improvements and bug fixes.

Testing Enhancements

In previous releases, we could only create test stubs (using Req.Test.stub/2), that is, fake HTTP servers with predefined behaviour. Let’s say we’re integrating with a third-party weather service; we might create a stub for it like below:

Req.Test.stub(MyApp.Weather, fn conn ->
  Req.Test.json(conn, %{"celsius" => 25.0})
end)
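
We exercise the stub by routing requests to it with the plug option (a quick sketch; the URL is made up, since the request never actually leaves the machine):

Req.get!("https://weather.example.com", plug: {Req.Test, MyApp.Weather}).body
#=> %{"celsius" => 25.0}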

Anytime we hit this fake, we’ll get the same result. This works extremely well for simple integrations; however, it’s not quite enough for more complicated ones. Imagine we’re using something like AWS S3 and we want to test uploading some data and reading it back. While we could do this:

Req.Test.stub(MyApp.S3, fn
  conn when conn.method == "PUT" ->
    # ...

  conn when conn.method == "GET" ->
    # ...
end)

making the test just a little bit more thorough makes it MUCH more complicated. For example: the first GET request should return a 404, we then make a PUT, and now a GET should return a 200. We could solve this by adding some state to our test (e.g. an agent), but there is a simpler way: set request expectations using the new Req.Test.expect/3 function:

Req.Test.expect(MyApp.S3, fn conn when conn.method == "GET" ->
  Plug.Conn.send_resp(conn, 404, "not found")
end)

Req.Test.expect(MyApp.S3, fn conn when conn.method == "PUT" ->
  {:ok, body, conn} = Plug.Conn.read_body(conn)
  assert body == "foo"
  Plug.Conn.send_resp(conn, 200, "")
end)

Req.Test.expect(MyApp.S3, fn conn when conn.method == "GET" ->
  Plug.Conn.send_resp(conn, 200, "foo")
end)

The important part is that request expectations run in order (and the test fails if they don’t).
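
For instance, exercising the expectations above in order might look like this (a sketch; the bucket URL is hypothetical, and plug: {Req.Test, MyApp.S3} routes the requests to the fake):

plug = {Req.Test, MyApp.S3}

Req.get!("https://s3.example.com/bucket/key", plug: plug).status
#=> 404
Req.put!("https://s3.example.com/bucket/key", plug: plug, body: "foo").status
#=> 200
Req.get!("https://s3.example.com/bucket/key", plug: plug).body
#=> "foo"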

In this release we’re also adding Req.Test.transport_error/2, a way to simulate network errors.

Here is another example using both of the new features. Let’s simulate a server that is having issues: on the first request it doesn’t respond, and on the following two requests it returns an HTTP 500. Only on the fourth request does it return an HTTP 200. Req by default automatically retries transient errors (using the retry step), so it will make multiple requests, exercising all of our request expectations:

iex> Req.Test.expect(MyApp.S3, &Req.Test.transport_error(&1, :econnrefused))
iex> Req.Test.expect(MyApp.S3, 2, &Plug.Conn.send_resp(&1, 500, "internal server error"))
iex> Req.Test.expect(MyApp.S3, &Plug.Conn.send_resp(&1, 200, "ok"))
iex> Req.get!(plug: {Req.Test, MyApp.S3}).body
# 15:57:06.309 [error] retry: got exception, will retry in 1000ms, 3 attempts left
# 15:57:06.309 [error] ** (Req.TransportError) connection refused
# 15:57:07.310 [error] retry: got response with status 500, will retry in 2000ms, 2 attempts left
# 15:57:09.311 [error] retry: got response with status 500, will retry in 4000ms, 1 attempt left
"ok"

Finally, for parity with Mox, we add functions for setting the ownership mode and for verifying expectations.
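
In a typical ExUnit module this might look like the sketch below (it assumes the Mox-style names set_req_test_from_context and verify_on_exit! on Req.Test; check the Req.Test docs for the exact functions):

defmodule MyApp.WeatherTest do
  use ExUnit.Case, async: true

  import Req.Test

  # pick private vs shared ownership mode based on the :async test context
  setup :set_req_test_from_context
  # fail the test if an expectation set with Req.Test.expect/3 was not met
  setup :verify_on_exit!

  test "reads the temperature" do
    Req.Test.expect(MyApp.Weather, &Req.Test.json(&1, %{"celsius" => 25.0}))

    assert Req.get!(plug: {Req.Test, MyApp.Weather}).body == %{"celsius" => 25.0}
  end
end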

Thanks to Andrea Leopardi for driving the testing improvements.

Standardized Errors

In previous releases, when using the default adapter, Finch, Req could return these exceptions on network/protocol errors: Mint.TransportError, Mint.HTTPError, and Finch.Error. They have now been standardized into Req.TransportError and Req.HTTPError for a more consistent experience. In fact, this standardization was a prerequisite for adding Req.Test.transport_error/2!
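
As a quick illustration, code that previously matched on Mint.TransportError can now match on the standardized struct (a minimal sketch; the URL is hypothetical and retries are disabled so the error surfaces immediately):

case Req.get("http://localhost:9999", retry: false) do
  {:ok, resp} ->
    resp.status

  {:error, %Req.TransportError{reason: reason}} ->
    # e.g. :econnrefused when nothing is listening on that port
    {:network_error, reason}
end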

Two additional exception structs have been added: Req.ArchiveError for zip/tar/etc. errors in decode_body and Req.DecompressError for gzip/br/zstd/etc. errors in decompress_body. Additionally, decode_body now returns Jason.DecodeError instead of raising it.

%Req.Response.Async{}

In previous releases we added the ability to stream response body chunks into the current process mailbox using the into: :self option. When that option is used, response.body is now set to a Req.Response.Async struct, which implements the Enumerable protocol.

Here’s a quick example:

resp = Req.get!("http://httpbin.org/stream/2", into: :self)
resp.body
#=> #Req.Response.Async<...>
Enum.each(resp.body, &IO.puts/1)
# {"url": "http://httpbin.org/stream/2", ..., "id": 0}
# {"url": "http://httpbin.org/stream/2", ..., "id": 1}

Here is another example where we use Req to talk to two different servers. The first server produces some test data: the strings "foo", "bar", and "baz". The second one is an “echo” server: it simply responds with the request body it received. We then stream data from one server, transform it, and stream it to the other:

Mix.install([
  {:req, "~> 0.5"},
  {:bandit, "~> 1.0"}
])

{:ok, _} =
  Bandit.start_link(
    scheme: :http,
    port: 4000,
    plug: fn conn, _ ->
      conn = Plug.Conn.send_chunked(conn, 200)
      {:ok, conn} = Plug.Conn.chunk(conn, "foo")
      {:ok, conn} = Plug.Conn.chunk(conn, "bar")
      {:ok, conn} = Plug.Conn.chunk(conn, "baz")
      conn
    end
  )

{:ok, _} =
  Bandit.start_link(
    scheme: :http,
    port: 4001,
    plug: fn conn, _ ->
      {:ok, body, conn} = Plug.Conn.read_body(conn)
      Plug.Conn.send_resp(conn, 200, body)
    end
  )

resp = Req.get!("http://localhost:4000", into: :self)
stream = resp.body |> Stream.with_index() |> Stream.map(fn {data, idx} -> "[#{idx}]#{data}" end)
Req.put!("http://localhost:4001", body: stream).body
#=> "[0]foo[1]bar[2]baz"

Req.Response.Async is an experimental feature which may change in the future.

The existing caveats to into: :self still apply, that is:

  • If the request is sent using HTTP/1, an extra process is spawned to consume messages from the underlying socket.

  • On both HTTP/1 and HTTP/2 the messages are sent to the current process as soon as they arrive, as a firehose with no backpressure.

If you wish to maximize request rate or have more control over how messages are streamed, use into: fun or into: collectable instead.
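
For example, a callback passed via into: fun receives each chunk as it arrives and decides whether to keep streaming (a sketch based on the into: fun contract):

resp =
  Req.get!("http://httpbin.org/stream/2",
    into: fn {:data, data}, {req, resp} ->
      # handle the chunk here; return {:halt, acc} to stop streaming early
      IO.puts(data)
      {:cont, {req, resp}}
    end
  )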

See the full v0.5.0 changelog for more information. Happy hacking!