
mox's Introduction

Mox


Mox is a library for defining concurrent mocks in Elixir.

The library follows the principles outlined in "Mocks and explicit contracts", summarized below:

  1. No ad-hoc mocks. You can only create mocks based on behaviours

  2. No dynamic generation of modules during tests. Mocks are preferably defined in your test_helper.exs or in a setup_all block and not per test

  3. Concurrency support. Tests using the same mock can still use async: true

  4. Rely on pattern matching and function clauses for asserting on the input instead of complex expectation rules

The goal behind Mox is to help you think about and define the contract between the different parts of your application. In the opinion of the Mox maintainers, as long as you follow those guidelines and keep your tests concurrent, any library for mocks may be used (or, in certain cases, you may not even need one).

See the documentation for more information.

Installation

Just add mox to your list of dependencies in mix.exs:

def deps do
  [
    {:mox, "~> 1.0", only: :test}
  ]
end

Mox should be automatically started unless the :applications key is set inside def application in your mix.exs. In such cases, you need to remove the :applications key in favor of :extra_applications or call Application.ensure_all_started(:mox) in your test/test_helper.exs.
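For example, if you do need to start Mox manually, the top of test/test_helper.exs could look like this:

# test/test_helper.exs
Application.ensure_all_started(:mox)

ExUnit.start()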

Basic Usage

1) Add behaviour, defining the contract

# lib/weather_behaviour.ex
defmodule WeatherBehaviour do
  @callback get_weather(binary()) :: {:ok, map()} | {:error, binary()}
end

2) Add implementation for the behaviour

# lib/weather_impl.ex
defmodule WeatherImpl do
  @moduledoc """
  An implementation of a WeatherBehaviour
  """

  @behaviour WeatherBehaviour

  @impl WeatherBehaviour
  def get_weather(city) when is_binary(city) do
    # Here you could call an external API directly with an HTTP client or use a
    # third-party library that does that work for you. In this example we send a
    # request using `:httpc` to get back some HTML, which we can process later.

    :inets.start()
    :ssl.start()

    case :httpc.request(:get, {"https://www.google.com/search?q=weather+#{city}", []}, [], []) do
      {:ok, {_, _, html_content}} -> {:ok, %{body: html_content}}
      error -> {:error, "Error getting weather: #{inspect(error)}"}
    end
  end
end

3) Add a switch

The switch can pull from your config/config.exs or config/test.exs, or, as shown below, you can skip the config entirely and rely on a default. We also add a function to a higher-level abstraction that calls the correct implementation:

# bound.ex, the main context we chose to call this function from
defmodule Bound do
  def get_weather(city) do
    weather_impl().get_weather(city)
  end

  defp weather_impl() do
    Application.get_env(:bound, :weather, WeatherImpl)
  end
end

4) Define the mock so it is used during tests

# In your test/test_helper.exs
Mox.defmock(WeatherBehaviourMock, for: WeatherBehaviour) # <- Add this
Application.put_env(:bound, :weather, WeatherBehaviourMock) # <- Add this

ExUnit.start()

5) Create a test and use expect to assert on the mock arguments

# test/bound_test.exs
defmodule BoundTest do
  use ExUnit.Case

  import Mox

  setup :verify_on_exit!

  describe "get_weather/1" do
    test "fetches weather based on a location" do
      expect(WeatherBehaviourMock, :get_weather, fn args ->
        # here we can assert on the arguments that get passed to the function
        assert args == "Chicago"

        # here we decide what the mock returns
        {:ok, %{body: "Some html with weather data"}}
      end)

      assert {:ok, _} = Bound.get_weather("Chicago")
    end
  end
end

Enforcing consistency with behaviour typespecs

Hammox is an enhanced version of Mox which automatically makes sure that calls to mocks match the typespecs defined in the behaviour. If you find this useful, see the project homepage.

License

Copyright 2017 Plataformatec
Copyright 2020 Dashbit

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


mox's Issues

Usage with acceptance/feature tests

👋

Hello again, wonderful people making the lives of developers everywhere nicer 💚

So, it turns out stub_with/2 sadly wasn't the solution to my problem as described in #76 (I fixed my doc mistake in #80). As #41 points out, mox is designed to be explicit per process, so just calling stub_with/2 in test_helper.exs with the intention of setting a global fallback doesn't work.

This is a challenge when wanting to work with acceptance/feature testing libraries like wallaby/hound (and still execute tests in parallel), as we can't set the explicit allowances.

I see two options to solve this (unless there's something easy that I've overlooked, in which case you'll tell me and I'll think "WHY DIDN'T I SEE THIS"):

  1. reopen #76, new API suggestion defmock(mock_name, for: behaviour, fallback_to: implementation) (or stub_with: impl?)
  2. do something similar to the SQL.Sandbox plug in phoenix_ecto

While 2 arguably sticks more closely to the general concept of explicit allowances, it requires a bigger integration (most likely wallaby/hound would need to be made aware of the new metadata they are passed, unless there's already a good option to pass these to them) and requires writing the plug, etc.

Hence, my favorite would be 1 for now, as it seems easier to do and somewhat generally useful; but I'm not a maintainer and I've never done something like 2, so it might be easier than I believe ;) Also, having a "global fallback" is an OK option IMO, as opposed to the SQL Sandbox, where just running in global mode defeats the purpose.

Thanks, looking forward to your feedback and naturally happy to help with the implementation! 💚

edit: one thing I forgot: with 1 it's impossible to add expectations to the running web server process, while with 2 it should be possible. It's not functionality I want, so it might still be worth the trade-off.


Apply calls to functions with matching arguments and in the provided order

Issue #4 already asked whether something like this should be supported:

CalcMock
|> expect(:add, fn(1, 1) -> 2 end)
|> expect(:add, fn(2, 2) -> 4 end)
CalcMock.add(1, 1)
CalcMock.add(2, 2)

The issue provides a workaround for this via pattern matching on the arguments. This may work fine in some cases, but in my scenario I also have to consider the order of the calls. To stick with this simple example, I want to verify that the first call to add passes 1, 1 and the second one 2, 2.
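For what it's worth, Mox already consumes multiple expectations for the same function in the order they were defined, so pattern matching in each clause also pins down the call order. A sketch using the CalcMock from the snippet above:

CalcMock
|> expect(:add, fn 1, 1 -> 2 end)
|> expect(:add, fn 2, 2 -> 4 end)

CalcMock.add(1, 1) # matches the first expectation
CalcMock.add(2, 2) # matches the second expectation

# Calling add(2, 2) first would hit the fn 1, 1 -> 2 end expectation and raise
# a FunctionClauseError, so out-of-order calls fail the test.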

Warnings "module Spinnaker.ClientMock is not available"

I use Mox like this

defmodule Foo do
  @client Application.get_env(:app, :some_client)

  def bar do
    @client.get()
  end
end

defmodule Some.Client do
  def get do
    # ...
  end
end

# config.exs
config :foo, some_client: Some.Client

# test.exs
config :foo, some_client: Spinnaker.ClientMock

# test_helper.exs
Mox.defmock(Some.ClientMock, for: Some.Client)

# foo_test.exs
defmodule FooTest do
  alias Some.ClientMock
  import Mox
  setup :verify_on_exit!

  test "foo" do
    ClientMock |> expect(:get, ...)
  end
end

I got warnings like

warning: function Some.ClientMock.get/0 is undefined (module Some.ClientMock is not available)

I guess it's because Some.ClientMock is not yet defined (it's only defined in test_helper.exs) when the project compiles. What's a better way to use Mox? Put the client lookup in a function, like this?

defmodule Foo do
  def bar do
    client().get()
  end

  def client do
    Application.get_env(:app, :some_client)
  end
end

Make behaviour of `stub` equal to `expect`

My initial impression was that the only difference between stub and expect was that expect calls would be verified whereas stub calls would not. Looking closer though I found

expect/4 can also be invoked multiple times for the same name/arity, allowing you to give different behaviours on each invocation.

stub/3 will overwrite any previous calls to stub/3.

I have a test where I need to stub different returns depending on the invocation (i.e. first or second). I don't want to verify this behaviour because the return is used to generate side effects which are verified later on. Currently I need to expect the calls because it's the only way of returning different responses.

Is there any reason not to align stub and expect so that they're identical except for the verification step?

Mocks with multiple behaviours

This would be great when creating mocks for modules that, e.g., are GenServers and also have their own behaviour(s).

Could look something like this:

Mox.defmock(MyApp.CalcMock, for: [GenServer, MyApp.Calculator])

Improve documentation examples

I scanned through the documentation again and I think we may be giving a somewhat misleading idea of how to use Mox.

Our first example defines a mock, then calls functions on the mock itself and verifies these were indeed called.

https://github.com/plataformatec/mox/blob/master/lib/mox.ex#L43

While technically correct, I don't suppose anyone would ever want to write such code.

Instead, the real usage example is that you define mocks in your tests and switch the "backend" to the mocks when Mix.env() == :test, using application configuration at compile time. The result is that in all environments you use some implementation module, while in tests you use mocks to provide the "backend" functionality.

Most people will need mocks when they interact with external services, and most of them do so over HTTP.

I propose we update the documentation first example to be more of a "real world" use case. For example something along the lines of this blog post would do, I think:

https://medium.com/flatiron-labs/elixir-test-mocking-with-mox-b825a955143f

I have spoken with a couple of people who mentioned the confusing documentation, and I think I now understand what is wrong here. We just need a better, more real-world example of how to use the library. @josevalim should I work on providing one?

Apply calls to functions with matching arguments

Should we support this?

CalcMock
|> expect(:add, fn(1, 1) -> 2 end)
|> expect(:add, fn(2, 2) -> 4 end)
CalcMock.add(2, 2) # 4

This is useful when expectations are more focused on the arguments and unique outputs than on the order the functions are called in. I'm afraid to show how I support this in double, but here you go: https://github.com/sonerdy/double/blob/master/lib/double/func_list.ex#L98
Try not to go blind looking at that. There must be a better way!

I suppose the workaround for this would be to just write a more advanced stub:

CalcMock
|> expect(:add, fn(x, y) -> 
  case {x, y} do
    {1, 1} -> 2
    {2, 2} -> 4
  end
end)
CalcMock.add(2, 2) # 4

Fallback to original implementation?

Hello wonderful people and thank you for mox and everything else 💚

Okay, I'm currently at a client and they like Elixir (yay!), but they'd really like to use mocks, so sure enough I thought I'd go ahead and use mox!

One of the places where we'd specifically need it is testing phoenix controllers (basically mocking the context). However, I'd only want that to be mocked in the unit test. For acceptance tests (for instance with wallaby or others) I'd like to use the original implementation.

Is there a tried and tested way to do this?

I can see these options right now:

  1. create a new environment for which the config is set to the implementation module and not the mock
  2. Set the expectations explicitly back to the original implementation (through some allow magic for the pids I believe)

My personal (probably) favorite option would be:
3. when configuring the mock via Mox.defmock or something similar, allow setting an option like fallback: true to enable this behaviour for the mock. The mock itself already knows what module it's mocking (thanks to explicit contracts!), so it should be rather easy to provide a fallback.

I'm guessing you thought about this before and that there are also good reasons on why not to do this or there might be another way to achieve the same thing.

Anyhow, I'd be happy about feedback if this feature would have a chance of making it in (I'm happy to PR it pending time) or if there's another thing that we should be doing/should be looking into :)

Thanks a ton! 💚 ❤️ 🎉
Tobi

PS: Here, have a cute bunny: (bunny photo omitted)

It would be nice if Mox.allow/3 did not fail in global mode

Currently Mox.allow/3 raises an ArgumentError when it is in global mode: https://github.com/plataformatec/mox/blob/v0.4.0/lib/mox.ex#L458

For my use case it would be nice if it merely returned an :ok/:error tuple. Perhaps it would make sense to move the current code to Mox.allow!/3?

The full use case is that we have a small shim for instantiating a Task. The shim records the parent pid, then when it executes the task it calls both Ecto.Adapters.SQL.Sandbox.allow/3 and Mox.allow/3 to allow for concurrent testing.

Issue with callback defined yet still unknown until fully cleaning

So I am using the latest released version, 0.3.0, and I am seeing a strange issue.

I followed the guide on setting up mocks using test/support/mocks.ex. But every time I add a new behaviour, and an expect with the correct name and arity, I get the following error:

 (ArgumentError) unknown function delete/2 for mock Datasync.Adapters.Mock

What fixes it is running mix clean. So I am not sure what I am doing wrong, but it seems like somehow that new behaviour is not seen by Mox?

Expect that a mocked function is never called

Mox requires that, if a mock is used, we set an expectation for it. But expect requires that the number of invocations be greater than 0. Other mock libraries (like mockito) allow for verifying that a mocked method is never called. This would probably be useful for Mox.

Thoughts?
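If I recall correctly, later Mox releases added support for an expected count of 0 in expect/4, which would cover this case. Assuming such a version, a sketch using the WeatherBehaviourMock defined earlier:

# The test fails if get_weather/1 is called at all during this test.
expect(WeatherBehaviourMock, :get_weather, 0, fn _city ->
  {:ok, %{body: ""}}
end)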

Feature Request: Ability to generate a mock that doesn't implement optional callbacks

For some of my tests it would be useful to generate a Mock that does not include optional callbacks (because the specific module that it is mocking also does not define them).

Possible workarounds:

  • Stub/expect a no-op version of the optional callback (not always possible)
  • Define a second version of the behaviour that doesn't specify the optional callbacks at all

Would a PR be accepted that adds this functionality?
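For reference, newer Mox versions ship a :skip_optional_callbacks option on defmock/2 that appears to cover this; a sketch assuming a recent Mox (MyMock and MyBehaviour are placeholder names):

# Skip all optional callbacks defined by the behaviour.
# A list of name/arity pairs, e.g. skip_optional_callbacks: [optional_fun: 1],
# also appears to be accepted for skipping only specific ones.
Mox.defmock(MyMock, for: MyBehaviour, skip_optional_callbacks: true)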

Waiting on expectations

One of the most common problems that I see people having when testing asynchronous Elixir code is waiting for something asynchronous to be finished.

This can often result in cases where folks mock what is called by the asynchronous code, then run the test, but the test finishes before the call to the asynchronous code is finished, and the expectation is never actually called so verify! fails.

I'm wondering: can Mox help in this situation? For example, could we have a way to wait_and_verify! that waits for a bit until expectations are called (with a timeout, like assert_receive or similar)? This wouldn't work with expect(..., 0) but maybe that's okay.

I might be missing obvious reasons why this is either not possible or a bad idea, but thought proposing it couldn't do too much harm 😄
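In the meantime, a common workaround is to have the expectation notify the test process and block on assert_receive before the test (and verify_on_exit!) finishes. A sketch with illustrative module and function names:

test "waits until the async code hits the mock" do
  test_pid = self()

  NotifierMock
  |> expect(:deliver, fn message ->
    send(test_pid, {:delivered, message})
    :ok
  end)

  MyApp.async_deliver("hello")

  # assert_receive waits (with a timeout) until the asynchronous code has
  # actually invoked the expectation, so verification no longer races it.
  # The async process still needs an allowance or global mode to use the mock.
  assert_receive {:delivered, "hello"}, 1_000
end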

Mox application won't start on Elixir 1.4

Although mox depends on Elixir 1.4, in reality it will work only with Elixir 1.5.

On Elixir 1.4, application crashes on start with error:

** (Mix) Could not start application mox: exited in: Mox.Application.start(:normal, [])
    ** (EXIT) an exception was raised:
        ** (ArgumentError) argument error
            (elixir) lib/supervisor/spec.ex:169: anonymous fn/1 in Supervisor.Spec.supervise/2
            (elixir) lib/enum.ex:1229: Enum."-map/2-lists^map/1-0-"/2
            (elixir) lib/supervisor/spec.ex:169: Supervisor.Spec.supervise/2
            (elixir) lib/supervisor.ex:297: Supervisor.start_link/2
            (kernel) application_master.erl:273: :application_master.start_it_old/4

I suppose, this line is the offender.

Support Ecto-style ownership mechanism

Here's something I think would be pretty cool: allowing mocking with Wallaby sessions.

I propose a solution similar to the one used with Ecto, where we have a plug that has to be configured in endpoint.ex for the test env.

Then in a setup block we would have to switch mode to shared:

Mox.mode({:shared, self()})

and all the mocks we set under the tests would be available.

Behind the scenes this would work similarly to the Ecto sandbox. We probably need to use a different default header than user-agent, however, since that one already seems taken by Ecto. I think it should be a custom header anyway, even in Ecto, if the major browser drivers support it, but that's a whole other story.

What do you think? I could possibly put in some time to get this going.
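A very rough sketch of what such a plug could look like with today's Mox.allow/3 (the "x-mox-owner" header name and HTTPMock are made up for illustration; this is not an existing Mox or phoenix_ecto API):

defmodule MyAppWeb.MoxAllowancePlug do
  @behaviour Plug
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    with [encoded] <- get_req_header(conn, "x-mox-owner"),
         {:ok, binary} <- Base.decode64(encoded) do
      owner_pid = :erlang.binary_to_term(binary)
      # Let the process handling this request use the test's expectations.
      Mox.allow(HTTPMock, owner_pid, self())
    end

    conn
  end
end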

Can't add new function to mock

Elixir 1.7.4

Error: 
** (ArgumentError) unknown function run_deduplication/0 for mock WorkerMock

Behaviour

defmodule Deduplication.Behaviours.WorkerBehaviour do
  @moduledoc false

  @callback stop_application() :: no_return()
  @callback run_deduplication() :: atom()
end

Module:

defmodule Deduplication.Worker do
  @moduledoc false

  use GenServer
  alias Deduplication.V2.Match
  use Confex, otp_app: :deduplication

  @behaviour Deduplication.Behaviours.WorkerBehaviour
  @worker Application.get_env(:deduplication, :worker)

  def start_link do
    {:ok, pid} = GenServer.start_link(__MODULE__, nil, name: __MODULE__)
    @worker.run_deduplication()
    {:ok, pid}
  end
  
   ....
   

Test:

  setup :verify_on_exit!
  setup :set_mox_global

  describe "test run" do
    setup do
      expect(WorkerMock, :stop_application, 1, fn -> :ok end)
      expect(WorkerMock, :run_deduplication, 1, fn -> :ok end)

      {:ok, _pid} = Deduplication.Worker.start_link()
      :ok
    end

  test "for existing unverified persons works" do
      expect(ClientMock, :post!, fn _url, _body, _headers ->
        %HTTPoison.Response{status_code: 200}
      end)

      insert(:person, tax_id: "123456789")
      assert :continue == Match.run()
      assert GenServer.whereis(Deduplication.Worker)
    end

mix test

Errors:  1) test test run for existing unverified persons works (Deduplication.V2.MatchTest)
     test/v2/match_test.exs:22
     ** (ArgumentError) unknown function run_deduplication/0 for mock WorkerMock

is there an easy way to unmock a specific mock?

hey guys

when we add mocks and put them in the application config, it means they are mocked everywhere in the tests, even when we want to do an integration test. The way I go about this is to "unmock" a specific mock when I need to, like this:

expect(
  Gateway.Web.LegacyAdapter.ClientMock,
  :get,
  &Gateway.Web.LegacyAdapter.Client.get/2
)

this works fine but I feel there can be an easier way to do this.

so I just wanted to ask if there is an easier way, and if not, whether you are interested in a PR to implement this?
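These days stub_with/2 does this in one call, delegating every callback of the mock to the real module (assuming the real client implements the mocked behaviour):

# Delegate all callbacks of the mock to the real implementation for this test:
Mox.stub_with(Gateway.Web.LegacyAdapter.ClientMock, Gateway.Web.LegacyAdapter.Client)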

How should I wire up mox when writing a library?

Hi friends! I am working on implementing mox inside of a library at work, and I ran into problems with wiring up mox.

I had code like this to reference my implementation in prod code:

defmodule Datemath.Interpreter do
  # ...

  @relative_date Application.get_env(:datemath, :relative_date_impl)

  defp evaluate_date_relative() do
    @relative_date.now()
  end
end

And I had config like this to provide either the real implementation or the mock:

# config.exs:
config :datemath,
  relative_date_impl: Datemath.Interpreter.RelativeDateResolver

# dev.exs
config :datemath,
  relative_date_impl: Datemath.Interpreter.RelativeDateMock

But when I imported my library into my app and hit the Datemath.Interpreter.evaluate_date_relative function, I got an error saying nil.now was undefined. My config did not take effect in the consuming app.

The workaround I found was to call get_env with a default provided as a third parameter:

@relative_date Application.get_env(:datemath, :relative_date_impl, Datemath.Interpreter.RelativeDateResolver)

This worked because the library will use its config while in dev or test, and apps that consume the library will fall back to the default implementation.

But is there a better way to do this? Thanks!
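Another option (a sketch, not necessarily better) is to resolve the implementation at runtime rather than at compile time, so the consuming app never bakes in a nil:

defmodule Datemath.Interpreter do
  # ...

  # Reading the config inside a function defers the lookup to runtime, so the
  # default applies in consuming apps while tests can still swap in the mock.
  defp relative_date_impl do
    Application.get_env(:datemath, :relative_date_impl, Datemath.Interpreter.RelativeDateResolver)
  end

  defp evaluate_date_relative() do
    relative_date_impl().now()
  end
end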

Is there a way to verify valid return types from mocks?

I'm curious if there is a way to verify the return types from a function that's been defined with expect/3?

For example,

defmodule Fetchable do
  @callback fetch(term) :: {:ok, term} | {:error, :not_found}
end

Mox.defmock(FetchableMock, for: Fetchable)

FetchableMock
|> expect(:fetch, fn _ -> :not_found end)

I'm currently using behaviours + plain-old modules for mocks in test/support, and dialyzer is able to catch it if I return the wrong type from the mock's function. I like the ability to define an expectation and function stub explicitly in the test, but I'm worried I'm going to lose the helpful type checking provided by dialyzer.

Mocking a process spawned by a GenServer

We're trying to mock a GenServer which spawns a process (also a GenServer). This spawned process is a client which stays connected to a broker and we pass its PID around in the GenServer's state.

We're trying to use mox for this; however, after creating a mock for the client process, our tests crash on application startup with the following error:

** (Mix) Could not start application <redacted>: Application.start(:normal, []) returned an error: shutdown: failed to start child: Publisher
    ** (EXIT) an exception was raised:
        ** (Mox.UnexpectedCallError) no expectation defined for ClientMock.start_link/0 in process #PID<0.317.0>
            (mox) lib/mox.ex:438: Mox.__dispatch__/4
            (<redacted>) lib/publisher.ex:22: Publisher.init/1
            (stdlib) gen_server.erl:365: :gen_server.init_it/2
            (stdlib) gen_server.erl:333: :gen_server.init_it/6
            (stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3

Here's a Gist which should be enough to reproduce the problem: Gist

It seems mox is expecting an expect to be defined for ClientMock.start_link/0 not in the test case, but actually in the application code itself? Are we doing something wrong or is this a shortcoming of the library?

Thanks; if you need any more info, we'll be glad to provide it.

(ArgumentError) unknown registry: Mox

I came across Mox and tried to use it in my project, but I hit this error when setting the expectation. Am I missing something in the project setup? I followed the documentation closely.

     test/controllers/app_controller_test.exs:11
     ** (ArgumentError) unknown registry: Mox
     code: |> expect(:search, 1, fn(_) -> {:ok, mock_results} end)
     stacktrace:
       (elixir) lib/registry.ex:926: Registry.info!/1
       (elixir) lib/registry.ex:808: Registry.register/3
       lib/mox.ex:155: Mox.expect/4
       test/controllers/app_controller_test.exs:18: (test)

warning redefining module

Hi there, I have an umbrella app with a domain app and other apps that depend on that domain. The domain has some 3rd-party dependencies and does some RPC calls, which I'm mocking with mox, and I have to use those mocks in different apps (under the umbrella) that don't depend on each other. The problem is that when I define, for example, Mox.defmock(MockClient, for: Domain.Services.Client) in multiple apps and run the whole test suite from the root directory, it prints the warning message below (the warning doesn't appear when running each app's tests independently). I know I can rename the mock, but I was wondering if there is another way of getting rid of that warning. Thanks.

warning: redefining module Domain.Service.MockDatabase (current version defined in memory)
  /Users/kiro/esl/release_poller/deps/mox/lib/mox.ex:273

`stub_with/2` can only be used inside of a test

Hello!

I was reading through the documentation and found stub_with/2, which seems like it would be very useful for e.g. implementing both unit tests and integration tests within the same test environment.

The documentation gives an example in which stub_with/2 is called directly after defmock/2, but this doesn't seem possible. Here's a small project to demonstrate this. I'm using a test/support/mocks.ex file, but stub_with/2 can't be used in there. That makes sense to me. But then I tried putting it in test_helper.exs, and even inside example_test.exs (outside of the test), but neither of those work either. The function only works when it's used inside of the test macro.

I guess this makes sense from a process perspective, but it seems like I shouldn't have to call stub_with/2 during every test. Is there any way around this at all? The only reason I'm concerned about this is that, in a large project, each integration test would have to stub_with/2 at least a few modules. I'd rather just stub_with/2 all necessary modules at once.

As a side note: am I going about this completely wrong? This is just one of many issues I've encountered while trying to set up unit tests, integration tests, and acceptance tests together. Using the generally accepted Application.get_env/3 pattern to set up unit tests seems to make other kinds of tests very difficult.
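One workaround until then (a sketch, with illustrative mock and implementation names) is to group all the stub_with/2 calls in a single shared setup block, so each integration test module opts in once rather than per test:

defmodule MyApp.IntegrationTest do
  use ExUnit.Case, async: true
  import Mox

  # Every test in this module gets the real implementations behind the mocks.
  setup do
    stub_with(HTTPClientMock, MyApp.HTTPClient)
    stub_with(MailerMock, MyApp.Mailer)
    :ok
  end

  # ...integration tests go here
end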

Feature request: option to add @moduledoc false

The mock module may be defined in the environment where mix docs runs, for some reason. In that case, the modules generated by defmock will be included in the final docs.

Can we add an option to skip the moduledoc? Something like this:

Mox.defmock(MyApp.CalcMock, for: MyApp.Calculator, moduledoc: false)

Module configured with Mox cannot be integration tested

If my understanding of Mox is correct, I think I have found a limitation with the suggested configuration. I have a solution to that limitation, but the solution itself introduces other limitations. I'm looking for a solution (or combination of solutions) that will satisfy 3 criteria:

Be able to create a mock for a function such that:
a) you can use both unit and integration tests on that function
b) the function does not have to be called by your code
c) the mock can be used at an arbitrary call depth from the function under test (i.e. the function under test can be called, step down several layers of sub-function calls, and then encounter a mock)

The method described in the hexdocs (and original article) for configuring modules to be mocked has the limitation that the module now must always be mocked. Because we are substituting modules at compile-time based on application config it is not possible to only sometimes mock that module. This means that you cannot run an integration test over a module that has been configured in this way.

To get around this, for the function under test, you can give the modules it calls internally as parameters. You can default that parameter to the real module, and pass in the mock module in tests. This solves our "I can't run integration tests" problem. But this approach is limited in two ways:

  1. You cannot use it for functions that you do not define, namely 3rd-party library callbacks. Therefore, to test functions on the boundary of your codebase, you must either use the first method (application config) and only ever write unit tests for that function, or not create mocks at all and only use integration tests.
  2. Also note that with this strategy, it is difficult to create a mock for a function that is not directly called by the function under test, as you would have to pass a module down through multiple function calls until you got to the function you wanted to create a mock for.

Is there a way that you can create a mock for a function such that the 3 criteria listed above are met?

Support stub functions

CalcMock
|> stub(:add, fn(x, y) -> x + y end)

CalcMock.add(1, 2) # 3
CalcMock.add(2, 3) # 5

It can be called zero to infinity times and won't be included in verification when verify!() is called. This would be useful in shared setups where a few different functions may or may not be called. It lets you write more focused expectations per test.

I have a few other ideas, and would be willing to pitch in on some of the work. I've been maintaining this Double package that takes a different approach for a while, but I like what I'm seeing here better already.

How should I add a working example into this repo?

I made a repo as an example of how to use Mox because all the docs and articles I could find didn't detail the setup enough to actually get it running.

What's the best way for me to get this added to Mox? Update the hexdocs to point to this repo? Write up a summary and throw that in the hexdocs?

I'm not sure where to put it!

Mock only some tests

Is there a way to use a Mock for only some tests?

I've seen the suggestion of calling Mox.defmock and Application.put_env in the test_helper.exs.

My problem is that I want to mock my behavior for some tests but when I test the actual implementation of the behavior I don't want to mock it.

I've tried moving the Mox.defmock and Application.put_env into a setup block for specific tests but it seems like calling Mox.defmock anywhere causes all tests to use the mock?

Errors in tests are masked by the suggested "verify!()" in the after block

Today if I have code like this:

test "some test" do
  expect(MyModule, :my_fun, fn -> :ok end)
  raise "foo"
  assert MyModule.my_fun() == :ok
after
  verify!()
end

it is hard to discover errors. This is because raise "foo" will raise the RuntimeError that I want to see, but since after is executed anyway, verify!() runs anyway and fails, saying that MyModule.my_fun/0 should've been called once but was never called, which masks the original error.

Mocking modules with methods generated by macro

Is it possible to mock methods that are generated by macro?

For example, there is some behaviour

defmodule Client.Behaviour do
  @callback request(map()) :: {:ok, map()} | {:error, map()} | {:error, atom()}
end

and there is some macro that implements this behaviour

defmodule Client.Macro do
  defmacro __using__(_) do
    @behaviour Client.Behaviour

    methods = Client.Methods.methods # a lot of methods with format {binary, atom}

    quote location: :keep, bind_quoted: [methods: methods] do
      @behaviour Client.Behaviour

      methods
      |> Enum.each(fn({original_name, formatted_name}) ->
       def unquote(formatted_name)(params) when is_list(params) do
         send_request(unquote(original_name), params)
       end
      end)

      def send_request(method_name, params) when is_list(params) do
        # params preparation

        request(params)
      end
      
      def request(params) do
        {:error, :not_implemented}
      end

      defoverridable [request: 1]
    end
  end
end

and then there are a couple of client that implement only request/1 method

  defmodule Client.HttpClient do
    use Client.Macro

    def request(params) do
      # http stuff
    end
  end

 defmodule Client.IpcClient do
    use Client.Macro

    def request(params) do
      # Unix socket stuff
    end
  end

These clients are used in other modules, so I need to somehow mock the methods generated by Client.Macro. Should I define a behaviour for every dynamically generated method (maybe also dynamically)? Or is there no other way but to define an ad-hoc mock module? Right now I'm using exvcr for all external requests, but I know it's not very clean.

P.S. Actually, there are a lot of APIs whose methods are the same except for their names, so I think I'm not the only one having problems mocking them.

Expectations not visible to stop_supervised regardless of allowances or global mode

I came across an incompatibility with ExUnit.Callbacks.start_supervised/2 and stop_supervised/1 in which expectations are not shared with the supervised process while executing the teardown logic of a GenServer. I was testing a GenServer that must (attempt to) send an HTTP request during shutdown, and all mocks outside of the teardown logic worked exactly as expected. Here is a minimal program that can reproduce the issue.

defmodule TeardownGenServer do
  use GenServer

  def start_link(opts) do
    GenServer.start_link(__MODULE__, nil, opts)
  end

  @impl true
  def init(_) do
    Process.flag(:trap_exit, true)
    MockClient.start()
    {:ok, nil}
  end

  @impl true
  def terminate(_, _) do
    MockClient.stop()
  end
end

defmodule MinimalClient do
  @callback start :: :ok
  @callback stop :: :ok
end

defmodule MoxTest do
  use ExUnit.Case

  import Mox

  setup :set_mox_global
  setup :verify_on_exit!

  setup_all do
    defmock(MockClient, for: MinimalClient)
    :ok
  end

  test "mocks are called" do
    MockClient
    |> expect(:start, fn -> :ok end)
    |> expect(:stop, fn -> :ok end)

    # expectation for start/0 is fulfilled
    {:ok, pid} = start_supervised(TeardownGenServer)
    IO.puts("GenServer has pid: #{inspect(pid)}")
    # expectation for stop/0 not defined...?
    stop_supervised(pid)
  end
end

This will lead to a curious result:

GenServer has pid: #PID<0.368.0>
11:06:47.377 [error] GenServer #PID<0.368.0> terminating
** (Mox.UnexpectedCallError) no expectation defined for MockClient.stop/0 in process #PID<0.368.0> with args []
    (mox) lib/mox.ex:599: Mox.__dispatch__/4
    (stdlib) gen_server.erl:673: :gen_server.try_terminate/3
    (stdlib) gen_server.erl:858: :gen_server.terminate/10
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: {:EXIT, #PID<0.367.0>, :shutdown}
State: nil

The TeardownGenServer had access to the start/0 mock, but not stop/0 even though it is the same process. I attempted to remedy this issue with many combinations of allowances and expectation placement, but I could simply not get Mox to recognize the expectations within stop_supervised without resorting to some nasty global mocks that I won't share here.

Workaround: use GenServer methods to start and stop the server under test and avoid the ExUnit callbacks when managing processes.

Would it be possible to capture arguments on calls to expectations?

Sometimes it would be nice to know that a mocked call wasn't just called N times, but perhaps which arguments it was called with. I think I see a place where it could be hooked in. But, I'm not sure if this is something of interest to the maintainers, or what the interface to retrieve those arguments would be (maybe a return from verify!).

If you'd like, I can submit a PR. Or, if there is a better way to accomplish this I'm happy to add it to the documentation.

Thanks!
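Until something like that lands, one workaround (a sketch with illustrative names) is to record the arguments yourself from inside the expectation, for example in an Agent, and assert on them afterwards:

test "records the arguments the mock was called with" do
  {:ok, calls} = Agent.start_link(fn -> [] end)

  expect(HTTPMock, :get, 2, fn url, headers ->
    # Record every invocation's arguments in call order.
    Agent.update(calls, &(&1 ++ [{url, headers}]))
    {:ok, %{status: 200}}
  end)

  MyApp.sync()

  assert [{"https://example.com/a", _}, {"https://example.com/b", _}] =
           Agent.get(calls, & &1)
end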

expect 0 doesn't work

Given this code:

    Utility.SolrMock
    |> stub(:deindex, 0, fn _collection, _params ->
        {:error, :solr_down}
    end)

I'm getting:

     ** (FunctionClauseError) no function clause matching in Mox.expect/4

     The following arguments were given to Mox.expect/4:
     
         # 1
         Utility.SolrMock
     
         # 2
         :deindex
     
         # 3
         0
     
         # 4
         #Function<0.13693587/2 in PulsarSolrIndexerIntegration."test de-index: SOLR down"/1>
     
     Attempted function clauses (showing 1 out of 1):
     
         def expect(mock, name, n, code) when is_atom(mock) and is_atom(name) and is_integer(n) and n >= 1 and is_function(code)
     
     code: |> expect(:deindex, 0, fn _collection, _params ->
     stacktrace:
       (mox) lib/mox.ex:294: Mox.expect/4
       test/integration_test.exs:117: (test)

Fallback to stub when there is a FunctionClauseError in expect

I think this feature is useful when we want to focus on checking only one specific message was sent to the mock while ignoring all the other messages.

For example, the function I'm working on right now would send multiple websocket events to different topics, and in each unit test for this function, I pass a mocked Phoenix Endpoint to it and call expect on each type of event.

With the current implementation of Mox, I need to specify several expects, and their order needs to match exactly the order in which broadcast/3 will be called:

test "broadcasts to topic_a" do
  expect(EndpointMock, :braodcast, fn "topic_a:room", _, _ -> nil end)
  expect(EndpointMock, :braodcast, fn _, _, _ -> nil end)

  WebSocket.send(:event, params, EndpointMock)
end

test "broadcasts to topic_b" do
  expect(EndpointMock, :broadcast, fn _, _, _ -> nil end)
  expect(EndpointMock, :broadcast, fn "topic_b:room", _, _ -> nil end)

  WebSocket.send(:event, params, EndpointMock)
end

What I want is something like the following snippet:

test "broadcasts to topic_a" do
  stub(EndpointMock, :broadcast, fn _, _, _ -> nil end)
  expect(EndpointMock, :braodcast, fn "topic_a:room", _, _ -> nil end)

  WebSocket.send(:event, params, EndpointMock)
end

test "broadcasts to topic_b" do
  stub(EndpointMock, :broadcast, fn _, _, _ -> nil end)
  expect(EndpointMock, :braodcast, fn "topic_b:room", _, _ -> nil end)

  WebSocket.send(:event, params, EndpointMock)
end

UnexpectedCallError is not raised when all expectations are consumed if a stub is also defined

I can't be sure that this is not the desired behavior for Mox, but I can at least say that it surprised us.

A common pattern for us has been to define a simple stub in the top-level setup of our test file so that each individual test isn't burdened with repeating that mock setup. Then, if a test needs a more complicated stub or we want to test with expect, the individual test will declare its own stub or expect as needed, overriding the simple stub.

Then I noticed that, when using expect to expect 0 calls, the test passed even though there was a bug and the implementation code was calling the function once.
Similarly, if the test expects 1 call, but the function is actually called twice, the test still passes.

So, it appears that after all of the expected calls are consumed, Mox passes subsequent calls to the previously declared stub and no longer enforces the expectation.

I'll attach a PR in a moment with a test that reproduces this failure. If this is known to be desired behavior for Mox, I could understand that, and perhaps I'll just submit a PR with some documentation warning about this behavior.

Thanks!

doctest

Is it possible to stub inside a doctest? I need to stub HTTPoison inside doctests.

Note what args were passed on UnexpectedCallError

When building out tests, it would be helpful to have the UnexpectedCallError note what args were passed to the unexpected function. The args are available here:
https://github.com/plataformatec/mox/blob/master/lib/mox.ex#L551

Not sure if this should be in a separate logger call or in the error.

This would be helpful when specifying exact expected args as here:
https://github.com/TheFirstAvenger/ironman/blob/fac59ddeb804510f0620cc009df955a28286834a/test/support/mox_helpers.ex#L26

As a workaround, I have been specifying my own catch-all for each moxed call, but that needs to be manually added to each test case after the other expects:
https://github.com/TheFirstAvenger/ironman/blob/fac59ddeb804510f0620cc009df955a28286834a/test/support/mox_helpers.ex#L59

Mocks for functions with optional argument

I am trying to use a mock for a function that has an arity of 4 according to its behaviour, but its last argument is optional.

I set up the mocking behavior like this:

MyApp.MockModule
|> expect(:do_something, fn arg1, arg2, arg3, arg4 -> :response end)

The code I'm testing calls this function with three arguments.

When I try running the tests, this fails:

** (UndefinedFunctionError) function MyApp.MockModule.do_something/3 is undefined or private. Did you mean one of:
     
           * do_something/4

Using an anonymous function with arity of 3 doesn't work on its own because the behaviour I'm mocking only specifies the function with arity of 4.

As a workaround, I added an extra @callback to the behaviour with the different arity.

As far as workarounds go, this is pretty painless, and the case could be made that it's actually better to be explicit in your behaviour that it's possible to call the function with either 3 or 4 arguments. But since Mox's documentation doesn't seem to specify anything about optional arguments I wanted to check and make sure this is intentional behavior and not a bug in Mox itself.
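For reference, a sketch of that workaround with placeholder types (the module name is illustrative): declare both arities as callbacks so the mock can be called with or without the optional argument.

defmodule MyApp.SomethingBehaviour do
  # Matches the real implementation's full signature.
  @callback do_something(term(), term(), term(), keyword()) :: term()

  # Exists only so callers (and the mock) can omit the optional argument.
  @callback do_something(term(), term(), term()) :: term()
end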

A failed assertion inside an expectation lets the test pass

Hi,

I wonder if it's a misunderstanding on my side or a bug.

Given a behaviour like this:

defmodule EventDispatcher do
  @callback dispatch(term()) :: :ok
end

And this test:

defmodule MyTest do
  use ExUnit.Case
  import Mox

  setup :verify_on_exit!

  test "this test will pass but it shouldn't" do
    expect(EventDispatcherMock, :dispatch, fn _ ->
      assert false
    end)

    DispatcherExample.run()
  end
end

And this production code

defmodule DispatcherExample do
  # I would normally rely on Application.get_env to
  # get the module but for the sake of conciseness:
  @event_dispatcher EventDispatcherMock

  def run() do
    @event_dispatcher.dispatch(:foo)
  end
end

And of course in the test_helper file:

Mox.defmock(EventDispatcherMock, for: EventDispatcher)

I want to use this pattern to make sure that 1) an event is dispatched and 2) the event has the right properties (key and values if it's a map for example).

When I run this test, it passes. When I IO.puts something from within the anonymous function, I see the output if it happens before the assert expression. It looks like assert raises an error which is then swallowed by Mox, thereby preventing the test from reporting a failure. So, bug or misunderstanding?

handle this scenario where underlying code runs my test in an async task?

I have a problem testing an API Module, which in turn uses Tesla. My tests look like this:

# Because Tesla does not provide one, and we need it for mox
defmodule Behaviour.Tesla do
  @callback call(Tesla.Env.client, []) :: Tesla.Env.t
end

defmodule RapiWeb.MyAPITest do
  @moduledoc """
  Tests the API module
  """
  use RapiWeb.ConnCase
  import Plug.Conn, only: [put_req_header: 3]
  import Mox

  setup :verify_on_exit!

  setup %{conn: conn} do
    conn = put_req_header(conn, "content-type", "application/json")
    {:ok, %{conn: conn}}
  end

  test "MyAPI.token returns a token if the underlying service returns one" do
    TeslaMock
    |> expect(:call, fn env, _opts ->
      %{env | status: 200, body: Poison.encode!(
        %{
          token_type: "Bearer",
          access_token: "cHBtKbMDNdc3uX2LIhCy9eVwkodhEc87s6e5Fk0lRS",
          expires_in: 86400
        })
      }
    end)

    client = RapiWeb.Helpers.MyAPI.client("my.fakehost.com")
    result = RapiWeb.Helpers.MyAPI.token(client, "CLIENT_ID", "CLIENT_SECRET")

    response = elem(result, 1)
    token = response.body["access_token"]

    assert elem(result, 0) == :ok
    assert response.status == 200
    assert is_binary(token) and token != ""

  end

All went well, until I added a timeout plug to Tesla and my test started to fail:

%Mox.UnexpectedCallError{message: "no expectation defined for TeslaMock.call/2 in process #PID<0.360.0>"}
  1) test MyAPI.token returns a token if the underlying service returns one (RapiWeb.MyAPITest)
     test/rapi_web/controllers/my_api_test.exs:71
     ** (Mox.UnexpectedCallError) no expectation defined for TeslaMock.call/2 in process #PID<0.360.0>
     code: result = RapiWeb.Helpers.MyAPI.token(client, "CLIENT_ID", "CLIENT_SECRET")
     stacktrace:
       (tesla) lib/tesla/middleware/timeout.ex:55: Tesla.Middleware.Timeout.repass_error/1
       (tesla) lib/tesla/middleware/timeout.ex:35: Tesla.Middleware.Timeout.call/3
       (tesla) lib/tesla/middleware/tuples.ex:41: Tesla.Middleware.Tuples.call/3
       test/rapi_web/controllers/my_api_test.exs:84: (test)

As it turns out, the Tesla timeout plug runs the request in an async task (see here).
I changed my test as suggested in the docs to:

  test "MyAPI.token returns a token if the underlying service returns one" do
    TeslaMock
    |> expect(:call, fn env, _opts ->
      %{env | status: 200, body: Poison.encode!(
        %{
          token_type: "Bearer",
          access_token: "cHBtKbMDNdc3uX2LIhCy9eVwkodhEc87s6e5Fk0lRS",
          expires_in: 86400
        })
      }
    end)

    parent_pid = self()
    Task.async(fn ->
      TeslaMock |> allow(parent_pid, self())

      client = RapiWeb.Helpers.MyAPI.client("my.fakehost.com")
      result = RapiWeb.Helpers.MyAPI.token(client, "CLIENT_ID", "CLIENT_SECRET")

      response = elem(result, 1)
      token = response.body["access_token"]

      assert elem(result, 0) == :ok
      assert response.status == 200
      assert is_binary(token) and token != ""
    end)
    |> Task.await
  end

But this does not help:

{%Mox.UnexpectedCallError{message: "no expectation defined for TeslaMock.call/2 in process #PID<0.4549.0>"}, [{Tesla.Middleware.Timeout, :repass_error, 1, [file: 'lib/tesla/middleware/timeout.ex', line: 55]}, {Tesla.Middleware.Timeout, :call, 3, [file: 'lib/tesla/middleware/timeout.ex', line: 35]}, {Tesla.Middleware.Tuples, :call, 3, [file: 'lib/tesla/middleware/tuples.ex', line: 41]}, {RapiWeb.MyAPITest, :"-test MyAPI.token returns a token if the underlying service returns one/1-fun-1-", 1, [file: 'test/rapi_web/controllers/my_api_test.exs', line: 88]}, {Task.Supervised, :do_apply, 2, [file: 'lib/task/supervised.ex', line: 85]}, {Task.Supervised, :reply, 5, [file: 'lib/task/supervised.ex', line: 36]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 247]}]}
  1) test MyAPI.token returns a token if the underlying service returns one (RapiWeb.MyAPITest)
     test/rapi_web/controllers/my_api_test.exs:71
     ** (EXIT from #PID<0.4547.0>) an exception was raised:
         ** (Mox.UnexpectedCallError) no expectation defined for TeslaMock.call/2 in process #PID<0.4549.0>
             (tesla) lib/tesla/middleware/timeout.ex:55: Tesla.Middleware.Timeout.repass_error/1
             (tesla) lib/tesla/middleware/timeout.ex:35: Tesla.Middleware.Timeout.call/3
             (tesla) lib/tesla/middleware/tuples.ex:41: Tesla.Middleware.Tuples.call/3
             test/rapi_web/controllers/my_api_test.exs:88: anonymous fn/1 in RapiWeb.MyAPITest."test MyAPI.token returns a token if the underlying service returns one"/1
             (elixir) lib/task/supervised.ex:85: Task.Supervised.do_apply/2
             (elixir) lib/task/supervised.ex:36: Task.Supervised.reply/5
             (stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3

I guess if Tesla starts an async task to run my request in, that code has no access to the mock.
Am I doing it wrong?
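One way out (a sketch) is to drop the Task/allow wrapper, make the test case non-async, and switch Mox to global mode so the task Tesla spawns can see the expectations without an explicit allowance. Newer Mox versions may also pick up the allowance automatically through the task's $callers, but I haven't verified that against this setup.

defmodule RapiWeb.MyAPITest do
  # Global mode shares expectations with every process, but requires this
  # test module (and any other touching the same mock) to run async: false.
  use RapiWeb.ConnCase, async: false

  import Mox

  setup :set_mox_global
  setup :verify_on_exit!

  # ...the original test body stays as in the first version above,
  # without the Task.async/allow wrapper.
end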

Mocks should be present during compilation-time to verify against Behaviour

I might be doing something wrong, but I am following the installation instructions in the following way:

  1. In my tests/test_helper.exs file I define my mocks:

    Mox.defmock(Core.UsersManagerMock, for: Storage.Behaviours.UsersManager)

  2. In my config/test.exs file I declare I use these mocks:

    config :core, :users_manager, Core.UsersManagerMock

  3. In my code, in my modules I declare a module attribute and use it instead of the module name directly:

    defmodule MyModule do
      @users_manager Application.get_env(:core, :users_manager)

      def do_something() do
        @users_manager.do_something()
      end
    end

When I do the above, and either run mix test or MIX_ENV=test mix compile, I am getting the following warning:

MIX_ENV=test mix compile
Compiling 9 files (.ex)

warning: function Core.UsersManagerMock.do_something/0 is undefined (module Core.UsersManagerMock is not available)
  lib/my_module:5

This is happening because test_helper.exs is an .exs file: it is not compiled along with the rest of the files, but executed when the application / test suite starts, so the mocks defined there are not available during the compilation phase.

The workaround that comes to mind is to put the mock declarations into the test/support/ directory and add it to elixirc_paths in a way that forces it to be compiled before the rest of the app when Mix.env() == :test. I can't think of another solution.

Can you advise what's the best course of action?

If the workaround above is not a workaround at all, but the kosher way to do it, I am happy to update the docs.
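For the record, that workaround is the setup I've seen recommended: compile a test/support/mocks.ex file only in the test environment by extending elixirc_paths, roughly like this:

# mix.exs
def project do
  [
    app: :core,
    elixirc_paths: elixirc_paths(Mix.env()),
    # ...
  ]
end

defp elixirc_paths(:test), do: ["lib", "test/support"]
defp elixirc_paths(_), do: ["lib"]

# test/support/mocks.ex (compiled only in :test, so the mock exists at compile time)
Mox.defmock(Core.UsersManagerMock, for: Storage.Behaviours.UsersManager)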

Just a heads up - one might need to `require` the behaviours

Hi,

I spent some time today trying to understand why I would get a compile time error saying that the module defining my behaviour wasn't available.

It seems that the compiler wasn't compiling the behaviour before test/support/mock.ex. I originally fixed it by adding this in test/support/mock.ex file:

Code.ensure_compiled?(MyBehaviour)

Then I realized that require MyBehaviour works too (even if the behaviour doesn't define any macro).

It did not happen with every mock but always with the same one. Once it works, only a mix clean can cause the failure to happen again. This means it could work or get fixed on one machine and break on a coworker's computer.

I do not know yet if all behaviours should be required beforehand. I suppose it would help to mention this whole situation in the docs.

Documentation for 0.1.0 missing Registry.start_link

Sorry... I know that the package is moving to a GenServer from Registry, but trying the currently published version, 0.1.0, late at night left me scratching my head until I realized that I needed to start the Registry entry for Mox. I would have done this as a PR, as my understanding is that documentation can be updated after the fact on hex, but a) I wasn't sure how to make the PR to an old version and b) I wasn't sure of the best place to make the start link call. I am currently doing it in

test_helper.exs:

Registry.start_link(:unique, Mox)

c) I wasn't sure if :unique was actually the right call.

I'm not sure how long it will be until the GenServer version goes out, but if someone can point me in the right direction, I would like to help clarify this.

Thanks!

Global mocks?

Imagine a thin HTTP client. We need to mock it in our tests, which is pretty easy and straightforward with Mox. We can mock a successful response in one test and an unsuccessful one in another. That's great, and one reason why it's a bit better than writing the HTTPMock module ourselves.

What I found, though, is that this potentially brings a lot of repetition for the successful response (especially if the response is long). What if we could instrument Mox to stub the successful response for all tests in one place and only change this response in the few tests that require it?

What if the following definition:

HTTPMock
|> stub(:get, ...

could exist once (loaded in test_helper.exs or in the similar way)? And just be overridable when necessary?

Perhaps something as:

HTTPMock
|> stub_default(:get, ...

Or are there other patterns to make our lives easier that I am probably not aware of?

Tests fail when executed from umbrella app in global mode.

We have a function that launches a Task and calls two functions from different behaviours.
We configured two mocks and called expect on each of them, but it fails and says that the second expectation was not defined.
We also tried configuring one mock with both behaviours (using the version on the master branch), but it fails for the same reason.

The strange part is that if there's only one test, and it is executed from inside the particular app's directory, the test passes. If you either run the tests from the umbrella app's directory or add a second test, it fails.

This is the error:

15:27:24.526 [error] Task #PID<0.189.0> started from #PID<0.188.0> terminating
** (Mox.UnexpectedCallError) no expectation defined for App1.SecondMock.bye/0 in process #PID<0.189.0>
    (mox) lib/mox.ex:466: Mox.__dispatch__/4
    (elixir) lib/task/supervised.ex:88: Task.Supervised.do_apply/2
    (stdlib) proc_lib.erl:247: :proc_lib.init_p_do_apply/3
Function: #Function<0.6366682/0 in App1.test_case/0>
    Args: []

Here is a link to a sample project that reproduces the failure.

Mock only during certain tests.

Consider the following scenario:

  • A Normalizer behaviour with a normalize_email function.
  • A User module, with a changeset function that should normalize the email param.
  • The User.changeset function calls a default implementation defined such as normalizer().normalize_email() - normalizer() is defined to return Application.get_env(:my_app, :normalizer, Utils). It means that by default there's a normalize_email function in the Utils module that does the job.
  • Note that many other functions in the User module use the normalizer().normalize_email() function.
  • In a mocks.ex file in the test/support folder, we Mox.defmock(NormalizerMock, for: Normalizer) and in test_helper.exs we simply Application.put_env(:my_app, :normalizer, NormalizerMock).

Now in our user tests, this is all nice and dandy - we can expect on the mock and make sure that the correct function is called by all the User functions such as the User.changeset one.

The intent is really just to test that the User.changeset and other functions delegate the work to the Utils module - so mocking it gives us a way to verify this. We are not interested in what the Utils.normalize_email function does from these tests and the only place where the Utils.normalize_email function is tested is in its own Utils tests. In other words we just check that the NormalizerMock.normalize_email function was called exactly once with a parameter we control, and that the result of the User.changeset function includes this controlled parameter in its return.

However in other tests that target other files and modules, we would like this mock to be absent completely, and have the User.changeset function normally call onto the Utils.normalize_email function.

For example, in tests related to a registration controller (or say more integration tests), we call functions on a Registration module, itself calling the User.changeset function - and in this case we don't care to have User.changeset call a mock - instead it should call its default implementation.
Of course, doing things pixel-perfect would require us to actually mock the User.changeset function in the Registration tests, but there is too much overhead here.

Is there a way to have mocks applied only to one test? To one test file? Where would all the moving pieces go (e.g. where would the defmock call be, the Application.put_env call, etc.)?

Support registered process names in explicit allowances

Hello, thank you for maintaining Mox!

Recently, I was attempting to declare an explicit allowance so that a child process could utilize the expectations defined in my test process. I attempted to create an allowance for a process with a registered name (called Verifier.Scheduler), which threw a FunctionClauseError:

  1) test call/2 resolves entitlements (Verifier.SchedulerTest)
     test/verifier/scheduler_test.exs:45
     ** (FunctionClauseError) no function clause matching in Mox.allow/3

     The following arguments were given to Mox.allow/3:

         # 1
         Verifier.Worker.Mock

         # 2
         #PID<0.386.0>

         # 3
         Verifier.Scheduler

     Attempted function clauses (showing 2 out of 2):

         def allow(_mock, owner_pid, allowed_pid) when owner_pid == allowed_pid
         def allow(mock, owner_pid, allowed_pid) when is_atom(mock) and is_pid(owner_pid) and is_pid(allowed_pid)

     code: allow(Verifier.Worker.Mock, self(), Verifier.Scheduler)
     stacktrace:
       (mox) lib/mox.ex:431: Mox.allow/3
       test/verifier/scheduler_test.exs:51: (test)

It looks like Mox does not currently support explicit allowances with registered process names. I was able to work around this limitation by first fetching the pid of the registered process:

  pid = Process.whereis(Verifier.Scheduler)

I thought it might be helpful to add the following function clause to support explicit allowances with registered names:

def allow(mock, owner_pid, allowed_name) when is_atom(allowed_name) do
  allowed_pid = Process.whereis(allowed_name)
  allow(mock, owner_pid, allowed_pid)
end

Would this be helpful? If so, I can create a related PR.
