richcarl / eunit
The EUnit lightweight unit testing framework for Erlang - this is the canonical development repository.
License: Apache License 2.0
EUnit is a lightweight unit testing framework for Erlang/OTP.
For EUNIT-7 (https://support.process-one.net/browse/EUNIT-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel), I implemented an experimental feature where you can write {setup, [{Tag, Setup, Cleanup}, ...], TestsOrInstantiator}, which automatically backs out only the steps that have been performed when a later step fails. This needs to be decided on, and documented if it stays.
Update 1: It seems quite useful. It needs documentation, and EUNIT-7 should be commented on if possible (it's closed already).
Update 2: It also needs generalizing to foreach and foreachx before becoming an official feature.
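A rough sketch of what the experimental form might look like in use; the tags, the resource functions (connect_db/0, close_db/1, etc.) and the test body are all invented here for illustration:

```erlang
%% Hypothetical example of the experimental multi-step setup form.
%% Each {Tag, Setup, Cleanup} step runs in order; if a step fails,
%% only the cleanups for steps already performed would be run, in
%% reverse order. All names below are placeholders.
stack_test_() ->
    {setup,
     [{db,   fun connect_db/0, fun close_db/1},
      {http, fun start_http/0, fun stop_http/1}],
     [?_assert(true)]}.
```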
Provide a way to mark up subsets of the tests with a level, somehow.
By default, should only first-level tests be executed, or all tests?
Numeric parameter, or only by nesting?
Absolute or relative value, if a parameter?
Björn G asked: "Is there any simple/built-in way to mark out a subset of the tests as smoke tests, so that the Makefiles can be set up to run those tests every time you rebuild? I saw in a book that other test frameworks for other languages (Java, I think) have such functionality."
EUnit already uses regexps to enumerate functions (as funs), so it should be easy to make this a utility function for anyone to use. It is particularly useful for "with"-tests (abstract test functions). Right now you need to list these by hand, as in:
foo_test_() ->
    {setup,
     fun new_connection/0,
     fun close_connection/1,
     {with,
      [fun test_util:foo_test/1,
       fun test_util:bar_test/1,
       fun test_util:baz_test/1]}}.
It would be nicer to just use an enumerator for the with-body.
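Such an enumerator could be a small utility function; a sketch, assuming the invented name with_funs/2 (this is not an actual eunit function):

```erlang
%% Hypothetical utility: collect all exported arity-1 functions of
%% Module whose names match Regexp, as funs, suitable as the body
%% of a {with, ...} test.
with_funs(Module, Regexp) ->
    [fun Module:F/1
     || {F, 1} <- Module:module_info(exports),
        re:run(atom_to_list(F), Regexp, [{capture, none}]) =:= match].
```

The with-body in the example above could then be written as {with, with_funs(test_util, "_test$")}.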
To avoid running a million tests that don't work, there should be a limit on the number of failures allowed before aborting.
It would be great if eunit could collect the stderr output from tests separately from stdout.
Currently, this seems to be almost impossible until stderr support is added at a low level
in erts. I saw some mention of this as a coming feature in OTP, though, so it seems that
the best course right now is to just wait for that.
Not a bug, but a feature request:
in Java there is an assertEquals for floats and doubles which lets you specify an epsilon value, so that imprecision in calculations can be dealt with.
assertEquals(1.0, 0.9999999999, 10e-10) would be true if the difference is <= the epsilon.
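A corresponding eunit-style macro could look roughly like this; the name ?assertApprox is invented here and is not part of eunit:

```erlang
%% Sketch of a Java-style "assertEquals with epsilon" for floats.
%% The macro name and the error term are invented for illustration.
-define(assertApprox(Expect, Expr, Epsilon),
        ((fun (__X, __Y, __E) ->
                  case abs(__X - __Y) =< __E of
                      true  -> ok;
                      false -> erlang:error({assertApprox_failed,
                                             [{expected, __X},
                                              {value, __Y},
                                              {epsilon, __E}]})
                  end
          end)(Expect, Expr, Epsilon))).
```

With the values from the Java example, ?assertApprox(1.0, 0.9999999999, 1.0e-9) would succeed, since the difference is below the epsilon.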
It could be quite useful to have a macro that can insert random delays in a program, when testing is enabled. It should take both minimum and maximum delay times (in seconds) and a probability threshold for a delay to occur at all. For example:
?randomDelay(0.25, 1.0, 5.0)
would mean that when this code runs, if testing is enabled, there is a 25% chance that the process will be suspended for a random time between 1.0 and 5.0 seconds.
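A sketch of how such a macro might be defined, assuming the modern rand module and that "testing is enabled" means the TEST macro is defined (as eunit's headers do); the macro name follows the proposal above:

```erlang
%% Sketch of the proposed ?randomDelay(Probability, MinSecs, MaxSecs)
%% macro. Compiled to a no-op unless TEST is defined. rand:uniform/0
%% returns a float in [0.0, 1.0).
-ifdef(TEST).
-define(randomDelay(P, Min, Max),
        (case rand:uniform() < (P) of
             true ->
                 timer:sleep(
                   round(((Min) + rand:uniform() * ((Max) - (Min))) * 1000));
             false ->
                 ok
         end)).
-else.
-define(randomDelay(P, Min, Max), ok).
-endif.
```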
Binaries can now be used as an alternative to strings as test labels in eunit: {Label, ...}. This is not yet documented (and not well tested).
It may be useful, especially for writing tests which are to be run by someone else to test for conformance, to have tests which never fail, but may produce a warning if the behaviour is not as expected.
This may be a special case of a more general "diagnostic" test, which does not fail but may report information about what it learned about the behaviour when running the test. For example, execution time, output, memory usage, platform dependent behaviour, actual implementation of unspecified behaviour, etc.
Reported by Joern Barthel.
When compiling parameterized modules an exception is thrown.
Example:
-module(foo, [ Bar ]).
-compile([export_all]).
-include_lib("eunit/include/eunit.hrl").
foo() ->
    Bar.

foo_test() ->
    ?assertMatch("foo!", (foo:new("foo!")):foo()).
Exception:
./foo.erl:none: error in parse transform 'eunit_autoexport':
    {{badarg,{foo,['Bar']}},
     [{packages,concat_1,1},
      {eunit_autoexport,rewrite,2},
      {eunit_autoexport,rewrite,2},
      {compile,'-foldl_transform/2-anonymous-2-',2},
      {compile,foldl_transform,2},
      {compile,'-internal_comp/4-anonymous-1-',2},
      {compile,fold_comp,3},
      {compile,internal_comp,4}]}
error
Environment:
R11B-5 win32
Reported by Joern Barthel.
When tests return malformed values (which is easy enough to do at the moment), EUnit's error handling could be improved (examples required?). As there are many different combinations of valid return values, maybe return records should be introduced.
Environment:
R11B-5 win32
Related: when a non-generator test, e.g. foo_test/0 (instead of foo_test_/0), returns a generator set-up (e.g. with setup and teardown functions), an exception should be thrown warning about the possible naming mistake.
The cmd code should run a shell, like os:cmd(). In particular, this needs to work better on Windows.
Idea: it should be possible to mark a test as "known failure" and report these separately, to handle test cases for known bugs not yet fixed (so you don't have to write a test case that checks that the bug is still present - tests should only be written to succeed for legal behaviour).
What about timeout/setup failures?
What is the title used for (e.g. my_test_() -> {"My test", setup, ...}), if it is never even printed? Perhaps there needs to be some way to customize the output?
There should be macros in eunit for asserting that 1) a float (or any number) is within some epsilon from some other number (absolute error), 2) that the relative error is within some percentage, 3) that two numbers are "close". It is unclear if all of these are needed, or just 1-2, or even just 1.
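The first two checks are simple to express; a sketch as plain functions rather than macros, with invented names (close_abs/3 and close_rel/3 are not eunit functions):

```erlang
%% Absolute-error check: the difference is within Eps.
close_abs(X, Y, Eps) ->
    abs(X - Y) =< Eps.

%% Relative-error check: the difference is within RelTol of the
%% larger magnitude, which avoids the check degenerating for
%% numbers far from 1.0.
close_rel(X, Y, RelTol) ->
    abs(X - Y) =< RelTol * max(abs(X), abs(Y)).
```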
References:
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
http://www.cs.otago.ac.nz/staffpriv/ok/software.htm (pcfpcmp)
EUnit should be able to adjust the number of concurrently running tests (on the same machine) to match the number of available schedulers.
Reported by Joern Barthel.
EUnit should not timeout during debugging sessions.
Environment:
R11B-5 win32
If you compile (with debug_info) an erlang module such as the attached example and step-debug through it (calling debug/0), eunit will time out.
If it is feasible, I would rather see eunit use the time spent in the tested process as the reference for its (in this case default) timeout.
It seems to be a small matter of programming to make eunit run OTP test suites. Another question is what to do about the way the OTP test server automatically compiles the test suite modules (must check up on what they do about include paths etc.).
There seems to be a problem here: starting a slave node does not work until booting has finished, but if we let booting finish, eunit cannot run from the command line (I think).
Do you plan to add coloring of eunit output?
Currently, functions matching ...exported() are automatically exported by eunit when testing is enabled, and removed when testing is disabled. Should this be a permanent feature, and does it need modifications before it is documented?
Luke Gorrie wrote on the erlang mailing list (06/11/2007 12:31 PM):
Did I ever post my favourite profiling macro? It's pretty simple:
-define(TIME(Tag, Expr),
        (fun() ->
                 %% NOTE: timer:tc/3 does an annoying 'catch' so we
                 %% need to wrap the result in 'ok' to be able to
                 %% detect an unhandled exception.
                 {__TIME, __RESULT} =
                     timer:tc(erlang, apply, [fun() -> {ok, Expr} end, []]),
                 io:format("time(~s): ~18.3fms ~999p~n",
                           [?MODULE, __TIME/1000, Tag]),
                 case __RESULT of
                     {ok, _} -> element(2, __RESULT);
                     {'EXIT', Error} -> exit(Error)
                 end
         end)()).
if you wrap it around a few strategic expressions in your code you end
up with simple running-time summary printouts like this:
time(backup): 3654.744ms restore_table_defs
time(backup): 182.969ms restore_secondary_indexes
time(backup): 311.973ms restore_records
time(backup): 20.928ms checkpoint
time(backup): 5.095ms remove_logs
Update 1: Added a macro ?debugTime(Str, Expr) to eunit.hrl in revision 257 (https://forge.process-one.net/changelog/P1Contribs/trunk/eunit?cs=257), inspired by the one by Luke.
Update 2:
In what way does this differ from timeouts?
Simply measure wall clock before and after?
Use non-hardware-dependent units with automatic calibration à la bogomips? (berps?)
Have different macros for measuring different things?
Macros for checking linear/quadratic/exponential time?
Currently the only description of timeouts is the {timeout, S, T} tuple, and the documentation does not mention the default timeout.
Timeout behaviour should have its own subsection in the documentation.
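For reference, the documented tuple looks like this in use, raising the limit for one slow test; long_running/0 is a placeholder name:

```erlang
%% Wrap a single test in an explicit timeout of 60 seconds, instead
%% of relying on the (undocumented) default per-test timeout.
slow_test_() ->
    {timeout, 60, fun () -> ?assertEqual(ok, long_running()) end}.
```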
Reported by M.W.Park.
When I get errors in an Emacs erlang buffer, next-error (C-x `) always prompts for the exact file name because of the missing '.erl' in the tty message.
Attached is a patch adding ".erl" at the end of the module name (in eunit_tty.erl).
I don't know if this is the right way to do things like this, but it worked for me:
I can now use the 'next-error' command (C-x `) in my Emacs erlang buffer without being prompted.
--- eunit_tty.erl (revision 274)
+++ eunit_tty.erl (working copy)
@@ -223,7 +223,7 @@
D = if Desc =:= "" -> "";
true -> io_lib:fwrite(" (~s)", [Desc])
end,
print_test_begin(I, Text) ->
indent(I),
Some users would like to be able to run eunit tests as part of another test suite engine and collect all the results (including output, see #12).
Some users have said they want to be able to get the captured output from a test, in particular so that eunit tests can be run as part of another test suite engine with all the results collected, including output.
I am getting a compilation error from the references to sets:set() [line 64] and dict:dict() [line 66] in eunit_serial.erl when compiling against R16B on my Mac running Mavericks. I have fixed it locally and would be happy to make a pull request. It looks like the fix works back several versions of stdlib.
The example module examples/eunit_examples.erl is partly broken and should be updated to reflect all or most of the eunit features.