
ospec's Introduction

ospec



About | Usage | CLI | API | Goals

Noiseless testing framework

About

  • ~1100 LOC including the CLI runner [1]
  • terser and faster test code than with mocha, jasmine or tape
  • test code reads like bullet points
  • assertion code follows SVO structure in present tense for terseness and readability
  • supports:
    • test grouping
    • assertions
    • spies
    • equals, notEquals, deepEquals and notDeepEquals assertion types
    • before/after/beforeEach/afterEach hooks
    • test exclusivity (i.e. .only)
    • async tests and hooks
  • explicitly regulates test-space configuration to encourage focus on testing, and to provide uniform test suites across projects

Note [1]: ospec is currently in the process of changing some of its API surface. The legacy and updated APIs are both implemented right now to ease the transition; once the legacy code has been removed, we'll clock in at around 800 LOC.

Usage

Single tests

Both tests and assertions are declared via the o function. Tests should have a description and a body function. A test may have one or more assertions. Assertions should appear inside a test's body function and compare two values.

var o = require("ospec")

o("addition", function() {
    o(1 + 1).equals(2)
})
o("subtraction", function() {
    o(1 - 1).notEquals(2)
})

Assertions may have descriptions:

o("addition", function() {
    o(1 + 1).equals(2)("addition should work")

    /* in ES6, the following syntax is also possible
    o(1 + 1).equals(2) `addition should work`
    */
})
/* for a failing test, an assertion with a description outputs this:

addition should work

1 should equal 2

Error
  at stacktrace/goes/here.js:1:1
*/

Grouping tests

Tests may be organized into logical groups using o.spec

o.spec("math", function() {
    o("addition", function() {
        o(1 + 1).equals(2)
    })
    o("subtraction", function() {
        o(1 - 1).notEquals(2)
    })
})

Group names appear as a breadcrumb trail in test descriptions: math > addition: 2 should equal 2

Nested test groups

Groups can be nested to further organize test groups. Note that tests cannot be nested inside other tests.

o.spec("math", function() {
    o.spec("arithmetics", function() {
        o("addition", function() {
            o(1 + 1).equals(2)
        })
        o("subtraction", function() {
            o(1 - 1).notEquals(2)
        })
    })
})

Callback test

The o.spy() method can be used to create a stub function that keeps track of its call count and received parameters

//code to be tested
function call(cb, arg) {cb(arg)}

//test suite
var o = require("ospec")

o.spec("call()", function() {
    o("works", function() {
        var spy = o.spy()
        call(spy, 1)

        o(spy.callCount).equals(1)
        o(spy.args[0]).equals(1)
        o(spy.calls[0]).deepEquals([1])
    })
})

A spy can also wrap other functions, like a decorator:

//code to be tested
var count = 0
function inc() {
    count++
}

//test suite
var o = require("ospec")

o.spec("call()", function() {
    o("works", function() {
        var spy = o.spy(inc)
        spy()

        o(count).equals(1)
    })
})

Asynchronous tests

If a test body function declares a named argument, the test is assumed to be asynchronous, and the argument is a function that must be called exactly one time to signal that the test has completed. As a matter of convention, this argument is typically named done.

o("setTimeout calls callback", function(done) {
    setTimeout(done, 10)
})

Alternatively, you can return a Promise, or use an async function in tests:

o("promise test", function() {
    return new Promise(function(resolve) {
        setTimeout(resolve, 10)
    })
})
o("promise test", async function() {
    await someOtherAsyncFunction()
})

Timeout delays

By default, asynchronous tests time out after 200ms. You can change that default for the current test suite and its children by using the o.specTimeout(delay) function.

o.spec("a spec that must timeout quickly", function() {
    // wait 20ms before bailing out of the tests of this suite and
    // its descendants
    o.specTimeout(20)
    o("some test", function(done) {
        setTimeout(done, 10) // this will pass
    })

    o.spec("a child suite where the delay also applies", function () {
        o("some test", function(done) {
            setTimeout(done, 30) // this will time out.
        })
    })
})
o.spec("a spec that uses the default delay", function() {
    // ...
})

This can also be changed on a per-test basis using the o.timeout(delay) function from within a test:

o("setTimeout calls callback", function(done) {
    o.timeout(500) //wait 500ms before bailing out of the test

    setTimeout(done, 300)
})

Note that the o.timeout function call must be the first statement in its test. It also works with Promise-returning tests:

o("promise test", function() {
    o.timeout(1000)
    return someOtherAsyncFunctionThatTakes900ms()
})
o("promise test", async function() {
    o.timeout(1000)
    await someOtherAsyncFunctionThatTakes900ms()
})

Asynchronous tests generate an assertion that succeeds upon calling done or fails on timeout with the error message async test timed out.

before, after, beforeEach, afterEach hooks

These hooks can be declared when it's necessary to set up and clean up state for a test or group of tests. The before and after hooks run once each per test group, whereas the beforeEach and afterEach hooks run for every test.

o.spec("math", function() {
    var acc
    o.beforeEach(function() {
        acc = 0
    })

    o("addition", function() {
        acc += 1

        o(acc).equals(1)
    })
    o("subtraction", function() {
        acc -= 1

        o(acc).equals(-1)
    })
})

It's strongly recommended to ensure that beforeEach hooks always overwrite all shared variables, and to avoid if/else logic, memoization, and undo routines inside those hooks.
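As an illustration, here is a minimal sketch of that recommendation (the createFakeDb helper is hypothetical): every shared variable is rebuilt unconditionally, so no test can leak state into the next one.

// hypothetical helper that builds a fresh fake database
function createFakeDb() { return {users: []} }

o.spec("user store", function() {
    var db, user
    o.beforeEach(function() {
        // rebuild *all* shared state, with no if/else and no memoization
        db = createFakeDb()
        user = {name: "Amy"}
    })

    o("adding a user", function() {
        db.users.push(user)
        o(db.users.length).equals(1)
    })
    o("the db starts empty again", function() {
        o(db.users.length).equals(0)
    })
})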

Asynchronous hooks

Like tests, hooks can also be asynchronous. Tests that are affected by asynchronous hooks will wait for the hooks to complete before running.

o.spec("math", function() {
    var acc
    o.beforeEach(function(done) {
        setTimeout(function() {
            acc = 0
            done()
        })
    })

    //tests only run after async hooks complete
    o("addition", function() {
        acc += 1

        o(acc).equals(1)
    })
    o("subtraction", function() {
        acc -= 1

        o(acc).equals(-1)
    })
})

Running only some tests

One or more tests can be temporarily made to run exclusively by calling o.only() instead of o. This is useful when troubleshooting regressions, to zero in on a failing test, and to avoid saturating the console log with irrelevant debug information.

o.spec("math", function() {
    // will not run
    o("addition", function() {
        o(1 + 1).equals(2)
    })

    // this test will be run, regardless of how many groups there are
    o.only("subtraction", function() {
        o(1 - 1).notEquals(2)
    })

    // will not run
    o("multiplication", function() {
        o(2 * 2).equals(4)
    })

    // this test will be run, regardless of how many groups there are
    o.only("division", function() {
        o(6 / 2).notEquals(2)
    })
})

Running the test suite

//define a test
o("addition", function() {
    o(1 + 1).equals(2)
})

//run the suite
o.run()

Running test suites concurrently

The o.new() method can be used to create new instances of ospec, which can be run in parallel. Note that each instance will report independently, and there's no aggregation of results.

var _o = o.new('optional name')
_o("a test", function() {
    _o(1).equals(1)
})
_o.run()

Command Line Interface

Create a script in your package.json:

    "scripts": {
        "test": "ospec",
        ...
    }

...and run it from the command line:

npm test

NOTE: o.run() is automatically called by the CLI; there's no need to call it in your test code.

CLI Options

Running ospec without arguments is equivalent to running ospec '**/tests/**/*.js'. In plain English, this tells ospec to evaluate all *.js files in any sub-folder named tests/ (the node_modules folder is always excluded).

If you wish to change this behavior, just provide one or more glob match patterns:

ospec '**/spec/**/*.js' '**/*.spec.js'

You can also provide ignore patterns (note: always add --ignore AFTER match patterns):

ospec --ignore 'folder1/**' 'folder2/**'

Finally, you may choose to load files or modules before any tests run (note: always add --preload AFTER match patterns):

ospec --preload esm

Here's an example of mixing them all together:

ospec '**/*.test.js' --ignore 'folder1/**' --preload esm ./my-file.js

Native mjs and module support

For Node.js versions >= 13.2, ospec supports both ES6 modules and CommonJS packages out of the box. --preload esm is thus not needed in that case.
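For example, a test file written as an ES module might look like the sketch below (this assumes ospec's CommonJS export can be used as a default import; also note that the default glob only matches *.js files, so .mjs files need an explicit pattern such as ospec '**/tests/**/*.mjs'):

// tests/math.spec.mjs
import o from "ospec"

o("addition", function() {
    o(1 + 1).equals(2)
})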

Run ospec directly from the command line

ospec comes with an executable named ospec. npm auto-installs local binaries to ./node_modules/.bin/. You can run ospec by running ./node_modules/.bin/ospec from your project root, but there are more convenient ways to do so, described below.

ospec doesn't work when installed globally (npm install -g). Using global scripts is generally a bad idea since you can end up with different, incompatible versions of the same package installed locally and globally.

Here are different ways of running ospec from the command line. This knowledge applies not just to ospec, but to any locally installed npm binary.

npx

If you're using a recent version of npm (v5+), you can run npx ospec from your project folder.

npm-run

If you're using an older version of npm without npx, you can work around this by using npm-run, which lets you run the binaries of locally installed packages.

npm install npm-run -g

Then, from a project that has ospec installed as a (dev) dependency:

npm-run ospec

PATH

If you understand how your system's PATH works (e.g. on macOS), then you can add the following to your PATH...

export PATH=./node_modules/.bin:$PATH

...and you'll be able to run ospec without npx, npm, etc. This one-time setup will also work with other binaries across all your node projects, as long as you run binaries from the root of your projects.


API

Square brackets denote optional arguments

void o.spec(String title, Function tests)

Defines a group of tests. Groups are optional


void o(String title, Function([Function done]) assertions)

Defines a test.

If an argument is defined for the assertions function, the test is deemed to be asynchronous, and the argument is required to be called exactly one time.


Assertion o(any value)

Starts an assertion. There are six types of assertion: equals, notEquals, deepEquals, notDeepEquals, throws, notThrows.

Assertions have this form:

o(actualValue).equals(expectedValue)

As a matter of convention, the actual value should be the first argument and the expected value should be the second argument in an assertion.

Assertions can also accept an optional description curried parameter:

o(actualValue).equals(expectedValue)("this is a description for this assertion")

Assertion descriptions can be simplified using ES6 tagged template string syntax:

o(actualValue).equals(expectedValue) `this is a description for this assertion`

Function(String description) o(any value).equals(any value)

Asserts that two values are strictly equal (===)

Function(String description) o(any value).notEquals(any value)

Asserts that two values are strictly not equal (!==)

Function(String description) o(any value).deepEquals(any value)

Asserts that two values are recursively equal

Function(String description) o(any value).notDeepEquals(any value)

Asserts that two values are not recursively equal

Function(String description) o(Function fn).throws(Object constructor)

Asserts that a function throws an instance of the provided constructor

Function(String description) o(Function fn).throws(String message)

Asserts that a function throws an Error with the provided message

Function(String description) o(Function fn).notThrows(Object constructor)

Asserts that a function does not throw an instance of the provided constructor

Function(String description) o(Function fn).notThrows(String message)

Asserts that a function does not throw an Error with the provided message
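As a short sketch of the throws family in use, note that the function under test is passed to o() without being called:

function explode() { throw new Error("boom") }

o("throwing", function() {
    o(explode).throws(Error)                    // matches the constructor
    o(explode).throws("boom")                   // matches the Error message
    o(function() { return 1 }).notThrows(Error)
})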


void o.before(Function([Function done]) setup)

Defines code to be run at the beginning of a test group

If an argument is defined for the setup function, this hook is deemed to be asynchronous, and the argument is required to be called exactly one time.


void o.after(Function([Function done]) teardown)

Defines code to be run at the end of a test group

If an argument is defined for the teardown function, this hook is deemed to be asynchronous, and the argument is required to be called exactly one time.
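A minimal sketch of before/after in use (the connect and disconnect helpers are hypothetical):

o.spec("db", function() {
    var connection
    o.before(function(done) {
        // runs once for the whole group; async because it declares `done`
        connect(function(conn) {
            connection = conn
            done()
        })
    })
    o.after(function() {
        disconnect(connection)
    })

    o("is connected", function() {
        o(connection != null).equals(true)
    })
})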


void o.beforeEach(Function([Function done]) setup)

Defines code to be run before each test in a group

If an argument is defined for the setup function, this hook is deemed to be asynchronous, and the argument is required to be called exactly one time.


void o.afterEach(Function([Function done]) teardown)

Defines code to be run after each test in a group

If an argument is defined for the teardown function, this hook is deemed to be asynchronous, and the argument is required to be called exactly one time.


void o.only(String title, Function([Function done]) assertions)

Declares that only a single test should be run, instead of all of them


Function o.spy([Function fn])

Returns a function that records the number of times it gets called, and its arguments

Number o.spy().callCount

The number of times the function has been called

Array<any> o.spy().args

The arguments that were passed to the function the last time it was called


void o.run([Function reporter])

Runs the test suite. By default passing test results are printed using console.log and failing test results are printed using console.error.

If you have custom continuous integration needs then you can use a reporter to process test result data yourself.

If running in Node.js, ospec will call process.exit after reporting results by default. If you specify a reporter, ospec will not do this and allow your reporter to respond to results in its own way.


Number o.report(results)

The default reporter used by o.run() when none is provided. It returns the number of failures and doesn't exit Node.js by itself. It expects an array of test result data as its argument.
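As a sketch, a custom reporter can delegate to o.report() for output while controlling the exit code itself (Node-only):

o.run(function(results) {
    var failures = o.report(results)       // prints the default report
    if (typeof process !== "undefined") {
        // ospec doesn't exit on its own when a custom reporter is supplied
        process.exit(failures > 0 ? 1 : 0)
    }
})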


Function o.new()

Returns a new instance of ospec. Useful if you want to run more than one test suite concurrently

var $o = o.new()
$o("a test", function() {
    $o(1).equals(1)
})
$o.run()

Throwing errors

When an error is thrown, some tests may be skipped. See the "run time model" section for a detailed description of the bail-out mechanism.


Result data

Test results are available by reference for integration purposes. You can use custom reporters in o.run() to process these results.

o.run(function(results) {
    // results is an array

    results.forEach(function(result) {
        // ...
    })
})

Boolean|Null result.pass

  • true if the assertion passed.
  • false if the assertion failed.
  • null if the assertion was incomplete (e.g. o("partial assertion") // and that's it).

Error result.error

The Error object explaining the reason behind a failure. If the assertion failed, the stack will point to the actual error. If the assertion passed or was incomplete, this field is identical to result.testError.


Error result.testError

An Error object whose stack points to the test definition that wraps the assertion. Useful as a fallback, because in some async cases the main error may not point to test code.


String result.message

If an exception was thrown inside the corresponding test, this will equal that Error's message. Otherwise, this will be a preformatted message in SVO form. More specifically, ${subject}\n${verb}\n${object}.

As an example, the following test's result message will be "false\nshould equal\ntrue".

o.spec("message", function() {
    o(false).equals(true)
})

If you specify an assertion description, that description will appear two lines above the subject.

o.spec("message", function() {
    o(false).equals(true)("Candyland") // result.message === "Candyland\n\nfalse\nshould equal\ntrue"
})

String result.context

A >-separated string showing the structure of the test specification. In the below example, result.context would be testing > rocks.

o.spec("testing", function() {
    o.spec("rocks", function() {
        o(false).equals(true)
    })
})

Run time model

Definitions

  • A test is the function passed to o("description", function test() {}).
  • A hook is a function passed to o.before(), o.after(), o.beforeEach() and o.afterEach().
  • A task designates either a test or a hook.
  • A given test and its associated beforeEach and afterEach hooks form a streak. The beforeEach hooks run outermost first, the afterEach hooks run outermost last (see the sketch after these definitions). The hooks are optional, and are tied at test-definition time in the o.spec() calls that enclose the test.
  • A spec is a collection of streaks, specs, one before hook and one after hook. Each component is optional. Specs are defined with the o.spec("spec name", function specDef() {}) calls.
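Here is the sketch mentioned above: for a nested test, the comments show the assumed execution order of its streak.

o.spec("outer", function() {
    o.beforeEach(function() { console.log("outer beforeEach") })
    o.afterEach(function() { console.log("outer afterEach") })

    o.spec("inner", function() {
        o.beforeEach(function() { console.log("inner beforeEach") })
        o.afterEach(function() { console.log("inner afterEach") })

        o("test", function() { o(1).equals(1) })
        // streak for "test":
        // outer beforeEach, inner beforeEach, test, inner afterEach, outer afterEach
    })
})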

The phases of an ospec run

For a given instance, an ospec run goes through three phases:

  1. tests definition
  2. tests execution and results accumulation
  3. results presentation

Tests definition

This phase is synchronous. o.spec("spec name", function specDef() {}), o("test name", function test() {}) and hook calls generate a tree of specs and tests.

Test execution and results accumulation

At test-execution time, for each spec, the before hook is called if present, then the nested specs and the streaks of its tests run, in definition order, and finally the after hook, if present.

Tests and hooks may contain assertions, which populate the results array.

Results presentation

Once all tests have run or timed out, the results are presented.

Throwing errors and spec bail out

While some testing libraries treat thrown errors as assertion failures, ospec treats them as super-failures. Throwing aborts the current spec, avoiding what could otherwise end up as pages of errors. What this means depends on when the error is thrown. Specifically:

  • A syntax error in a file causes the file to be ignored by the runner.
  • At test-definition time:
    • An error thrown at the root of a file will cause subsequent tests and specs to be ignored.
    • An error thrown in a spec definition will cause the spec to be ignored.
  • At test-execution time:
    • An error thrown in the before hook will cause the streaks and nested specs to be ignored. The after hook will run.
    • An error thrown in a task...
      • ...prevents further streaks and nested specs in the current spec from running. The after hook of the spec will run.
      • ...if thrown in a beforeEach hook of a streak, causes the streak to be hollowed out. Hooks defined in nested scopes and the actual test will not run. However, the afterEach hook corresponding to the one that crashed will run, as will those defined in outer scopes.

For every error thrown, a "bail out" failure is reported.
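A minimal sketch of the spec-level bail out described above:

o.spec("crashing spec", function() {
    o.before(function() {
        throw new Error("setup failed") // aborts this spec; a bail out failure is reported
    })
    o.after(function() {
        // still runs, per the rules above
    })

    o("never runs", function() {
        o(1).equals(1) // skipped because the before hook threw
    })
})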


Goals

Ospec started as a bare-bones test runner optimized for Leo Horie to write Mithril v1 with as little hassle as possible. It has since grown in capabilities and polish, and while we tried to keep some of the original spirit, the current incarnation is not as radically minimalist as the original. The state of the art in testing has also moved, with the dominance of Jest over Jasmine and Mocha, and now Vitest on the horizon.

  • Do the most common things that the mocha/chai/sinon triad does without having to install 3 different libraries and several dozen dependencies
  • Limit configuration in test-space:
    • No ability to pick between API styles (BDD/TDD/QUnit, assert/should/expect, etc.)
    • No "magic" plugin system with global reach. Custom assertions need to be imported or defined lexically (e.g. in o(value)._(matches(reference)), matches can be resolved in the file).
    • Provide a default simple reporter
  • Make assertion code terse, readable and self-descriptive
  • Have as few assertion types as possible for a workable usage pattern

These restrictions have a few benefits:

  • tests always look the same, even across different projects and teams
  • single source of documentation for entire testing API
  • no need to hunt down plugins to figure out what they do, especially if they replace common JavaScript idioms with fuzzy spoken-language constructs (e.g. what does .is() do?)
  • no need to pollute project-space with ad-hoc configuration code
  • discourages side-tracking and yak-shaving

ospec's People

Contributors

barneycarroll, bruce-one, commanderroot, dead-claudia, dependabot[bot], futurist, gilbert, kesara, lhorie, maranomynet, onlyskin, pakx, pdfernhout, porsager, pygy, richardivan, robertakarobin, rodericday, spacejack, stephanhoyer, tivac, zyrolasting


ospec's Issues

ospec: use string substitution to expose values in assertion reports

ospec's messaging processes assertion values differently depending on the environment. Under Node it uses util.inspect, the platform's preferred tool for debugging; but in the browser it falls back to a loosely opinionated custom process for stringifying values.

Browser consoles are much richer than Node's terminal output and could be better served by making use of string substitution, which allows intelligent representations of complex entities. See below for the current vs. suggested behaviour for exotic objects logged by ospec in Chrome.

[screenshot omitted]

Happy to implement this myself.

deepEquals always fails on objects with arrays created with seamless-immutable

deepEquals always fails on objects with arrays created with seamless-immutable (or any other library that adds methods to arrays).
ospec version: 4.1.1

Code:

The following test fails when it shouldn't

let a = Immutable({
    tasks: {
        tasksList: [
            {
                due_date: 1595808000,
                dueMessage: 'Due today',
                daysLeft: 0,
            }
        ],
        computed: false
    }
})
, b = Immutable({
    tasks: {
        tasksList: [
            {
                due_date: 1595808000,
                dueMessage: 'Due today',
                daysLeft: 0,
            }
        ],
        computed: false
    }
})

o('test', () => {
    o(a).deepEquals(b)
})

o.run()

The problem seems to be here: https://github.com/MithrilJS/ospec/blob/master/ospec.js#L543

Ospec is using Object.getOwnPropertyNames on the array, so it's finding non-enumerable functions and trying to compare them as well, rather than just ignoring them like it should.

DeepEquals should handle Sets (and Maps)

For sets, this does an object identity check for membership. I suppose we could do N^2 deepEqual() checks on the set members to determine whether they match... Likewise for maps...

      if (a instanceof Set && b instanceof Set) {
        for (const el of a) {
          if (!b.has(el)) return false
        }
        for (const el of b) {
          if (!a.has(el)) return false
        }
        return true
      }

spy wrapping test doesn't work in non-ES6 environments.

PhantomJS runner:

var o = require("./ospec.js")
require("./polyfill.min.js")
require("./tests/test-ospec.js")
o.run()

Polyfill: https://polyfill.io/v3/polyfill.min.js?flags=gated%2Calways&features=Promise

Output:

$ phantomjs test.js
%cospec > sync > spy wrapping:%c
%cAttempting to change value of a readonly property.%c
defineProperties@[native code]
 color:black;font-weight:bold  color:red color:black

  phantomjs://platform/ospec.js:356 in report
%c1 out of 371 assertions failed%c  in 359ms color:red;font-weight:bold

Ospec needs a Karma story


Description

ospec needs to be integrated with Karma so ospec users can use Karma for much easier cross-browser testing.

Why

This is mildly blocking for us moving to Karma internally - the only alternative is literally just moving to another framework. Also, users may appreciate this.


`.satisfies()` is too verbose, `.notSatisfies()` and `.notDeepEquals()` are of little use

In o(myObject).satisfies(myValidator(someSpecification)), the satisfies bit takes too much space and hampers readability. o(myObject)._(matches(someSpecification)) would make for more readable tests.

Also, the .notSatisfies() and .notDeepEquals() assertions are not very useful and, IMO, even an anti-pattern: any difference will make them pass, not necessarily the one the tester had in mind when designing the test. They should be retired.

Here's the roadmap I propose

For v4.2.0:

Add ._() as a synonym to .satisfies().

Make o.warn = false. If the user sets o.warn = true, have .satisfies, .notSatisfies and .notDeepEquals issue warnings announcing future obsolescence (we can do that cleanly such that a single list is reported after publishing the report).

For v5.0.0

Make the warning unconditional, and state that the methods are deprecated.

Sometimes in the future

Remove them altogether.

Context and running streaks in parallel.

This isn't a concern as far as Mithril or ospec itself is concerned, but a test suite that makes async requests may see its run time dominated by network and remote-work latency. In these scenarios, it could make sense to have streaks (a task and its corresponding beforeEach and afterEach hooks) run in parallel.

In that scenario, having them all share a context object would make a lot of sense.

The v5 API has a nice affordance for this with the injected o object.

We could have o("some test", (o, {spy, context, timeout} = o) => {...}) or even o("", ({o, spy, context, timeout}) => {...}) (if o.o === o).

context could prototypically inherit from the spec context passed to before and after hooks (each spec having its own context that inherits from parent specs).

A context object is necessary in parallel mode because we can't rely on the closure variables not to be clobbered by other streaks running in parallel.

We could use o.parallel(maxParallelism?, callback), or overload o.spec(name, options = {parallel: false|number}, callback).

The latter is probably preferable.

How to use ospec for components having imports

I'm not able to test components that have import statements with it.

var mq = require("mithril-query");
var o = require("ospec");

import { Alert } from "[path]";

o.spec("Alert", function() {
  o("Alert", function() {
    var out = mq(Alert, { type: "danger" }, ["You got it"]);
    out.should.have("alert-danger");
    out.should.contain("You got it");
  });
});

o.run();

I'm getting error as below

import {Alert} from "[path]"
^^^^^^

SyntaxError: Cannot use import statement outside a module

spy.calls doesn't seem to match TypeScript definitions

Runtime looks like this and works fine:

const res = {
  end: o.spy()
}
await handler(req, res)
console.log("First call", res.end.calls[0])
//=> First call { this: <ref *1> { [omitted]  }, args: [ 'ok' ] }

However, TS complains about further usage:

[screenshot omitted]

It thinks .calls is an array of arrays:

[screenshot omitted]

Failed assertion reporting does unnecessary string substitution

Suppose I have a test like this:

o('%foo').equals('%ioo')("%doo")

When evaluated, I'd expect for this expression to produce the following output:

%doo

'%foo'
  should equal
'%ioo'

What I'm getting instead is this:

0oo

'NaNoo'
  should equal
'NaNoo'

Now I'm guessing this has to do with format placeholders, but I'm fairly sure that any such substitution should only be performed once (so that it does not affect the inserted text).

Edit: checked o.spec("%foo", () => o("%foo", () => {/* … */})) – these strings are also affected.

`o.before()` within an `o.spec` still runs even when no tests are running

const o = require('ospec')

o.spec('Example', () => {
  o.before(() => {
    console.log("RUNNING BEFORE")
  })

  o('test1', () => {
    o(true).equals(true)
  })
})

o.only('test2', () => {
  o(true).equals(true)
})

In this example, the console log runs even though no test in that spec is running (due to the .only on test2). In my own project this is causing extra debug statements to show up in the console even though they're not relevant to the test I'm focusing on.

Lost assertions in async tests

It is currently possible to have assertions that run after the test that defined them, and at that point, ospec is unable to trace their origin.

One possibility to solve this would be to change the API to something tape-like:

o("test", o => {
  // this would either refuse to work
  setTimeout(()=>o(1).equals(false), 5)
})

If we were to implement this, we'd need a complementary codemod to upgrade the test suites. If someone skilled in these wants to step up and write it, that would be most welcome. Otherwise I'll dive into it.

Immutable woes?

Not sure if it is still relevant, but I have this flems with a fix for a problem with ImmutableJS that was in a stray tab. Keeping it here until I find out if it is relevant.

Adapt the `lock()` mechanism as a core part of o.spy()

In the Mithril test suite, @dead-claudia uses a lock utility that tracks stray calls to functions after a test is finished. This should be a core functionality of o.spy()

The groundwork I'm doing to fix #50 should make the default case easy (spies defined during a test should not run when it's done; spies defined at test-definition time shouldn't expire, though).

It may be possible to have spies defined in before/beforeEach hooks to last until the matching after/afterEach hook.

Remove devDependencies before publish to NPM

The devDependencies make no sense in the npm package; they just make the installation longer and heavier.

"devDependencies": {
        "compose-regexp": "0.4.0",
        "eslint": "^6.8.0",
        "ospec": "4.0.1"
}

If the selling point is light weight, the above should be removed before publishing.

ospec assertion descriptions are not reported

ospec version: 4.0.1

Browser and OS: Node v12.12.0 & Firefox v71

Code

o("addition", function() {
	o(1 + 1).equals(3)("addition should work");
});
o.run();

demo in flems

Steps to Reproduce

  1. Run the above.

Expected Behavior

According to the documentation the assertion's message should override the test's description like so:

/* for a failing test, an assertion with a description outputs this:

addition should work

1 should equal 2

Error
  at stacktrace/goes/here.js:1:1
*/

Current Behavior

The assertion's message is never displayed.

ospec: Asynchronous tests don't work with arrow functions

ospec version: 4.0.1

Browser and OS: MacOS

Code

Following test works:

o("setTimeout calls callback", function(done) {
	setTimeout(done, 10)
})

But this test with an arrow function gives an error:

o("setTimeout calls callback", (done) => {
	setTimeout(done, 10)
})

Steps to Reproduce

  1. Create a test.js file with the following:
var o = require("ospec")

o("setTimeout calls callback", (done) => {
	setTimeout(done, 10)
})
  2. Run ospec test.js

Expected Behavior

––––––
The 1 assertion passed in 23ms

Current Behavior

ospec.js:181
						else throw e
						     ^

`(done)()` should be called at least once
    at Object.<anonymous> (/Users/kesara/Lab/ospec-test/test.js:3:1)

Context

ospec should treat arrow functions the same as regular functions.

re-think the results output

We currently shove every kind of problem into the results array as assertions, even timeouts and bail outs.

The latter two should be tagged as such IMO.

Also, we shouldn't format the output on error, but in the reporter (with a possible exception for custom assertions).

Here's the current format:

{
	pass: null,
	message: "Incomplete assertion in the test definition starting at...",
	error: $$_testOrHook.error,
	task: $$_testOrHook,
	timeoutLimbo: $$_timedOutAndPendingResolution === 0,
	// Deprecated
	context: ($$_timedOutAndPendingResolution === 0 ? "" : "??? ") + $$_testOrHook.context,
	testError: $$_testOrHook.error
}

We should aim for something like this:

{
	kind: "assertion" | "error" | "timeout",
	pass: boolean,
	message?: {value, methodName, reference} | string,
	error?: Error, // or maybe just a stack trace?
	task: Task
}

This is a prerequisite for #31.

The "incomplete assertion" checks should run at task-finalization time, and on timeout, and cause a hard error to be thrown.

Likewise, syntax errors while parsing a test file should cause hard errors, not bailouts.

Finally, it would be useful to be able to bulk-add results, so that we can parallelize running the tests and merge the results.

ERR_UNSUPPORTED_ESM_URL_SCHEME error on Windows 10

Platform: Windows 10, 64 bit
Node: v14.17.0
NPM: 6.14.13
ospec: 4.1.1

Reproduction Steps

  1. npm install --save-dev ospec
  2. Change test command in package.json to ospec
  3. Add tests/mytest.spec.js with following:
const o = require('ospec');
const assert = require('assert');

o.spec("my test", () => {
    o("things are working", () => {
        assert.strictEqual("foo", "bar");
    });
});

Expected

Just a message saying 0 assertions passed

Actual

0 assertions passed, with 1 assertion bailed out with the following (note: I altered the name of the project directory in the output):

Error [ERR_UNSUPPORTED_ESM_URL_SCHEME]: Only file and data URLs are supported by the default ESM loader. On Windows, absolute paths must be valid file:// URLs. Received protocol 'c:'
    at Loader.defaultResolve [as _resolve] (internal/modules/esm/resolve.js:782:11)
    at Loader.resolve (internal/modules/esm/loader.js:88:40)
    at Loader.getModuleJob (internal/modules/esm/loader.js:241:28)
    at Loader.import (internal/modules/esm/loader.js:176:28)
    at importModuleDynamically (internal/modules/cjs/loader.js:1011:27)
    at exports.importModuleDynamicallyCallback (internal/process/esm_loader.js:30:14)
    at eval (eval at <anonymous> (C:\Users\JamesCote\Projects\my-project\node_modules\ospec\bin\ospec:24:15), <anonymous>:1:1)
    at C:\Users\JamesCote\Projects\my-project\node_modules\ospec\bin\ospec:24:15
    at C:\Users\JamesCote\Projects\my-project\node_modules\ospec\bin\ospec:75:15
    at processTicksAndRejections (internal/process/task_queues.js:95:5) {
  code: 'ERR_UNSUPPORTED_ESM_URL_SCHEME'
}

C:\Users\JamesCote\Projects\my-project\tests\mytest.spec.js > > > BAILED OUT < < <:
Only file and data URLs are supported by the default ESM loader. On Windows, absolute paths must be valid file:// URLs. Received protocol 'c:'
Error [ERR_UNSUPPORTED_ESM_URL_SCHEME]: Only file and data URLs are supported by the default ESM loader. On Windows, absolute paths must be valid file:// URLs. Received protocol 'c:'
    at Loader.defaultResolve [as _resolve] (internal/modules/esm/resolve.js:782:11)
    at Loader.resolve (internal/modules/esm/loader.js:88:40)
    at Loader.getModuleJob (internal/modules/esm/loader.js:241:28)
    at Loader.import (internal/modules/esm/loader.js:176:28)
    at importModuleDynamically (internal/modules/cjs/loader.js:1011:27)
    at exports.importModuleDynamicallyCallback (internal/process/esm_loader.js:30:14)
    at eval (eval at <anonymous> (C:\Users\JamesCote\Projects\my-project\node_modules\ospec\bin\ospec:24:15), <anonymous>:1:1)
    at C:\Users\JamesCote\Projects\my-project\node_modules\ospec\bin\ospec:24:15
    at C:\Users\JamesCote\Projects\my-project\node_modules\ospec\bin\ospec:75:15
    at processTicksAndRejections (internal/process/task_queues.js:95:5)

––––––
The 0 assertion passed (old style total: 1). Bailed out 1 time
npm ERR! Test failed.  See above for more details.

A possible workaround is to downgrade Node.js to v10.15.2, but perhaps this should be investigated to allow for support on newer versions as well.

Complex assertions like `deepEquals` and `satisfies` plugins should differentiate between bail outs and internal errors

Currently, if a crash happens in the deepEquals code (as in #41), or in a .satisfies plugin, this is reported as a bailout.

Crashes in user code (e.g. a getter that throws when doing a deepEquals, or some other failure in user code called from the satisfies() validator) should cause bailouts, but errors in ospec and validators should throw errors and exit ASAP with a corresponding stack trace.

Provide an ES6 build of ospec

Hi
Thanks a lot for making and supporting ospec.
I would like to ask you to provide an ES6 version of ospec.
We are currently in the process of migrating to Rollup, and we try to use native ES6 modules for development, in both web and Node versions. To use ospec we use @rollup/plugin-commonjs, which transforms a CommonJS module into an ES6 module (or tries its best to do so). Unfortunately it cannot do much with the dynamic require() of util, which is not allowed when the file is treated as an ES module. What could be used there instead is a dynamic import().

ospec should allow custom assertions

The design would be conceptually simple: expose o.define = define where define(key, cond) is an internal function we already use for similar effect, and expand that function to take more than 1 argument.

To define a custom assertion, you'd call o.define("method", (operand, ...args) => cond). You would then call this via o(operand).method(...args). It's conceptually pretty simple, and it's easy to implement.

This would make things like mithril-query much easier to integrate, since that isn't as simple as equality or deep equality.
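A hypothetical sketch of how the proposal could look in a test file (o.define and isCloseTo do not exist in ospec today; this is only the shape described above):

// proposed API, not implemented
o.define("isCloseTo", function(actual, expected, tolerance) {
    return Math.abs(actual - expected) <= tolerance
})

o("float math", function() {
    o(0.1 + 0.2).isCloseTo(0.3, 1e-9)
})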

Ospec doesn't handle relative paths on Windows

Platform: Windows 10, 64 bit
Node: v10.16.0
NPM: 6.9.0

I'm trying to reproduce a testing setup from official docs.
When I run npm test either from powershell or cmd I get an error:

> ospec --require ./test-setup.js

internal/modules/cjs/loader.js:638
    throw err;
    ^

Error: Cannot find module './test-setup.js'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.resolve (internal/modules/cjs/helpers.js:33:19)
    at C:\Users\Predator\Desktop\dev\JS\mithril repos\test-test\node_modules\ospec\bin\ospec:33:31
    at Array.forEach (<anonymous>)
    at Object.<anonymous> (C:\Users\Predator\Desktop\dev\JS\mithril repos\test-test\node_modules\ospec\bin\ospec:31:15)
    at Module._compile (internal/modules/cjs/loader.js:776:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
npm ERR! Test failed.  See above for more details.

Then I change a line in package.json from
"test": "ospec --require ./test-setup.js"
to
"test": "ospec --require ../../../test-setup.js"

And I get another error caused by the require("mithril") line in test-setup.js (there was no mention of installing mithril in the docs).

Then I do npm install --save-dev mithril

And I get another error

Error: Cannot find module '../../../test-setup.js'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.resolve (internal/modules/cjs/helpers.js:33:19)
    at C:\Users\Predator\Desktop\dev\JS\mithril repos\test-test\node_modules\mithril\ospec\bin\ospec:33:31
    at Array.forEach (<anonymous>)
    at Object.<anonymous> (C:\Users\Predator\Desktop\dev\JS\mithril repos\test-test\node_modules\mithril\ospec\bin\ospec:31:15)
    at Module._compile (internal/modules/cjs/loader.js:776:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:787:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3

Then I change a line in package.json from
"test": "ospec --require ../../../test-setup.js"
to
"test": "ospec --require ../../../../test-setup.js"

And I get output:

> ospec --require ../../../../test-setup.js

––––––
All 0 assertions passed in 5ms

However, the expected output according to the docs is:

––––––
All 1 assertions passed in 0ms

'throws' doesn't work with async functions

ospec version: 4.1.1
node version: 14.15.4

throws does not seem to catch exceptions from async functions.

Code

o("synchronous", function() {
    o(() => {throw new Error()}).throws(Error)
})
o("asynchronous", function() {
    o(async () => {throw new Error()}).throws(Error)
})

Expected behavior

––––––
All 2 assertions passed (old style total: 2) 

Current behavior

asynchronous:
[AsyncFunction (anonymous)]
  should throw a
[Function: Error] { stackTraceLimit: 10 }
    at /home/vis/dev/repositories/2020-04-pqmail-prototype/report.js:6:40
    
––––––
1 out of 2 assertions failed (old style total: 2)  

Question: View realtime test progress?

I love ospec and its simplicity. Two questions:

  1. I didn't see this documented, but is it possible to view the test progress in real time? For example, many test runners will show a stream of . or X as each test passes or fails, and these are written to the terminal as soon as the result is in, rather than waiting until all test results are in and showing them all at once.
  2. If it's not possible, are there any plans for this? If not, would this be a welcome feature PR? And if so, how difficult would retrofitting such a feature be?
