tape

TAP-producing test harness for node and browsers

example

var test = require('tape');

test('timing test', function (t) {
    t.plan(2);

    t.equal(typeof Date.now, 'function');
    var start = Date.now();

    setTimeout(function () {
        t.equal(Date.now() - start, 100);
    }, 100);
});

test('test using promises', async function (t) {
    const result = await someAsyncThing();
    t.ok(result);
});

$ node example/timing.js
TAP version 13
# timing test
ok 1 should be strictly equal
not ok 2 should be strictly equal
  ---
    operator: equal
    expected: 100
    actual:   107
  ...

1..2
# tests 2
# pass  1
# fail  1

usage

You always need to require('tape') in test files. You can run the tests by usual node means (require('test-file.js') or node test-file.js). You can also run tests using the tape binary to utilize globbing, on Windows for example:

$ tape tests/**/*.js

tape's arguments are passed to the glob module. If you want glob to perform the expansion on a system where the shell performs such expansion, quote the arguments as necessary:

$ tape 'tests/**/*.js'
$ tape "tests/**/*.js"

Preloading modules

Additionally, it is possible to make tape load one or more modules before running any tests, by using the -r or --require flag. Here's an example that loads babel-register before running any tests, to allow for JIT compilation:

$ tape -r babel-register tests/**/*.js

Depending on the module you're loading, you may be able to parameterize it using environment variables or auxiliary files. Babel, for instance, will load options from .babelrc at runtime.

The -r flag behaves exactly like node's require, and uses the same module resolution algorithm. This means that if you need to load local modules, you have to prepend their path with ./ or ../ accordingly.

For example:

$ tape -r ./my/local/module tests/**/*.js

Please note that all modules loaded using the -r flag will run before any tests, regardless of when they are specified. For example, tape -r a b -r c will actually load a and c before loading b, since they are flagged as required modules.

things that go well with tape

tape maintains a fairly minimal core. Additional features are usually added by using another module alongside tape.

pretty reporters

The default TAP output is good for machines and humans that are robots.

If you want a more colorful / pretty output, there are lots of modules on npm that will output something pretty if you pipe TAP into them.

To use them, try node test/index.js | tap-spec or pipe it into one of the modules of your choice!

uncaught exceptions

By default, uncaught exceptions in your tests will not be intercepted, and will cause tape to crash. If you find this behavior undesirable, use tape-catch to report any exceptions as TAP errors.
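
For example, a minimal sketch assuming tape-catch's drop-in replacement usage:

var test = require('tape-catch'); // drop-in replacement for require('tape')

test('a thrown exception becomes a failing assertion', function (t) {
    JSON.parse('not json'); // throws; tape-catch should report this as a TAP failure instead of crashing
    t.end();
});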

other

command-line flags

While running tests, top-level configurations can be passed via the command line to specify desired behavior.

Available configurations are listed below:

--require

Alias: -r

This is used to load modules before running tests and is explained extensively in the preloading modules section.

--ignore

Alias: -i

This flag is used when tests from certain folders and/or files are not intended to be run. The argument is a path to a file that contains the patterns to be ignored. It defaults to .gitignore when passed with no argument.

tape -i .ignore '**/*.js'

An error is thrown if the file passed as the argument does not exist.

--ignore-pattern

Same functionality as --ignore, but passing the pattern directly instead of an ignore file. If both --ignore and --ignore-pattern are given, the --ignore-pattern argument is appended to the content of the ignore file.

tape --ignore-pattern 'integration_tests/**/*.js' '**/*.js'

--no-only

This is particularly useful in a CI environment, where a .only test is not supposed to go unnoticed.

By passing the --no-only flag, any existing .only test will cause the test run to fail.

tape --no-only **/*.js

Alternatively, the environment variable NODE_TAPE_NO_ONLY_TEST can be set to true to achieve the same behavior; the command-line flag takes precedence.
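
For example, assuming a POSIX shell (the glob pattern is only illustrative):

$ NODE_TAPE_NO_ONLY_TEST=true tape '**/*.js'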

methods

The assertion methods in tape are heavily influenced or copied from the methods in node-tap.

var test = require('tape')

test([name], [opts], cb)

Create a new test with an optional name string and optional opts object. cb(t) fires with the new test object t once all preceding tests have finished. Tests execute serially.

Available opts options are:

  • opts.skip = true/false. See test.skip.
  • opts.timeout = 500. Set a timeout for the test, after which it will fail. See test.timeoutAfter.
  • opts.objectPrintDepth = 5. Configure max depth of expected / actual object printing. The environment variable NODE_TAPE_OBJECT_PRINT_DEPTH can set the desired default depth for all tests; locally-set values will take precedence.
  • opts.todo = true/false. Test will be allowed to fail.
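
For example, a small sketch combining a few of these options (the skip condition and timeout value are arbitrary):

var test = require('tape');

test('optional windows-only behavior', {
    skip: process.platform !== 'win32', // skipped everywhere except Windows
    timeout: 2000                       // fail the test if it takes longer than 2 seconds
}, function (t) {
    t.plan(1);
    t.equal(require('path').sep, '\\');
});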

If you forget to t.plan() out how many assertions you are going to run, and you don't call t.end() explicitly or return a Promise that eventually settles, your test will hang.

If cb returns a Promise, it will be implicitly awaited. If that promise rejects, the test will be failed; if it fulfills, the test will end. Explicitly calling t.end() while also returning a Promise that fulfills is an error.
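
For example, a minimal sketch of the implicit-await behavior:

var test = require('tape');

test('implicit end via a returned Promise', async function (t) {
    var value = await Promise.resolve(42);
    t.equal(value, 42);
    // no t.end() here: the test ends when the returned Promise fulfills,
    // and a rejection (or thrown error) would fail the test instead
});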

test.skip([name], [opts], cb)

Generate a new test that will be skipped over.

test.onFinish(fn)

The onFinish hook will get invoked when ALL tape tests have finished right before tape is about to print the test summary.

fn is called with no arguments, and its return value is ignored.
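
For example, a minimal sketch that emits a comment once everything has run (the message is arbitrary):

var test = require('tape');

test.onFinish(function () {
    console.log('# all tests finished'); // runs once, right before tape prints the summary
});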

test.onFailure(fn)

The onFailure hook will get invoked whenever any tape test has failed.

fn is called with no arguments, and its return value is ignored.

t.plan(n)

Declare that n assertions should be run. t.end() will be called automatically after the nth assertion. If there are any more assertions after the nth, or after t.end() is called, they will generate errors.

t.end(err)

Declare the end of a test explicitly. If err is passed in, t.end will assert that it is falsy.

Do not call t.end() if your test callback returns a Promise.

t.teardown(cb)

Register a callback to run after the individual test has completed. Multiple registered teardown callbacks will run in order. Useful for undoing side effects, closing network connections, etc.
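
For example, a minimal sketch of undoing a side effect (the config object is hypothetical):

var test = require('tape');

var config = { verbose: false };

test('temporarily enables verbose logging', function (t) {
    var previous = config.verbose;
    config.verbose = true;

    // restore the original value once this test has completed
    t.teardown(function () {
        config.verbose = previous;
    });

    t.equal(config.verbose, true);
    t.end();
});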

t.fail(msg)

Generate a failing assertion with a message msg.

t.pass(msg)

Generate a passing assertion with a message msg.

t.timeoutAfter(ms)

Automatically timeout the test after ms milliseconds.

t.skip(msg)

Generate an assertion that will be skipped over.

t.ok(value, msg)

Assert that value is truthy with an optional description of the assertion msg.

Aliases: t.true(), t.assert()

t.notOk(value, msg)

Assert that value is falsy with an optional description of the assertion msg.

Aliases: t.false(), t.notok()

t.error(err, msg)

Assert that err is falsy. If err is truthy, its err.message is used as the description message.

Aliases: t.ifError(), t.ifErr(), t.iferror()
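
For example, a typical sketch with a node-style callback (the file name is only illustrative):

var fs = require('fs');
var test = require('tape');

test('reads package.json', function (t) {
    t.plan(2);
    fs.readFile('package.json', 'utf8', function (err, data) {
        t.error(err, 'no error reading the file'); // fails with err.message if err is truthy
        t.ok(data && data.length > 0, 'file has contents');
    });
});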

t.equal(actual, expected, msg)

Assert that Object.is(actual, expected) with an optional description of the assertion msg.

Aliases: t.equals(), t.isEqual(), t.strictEqual(), t.strictEquals(), t.is()

t.notEqual(actual, expected, msg)

Assert that !Object.is(actual, expected) with an optional description of the assertion msg.

Aliases: t.notEquals(), t.isNotEqual(), t.doesNotEqual(), t.isInequal(), t.notStrictEqual(), t.notStrictEquals(), t.isNot(), t.not()

t.looseEqual(actual, expected, msg)

Assert that actual == expected with an optional description of the assertion msg.

Aliases: t.looseEquals()

t.notLooseEqual(actual, expected, msg)

Assert that actual != expected with an optional description of the assertion msg.

Aliases: t.notLooseEquals()

t.deepEqual(actual, expected, msg)

Assert that actual and expected have the same structure and nested values using node's deepEqual() algorithm with strict comparisons (===) on leaf nodes and an optional description of the assertion msg.

Aliases: t.deepEquals(), t.isEquivalent(), t.same()
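
For example, a short sketch (the objects are arbitrary):

var test = require('tape');

test('deepEqual compares structure with strict leaves', function (t) {
    t.plan(1);
    t.deepEqual(
        { name: 'tape', tags: ['tap', 'test'] },
        { name: 'tape', tags: ['tap', 'test'] },
        'distinct objects with the same shape and values are equivalent'
    );
});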

t.notDeepEqual(actual, expected, msg)

Assert that actual and expected do not have the same structure and nested values using node's deepEqual() algorithm with strict comparisons (===) on leaf nodes and an optional description of the assertion msg.

Aliases: t.notDeepEquals(), t.notEquivalent(), t.notDeeply(), t.notSame(), t.isNotDeepEqual(), t.isNotDeeply(), t.isNotEquivalent(), t.isInequivalent()

t.deepLooseEqual(actual, expected, msg)

Assert that actual and expected have the same structure and nested values using node's deepEqual() algorithm with loose comparisons (==) on leaf nodes and an optional description of the assertion msg.

t.notDeepLooseEqual(actual, expected, msg)

Assert that actual and expected do not have the same structure and nested values using node's deepEqual() algorithm with loose comparisons (==) on leaf nodes and an optional description of the assertion msg.

Aliases: t.notLooseEqual(), t.notLooseEquals()

t.throws(fn, expected, msg)

Assert that the function call fn() throws an exception. expected, if present, must be a RegExp, Function, or Object. The RegExp matches the string representation of the exception, as generated by err.toString(). For example, if you set expected to /user/, the test will pass only if the string representation of the exception contains the word user. Any other exception will result in a failed test. The Function could be the constructor for the Error type thrown, or a predicate function to be called with that exception. Object in this case corresponds to a so-called validation object, in which each property is tested for strict deep equality. As an example, see the following two tests; each passes a validation object to t.throws() as the second parameter. The first test will pass, because all property values in the actual error object are deeply strictly equal to the property values in the validation object.

    const err = new TypeError("Wrong value");
    err.code = 404;
    err.check = true;

    // Passing test.
    t.throws(
        () => {
            throw err;
        },
        {
            code: 404,
            check: true
        },
        "Test message."
    );

This next test will fail, because all property values in the actual error object are not deeply strictly equal to the property values in the validation object.

    const err = new TypeError("Wrong value");
    err.code = 404;
    err.check = "true";

    // Failing test.
    t.throws(
        () => {
            throw err;
        },
        {
            code: 404,
            check: true // This is not deeply strictly equal to err.check.
        },
        "Test message."
    );

This is very similar to how Node's assert.throws() method tests validation objects (please see the Node assert.throws() documentation for more information).

If expected is not of type RegExp, Function, or Object, or omitted entirely, any exception will result in a passed test. msg is an optional description of the assertion.

Please note that the second parameter, expected, cannot be of type string. If a value of type string is provided for expected, then t.throws(fn, expected, msg) will execute, but the value of expected will be set to undefined, and the specified string will be set as the value for the msg parameter (regardless of what is actually passed as the third parameter). This can cause unexpected results, so please be mindful.
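
For example, a short sketch of the difference (the error and its message are made up):

var test = require('tape');

test('throws: string vs RegExp as the second argument', function (t) {
    t.plan(2);
    function fn() { throw new RangeError('user not found'); }

    // A string becomes the assertion message, so ANY exception passes here:
    t.throws(fn, 'user not found');

    // To actually match the exception text, use a RegExp (or an Error constructor):
    t.throws(fn, /user not found/, 'message mentions the missing user');
});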

t.doesNotThrow(fn, expected, msg)

Assert that the function call fn() does not throw an exception. expected, if present, limits what should not be thrown, and must be a RegExp or Function. The RegExp matches the string representation of the exception, as generated by err.toString(). For example, if you set expected to /user/, the test will fail only if the string representation of the exception contains the word user. Any other exception will result in a passed test. The Function is the exception thrown (e.g. Error). If expected is not of type RegExp or Function, or omitted entirely, any exception will result in a failed test. msg is an optional description of the assertion.

Please note that the second parameter, expected, cannot be of type string. If a value of type string is provided for expected, then t.doesNotThrow(fn, expected, msg) will execute, but the value of expected will be set to undefined, and the specified string will be set as the value for the msg parameter (regardless of what is actually passed as the third parameter). This can cause unexpected results, so please be mindful.

t.test(name, [opts], cb)

Create a subtest with a new test handle st from cb(st) inside the current test t. cb(st) will only fire when t finishes. Additional tests queued up after t will not be run until all subtests finish.

You may pass the same options that test() accepts.

t.comment(message)

Print a message without breaking the tap output. (Useful when using e.g. tap-colorize where output is buffered & console.log will print in incorrect order vis-a-vis tap output.)

Multiline output will be split by \n characters, and each one printed as a comment.

t.match(string, regexp, message)

Assert that string matches the RegExp regexp. Will fail when the first two arguments are the wrong type.

t.doesNotMatch(string, regexp, message)

Assert that string does not match the RegExp regexp. Will fail when the first two arguments are the wrong type.
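
For example, a small sketch covering both assertions:

var test = require('tape');

test('matching strings against RegExps', function (t) {
    t.plan(2);
    t.match('I love robots', /robots/, 'mentions robots');
    t.doesNotMatch('I love robots', /^robots/, 'does not start with "robots"');
});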

t.capture(obj, method, implementation = () => {})

Replaces obj[method] with the supplied implementation. obj must be a non-primitive, method must be a valid property key (string or symbol), and implementation, if provided, must be a function.

Calling the returned results() function will return an array of call result objects. The array of calls will be reset whenever the function is called. Call result objects will match one of these forms:

  • { args: [x, y, z], receiver: o, returned: a }
  • { args: [x, y, z], receiver: o, threw: true }

The replacement will automatically be restored on test teardown. You can restore it manually, if desired, by calling .restore() on the returned results function.

Modeled after tap.
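
For example, a minimal sketch of stubbing a method and inspecting its calls (the logger object is hypothetical; the exact call-result shape follows the forms listed above):

var test = require('tape');

test('capture calls to a method', function (t) {
    var logger = { warn: function (msg) { throw new Error('real warn should not run: ' + msg); } };

    // replace logger.warn for the duration of this test
    var results = t.capture(logger, 'warn', function (msg) { return 'stubbed: ' + msg; });

    logger.warn('disk almost full');

    t.deepEqual(results(), [
        { args: ['disk almost full'], receiver: logger, returned: 'stubbed: disk almost full' }
    ]);
    t.end();
});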

t.captureFn(original)

Wraps the supplied function. The returned wrapper has a .calls property, which is an array that will be populated with call result objects, described under t.capture().

Modeled after tap.

t.intercept(obj, property, desc = {}, strictMode = true)

Similar to t.capture(), but can be used to track get/set operations for any arbitrary property. Calling the returned results() function will return an array of call result objects. The array of calls will be reset whenever the function is called. Call result objects will match one of these forms:

  • { type: 'get', value: '1.2.3', success: true, args: [x, y, z], receiver: o }
  • { type: 'set', value: '2.4.6', success: false, args: [x, y, z], receiver: o }

If strictMode is true, and writable is false, and no get or set is provided, an exception will be thrown when obj[property] is assigned to. If strictMode is false in this scenario, nothing will be set, but the attempt will still be logged.

desc.get and desc.set are both optional; providing them can still be useful for logging get/set attempts.

desc must be a valid property descriptor, meaning that get/set are mutually exclusive with writable/value. Additionally, explicitly setting configurable to false is not permitted, so that the property can be restored.
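
For example, a rough sketch of tracking reads of a property (the object and values are hypothetical; the call-result fields follow the forms listed above):

var test = require('tape');

test('intercept reads of a property', function (t) {
    var pkg = { version: '1.2.3' };

    var results = t.intercept(pkg, 'version', { value: '9.9.9', writable: false });

    var seen = pkg.version; // this read should be logged as a 'get'

    var calls = results();
    t.equal(seen, '9.9.9');
    t.equal(calls[0] && calls[0].type, 'get');
    t.end();
});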

var htest = test.createHarness()

Create a new test harness instance, which is a function like test(), but with a new pending stack and test state.

By default the TAP output goes to console.log(). You can pipe the output somewhere else by calling htest.createStream().pipe() with a destination stream on the first tick.
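
For example, a minimal sketch of a separate harness whose output goes to stderr:

var test = require('tape');
var htest = test.createHarness();

// send this harness's TAP output to stderr instead of the default console.log
htest.createStream().pipe(process.stderr);

htest('runs on its own harness', function (t) {
    t.pass('independent of the global tape pending stack');
    t.end();
});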

test.only([name], [opts], cb)

Like test([name], [opts], cb) except that if you use .only, this is the only test case that will run for the entire process; all other test cases using tape will be ignored.

The --no-only flag, described above, can help ensure that no .only test is left running in a given environment (such as CI).

var stream = test.createStream(opts)

Create a stream of output, bypassing the default output stream that writes messages to console.log(). By default stream will be a text stream of TAP output, but you can get an object stream instead by setting opts.objectMode to true.

tap stream reporter

You can create your own custom test reporter using this createStream() api:

var test = require('tape');
var path = require('path');

test.createStream().pipe(process.stdout);

process.argv.slice(2).forEach(function (file) {
    require(path.resolve(file));
});

You could substitute process.stdout for whatever other output stream you want, like a network connection or a file.

Pass in test files to run as arguments:

$ node tap.js test/x.js test/y.js
TAP version 13
# (anonymous)
not ok 1 should be strictly equal
  ---
    operator: equal
    expected: "boop"
    actual:   "beep"
  ...
# (anonymous)
ok 2 should be strictly equal
ok 3 (unnamed assert)
# wheee
ok 4 (unnamed assert)

1..4
# tests 4
# pass  3
# fail  1

object stream reporter

Here's how you can render an object stream instead of TAP:

var test = require('tape');
var path = require('path');

test.createStream({ objectMode: true }).on('data', function (row) {
    console.log(JSON.stringify(row))
});

process.argv.slice(2).forEach(function (file) {
    require(path.resolve(file));
});

The output for this runner is:

$ node object.js test/x.js test/y.js
{"type":"test","name":"(anonymous)","id":0}
{"id":0,"ok":false,"name":"should be strictly equal","operator":"equal","actual":"beep","expected":"boop","error":{},"test":0,"type":"assert"}
{"type":"end","test":0}
{"type":"test","name":"(anonymous)","id":1}
{"id":0,"ok":true,"name":"should be strictly equal","operator":"equal","actual":2,"expected":2,"test":1,"type":"assert"}
{"id":1,"ok":true,"name":"(unnamed assert)","operator":"ok","actual":true,"expected":true,"test":1,"type":"assert"}
{"type":"end","test":1}
{"type":"test","name":"wheee","id":2}
{"id":0,"ok":true,"name":"(unnamed assert)","operator":"ok","actual":true,"expected":true,"test":2,"type":"assert"}
{"type":"end","test":2}

A convenient alternative to achieve the same:

// report.js
var test = require('tape');

test.createStream({ objectMode: true }).on('data', function (row) {
    console.log(JSON.stringify(row)) // for example
});

and then:

$ tape -r ./report.js **/*.test.js

install

With npm do:

npm install tape --save-dev

troubleshooting

Sometimes t.end() doesn’t preserve the expected output ordering.

For instance the following:

var test = require('tape');

test('first', function (t) {

  setTimeout(function () {
    t.ok(1, 'first test');
    t.end();
  }, 200);

  t.test('second', function (t) {
    t.ok(1, 'second test');
    t.end();
  });
});

test('third', function (t) {
  setTimeout(function () {
    t.ok(1, 'third test');
    t.end();
  }, 100);
});

will output:

ok 1 second test
ok 2 third test
ok 3 first test

because second and third assume first has ended before it actually does.

Use t.plan() instead to let other tests know they should wait:

var test = require('tape');

test('first', function (t) {

+  t.plan(2);

  setTimeout(function () {
    t.ok(1, 'first test');
-    t.end();
  }, 200);

  t.test('second', function (t) {
    t.ok(1, 'second test');
    t.end();
  });
});

test('third', function (t) {
  setTimeout(function () {
    t.ok(1, 'third test');
    t.end();
  }, 100);
});

license

MIT


tape's Issues

subtests are not run before parent's siblings

This is weird:

var test = require('tape');

var childRan = false;

test('parent', function(t) {
  t.test('child', function(t) {
    childRan = true;
    t.pass('child ran');
    t.end();
  });
  t.end();
});

test('uncle', function(t) {
  t.ok(childRan, 'Child should have run before moving on to next top-level test');
  t.end();
});

Not emitting `end` event from stream in plan errors

Does tape intentionally not fire end event when there is a plan error?

Example:

var test = require('tape');

test.createStream({ objectMode: true }).on('end', function (){
  console.log('ok');
});

test('fibwibblers and xyrscawlers', function (t) {
  t.plan(3);
  t.ok(false);
});

Won't output "ok"

Objects with properties with undefined values are different to deepEqual but not in output

This is better illustrated via example:

"use strict";
var test = require('tape');
test('example', function(t) {
    t.plan(1);
    t.deepEqual({}, {no: undefined});
});

Produces:

TAP version 13
# example
not ok 1 should be equivalent
  ---
    operator: deepEqual
    expected: {}
    actual:   {}
    at: Test._cb (/opt/home/turner/Dropbox/Code/sql-lexer/bad.js:5:7)
  ...

1..1
# tests 1
# pass  0
# fail  1

As you can see, tape's deepEqual considers these two data structures different, but the diagnostics make them look identical. This is due to outputting via JSON.stringify, which can't emit undefined as a value.

Either these two structures should be considered equal, or the diagnostics should be updated to reflect the actual values.

Sub-tests should be counted as assertions

In tap this works:

test('parent test', function (t) {
  t.plan(2);
  t.test('first child', function (t) {
    t.plan(1);
    t.pass('pass first child');
  })

  t.test(function (t) {
    t.plan(1);
    t.pass('pass second child');
  })
})

But in tape the child tests will never run because only assertions are counted against the plan.

async throws do not get reported in 1.0.2

require('tape')('strange', function (t) {
  setTimeout(function () {
   throw new Error('wtf?')
  }, 10)
})

output:

TAP version 13
# strange
not ok 1 test exited without ending
  ---
    operator: fail
    expected:
    actual:
  ...

this worked correctly in some older versions of tape, but not 1.0.2. running in [email protected]

duplicated test runs ~ callbacks are executed multiple times

using tape 1.0.4
npm 1.3.8
node 0.15 & 0.16

Test callbacks are executed more than once after the first test -- conveniently, the number of executions matches the position of the test() call in the suite:

second test runs twice
third test runs 3 times
etc.

The effect is that every assertion in the test is run multiple times.

grandchild tests not handled

test(function(t) {
  t.test(function (t) {
    t.test('pwned', function (t) {
      t.pass('this never gets run');
    });
  });
});

Test harness failing in Windows

Before I run npm test, I see several test failing, because I haven't yet installed the dependencies. But after I run npm test, it doesn't appear to run any tests.

Trace:

$ npm test

> [email protected] test c:\Documents and Settings\apenneba\Desktop\src\tape
> tap test/*.js

total ................................................... 0/1

not ok
npm ERR! Test failed.  See above for more details.
npm ERR! not ok code 0

System:

$ specs node os
Specs:

specs 0.4
https://github.com/mcandre/specs#readme

npm --version
1.2.17

node --version
v0.10.3

systeminfo | findstr /B /C:"OS Name" /C:"OS Version"
OS Name:                   Microsoft Windows XP Professional
OS Version:                5.1.2600 Service Pack 3 Build 2600

Nested asynchronous tests fail

The following fails in tape but works in node-tap - oddly enough the t.equal will be run twice in tape... Tested on tape 1.0.2

test(function(t) {
  var i = 0
  t.test('setup', function(t) {
    process.nextTick(function() {
      t.equal(i, 0, 'called once')
      i++
      t.end()
    })
  })


  t.test('teardown', function(t) {
    t.end()
  })

  t.end()
})

[CRITICAL] assert.plan does not actually cause tests to fail

The following tests pass when run with node test.js

var test = require('tape');

test('one', function(assert) {
    assert.plan(1);
    assert.ok(true);
});

test('two', function(assert) {
    assert.plan(2);
    assert.ok(true);
});

It seems that any .plan is not honored after the first one. If I change the first plan to 2, then the test fails as expected. However, nothing in the plan makes the second test fail.

This means there are tests out there which are passing that may not actually be passing.

node.js 0.10.0 changes `process.nextTick` semantics

I haven't investigated the specific error, but it surfaces when running tests on tar-parse: on 0.8.X, it runs all of the tests as expected; on 0.10.X, it runs the tests without waiting for each test to pass before running the next test.

in 0.10.X, setImmediate inherited the semantics of process.nextTick from node 0.8.X. I tested out replacing all instances of process.nextTick with setImmediate and all of tar-parse's tests ran as expected, so it's likely that that change is the culprit.

Skipped tests are not as convenient as they could be

If I mark a test as skipped with

t.test("Some tests", function(t) {
    t.plan(1);
    t.test("some test", {skip:true}, function(t) { ... });
    t.end();
});

  1. it is no longer counted in the plan, meaning that I have to alter the plan number when I mark a test as skipped, which is rather annoying (I might as well just comment the test out), and
  2. A skipped test isn't drawn to my attention very clearly.

It might be nice if skipped tests still counted in the plan (so one doesn't have to alter the plan number), and if the number of skipped tests were reported along with passed, failed, and total tests at the end of the test run.

Could use a diag() Method

It's pretty typical for TAP implementations to offer an interface for emitting diagnostic messages as TAP comments. Examples:

I tried to figure out where to plug one in to submit a patch, but the path to the output handle was over my head, so I used console.log() for my immediate use. Would be nice to have a canonical place for it.

print the file name tape is running in for debuggability

When a test suite grows to multiple files it's hard to tell which file a given failed assert belongs to.

One thing that can be done is

test("testing {{file}}.js", function (assert) {
  assert.end()
})

This approach works today but is tedious.

A different approach is for the lib/test.js constructor to create a new Error(), read its .stack, find the caller file, and store it in the test object.

Then, in the lib/results.js reporter, we can see the fileName on every emitted result. When the fileName changes from the previous one we know that we are running a test block in a new file, so we can just print that file as a comment in some form.

This way we have comments for files and comments for the name in the test block.

Then reporters downstream like faucet and tap-spec can do pretty things with them.

Like faucet --file can collapse all the asserts in a file rather than all the asserts in a test block (for super terseness).

I can make a PR for this.

More user friendly defaults

There is a module called tap-spec that prints nicely.

There is also faucet that has more terse defaults.

Currently when you run node test.js tape just spits out TAP to stdout.

What if tape checked whether stdout was a TTY and used faucet or tap-spec instead?

That way if you ever redirect stdout anywhere that's not a TTY like | some thing or > some file or | tee some file you will get the machine readable TAP just like you want.

However if you run node test.js plainly from the terminal you will get a more user friendly default output

Broken image in README

The readme.markdown file has an image reference to http://ci.testling.com/substack/tape but there's no image served there, so it appears as a broken image in the README.

use of console.log hard-coded

The README suggests console.log is only used if process.stdout isn't available. The code for tape 1.0.4 suggests otherwise:

  • lib/default_stream always uses console.log
  • the only use of process.stdout is in test/max_listeners.js

The irony is that I wouldn't have noticed had I not been irritated that node_redis used console.log, leading me to replace console.log with a hook to my logger, and then to unit test that hook using tape, only to see all of the tape output come out of my logger…

Getting mojibake on windows

When i run some chinese test on windows, i'm getting mojibake.

My test is:

test('构造函数', function(t) {
  t.ok(typeof Ant === 'function', 'Ant 是个构造函数');
  t.end();
});

Output getting mojibake:

TAP version 13
# 鏋勯€犲嚱鏁?ok 1 Ant 鏄釜鏋勯€犲嚱鏁?
1..1
# tests 1
# pass  1

# ok

V2.13.0 (using console.log instead of fs.writeSync) getting the right output:

TAP version 13
# 构造函数
ok 1 Ant 是个构造函数

1..1
# tests 1
# pass  1

# ok

This should be caused by #84.

getting 'test exited without ending' when using subtests with async calls

Found this when writing a test. When using subtests with async calls, test output looks incorrect and test fails with 'test exited without ending'. Here's a simple example:

var test = require('../');

var asyncFunction = function (callback) {
  setTimeout(callback, Math.random() * 50);
};

test('master test', function (t) {
  t.test('inner test 1', function (tt) {
    tt.pass('inner test 1 before async call');
    asyncFunction(function () {
      tt.pass('inner test 1 in async callback');
      tt.end();
    })
  });

  t.test('inner test 2', function (ttt) {
    ttt.pass('inner test 2 before async call');
    asyncFunction(function () {
      ttt.pass('inner test 2 in async callback');
      ttt.end();
    })
  });

  t.end(); // test fails with or without this, is t.end in master test necessary?
})

And the output:

TAP version 13
# master test
# inner test 1
ok 1 inner test 1 before async call
ok 2 inner test 1 in async callback
ok 3 inner test 1 before async call
ok 4 inner test 1 in async callback
not ok 5 test exited without ending
  ---
    operator: fail
    expected: 
    actual:   
  ...

Note that the first test is actually run twice. This isn't an issue when removing the async call:

var test = require('../');

test('master test', function (t) {
  t.test('inner test 1', function (tt) {
    tt.pass('inner test 1 no async');
    tt.end();
  });

  t.test('inner test 2', function (ttt) {
    ttt.pass('inner test 2 no async');
    ttt.end()
  });

  t.end(); // test fails with or without this, is t.end in master test necessary?
})

produces:

TAP version 13
# master test
# inner test 1
ok 1 inner test 1 no async
# inner test 2
ok 2 inner test 2 no async

1..2
# tests 2
# pass  2

# ok

I can open a pull request for these tests, if you like.

Tape exits with 1 when using test.only

I've got some tests using test.only and everything looks like it passes except it exits with 1

This is mainly confusing because the exit code and TAP output disagree on test success

Better error stack traces reporting

@chrisdickinson mentioned that tape doesn't print stack traces properly.

Reporting error cases for failing tests in a clean fashion is important for debugging & testing.

@chrisdickinson should provide more feedback and I can write a PR for some (failing) tests around a more descriptive error output and update the renderer

"t.error" is backwards

t.ok means the first argument must be truthy, and t.notOk means the first argument must be falsy. Most other test methods are like this.

However, t.error means the first argument must be falsy?

To be consistent with everything else, t.error should assert that the error is truthy, and t.notError that it is falsy.

Either way, could there be a method that's the inverse of t.error? Currently I have to do t.throws(function () { throw err; }, TypeError, 'is a type error') (or t.doesNotThrow) when I'd love to be able to do t.error(err, TypeError, 'err is a TypeError') (or t.notError)

why use t.skip?

I'm trying to think of a use case for t.skip and blanking. Could someone offer an example?

deepEqual is non-strict

Is it valid behaviour that deepEqual doesn't distinguish between numbers and strings?

// produces pass, while fail is expected
t.deepEqual([1,2,3], ['1','2','3']);

I.e., it uses non-strict equality (==)

test(name, conf, cb) --> conf is not used ever

This prevents you from making whole sets of tests skipped, which is insanely convenient:

var isWindows = process.platform === 'win32'
test('unix stuff', { skip: isWindows }, function(t) {
  // blardeebloop
})

Same for t.test(name, config, cb).

Actually, it looks like the Test class just takes an opts argument, but then doesn't do anything with it, and that argument isn't passed anything in any of the places where it's called.

more weird ordering issues...

Because sub tests are unshifted onto the pending queue, it makes for some surprising behavior:

test(function (t) {
  t.test('first', function (t) {
    t.pass('first');
    t.end();
  });
  t.test('second', function (t) {
    t.pass('second');
    t.end();
  });
  t.pass('parent');
  t.end();
})

Output is parent, second, first.

Browserified latest version doesn't work in IE8

Since stream-combiner was introduced, IE8 fails

Using string.prototype.split with regex makes the tap output incorrect due to IE < 9 string split bugs

https://github.com/dominictarr/split

Using Array.prototype.forEach in duplexer fails as it doesn't exist in IE < 9

https://github.com/Raynos/duplexer

I can help to fix these issues, but first I wanted your advice on how to deal with browser incompatibility issues.

Shall we
a.) Fix duplexer and split with a browser shim/ a fall back method?
b.) Fix browserify to add these shims automatically?
c.) ???

test.only broken

test.only is now broken (tested in tape 2.3.2):

var test = require('tape');

test('test 1', function (t) {
  t.end();
});

test.only('test 2', function (t) {
  t.end();
});

This fails with:

$ node only.js
TAP version 13
# test 2
not ok 1 test exited without ending
  ---
    operator: fail
    expected:
    actual:
  ...

1..1
# tests 1
# pass  0
# fail  1

The last version in which this code worked was tape 2.1.0.

tape exits 1 when tests pass

for this output:

$ node test/simple.js
TAP version 13
# sha1
ok 1 should be equal
# md5
ok 2 should be equal
# randomBytes

1..2
# tests 2
# pass  2

# ok

returns exit status 1

>echo $?
1

t.comment() = fail exit code

Is this correct behaviour?

Including any test comments seems to fail a suite.

// test.js
var test = require('tape')

test('comment test', function(t) {
  t.comment('this is a comment')
  t.ok(true)
  t.end()
})
> tape test.js
TAP version 13
# comment test
# this is a comment
ok 1 (unnamed assert)

1..1
# tests 1
# pass  1

# ok


> echo $?
1

before/beforeEach/after/afterEach

Hello :-)

I'm considering using tape for a project requiring IE8 testing, in order to replace chai/mocha.

Is there any way to achieve doing before/beforeEach/after/afterEach to setup fixtures and stubs before running assertions?

Thanks very much :-)

tape doesn't work in node 0.10

tape is unusable in node 0.10

This prevents tape from being used locally or in Travis CI for node 0.10.

Both issues (#21 & #27) have a reference to this being an issue in 0.10 that's fixed in 1.0. Even if it's fixed in 1.0, it's important that it runs in 0.10 as well. Projects shouldn't have to skip testing a stable node release because it may be fixed in a few months.

Enhancement: core tape produces only object stream

Hi James, I was thinking about your comments on more testing, and realized that we currently need to test two kinds of output simultaneously: an object stream and TAP-formatted output.

Things would become dramatically simplified if we had only one core output (an object stream with well-defined object formats for each kind of result) and a pluggable formatter for TAP output.

This would allow separate, more thoroughly tested modules, keep things DRY in lib/results.js, and encourage contributors to implement their own formatters (I have already seen an attempt at 'beautiful tape output' on GitHub).

It could look like this:

//requiring tests/*js

tape.run()                  // <-- returns core object stream
    .pipe(tapFormatter())   // <-- formats tap stdout

If you are interested in how it could look, I can sketch a draft version in a day or two.

Test parallelization

It seems like it would be possible to introduce some level of parallelization or batching to this framework. Since all tap output is generated by t.xxx(), it would be possible to save the output per test and then emit it at the end of the test run.

This would, of course, leave the user open to race conditions in their code, but it is useful for doing simple things like running selenium tests, which are inherently async and time-consuming purely due to net traffic.

`InternalError: too much recursion` after upgrade to tape 1.0.1

See the FF18 & FF19 runs for https://ci.testling.com/jfsiii/XCSSMatrix

The tests ran fine in those browsers until I upgraded to tape 1.0.1. Then I got errors like:

not ok 197 InternalError: too much recursion
  ---
    operator: error
    expected: 
    actual:   {}
    stack:
      [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1846
    [7]</Test.prototype.run@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3023
    [8]</Results.prototype.push/<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3448
    g@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1951
    [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1869
    [7]</Test.prototype.end@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3058
    [1]</</</</<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:549
    [7]</Test.prototype.run@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3025
    [8]</Results.prototype.push/<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3448
    g@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1951
    [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1869
    [7]</Test.prototype.end@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3058
    [1]</</</</<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:428
    [7]</Test.prototype.run@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3025
    [8]</Results.prototype.push/<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3448
    g@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1951
    [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1869

[SNIPPED TO PREVENT GITHUB ERROR]    
  ...

I'm pretty sure this is limited to tape (or perhaps tape + testling) because the tests went from passing to failing without any changes to the library or the tests themselves.
