tape's People

Contributors

abelmokadem, aredridel, braddunbar, cagross, davidanson, domenic, fongandrew, fredriknoren, fregante, grncdr, isaacs, jocrah, joris-van-der-wel, jtlapp, ljharb, lohfu, marcusberner, mattriley, mkls, mstade, nelsonic, nhamer, ntwb, r0mflip, raynos, rgruesbeck, ryanhamley, sceat, tehshrike, timgates42


tape's Issues

Getting mojibake on Windows

When I run some Chinese tests on Windows, I get mojibake.

My test is:

test('构造函数', function(t) {
  t.ok(typeof Ant === 'function', 'Ant 是个构造函数');
  t.end();
});

Output getting mojibake:

TAP version 13
# 鏋勯€犲嚱鏁?ok 1 Ant 鏄釜鏋勯€犲嚱鏁?
1..1
# tests 1
# pass  1

# ok

v2.13.0 (which used console.log instead of fs.writeSync) produces the right output:

TAP version 13
# 构造函数
ok 1 Ant 是个构造函数

1..1
# tests 1
# pass  1

# ok

This is likely caused by #84.

duplicated test runs ~ callbacks are executed multiple times

using tape 1.0.4
npm 1.3.8
node 0.15 & 0.16

Test callbacks are executed more than once after the first test -- conveniently, the number of executions matches the position of the test() call in the suite:

second test runs twice
third test runs 3 times
etc.

The effect is that every assertion in the test is run multiple times.
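
A minimal repro sketch of the behaviour described above (hypothetical; the counter names are mine):

var test = require('tape');

var counts = { first: 0, second: 0, third: 0 };

test('first', function (t) { counts.first++; t.pass('ran'); t.end(); });
test('second', function (t) { counts.second++; t.pass('ran'); t.end(); });
test('third', function (t) { counts.third++; t.pass('ran'); t.end(); });

// With the bug described above, counts would end up as
// { first: 1, second: 2, third: 3 } instead of all ones.
process.on('exit', function () { console.log(counts); });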

Better error stack trace reporting

@chrisdickinson mentioned that tape doesn't print stack traces properly.

Reporting error cases for failing tests in a clean fashion is important for debugging & testing.

@chrisdickinson should provide more feedback, and I can write a PR for some (failing) tests around more descriptive error output and update the renderer.

print the file name tape is running in for debuggability

When a test suite grows to multiple files it's hard to tell which file a given failed assert belongs to.

One thing that can be done is

test("testing {{file}}.js", function (assert) {
  assert.end()
})

This approach works today but is tedious.

A different approach is for the lib/test.js constructor to create a new Error(), read its .stack, find the caller's file, and store it on the test object.
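
A rough sketch of that caller-file extraction (assuming V8-style stack strings; the helper name, frame index, and regex are illustrative, not tape's actual code):

// Illustrative only: pull the caller's file name out of an Error stack.
function getCallerFile() {
  var lines = new Error().stack.split('\n');
  // lines[0] is "Error", lines[1] is this frame, lines[2] is the Test
  // constructor, lines[3] is the test file that called test(...).
  var frame = lines[3] || '';
  var match = /\(([^)]+):\d+:\d+\)/.exec(frame) || /at ([^ ]+):\d+:\d+/.exec(frame);
  return match ? match[1] : undefined;
}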

Then in the lib/results.js reporter we can see the fileName on every emitted result. When the fileName changes from the previous one, we know we are running a test block in a new file, so we can print that file as a comment in some form.

This way we have comments for files and comments for the test block names.

Then reporters downstream like faucet and tap-spec can do pretty things with them.

For example, faucet --file could collapse all the asserts in a file rather than all the asserts in a test block (for super terseness).

I can make a PR for this.

Tape exits with 1 when using test.only

I've got some tests using test.only and everything looks like it passes, yet the process exits with 1.

This is mainly confusing because the exit code and the TAP output disagree on whether the tests succeeded.

tape doesn't work in node 0.10

tape is unusable in node 0.10

This prevents tape from being used locally or in Travis CI for node 0.10.

Both issues (#21 & #27) have a reference to this being an issue in 0.10 that's fixed in 1.0. Even if it's fixed in 1.0, it's important that it runs in 0.10 as well. Projects shouldn't have to skip testing a stable node release because it may be fixed in a few months.

Objects with undefined property values are different according to deepEqual but not in the output

This is better illustrated via example:

"use strict";
var test = require('tape');
test('example', function(t) {
    t.plan(1);
    t.deepEqual({}, {no: undefined});
});

Produces:

TAP version 13
# example
not ok 1 should be equivalent
  ---
    operator: deepEqual
    expected: {}
    actual:   {}
    at: Test._cb (/opt/home/turner/Dropbox/Code/sql-lexer/bad.js:5:7)
  ...

1..1
# tests 1
# pass  0
# fail  1

As you can see, tape's deepEqual considers these two data structures different, but the diagnostics make them look identical. This is because the diagnostic output goes through JSON.stringify, which can't emit undefined as a value.

Either these two structures should be considered equal, or the diagnostics should be updated to reflect the actual values.
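
For reference, this is plain JSON.stringify behaviour rather than anything tape-specific:

// JSON.stringify drops properties whose value is undefined, so both sides
// of the failing assertion serialize to the same string.
console.log(JSON.stringify({}));                // {}
console.log(JSON.stringify({ no: undefined })); // {}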

test(name, conf, cb) --> conf is not used ever

This prevents you from making whole sets of tests skipped, which is insanely convenient:

var isWindows = process.platform === 'win32'
test('unix stuff', { skip: isWindows }, function(t) {
  // blardeebloop
})

Same for t.test(name, config, cb).

Actually, it looks like the Test class just takes an opts argument but then doesn't do anything with it, and that argument isn't passed in at any of the places where it's called.

node.js 0.10.0 changes `process.nextTick` semantics

I haven't investigated the specific error, but it surfaces when running tests on tar-parse: on 0.8.X, it runs all of the tests as expected; on 0.10.X, it runs the tests without waiting for each test to pass before running the next test.

In 0.10.X, setImmediate inherited the semantics of process.nextTick from node 0.8.X. I tested replacing all instances of process.nextTick with setImmediate and all of tar-parse's tests ran as expected, so it's likely that change is the culprit.
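
A small ordering demo of the difference (plain node, not tape code; behaviour as of node 0.10):

// On node 0.10+, process.nextTick callbacks run before any I/O or timers in
// the current turn, while setImmediate callbacks run after I/O, roughly
// matching what process.nextTick did on 0.8.
setImmediate(function () { console.log('setImmediate'); });
process.nextTick(function () { console.log('nextTick'); });
console.log('sync');
// Prints: sync, nextTick, setImmediate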

grandchild tests not handled

var test = require('tape');

test(function(t) {
  t.test(function (t) {
    t.test('pwned', function (t) {
      t.pass('this never gets run');
    });
  });
});

`InternalError: too much recursion` after upgrade to tape 1.0.1

See the FF18 & FF19 runs for https://ci.testling.com/jfsiii/XCSSMatrix

The tests ran fine in those browsers until I upgraded to tape 1.0.1. Then I got errors like:

not ok 197 InternalError: too much recursion
  ---
    operator: error
    expected: 
    actual:   {}
    stack:
      [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1846
    [7]</Test.prototype.run@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3023
    [8]</Results.prototype.push/<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3448
    g@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1951
    [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1869
    [7]</Test.prototype.end@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3058
    [1]</</</</<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:549
    [7]</Test.prototype.run@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3025
    [8]</Results.prototype.push/<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3448
    g@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1951
    [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1869
    [7]</Test.prototype.end@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3058
    [1]</</</</<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:428
    [7]</Test.prototype.run@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3025
    [8]</Results.prototype.push/<@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:3448
    g@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1951
    [10]</EventEmitter.prototype.emit@http://git.testling.com/work/jfsiii/repos/8d723e930c0bf5f541ade46bae6dd885067bae20.1367959429175/1367959429276.675740b8.js:1869

[SNIPPED TO PREVENT GITHUB ERROR]    
  ...

I'm pretty sure this is limited to tape (or perhaps tape + testling) because the tests went from passing to failing without any changes to the library or the tests themselves.

Not emitting `end` event from stream in plan errors

Does tape intentionally not fire the end event when there is a plan error?

Example:

var test = require('tape');

test.createStream({ objectMode: true }).on('end', function (){
  console.log('ok');
});

test('fibwibblers and xyrscawlers', function (t) {
  t.plan(3);
  t.ok(false);
});

Won't output "ok"

[CRITICAL] assert.plan does not actually cause tests to fail

The following tests pass when run with node test.js

var test = require('tape');

test('one', function(assert) {
    assert.plan(1);
    assert.ok(true);
});

test('two', function(assert) {
    assert.plan(2);
    assert.ok(true);
});

It seems that any .plan is not honored after the first one. If I change the first plan to 2, then the test fails as expected. However, nothing in the plan makes the second test fail.

This means there are tests out there which are passing that may not actually be passing.

Sub-tests should be counted as assertions

In tap this works:

test('parent test', function (t) {
  t.plan(2);
  t.test('first child', function (t) {
    t.plan(1);
    t.pass('pass first child');
  })

  t.test(function (t) {
    t.plan(1);
    t.pass('pass second child');
  })
})

But in tape the child tests will never run because only assertions are counted against the plan.

Enhancement: core tape produces only object stream

Hi James, I was thinking about your comments on more testing, and it occurred to me that we currently need to test two kinds of output simultaneously: an object stream and TAP-formatted output.

Things would be dramatically simplified if we had only one core output, an object stream with well-defined object formats for each kind of result, plus a pluggable formatter for TAP output.

This would allow for separate, more thoroughly tested modules, keep things DRY in lib/results.js, and encourage contributors to implement their own formatters (I have already seen an attempt at 'beautiful tape output' on GitHub).

It could look like this:

// requiring tests/*.js

tape.run()                  // <-- returns core object stream
    .pipe(tapFormatter())   // <-- formats tap stdout

If you are interested in how it could look, I can sketch a draft version in a day or two.
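
As a rough illustration only (the tape.run() API and the exact row format are hypothetical parts of this proposal, not tape's current interface), the formatter could be a simple object-mode transform:

var Transform = require('stream').Transform;

// Hypothetical formatter: consumes result objects, emits TAP text.
function tapFormatter() {
  var count = 0;
  var out = new Transform({ objectMode: true });
  out._transform = function (row, enc, cb) {
    if (row.type === 'test') this.push('# ' + row.name + '\n');
    if (row.type === 'assert') {
      count += 1;
      this.push((row.ok ? 'ok ' : 'not ok ') + count + ' ' + row.name + '\n');
    }
    cb();
  };
  return out;
}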

test.only broken

test.only is now broken (tested in tape 2.3.2):

var test = require('tape');

test('test 1', function (t) {
  t.end();
});

test.only('test 2', function (t) {
  t.end();
});

This fails with:

$ node only.js
TAP version 13
# test 2
not ok 1 test exited without ending
  ---
    operator: fail
    expected:
    actual:
  ...

1..1
# tests 1
# pass  0
# fail  1

The last version in which this code worked was tape 2.1.0.

async throws do not get reported in 1.0.2

require('tape')('strange', function (t) {
  setTimeout(function () {
   throw new Error('wtf?')
  }, 10)
})

output:

TAP version 13
# strange
not ok 1 test exited without ending
  ---
    operator: fail
    expected:
    actual:
  ...

This worked correctly in some older versions of tape, but not in 1.0.2. Running in [email protected].

why use t.skip?

I'm trying to think of a use case for t.skip and I'm drawing a blank. Could someone offer an example?

getting 'test exited without ending' when using subtests with async calls

Found this when writing a test. When using subtests with async calls, the test output looks incorrect and the test fails with 'test exited without ending'. Here's a simple example:

var test = require('../');

var asyncFunction = function (callback) {
  setTimeout(callback, Math.random() * 50);
};

test('master test', function (t) {
  t.test('inner test 1', function (tt) {
    tt.pass('inner test 1 before async call');
    asyncFunction(function () {
      tt.pass('inner test 1 in async callback');
      tt.end();
    })
  });

  t.test('inner test 2', function (ttt) {
    ttt.pass('inner test 2 before async call');
    asyncFunction(function () {
      ttt.pass('inner test 2 in async callback');
      ttt.end();
    })
  });

  t.end(); // test fails with or without this, is t.end in master test necessary?
})

And the output:

TAP version 13
# master test
# inner test 1
ok 1 inner test 1 before async call
ok 2 inner test 1 in async callback
ok 3 inner test 1 before async call
ok 4 inner test 1 in async callback
not ok 5 test exited without ending
  ---
    operator: fail
    expected: 
    actual:   
  ...

Note that the first test is actually run twice. This isn't an issue when removing the async call:

var test = require('../');

test('master test', function (t) {
  t.test('inner test 1', function (tt) {
    tt.pass('inner test 1 no async');
    tt.end();
  });

  t.test('inner test 2', function (ttt) {
    ttt.pass('inner test 2 no async');
    ttt.end()
  });

  t.end(); // test fails with or without this, is t.end in master test necessary?
})

produces:

TAP version 13
# master test
# inner test 1
ok 1 inner test 1 no async
# inner test 2
ok 2 inner test 2 no async

1..2
# tests 2
# pass  2

# ok

I can open a pull request for these tests, if you like.

Broken image in README

The readme.markdown file has an image reference to http://ci.testling.com/substack/tape but there's no image served there, so it appears as a broken image in the README.

Skipped tests are not as convenient as they could be

If I mark a test as skipped with

t.test("Some tests", function(t) {
    t.plan(1);
    t.test("some test", {skip:true}, function(t) { ... });
    t.end();
});

then:

  1. it is no longer counted in the plan, meaning that I have to alter the plan number when I mark a test as skipped, which is rather annoying (I might as well just comment the test out), and
  2. a skipped test isn't drawn to my attention very clearly.

It might be nice if skipped tests still counted in the plan (so one doesn't have to alter the plan number), and if the number of skipped tests were reported along with passed, failed, and total tests at the end of the test run.

more weird ordering issues...

Because sub tests are unshifted onto the pending queue, it makes for some surprising behavior:

var test = require('tape');

test(function (t) {
  t.test('first', function (t) {
    t.pass('first');
    t.end();
  });
  t.test('second', function (t) {
    t.pass('second');
    t.end();
  });
  t.pass('parent');
  t.end();
})

Output is parent, second, first.

deepEqual is non-strict

Is it valid behaviour that deepEqual doesn't distinguish between numbers and strings?

// produces a pass, while a fail is expected
t.deepEqual([1,2,3], ['1','2','3']);

I.e., it uses non-strict equality (==)
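
A possible workaround sketch, using the deep-equal module's strict option rather than anything built into tape at the time (module usage assumed from its README):

var test = require('tape');
var deepEqual = require('deep-equal');

test('strict deep comparison', function (t) {
  t.plan(1);
  // With strict: true, 1 and '1' are not considered equal.
  t.notOk(deepEqual([1, 2, 3], ['1', '2', '3'], { strict: true }),
    'numbers and strings should not be deeply equal');
});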

tape exits 1 when tests pass

for this output:

:node test/simple.js 
TAP version 13
# sha1
ok 1 should be equal
# md5
ok 2 should be equal
# randomBytes

1..2
# tests 2
# pass  2

# ok

returns exit status 1

>echo $?
1

Test parallelization

It seems like it would be possible to introduce some level of parallelization or batching to this framework. Since all TAP output is generated by t.xxx(), it would be possible to save the output per test and then emit each test's output at the end of the run.

This would, of course, leave the user open to race conditions in their code, but it is useful for simple things like running Selenium tests, which are inherently async and time-consuming purely due to network traffic.
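
A very rough sketch of the buffering idea (entirely hypothetical; tape has no such API today):

// Hypothetical: collect TAP lines per test and flush them only when that
// test ends, so concurrent tests would not interleave their output.
function bufferedOutput(out) {
  var buffers = {};
  return {
    write: function (testId, line) {
      (buffers[testId] = buffers[testId] || []).push(line);
    },
    flush: function (testId) {
      (buffers[testId] || []).forEach(function (line) { out.write(line); });
      delete buffers[testId];
    }
  };
}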

More user friendly defaults

There is a module called tap-spec that prints nicely.

There is also faucet that has more terse defaults.

Currently when you run node test.js tape just spits out TAP to stdout.

What if tape checked whether stdout was a TTY and used faucet or tap-spec instead?

That way, if you ever redirect stdout anywhere that's not a TTY, like | some thing or > some file or | tee some file, you will get the machine-readable TAP just like you want.

However, if you run node test.js plainly from the terminal, you will get more user-friendly default output.
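
A minimal sketch of that TTY check in user land (assuming tap-spec's documented transform-stream usage; this is not built into tape):

var test = require('tape');

if (process.stdout.isTTY) {
  // A human is watching: pretty-print through tap-spec.
  test.createStream()
    .pipe(require('tap-spec')())
    .pipe(process.stdout);
}
// Otherwise tape's default machine-readable TAP goes to stdout as usual.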

Browserified latest version doesn't work in IE8

Since stream-combiner was introduced, IE8 fails.

Using String.prototype.split with a regex makes the TAP output incorrect due to IE < 9 string split bugs:

https://github.com/dominictarr/split

Using Array.prototype.forEach in duplexer fails, as it doesn't exist in IE < 9:

https://github.com/Raynos/duplexer

I can help fix these issues, but first I wanted your advice on how to deal with browser incompatibility issues.

Shall we:

  a.) fix duplexer and split with a browser shim / fallback method?
  b.) fix browserify to add these shims automatically?
  c.) ???

t.comment() = fail exit code

Is this correct behaviour?

Including any test comments seems to fail a suite.

// test.js
var test = require('tape')

test('comment test', function(t) {
  t.comment('this is a comment')
  t.ok(true)
  t.end()
})

> tape test.js
TAP version 13
# comment test
# this is a comment
ok 1 (unnamed assert)

1..1
# tests 1
# pass  1

# ok


> echo $?
1

Test harness failing in Windows

Before I run npm test, I see several tests failing because I haven't yet installed the dependencies. But after I run npm test, it doesn't appear to run any tests.

Trace:

$ npm test

> [email protected] test c:\Documents and Settings\apenneba\Desktop\src\tape
> tap test/*.js

total ................................................... 0/1

not ok
npm ERR! Test failed.  See above for more details.
npm ERR! not ok code 0

System:

$ specs node os
Specs:

specs 0.4
https://github.com/mcandre/specs#readme

npm --version
1.2.17

node --version
v0.10.3

systeminfo | findstr /B /C:"OS Name" /C:"OS Version"
OS Name:                   Microsoft Windows XP Professional
OS Version:                5.1.2600 Service Pack 3 Build 2600

subtests are not run before parent's siblings

This is weird:

var test = require('tape');

var childRan = false;

test('parent', function(t) {
  t.test('child', function(t) {
    childRan = true;
    t.pass('child ran');
    t.end();
  });
  t.end();
});

test('uncle', function(t) {
  t.ok(childRan, 'Child should have run before moving on to next top-level test');
  t.end();
});

before/beforeEach/after/afterEach

Hello :-)

I'm considering using tape for a project requiring IE8 testing, in order to replace chai/mocha.

Is there any way to do before/beforeEach/after/afterEach to set up fixtures and stubs before running assertions?

Thanks very much :-)
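
Tape itself doesn't ship before/after hooks, but one common user-land pattern (the names here are illustrative, not a tape API) is to wrap test() so every wrapped test gets its own setup and teardown:

var test = require('tape');

// Illustrative helper: run setup before each wrapped test and teardown
// after it ends (tape's Test emits an 'end' event).
function wrap(setup, teardown) {
  return function (name, cb) {
    test(name, function (t) {
      var fixture = setup();
      t.on('end', function () { teardown(fixture); });
      cb(t, fixture);
    });
  };
}

var myTest = wrap(
  function setup() { return { value: 42 }; },
  function teardown(fixture) { /* release fixture here */ }
);

myTest('uses the fixture', function (t, fixture) {
  t.equal(fixture.value, 42);
  t.end();
});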

use of console.log hard-coded

The README suggests console.log is only used if process.stdout isn't available. The code for tape 1.0.4 suggests otherwise:

  • lib/default_stream always uses console.log
  • the only use of process.stdout is in test/max_listeners.js

The irony is that I wouldn't have noticed had I not been irritated that node_redis used console.log, leading me to replace console.log with a hook to my logger, and then to unit test that hook using tape, only to see all of the tape output come out of my logger…
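
One possible workaround, assuming the test.createStream() API shown in an earlier issue above, is to route the TAP output through a stream you control instead of the default console.log-based stream:

var test = require('tape');

// Piping the created stream sends tape's output through it instead of the
// default stream, so nothing goes through console.log.
test.createStream().pipe(process.stdout);

test('example', function (t) {
  t.pass('output goes through the piped stream');
  t.end();
});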

Could use a diag() Method

It's pretty typical for TAP implementations to offer an interface for emitting diagnostic messages as TAP comments. Examples:

I tried to figure out where to plug one in to submit a patch, but the path to the output handle was over my head, so I used console.log() for my immediate use. It would be nice to have a canonical place for it.
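
For what it's worth, the t.comment() method that appears in the 't.comment() = fail exit code' issue above is exactly such an interface; a minimal sketch:

var test = require('tape');

test('diagnostics', function (t) {
  // Emitted as a TAP comment line: "# checked the fixture directory"
  t.comment('checked the fixture directory');
  t.pass('work done');
  t.end();
});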

Nested asynchronous tests fail

The following fails in tape but works in node-tap; oddly enough, the t.equal will be run twice in tape. Tested on tape 1.0.2:

var test = require('tape');

test(function(t) {
  var i = 0
  t.test('setup', function(t) {
    process.nextTick(function() {
      t.equal(i, 0, 'called once')
      i++
      t.end()
    })
  })


  t.test('teardown', function(t) {
    t.end()
  })

  t.end()
})

"t.error" is backwards

t.ok means the first argument must be truthy, and t.notOk means the first argument must be falsy. Most other test methods are like this.

However, t.error means the first argument must be falsy?

To be consistent with everything else, t.error should assert that the error is truthy, and t.notError that it is falsy.

Either way, could there be a method that's the inverse of t.error? Currently I have to do t.throws(function () { throw err; }, TypeError, 'is a type error') (or t.doesNotThrow) when I'd love to be able to do t.error(err, TypeError, 'err is a TypeError') (or t.notError)
