
grover's People

Contributors

caridy, customcommander, davglass, petey, slieschke


grover's Issues

Glob pattern does not work on the windows platform

Giving a glob pattern such as grover ./tests/*.html does not work on Windows. Grover ends up looking for a literal file named ./tests/*.html instead of expanding the pattern against the directory contents, and fails.

The number of tests is wrong in TAP format

I set up 5 tests. However, the plan line shows 1..10 instead of 1..5 in the TAP report.

➜  grover-samples git:(master) npm run test

> [email protected] test /Users/stanleyn/Projects/stanleyhlng/git/grover-samples
> grover test/lib/*.test.html --outfile artifacts/test/results.tap --tap

Starting Grover on 1 files with [email protected]
  Running 15 concurrent tests at a time.
✔ [User Test Suite]: Passed: 5 Failed: 0 Total: 5 (ignored 0) (0.046 seconds)
Writing files in TAP format to: artifacts/test/results.tap
----------------------------------------------------------------
✔ [Total]: Passed: 5 Failed: 0 Total: 5 (ignored 0) (0.046 seconds)
  [Grover Execution Timer] 1.714 seconds


➜  grover-samples git:(master) cat artifacts/test/results.tap --tap
1..10
#Begin testcase User(0 failed of 5)
ok 1 - should return true
ok 2 - should instantiate Y.User object
ok 3 - should return "Stanley" in the first name
ok 4 - should return "Ng" in the last name
ok 5 - should return "[email protected]" in the email
#End testcase User
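For reference, a TAP plan line states the total number of tests in the file, so the report for this run would be expected to begin:

```
1..5
#Begin testcase User(0 failed of 5)
```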

samples
https://github.com/stanleyhlng/grover-samples

Grover should optionally combohandle local files when using the --server option.

It would be a nice addition if grover could serve local files via the combohandler when using the --server option.

I've forked and amended the server.js file to suit my own test purposes here: olanb7@fe0ae89. However, I'm not sure if this is already accounted for elsewhere in the tool. I'm also unsure of the best way to make this configurable: a JSON file with root-to-relative-location mappings, perhaps?
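As a sketch of what such a configuration might look like (this is a hypothetical shape, not an existing grover option), a JSON file mapping combo roots to local directories could be passed alongside --server:

```json
{
    "roots": {
        "/yui3/": "./node_modules/yui/",
        "/app/": "./build/"
    }
}
```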

How to let grover know about modules the loader doesn't know about

I'm running some tests on a few modules the loader doesn't know about. When running the individual tests, I added a groups entry to the YUI({groups: xxx}).use() call in the individual unit test HTML files, but grover doesn't seem to read that and fails all the tests that work fine when run individually. How can I let it know about those dependencies?
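One approach that works with stock YUI 3 is to define the same groups configuration in a global YUI_config object in a script loaded before the YUI seed, so every YUI() instance on the test page picks it up. The group name, paths, and module below are hypothetical placeholders:

```javascript
// Hypothetical group/module names -- adjust to your project layout.
// YUI 3 merges a global YUI_config into every YUI() instance's config,
// so the loader learns about these modules without editing each test file.
var YUI_config = {
    groups: {
        mylib: {
            base: '../../build/',
            modules: {
                'my-widget': {
                    path: 'my-widget/my-widget.js',
                    requires: ['widget']
                }
            }
        }
    }
};
```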

Istanbul coverage + XML output not cooperating.

When using the -o and --xml options on Istanbul-instrumented code, yuitest throws an error:

Case to reproduce:

test.html

<html>
<head>
</head>
<body>
<script src="http://yui.yahooapis.com/3.7.0/build/yui/yui.js"></script>
<script type="text/javascript" src="instrumented.js"></script>
<script type="text/javascript" src="test.js"></script>
</body>
</html>

instrumented.js

if (typeof __coverage__ === 'undefined') { __coverage__ = {}; }
if (!__coverage__['/home/ascheffl/grovertemp/noninstrumented.js']) {
   __coverage__['/home/ascheffl/grovertemp/noninstrumented.js'] = {"path":"/home/ascheffl/grovertemp/noninstrumented.js","s":{"1":0,"2":0},"b":{},"f":{"1":0},"fnMap":{"1":{"name":"(anonymous_1)","line":1,"loc":{"start":{"line":1,"column":19},"end":{"line":1,"column":31}}}},"statementMap":{"1":{"start":{"line":1,"column":0},"end":{"line":3,"column":37}},"2":{"start":{"line":2,"column":4},"end":{"line":2,"column":21}}},"branchMap":{}};
}
var __cov_LOJ6ji5HlmrXkM0DuK_2nA = __coverage__['/home/ascheffl/grovertemp/noninstrumented.js'];
__cov_LOJ6ji5HlmrXkM0DuK_2nA.s['1']++;YUI.add('example',function(Y){__cov_LOJ6ji5HlmrXkM0DuK_2nA.f['1']++;__cov_LOJ6ji5HlmrXkM0DuK_2nA.s['2']++;Y.example=true;},'@VERSION@',{requires:['base']});

test.js

YUI({
    logInclude: { TestRunner: true }
}).use('test', 'console', 'example', function(Y) {
    var suite = new Y.Test.Suite("Example failure suite");

    suite.add(new Y.Test.Case({
        name : 'Example failure test',
        setUp : function () {

        },
        testExample: function () {
            Y.Assert.isTrue(Y.example);
        }

    }));

    Y.Test.Runner.add(suite);

    //initialize the console
    var yconsole = new Y.Console({
        newestOnTop: false
    });

    Y.Test.Runner.run();
});
$ grover test.html -o grover_results.xml --xml

[snip]

/home/y/lib/node_modules/grover/node_modules/yuitest/lib/yuitest-node.js:2264
        return text.replace(/[<>"'&]/g, function(value){
                    ^
TypeError: Cannot call method 'replace' of undefined
    at xmlEscape (/home/y/lib/node_modules/grover/node_modules/yuitest/lib/yuitest-node.js:2264:21)
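A defensive guard in the escaping helper would avoid the crash; this is a sketch of the idea, not the actual yuitest patch:

```javascript
// Coerce undefined/null to an empty string before escaping, so a result
// entry with a missing message can no longer crash the XML reporter.
function xmlEscape(text) {
    var map = { '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&apos;', '&': '&amp;' };
    return String(text == null ? '' : text).replace(/[<>"'&]/g, function (value) {
        return map[value];
    });
}
```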

Nested testsuites in XML output are not parseable by Jenkins

This is a regression in Grover 0.1.16.

Grover 0.1.16 introduced a change that causes additional <testsuites> tags to be nested inside the root <testsuites> tag.

Jenkins will only report tests in <testcase/> tags that are nested in <testsuite> tags directly under the root <testsuites> tag. Nested <testsuites> elements are silently ignored. This covers the majority of YUI's use of Grover.

(Examples can be found on the internal YUI dev-3.x component build. The regression occurred on build 240. The last correctly working use of Grover was on build 239. Compare the junit.xml artifacts on those builds.)

The --istanbul-report overrides the -co flag

If one uses the --istanbul-report and -co (or --coverageFileName) flags on the same grover command, only the Istanbul HTML coverage report is generated and the lcov.info file is not.

Both lcov.info and the LCOV HTML report are expected to be generated when both flags are present.

Issue in running grover on windows

When I try to run grover on my module, it throws the error below.

C:\workspace\demo\branches\feature-branch\src>grover ./my-widget/tests/unit/my-widget.html
Starting Grover on 1 files with [email protected]
  Running 15 concurrent tests at a time.

events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: spawn ENOENT
    at errnoException (child_process.js:977:11)
    at Process.ChildProcess._handle.onexit (child_process.js:768:34)

Appreciate any inputs to get this resolved.

'output' option generates incorrect / duplicate count.

The -o option generates an incorrect, duplicated 'passed' count in the generated output file.

For example, running grover on the yui3 'cache' module as below shows the 'passed' count as 58 instead of the actual count of 29.

grover tests/unit/cache.html -o ./cachle-results.json --json

The issue seems to be in the results-normalizing code. Swapping the order of the two branches, as below, seems to resolve the issue.

// Accumulate into an existing numeric total first...
if (data[key] && (typeof data[key] === 'number')) {
  data[key] += r[key];
}

// ...and only seed the value when no total exists yet, so the first
// result is not added to itself.
if (!data[key]) {
  data[key] = r[key];
}

cc: @caridy , @davglass

Test Results (TAP) are not rolled up

Much like coverage data wasn't rolled up (#3), the test results in most if not all formats are not rolled up in the final output.

This results in the test output displaying the results from only one of the test HTML files, as opposed to all of them.

Error when remove an iframe on teardown

Hi,

I've just started to use grover, and I found a bug:
If I add an iframe to the page during the test and remove it on tearDown I get the following error:

$ grover index.html 
Starting Grover on 1 files with [email protected]
  Running 15 concurrent tests at a time.
✖ [index.html]: Passed: 0 Failed: 1 Total: 1 (ignored 0)
    Javascript Error
       Phantom failed to load this page
----------------------------------------------------------------
✖ [Total]: Passed: 0 Failed: 1 Total: 1 (ignored 0)
  [Timer] 1.275 seconds

Here is an example code:

https://gist.github.com/3877937

I just call grover index.html.
grover -v says: 0.0.25
node -v says: v0.8.10

I tried to track down the issue but wasn't successful. Maybe the bug is in PhantomJS, but I'm not sure.
If I can help somehow, let me know.

Thanks.

0.1.16 broke junit xml reporting with YUI when testing multiple files

Around 3 months ago, our automated build started having issues collecting test results. I've finally narrowed down the cause: A change to lib/process.js in 0.1.16 broke the way that results are passed to yuitest, which caused it to get confused and treat every testsuite after the first as a new root.

Before: (With coverage and superfluous tests stripped out)

[
    {
        "name": "mb-cta-link Test Suite",
        "passed": 5,
        "failed": 0,
        "errors": 0,
        "ignored": 0,
        "total": 5,
        "duration": 70,
        "type": "report",
        "MB Ads FE Call To Action Link": {
            "name": "MB Ads FE Call To Action Link",
            "passed": 5,
            "failed": 0,
            "errors": 0,
            "ignored": 0,
            "total": 5,
            "duration": 66,
            "type": "testcase",
            "test initializer": {
                "result": "pass",
                "message": "Test passed",
                "type": "test",
                "name": "test initializer",
                "duration": 4
            }
        },
        "timestamp": "Mon 21 Oct 2013 05:23:39 PM PDT",
        "coverage": {},
        "coverageType": "istanbul",
        "consoleInfo": []
    },
    {
        "name": "mb-base-widget Test Suite",
        "passed": 11,
        "failed": 0,
        "errors": 0,
        "ignored": 0,
        "total": 11,
        "duration": 148,
        "type": "report",
        "MB Ads FE Base Widget": {
            "name": "MB Ads FE Base Widget",
            "passed": 11,
            "failed": 0,
            "errors": 0,
            "ignored": 0,
            "total": 11,
            "duration": 145,
            "type": "testcase",
            "testInitializer": {
                "result": "pass",
                "message": "Test passed",
                "type": "test",
                "name": "testInitializer",
                "duration": 2
            }
        },
        "timestamp": "Mon 21 Oct 2013 05:23:40 PM PDT",
        "coverage": {},
        "coverageType": "istanbul",
        "consoleInfo": []
    }
]

After:

{
    "name": "mb-cta-link Test Suite",
    "passed": 10,
    "failed": 0,
    "errors": 0,
    "ignored": 0,
    "total": 10,
    "duration": 132,
    "type": "report",
    "MB Ads FE Call To Action Link": {
        "name": "MB Ads FE Call To Action Link",
        "passed": 5,
        "failed": 0,
        "errors": 0,
        "ignored": 0,
        "total": 5,
        "duration": 64,
        "type": "testcase",
        "test initializer": {
            "result": "pass",
            "message": "Test passed",
            "type": "test",
            "name": "test initializer",
            "duration": 4
        }
    },
    "timestamp": "Mon 21 Oct 2013 05:20:59 PM PDT",
    "coverage": {},
    "coverageType": "istanbul",
    "consoleInfo": [],
    "mb-base-widget Test Suite": {
        "name": "mb-base-widget Test Suite",
        "passed": 22,
        "failed": 0,
        "errors": 0,
        "ignored": 0,
        "total": 22,
        "duration": 292,
        "type": "report",
        "MB Ads FE Base Widget": {
            "name": "MB Ads FE Base Widget",
            "passed": 11,
            "failed": 0,
            "errors": 0,
            "ignored": 0,
            "total": 11,
            "duration": 144,
            "type": "testcase",
            "testInitializer": {
                "result": "pass",
                "message": "Test passed",
                "type": "test",
                "name": "testInitializer",
                "duration": 2
            }
        },
        "timestamp": "Mon 21 Oct 2013 05:20:59 PM PDT",
        "coverage": {},
        "coverageType": "istanbul",
        "consoleInfo": []
    }
}

And what the result.xml looks like when it's broken:

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
    <testsuite name="MB Ads FE Call To Action Link" tests="1" failures="0" time="0.066">
        <testcase name="test initializer" time="0.004"></testcase>
    </testsuite>
    <testsuites>
        <testsuite name="MB Ads FE Base Widget" tests="1" failures="0" time="0.145">
            <testcase name="testInitializer" time="0.002"></testcase>
        </testsuite>
    </testsuites>
</testsuites>

In this case, our build would report only 1 test success, not 2, as the latter test suite is wrapped in a <testsuites> tag.
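For comparison, Jenkins-parseable output keeps every <testsuite> as a direct child of the single root <testsuites> element, roughly:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
    <testsuite name="MB Ads FE Call To Action Link" tests="1" failures="0" time="0.066">
        <testcase name="test initializer" time="0.004"></testcase>
    </testsuite>
    <testsuite name="MB Ads FE Base Widget" tests="1" failures="0" time="0.145">
        <testcase name="testInitializer" time="0.002"></testcase>
    </testsuite>
</testsuites>
```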

Coverage: Several test pages for the same piece of javascript are not rolled up correctly

If I have all the widget code in foo.js and several different html files with tests for different classes contained within foo.js, the coverage report is not rolled up based on the union of the test results of foo.js.

Example:
test 1 in test1.html tests Class 1 in foo.js, lines 1-50
test 2 in test2.html tests Class 2 in foo.js, lines 51-100
test 3 in test3.html tests Class 3 in foo.js lines 101-150

The result will show 33% coverage since each test only covered 33% of the code, but together all these tests cover ~100% of the code in foo.js.
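The expected behaviour is a union: a line counts as covered if any test page hit it. A minimal sketch of that roll-up over per-line hit counts (mergeCoverage is an illustrative name, not grover's actual code):

```javascript
// Sum per-line hit counts from two coverage maps, so lines exercised by
// any one test page count as covered in the combined report.
function mergeCoverage(a, b) {
    var merged = {};
    Object.keys(a).concat(Object.keys(b)).forEach(function (line) {
        merged[line] = (a[line] || 0) + (b[line] || 0);
    });
    return merged;
}
```

Applied pairwise across the reports from test1.html, test2.html, and test3.html, this would credit foo.js with the combined ~100% coverage instead of each page's 33%.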
