
protractor-flake's Introduction

Protractor Flake (build status and npm badges; chat at https://gitter.im/NickTomlin/protractor-flake)

Rerun potentially flakey protractor tests before failing.

npm i protractor-flake

# or globally for easier cli usage
npm i -g protractor-flake

Usage

Via the CLI:

npm i -g protractor-flake

# protractor-flake <protractor-flake-options> -- <options to be passed to protractor>
protractor-flake --parser standard  --max-attempts=3 -- path/to/protractor.conf.js

See src/options.ts for the full list of command line options.

Protractor flake expects protractor to be on $PATH by default, but you can use the --protractor-path argument to point to the protractor executable.

Or programmatically:

// using commonjs:
var protractorFlake = require('protractor-flake')
// OR using TypeScript's import/require syntax:
import protractorFlake = require('protractor-flake')

// Default Options
protractorFlake({
  parser: 'standard'
}, function (status, output) {
  process.exit(status)
})

// Full Options
protractorFlake({
  protractorPath: '/path/to/protractor',
  maxAttempts: 3,
  parser: 'standard',
  // expects node to be in path
  // set this to wherever the node bin is located
  nodeBin: 'node',
  // set color to one of the colors available at 'chalk' - https://github.com/chalk/ansi-styles#colors
  color: 'magenta',
  protractorArgs: [],
  // specify a different protractor config to apply after the first execution attempt
  // either specify a config file, or cli args (ex. --capabilities.browser=chrome)
  protractorRetryConfig: 'path/to/<protractor-retry-config>.js' 
}, function (status, output) {
  process.exit(status)
})

Parsers

Protractor flake defaults to the standard parser, which will typically pick up failures from non-sharded, non-multi-capability test runs using Jasmine 1 + 2 and Mocha.

There are a few ways that you can customize parsing:

  • overriding the default with the parser option, specifying one of the built-in parsers
  • providing a path to a module (e.g. /my/module.js or ./module.js) that exports a parser
  • passing a parser object directly (when used programmatically)

Parsers should be defined as an object with a parse method (and optionally a name property):

module.exports = {
  name: 'my-custom-parser',
  parse (protractorTestOutput) {
    let failedSpecs = new Set()
    // ... analyze protractor test output
    // ... and add failed spec paths to the set
    failedSpecs.add('path/to/failed/specfile')

    // return the spec files to be re-run by protractor-flake
    // if an empty array is returned, all specs will be re-run
    return [...failedSpecs]
  }
}
Or, in TypeScript:

import Parser from 'protractor-flake/lib/parsers/parser'

const MyParser: Parser = {
  name: 'my-custom-parser',
  parse (protractorTestOutput) {
    let failedSpecs = new Set()
    // ... analyze protractor test output
    // ... and add failed spec paths to the set
    failedSpecs.add('path/to/failed/specfile')

    // return the spec files to be re-run by protractor-flake
    // if an empty array is returned, all specs will be re-run
    return [...failedSpecs]
  }
}

export = MyParser
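When calling protractor-flake programmatically, a parser object can also be passed inline via the parser option. A minimal sketch follows; the "Specs:" matching below is purely illustrative (not the library's real parsing logic), and the parser name is hypothetical:

```javascript
// Hypothetical inline parser: collects any paths logged after "Specs: ".
// A real parser would analyze the failure output of your test framework.
var myParser = {
  name: 'inline-specs-parser',
  parse: function (output) {
    // find every "Specs: /path/to/file.js" occurrence in the output
    var matches = output.match(/Specs: (\S+\.js)/g) || []
    return matches.map(function (line) {
      return line.replace('Specs: ', '')
    })
  }
}

module.exports = myParser

// usage sketch:
// protractorFlake({ parser: myParser }, function (status) { process.exit(status) })
```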

Parser documentation

Caveats

This has not yet been tested with Protractor + Mocha. It should function similarly. Please update with an issue or PR if this is not the case.

Tests will not re-run properly (all tests will run each time) if you use a custom reporter that does not log stacktraces for failed tests. For example, if you are using jasmine-spec-reporter with Jasmine 2.0, make sure to set displayStacktrace: 'specs' or displayStacktrace: 'all'.
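For example, a sketch of such an onPrepare, mirroring the jasmine-spec-reporter 2.x style used elsewhere in this document (newer reporter versions nest this option differently):

```javascript
// protractor.conf.js fragment (illustrative)
onPrepare: function () {
  var SpecReporter = require('jasmine-spec-reporter')
  // 'specs' (or 'all') keeps stacktraces in the output so protractor-flake
  // can parse failed spec paths and re-run only those files
  jasmine.getEnv().addReporter(new SpecReporter({ displayStacktrace: 'specs' }))
}
```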

Contributors

See CONTRIBUTING.md

protractor-flake's People

Contributors

aamirafridi97, bbrandes, brandenbyers, dependabot[bot], jimmygchen, jithinkmatthew, jrust, kylelilly, lteacher, mike-eason, nicktomlin, qualityshepherd, sberan, snyderizer, sul4bh, theandrewlane, wswebcreation


protractor-flake's Issues

Compatibility with gulp-protractor

Hey,

I was wondering if someone managed to integrate protractor-flake with the gulp-protractor project (https://github.com/mllrsohn/gulp-protractor).

I was trying to create the following gulp task:

gulp.task("test-e2e-new", function () {
  gulp.src(path.join(consts.appRoot, 'test/**/*.js'))
    .pipe(protractorFlake({
      maxAttempts: 3,
      protractorPath: './gulp/protractor.conf.js'
    }, function (status, output) {
      process.exit(status);
    }))
    .on('error', function (e) { console.log(e); });
});

However, when I'm trying to run it I get:

[09:20:28] Using gulpfile /Volumes/case_sensitive/dev/myCompany/trunk/myService/build/gulpfile.js
[09:20:28] Starting 'test-e2e-new'...
[09:20:28] 'test-e2e-new' errored after 11 ms
[09:20:28] TypeError: Cannot read property 'on' of undefined
at DestroyableTransform.Readable.pipe (/Volumes/case_sensitive/dev/myCompany/trunk/myService/build/node_modules/vinyl-fs/node_modules/readable-stream/lib/_stream_readable.js:516:7)
at Gulp. (/Volumes/case_sensitive/dev/myCompany/trunk/myService/build/gulp/test.gulp.js:34:6)
at module.exports (/Volumes/case_sensitive/dev/myCompany/trunk/myService/build/node_modules/orchestrator/lib/runTask.js:34:7)
at Gulp.Orchestrator._runTask (/Volumes/case_sensitive/dev/myCompany/trunk/myService/build/node_modules/orchestrator/index.js:273:3)
at Gulp.Orchestrator._runStep (/Volumes/case_sensitive/dev/myCompany/trunk/myService/build/node_modules/orchestrator/index.js:214:10)
at Gulp.Orchestrator.start (/Volumes/case_sensitive/dev/myCompany/trunk/myService/build/node_modules/orchestrator/index.js:134:8)
at /usr/local/lib/node_modules/gulp/bin/gulp.js:129:20
at process._tickCallback (node.js:355:11)
at Function.Module.runMain (module.js:503:11)
at startup (node.js:129:16)

Any suggestions?

Thanks a lot!

Paths are broken for Windows

Windows paths begin with C:\foo\bar; the regex we use to parse failed specs tries to be non-greedy and stops at the first : it encounters. We should update it to better handle these kinds of paths.
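As a sketch of the kind of change needed (a hypothetical pattern, not the parser's actual regex): allow an optional drive-letter prefix before the path match, so the : after the drive letter is not mistaken for a line/column separator.

```javascript
// Hypothetical spec-path regex: an optional "C:"-style drive prefix,
// then a path that stops at whitespace, ")" or a line/column ":".
var SPEC_PATH = /((?:[A-Za-z]:)?[^\s:)]+\.js)/

function extractSpecPath (line) {
  var match = SPEC_PATH.exec(line)
  return match ? match[1] : null
}
```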

Better windows support

At the moment the only way to run protractor-flake on a Windows machine (without protractor on the PATH) is to point directly at protractor.cmd (e.g. protractor-flake --protractor-path=node_modules\.bin\protractor.cmd). I'd like to find a better way.

Multicapabilities retries all tests, and uses all maxInstances on fail.

As an example if you pass

multiCapabilities = [
  {
    browserName: 'chrome',
    specs: ['./someOkTest.js'],
    maxInstances: 1
  },
  {
    browserName: 'chrome',
    specs: ['./someBadTest.js'],
    maxInstances: 1
  }
];
This will run all of the tests from someBadTest.js in 2 WebDriver instances.

And if a capability in multiCapabilities lists multiple specs, all of them are re-run when one fails.

protractor-flake seems to rerun ALL tests when only SOME fail

running via gulpfile:

gulp.task 'e2e', ->
 (require 'protractor-flake')
    '--': ['./protractor.conf.coffee']
    maxAttempts: 3
    protractorPath: './node_modules/.bin/protractor'

and the protractor.conf.coffee:

module.exports.config =

  baseUrl: 'http://localhost:1234/'

  suites:
    all: '**/*.e2e.js'

  capabilities:
    browserName: 'chrome'
    chromeOptions:
      args: [
        '--lang=en'
        '--window-size=1280,1280'
      ]
  chromeDriver: './node_modules/chromedriver/bin/chromedriver'
  seleniumServerJar: './node_modules/selenium-standalone-jar/bin/selenium-server-standalone-2.45.0.jar'

  framework: 'jasmine2'

Can't exit from cucumber tests

Hi there, thanks for creating this.

I'm having trouble getting my cucumber tests to run.
I get this error: "Error: Spec patterns did not match any files."

protractorFlake({
      maxAttempts: 2,
      protractorArgs: ['protractor.conf.js'],
    })

protractor.conf.js:

exports.config = {
  baseUrl: 'http://localhost:8000',
  seleniumServerJar: './node_modules/gulp-protractor/node_modules/protractor/selenium/selenium-server-standalone-2.48.2.jar',
  capabilities: {
    browserName: 'firefox'
  },
  framework: 'custom',
  cucumberOpts: {
    require: [
      'app/test/integration/runner.js',
      'app/test/integration/integrationHooks.js',
      'app/test/integration/**/steps.js'
    ],
    format: 'pretty'
  },
  // set to "custom" instead of cucumber.
  frameworkPath: require.resolve('protractor-cucumber-framework'),
  onPrepare: function() {
    // We fix the size of the browser window to ensure that the elements are
    // always in view and detectable by protractor.
    return browser.driver.manage().window().setSize(1920, 1080);
  }
};

Thanks!

"Protractor is not found"

Hi there!

I have protractor nested in "node_modules/protractor" and ".bin/protractor".

I go to my project folder and run "protractor-flake"

I get the error "Cannot find module 'protractor'".

How can I debug from here?

Thanks!

Fail to catch spec which fails by jasmine default timeout

Hi there!

When a test fails due to
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
protractor-flake fails to parse the failing test, so the spec is not re-run, defeating the goal of "rerun potentially flakey protractor tests before failing."

For easier reproduction I have set up an example, found on git-url.

  • Operating system and version - MacOs - 10.13.1
  • Node.js version - v6.12.0
  • Protractor version - Version 5.1.1
  • Protractor flake version - "3.0.1"
  • Protractor configuration file
// solves `SyntaxError: Unexpected token import
require("babel-register")({
    presets: [ 'es2015' ]
});
const fs = require('fs');
const FancyReporter = require('fancy-protractor-reporter').Reporter;

const fancyReporter = new FancyReporter({
    path: 'report/fancy' + new Date().toISOString().substring(0,19),
    screenshotOnPassed: true,
    consolidateAll: true,
    // isSharded: true
});

exports.config = {
    /**
     *  Uncomment ONE of the following to connect to: seleniumServerJar OR directConnect. Protractor
     *  will auto-start selenium if you uncomment the jar, or connect directly to chrome/firefox
     *  if you uncomment directConnect.
     */
    //seleniumServerJar: "node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.4.0.jar",
    directConnect: true,

    specs: ['specs/*Spec.js'],
    baseUrl: 'http://qualityshepherd.com',
    framework: 'jasmine',

    onPrepare: () => {
        // set browser size...
        browser.manage().window().setSize(1024, 800);

        if (!fs.existsSync('report')) {
                    fs.mkdirSync('report');
        }

        // better jasmine 2 reports...
        const SpecReporter = require('jasmine-spec-reporter');
        jasmine.getEnv().addReporter(new SpecReporter({displayStacktrace: 'specs'}));
        jasmine.getEnv().addReporter(fancyReporter);
    },
    afterLaunch: () => {
        fancyReporter.combineReports();
    },

    capabilities: {
        browserName: 'chrome',
        shardTestFiles: false,
        maxInstances: 1
        // chromeOptions: {
        //     args: [
        //         // disable chrome's wakiness
        //         '--disable-infobars',
        //         '--disable-extensions',
        //         'verbose',
        //         'log-path=/tmp/chromedriver.log'
        //     ],
        //     prefs: {
        //         // disable chrome's annoying password manager
        //         'profile.password_manager_enabled': false,
        //         'credentials_enable_service': false,
        //         'password_manager_enabled': false
        //     }
        // }
    },

    jasmineNodeOpts: {
        showColors: true,
        displaySpecDuration: true,
        // overrides jasmine's print method to report dot syntax for custom reports
        print: () => {},
        defaultTimeoutInterval: 50000
    }
};
  • Output from your test suite
P0 - Edit Spec1
	    ✓ should test1 changes (54 secs)
	.    ✓ should test2 it (59 secs)
	FA Jasmine spec timed out. Resetting the WebDriver Control Flow.
	A Jasmine spec timed out. Resetting the WebDriver Control Flow.
	A Jasmine spec timed out. Resetting the WebDriver Control Flow.
	    ✗ should test3 it (6 mins 57 secs)
	      - Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
	          at tryOnTimeout (timers.js:232:11)
	      - Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
	          at tryOnTimeout (timers.js:232:11)
	      - Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
	          at tryOnTimeout (timers.js:232:11)

FR: Create a new CucumberJS parser that replaces both CucumberJS parsers

Current situation:
protractor-flake currently offers two parsers for CucumberJS: cucumber.js and cucumber.multi.js. The first is based on the framework's own logging for CucumberJS < 0.9.0; the second is based on a console.log.

Desired situation:
Replace both parsers with a single parser that reads the log based on a console.log users add to their framework. This is already the advised approach for CucumberJS 1.
The advantage of parsing protractor's log via a console.log is that it is more stable and framework-independent than the current parsing, which is based on the logging of the framework itself.

To do:

  • [] create a new parser called cucumber, the other ones will be removed and this will result in a new major release
  • [] test this for CucumberJS 1, 2 and 3
  • [] create new unit tests
  • [] create new docs for cucumber with examples how to implement this for CucumberJS 1, 2 and 3
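A sketch of what that convention could look like (the marker name and format below are hypothetical, not the final design): the user's CucumberJS hooks console.log each failed feature path behind a fixed marker, and the parser recovers them with one regex, independent of the CucumberJS version.

```javascript
// Hypothetical marker the user's CucumberJS hooks would console.log on
// failure, e.g. console.log(FAIL_MARKER + pathToFailedFeature)
var FAIL_MARKER = 'FLAKY_FAILED_SPEC: '

var cucumberParser = {
  name: 'cucumber',
  parse: function (output) {
    var seen = {}
    var re = new RegExp(FAIL_MARKER + '(\\S+)', 'g')
    var match
    // scan the whole protractor log for marker lines
    while ((match = re.exec(output)) !== null) {
      seen[match[1]] = true // de-duplicate repeated failures
    }
    return Object.keys(seen)
  }
}

module.exports = cucumberParser
```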

Protractor process forked/non-forked output differences causing all tests to be retried

We recently noted an issue that only happens when there is exactly one failing test in the first attempt, which then fails again on the second try. protractor-flake is then unable to determine the failing spec from the output and falls back to re-running our whole suite of tests on the third attempt. (We are using the multi parser and have protractor sharding our tests.)

Investigating further, we found that protractor only runs in forked mode when there is more than one spec to run (enabling test-file sharding makes no difference here), and that forked and non-forked output differ. The condition block for this can be found here. Testing with this logic modified so protractor always runs in forked mode resolves the issue we are experiencing.

Our current workaround for now involves using a custom parser based on multi which injects a dummy test spec when there is only 1 test which has failed to ensure protractor always runs in forked mode. This issue could be resolved by potentially making the forked mode configurable in protractor, but I thought of raising the issue here to see if there is a better way to resolve this issue. Please let me know if you need further info/clarification. Thanks for writing protractor flake, it has been a very useful tool in handling flakey-ness! :)

  • Operating system and version: Ubuntu 16.04
  • Node.js version: 6.11.4
  • Protractor version: 5.1.2
  • Protractor flake version: 3.0.1
  • Protractor configuration file
 "protractor": {
    "browserCapabilities": {
      "browserName": "chrome",
      "maxInstances": 5,
      "shardTestFiles": true
    },
    "timeout": 90000
  }

protractor-flake

maxAttempts: 3
parser: 'multi'
  • Output from your test suite (I've truncated the output to the relevant bits)
..
..
[04:39:05] I/testLogger - [chrome #01-11] PID: 29509
[chrome #01-11] Specs: /home/test/tests/failed-spec.js
[chrome #01-11] 
[chrome #01-11] [04:38:20] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[chrome #01-11] Started
[chrome #01-11] Spec started
[chrome #01-11] .
[chrome #01-11]   Advertiser add context
[chrome #01-11]     ✓ when a is created (1 sec)
[chrome #01-11] .    ✓ when b is created (0.107 sec)
[chrome #01-11] .    ✓ when the user logs in (11 secs)
[chrome #01-11] { NoSuchElementError: Index out of bound. Trying to access element at index: 0, but there are only 0 elements that match locator by.cssContainingText(".something")
--stacktrace removed--

[04:39:05] I/testLogger - 

[04:39:05] E/launcher - Runner process exited unexpectedly with error code: 4

--other tests removed--

[04:48:23] I/launcher - 0 instance(s) of WebDriver still running
[04:48:23] I/launcher - chrome #01-3 passed
[04:48:23] I/launcher - chrome #01-4 passed
[04:48:23] I/launcher - chrome #01-5 passed
[04:48:23] I/launcher - chrome #01-6 passed
[04:48:23] I/launcher - chrome #01-1 passed
[04:48:23] I/launcher - chrome #01-7 passed
[04:48:23] I/launcher - chrome #01-9 passed
[04:48:23] I/launcher - chrome #01-0 passed
[04:48:23] I/launcher - chrome #01-10 passed
[04:48:23] I/launcher - chrome #01-8 passed
[04:48:23] I/launcher - chrome #01-11 failed with exit code: 4
[04:48:23] I/launcher - chrome #01-12 passed
[04:48:23] I/launcher - chrome #01-14 passed
[04:48:23] I/launcher - chrome #01-15 passed
[04:48:23] I/launcher - chrome #01-16 passed
[04:48:23] I/launcher - chrome #01-2 passed
[04:48:23] I/launcher - chrome #01-17 passed
[04:48:23] I/launcher - chrome #01-13 passed
[04:48:23] I/launcher - chrome #01-20 passed
[04:48:23] I/launcher - chrome #01-21 passed
[04:48:23] I/launcher - chrome #01-22 passed
[04:48:23] I/launcher - chrome #01-18 passed
[04:48:23] I/launcher - chrome #01-24 passed
[04:48:23] I/launcher - chrome #01-19 passed
[04:48:23] I/launcher - chrome #01-23 passed
[04:48:23] I/launcher - chrome #01-25 passed
[04:48:23] I/launcher - chrome #01-26 passed
[04:48:23] I/launcher - chrome #01-28 passed
[04:48:23] I/launcher - chrome #01-27 passed
[04:48:23] I/launcher - chrome #01-30 passed
[04:48:23] I/launcher - chrome #01-29 passed
[04:48:23] I/launcher - chrome #01-31 passed
[04:48:23] I/launcher - chrome #01-32 passed
[04:48:23] I/launcher - chrome #01-33 passed
[04:48:23] I/launcher - chrome #01-34 passed
[04:48:23] I/launcher - chrome #01-35 passed
[04:48:23] I/launcher - chrome #01-36 passed
[04:48:23] I/launcher - chrome #01-37 passed
[04:48:23] I/launcher - chrome #01-39 passed
[04:48:23] I/launcher - chrome #01-41 passed
[04:48:23] I/launcher - chrome #01-42 passed
[04:48:23] I/launcher - chrome #01-40 passed
[04:48:23] I/launcher - chrome #01-38 passed
[04:48:23] I/launcher - chrome #01-45 passed
[04:48:23] I/launcher - chrome #01-48 passed
[04:48:23] I/launcher - chrome #01-43 passed
[04:48:23] I/launcher - chrome #01-49 passed
[04:48:23] I/launcher - chrome #01-50 passed
[04:48:23] I/launcher - chrome #01-46 passed
[04:48:23] I/launcher - chrome #01-47 passed
[04:48:23] I/launcher - chrome #01-44 passed
[04:48:23] I/launcher - overall: 1 process(es) failed to complete
[04:48:23] E/launcher - Process exited with error code 100

Using multi to parse output
Re-running tests: test attempt 2
Re-running the following test files:
/home/test/tests/failed-spec.js
[04:48:25] I/launcher - Running 1 instances of WebDriver
[04:48:25] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
Started
Spec started
.
  Advertiser add context
    ✓ when a is created (1 sec)
.    ✓ when b is created (0.114 sec)
.    ✓ when the user logs in (8 secs)
{ NoSuchElementError: Index out of bound. Trying to access element at index: 0, but there are only 0 elements that match locator by.cssContainingText(".something")
--stacktrace removed--
[04:49:00] E/launcher - Process exited with error code 1

Using multi to parse output
Re-running tests: test attempt 3

Tests failed but no specs were found. All specs will be run again.
--truncated--

Use JUnitXML output with grep ?

Hey @NickTomlin, would it be better if we got rid of the regex parsing we do on the protractor output and used the JUnitXML reporter instead? Once we find the failing tests there, we could re-run just those using the grep option.

Issues with parsing cucumber

Hi!
First of all, thank you for this plugin.
I have the plugin working; I'm just having problems parsing the cucumber output log, so it re-runs every test case when one fails.
Any thoughts on what I could do to fix this?

Windows 7
Node version = 6.9.1
Protractor version = 5.1.2

Command I am running:
protractor-flake --parser cucumber --max-attempts=2 -- protractor.conf.js

Output I get:

[14:21:36] I/launcher - Running 1 instances of WebDriver
[14:21:36] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
Feature: Example Test Cases

  Scenario: Passing Script
  √ Given I am on the DI login page
  √ Then I login
  √ When I am on the home page

  Scenario: Failing Script
  √ Given I am on the DI login page
  √ Then I login
  × Then I intenionally fail

Failures:

1) Scenario: Failing Script - features\arithmetic.feature:8
   Step: Then I intenionally fail - features\arithmetic.feature:11
   Step Definition: Testing\features\steps\DI_login_steps.js:27
   Message:
     AssertionError: expected 'Home - RBC Direct Investing' to include 'pop'

2 scenarios (1 failed, 1 passed)
6 steps (1 failed, 5 passed)
0m16.993s
[14:21:55] I/launcher - 0 instance(s) of WebDriver still running
[14:21:55] I/launcher - chrome #01 failed 1 test(s)
[14:21:55] I/launcher - overall: 1 failed spec(s)
[14:21:55] E/launcher - Process exited with error code 1

Using cucumber to parse output
Re-running tests: test attempt 2

Tests failed but no specs were found. All specs will be run again.

[14:21:57] I/launcher - Running 1 instances of WebDriver
[14:21:57] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
Feature: Example Test Cases

  Scenario: Passing Script
  √ Given I am on the DI login page
  √ Then I login
  √ When I am on the home page

  Scenario: Failing Script
  √ Given I am on the DI login page
  √ Then I login
  × Then I intenionally fail

Failures:

1) Scenario: Failing Script - features\arithmetic.feature:8
   Step: Then I intenionally fail - features\arithmetic.feature:11
   Step Definition: Testing\features\steps\DI_login_steps.js:27
   Message:
     AssertionError: expected 'Home - RBC Direct Investing' to include 'pop'

2 scenarios (1 failed, 1 passed)
6 steps (1 failed, 5 passed)
0m16.227s
[14:22:17] I/launcher - 0 instance(s) of WebDriver still running
[14:22:17] I/launcher - chrome #01 failed 1 test(s)
[14:22:17] I/launcher - overall: 1 failed spec(s)
[14:22:17] E/launcher - Process exited with error code 1
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] test: `protractor-flake --parser cucumber --max-attempts=2 -- protractor.conf.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] test script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

How to rerun parent spec when helper spec fails

Here is my small spec file; it calls a helper file which contains all the it and describe blocks.

// File: externalAccountDirector.spec.js

const sharedTest = require('./permissions.shared');

sharedTest('externalAccountDirector');

When externalAccountDirector.spec.js fails, flake tries to re-run the permissions.shared.js file instead. I would like it to re-run externalAccountDirector.spec.js.
Thank you.

ERROR: 'missing ) after argument list

I am getting an error when running protractor-flake. I am trying to use an npm script, but it gives me the following error:

Error

basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
          ^^^^^^^
SyntaxError: missing ) after argument list
    at Object.exports.runInThisContext (vm.js:73:16)
    at Module._compile (module.js:543:28)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:488:32)
    at tryModuleLoad (module.js:447:12)
    at Function.Module._load (module.js:439:3)
    at Module.runMain (module.js:605:10)
    at run (bootstrap_node.js:422:7)
    at startup (bootstrap_node.js:143:9)
    at bootstrap_node.js:537:3

npm logs

0 info it worked if it ends with ok
1 verbose cli [ 'C:\\Program Files\\nodejs\\node.exe',
1 verbose cli   'C:\\Users\\073349\\AppData\\Roaming\\npm\\node_modules\\npm\\bin\\npm-cli.js',
1 verbose cli   'run',
1 verbose cli   'flake' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'preflake', 'flake', 'postflake' ]
5 info lifecycle [email protected]~preflake: [email protected]
6 info lifecycle [email protected]~flake: [email protected]
7 verbose lifecycle [email protected]~flake: unsafe-perm in lifecycle true
8 verbose lifecycle [email protected]~flake: PATH: C:\Users\073349\AppData\Roaming\npm\node_modules\npm\bin\node-gyp-bin;C:\Projects\Unity\Dev\ProtractorTests\node_modules\.bin;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Java\jre6\bin\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;C:\Program Files (x86)\Microsoft SQL Server\120\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\130\DTS\Binn\;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\;C:\Program Files\dotnet\;C:\Program Files\Git\cmd;C:\Program Files\nodejs\;C:\Program Files (x86)\Common Files\Siemens\System;C:\Users\073349\.cargo\bin;C:\texlive\2016\bin\win32;C:\Program Files\Common Files\Intel\WirelessCommon\C:\Users\073349\AppData\Roaming\npm;C:\Program Files (x86)\Microsoft VS Code\bin;C:\Users\073349\AppData\Roaming\npm
9 verbose lifecycle [email protected]~flake: CWD: C:\Projects\Unity\Dev\ProtractorTests
10 silly lifecycle [email protected]~flake: Args: [ '/d /s /c',
10 silly lifecycle   'protractor-flake --protractor-path=node_modules/.bin/protractor --parser=standard --max-attempts=2 --local.conf.js' ]
11 silly lifecycle [email protected]~flake: Returned: code: 1  signal: null
12 info lifecycle [email protected]~flake: Failed to exec flake script
13 verbose stack Error: [email protected] flake: `protractor-flake --protractor-path=node_modules/.bin/protractor --parser=standard --max-attempts=2 --local.conf.js`
13 verbose stack Exit status 1
13 verbose stack     at EventEmitter.<anonymous> (C:\Users\073349\AppData\Roaming\npm\node_modules\npm\lib\utils\lifecycle.js:289:16)
13 verbose stack     at emitTwo (events.js:106:13)
13 verbose stack     at EventEmitter.emit (events.js:192:7)
13 verbose stack     at ChildProcess.<anonymous> (C:\Users\073349\AppData\Roaming\npm\node_modules\npm\lib\utils\spawn.js:40:14)
13 verbose stack     at emitTwo (events.js:106:13)
13 verbose stack     at ChildProcess.emit (events.js:192:7)
13 verbose stack     at maybeClose (internal/child_process.js:890:16)
13 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)
14 verbose pkgid [email protected]
15 verbose cwd C:\Projects\Unity\Dev\ProtractorTests
16 verbose Windows_NT 6.1.7601
17 verbose argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Users\\073349\\AppData\\Roaming\\npm\\node_modules\\npm\\bin\\npm-cli.js" "run" "flake"
18 verbose node v7.6.0
19 verbose npm  v5.3.0
20 error code ELIFECYCLE
21 error errno 1
22 error [email protected] flake: `protractor-flake --protractor-path=node_modules/.bin/protractor --parser=standard --max-attempts=2 --local.conf.js`
22 error Exit status 1
23 error Failed at the [email protected] flake script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]

package.json - script

 "flake": "node_modules/.bin/protractor-flake --protractor-path=node_modules/.bin/protractor --parser=standard --max-attempts=2 --local.conf.js"

All the node_modules directory references seem right and the files all appear to be there. I don't think I am doing anything wrong, but suspect I might be, since nobody else seems to have this issue.

  • Windows 7 (64 bit)
  • Node.js v7.6.0
  • Protractor v5.1.2
  • Protractor flake v2.5.1

Running all the tests when one or some fails

Hi,

I am using mocha and mochawesome-screenshots, and all my tests re-run when one or more fail. I noticed there is a solution for the Jasmine reporter; do you have one for mocha and mochawesome-screenshots?

no stacktrace for timeouts, so re-runs everything

When a test fails due to a timeout, there is no stacktrace; it's 'undefined'. Hence it ends up re-running all the tests. Is this by design?

[chrome #1-11]
[chrome #1-11]
[chrome #1-11] Failures:
[chrome #1-11]
[chrome #1-11] 1) Open SomeTestName Test should find the Search button Function
[chrome #1-11] Message:
[chrome #1-11] timeout: timed out after 120000 msec waiting for spec to complete
[chrome #1-11] Stacktrace:
[chrome #1-11] undefined
[chrome #1-11]
[chrome #1-11] Finished in 208.122 seconds
[chrome #1-11] 4 tests, 5 assertions, 1 failure
[chrome #1-11]

jasmineNodeOpts: {
  defaultTimeoutInterval: 120000,
  onComplete: null,
  isVerbose: true,
  showColors: true,
  includeStackTrace: true
},

Handle webdriver timeouts

Hello,
I am trying to use protractor-flake to re-run some scenarios that fail intermittently because my selenium grid times out when forwarding some tests to the nodes.
When this occurs, I get this log:

03:43:15.738 [chrome #01-34] WebDriverError: Error forwarding the new session Error forwarding the request Read timed out
03:43:15.738 [chrome #01-34]     at WebDriverError (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/error.js:26:26)
03:43:15.738 [chrome #01-34]     at Object.checkLegacyResponse (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/error.js:580:13)
03:43:15.738 [chrome #01-34]     at /var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/webdriver.js:64:13
03:43:15.738 [chrome #01-34]     at Promise.invokeCallback_ (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/promise.js:1329:14)
03:43:15.738 [chrome #01-34]     at TaskQueue.execute_ (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/promise.js:2790:14)
03:43:15.738 [chrome #01-34]     at TaskQueue.executeNext_ (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/promise.js:2773:21)
03:43:15.738 [chrome #01-34]     at /var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/promise.js:2652:27
03:43:15.738 [chrome #01-34]     at /var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/promise.js:639:7
03:43:15.738 [chrome #01-34]     at process._tickCallback (node.js:405:9)
03:43:15.738 [chrome #01-34] From: Task: WebDriver.createSession()
03:43:15.738 [chrome #01-34]     at acquireSession (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/webdriver.js:62:22)
03:43:15.739 [chrome #01-34]     at Function.createSession (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/lib/webdriver.js:295:12)
03:43:15.739 [chrome #01-34]     at Builder.build (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/selenium-webdriver/builder.js:458:24)
03:43:15.739 [chrome #01-34]     at [object Object].DriverProvider.getNewDriver (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/protractor/built/driverProviders/driverProvider.js:42:27)
03:43:15.739 [chrome #01-34]     at [object Object].Runner.createBrowser (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/protractor/built/runner.js:203:37)
03:43:15.739 [chrome #01-34]     at /var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/protractor/built/runner.js:293:21
03:43:15.739 [chrome #01-34]     at _fulfilled (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/q/q.js:834:54)
03:43:15.739 [chrome #01-34]     at self.promiseDispatch.done (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/q/q.js:863:30)
03:43:15.739 [chrome #01-34]     at Promise.promise.promiseDispatch (/var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/q/q.js:796:13)
03:43:15.739 [chrome #01-34]     at /var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/node_modules/q/q.js:556:49

So in the final log, I have checked that these types of tests (failed with exit code 1) are not rerun.

03:19:06.063 [launcher] chrome #01-6 failed with exit code: 1 03:19:06.063 [launcher] chrome #01-12 passed 03:19:06.064 [launcher] chrome #01-13 passed 03:19:06.064 [launcher] chrome #01-14 passed 03:19:06.064 [launcher] chrome #01-11 failed with exit code: 1 03:19:06.064 [launcher] chrome #01-15 passed 03:19:06.064 [launcher] chrome #01-16 passed 03:19:06.064 [launcher] chrome #01-17 passed 03:19:06.064 [launcher] chrome #01-18 passed 03:19:06.064 [launcher] chrome #01-19 passed 03:19:06.064 [launcher] chrome #01-20 passed 03:19:06.064 [launcher] chrome #01-21 passed 03:19:06.064 [launcher] chrome #01-23 passed 03:19:06.064 [launcher] chrome #01-22 failed with exit code: 1 03:19:06.064 [launcher] chrome #01-25 passed 03:19:06.064 [launcher] chrome #01-24 failed with exit code: 1 03:19:06.064 [launcher] chrome #01-26 passed 03:19:06.064 [launcher] chrome #01-27 passed 03:19:06.064 [launcher] chrome #01-28 passed 03:19:06.064 [launcher] chrome #01-29 passed 03:19:06.065 [launcher] chrome #01-30 passed 03:19:06.065 [launcher] chrome #01-31 passed 03:19:06.065 [launcher] chrome #01-33 passed 03:19:06.065 [launcher] chrome #01-34 failed 1 test(s) 03:19:06.065 [launcher] chrome #01-35 failed 1 test(s) 03:19:06.065 [launcher] chrome #01-36 passed 03:19:06.065 [launcher] chrome #01-32 failed with exit code: 1 03:19:06.065 [launcher] overall: 2 failed spec(s) and 5 process(es) failed to complete 03:19:06.065 [launcher] Process exited with error code 100 03:19:06.072 Re-running tests: test attempt 2 03:19:06.072 Re-running the following test files: 03:19:06.072 /var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/mock-tests/tests/account/desktop/my-tracks-1-spec.js 03:19:06.072 /var/lib/go-agent-4/pipelines/FrontEnd-TVG4-Dev/workspace/tvg-tvg4-angular/tests/web-tests/mock-tests/tests/account/desktop/my-tracks-2-spec.js

The only ones that are re-run are these -> (03:19:06.065 [launcher] chrome #1-34 failed 1 test(s)).

Please edit this comment to see the line breaks.

Work with jasmine-reporters output

We've had a lot of questions about usage with reporters like jasmine-reporters' JUnit reporter and others. We should verify and document some use cases in the README or an examples folder.

report chalk color disabled after hooking flake

Hi, nice lib.
Maybe this is a simple misconfiguration, but to summarize: without flake, my customized Jasmine reporters colorize logs normally; after hooking the tests up with flake, I lose colors.

Is there any configuration I need to consider for this?

Thanks

add support for ts

If source maps are enabled, failures are reported with the .ts file paths, but the specs must be executed as the compiled .js files.

how to use it

Hi,
I might be missing something, but I didn't understand exactly how to use it programmatically. Where exactly do I put it?
Is there an example of use?
Thanks a lot.

Allow custom parser as parameter

Hello,

We are facing some issues with the parser and TypeScript files: if the source map is included, the plugin tries to rerun the .ts files instead of the .js ones, since the tool relies on:
let FAILED_LINES = /at (?:\[object Object\]|Object)\.<anonymous> \((([A-Za-z]:\\)?.*?):.*\)/g

We also plan to put some logic after the parsing in a custom parser to work around the timeout issue. In our use case, the main goal of using this plugin is to get rid of the instabilities of our back-end, and those issues often result in a global timeout (we can't go to the next page, and then wait forever (Jasmine timeout) for a locator to be resolved).

Since it seems difficult for the plugin to handle all the use cases, would it be possible to accept a "customParser" as a parameter? That way, if the existing parsers don't fit our needs, we can pass our own to the protractor-flake command line.

It would require an update of
let parser = getParser(parsedOptions.parser)
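As a minimal sketch, a user-supplied parser module could look like this (assuming the same parse(output) contract as the built-in parsers in src/parsers; the ts-aware name and the .ts-to-.js mapping are illustrative, not part of the library):

```javascript
// Hypothetical custom parser module. Assumption: a parser is an object
// with a `name` and a `parse(output)` that returns the failed spec files.
const FAILED_LINES = /at (?:\[object Object\]|Object)\.<anonymous> \((([A-Za-z]:\\)?.*?):.*\)/g

const tsAwareParser = {
  name: 'ts-aware',
  parse (output) {
    const failedSpecs = new Set()
    let match
    while ((match = FAILED_LINES.exec(output)) !== null) {
      // Illustrative workaround for the source-map issue described above:
      // map reported .ts paths back to the compiled .js files.
      failedSpecs.add(match[1].replace(/\.ts$/, '.js'))
    }
    return [...failedSpecs]
  }
}

module.exports = tsAwareParser
```

protractor-flake could then load a module like this instead of one of the built-in names passed to getParser.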

Hanging Test

I just got this working while testing with a few tests and it works as expected. I am now trying to run my full suite (~765 tests) and it is hanging on this test. The output is below.

// package.json
// ...
    "jasmine-core": "^2.4.1",
    "jasmine-reporters": "~2.0.5",
    "jasmine-spec-reporter": "^2.4.0",
// ...
    "phantomjs": "just-boris/phantomjs", // PhantomJS 2
    "protractor": "^2.5.0",
    "protractor-flake": "^1.0.1",
    "protractor-jasmine2-screenshot-reporter": "^0.1.4",
/// ...
  E2E: New Program Setup Description View
    ✗ should upload program image, verify upload, and delete program image
      - Failed: stale element reference: element is not attached to the page document
  (Session info: chrome=47.0.2526.106)
  (Driver info: chromedriver=2.19.346078 (6f1f0cde889532d48ce8242342d0b84f94b114a1),platform=Windows NT 6.1 SP1 x86_64) (WARNINe server did not provide any stacktrace information)
Command duration or timeout: 11 milliseconds
For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
Build info: version: '2.47.1', revision: '411b314', time: '2015-07-30 03:03:16'
System info: host: 'M47Z0V52', ip: '156.45.40.192', os.name: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.version: '155'
Session ID: b72f7159f81314064081c035b5906f57
Driver info: org.openqa.selenium.chrome.ChromeDriver
Capabilities [{platform=XP, acceptSslCerts=true, javascriptEnabled=true, browserName=chrome, chrome={userDataDir=C:\Users\stramppData\Local\Temp\scoped_dir14320_20577}, rotatable=false, locationContextEnabled=true, mobileEmulationEnabled=false, version=4526.106, takesHeapSnapshot=true, cssSelectorsEnabled=true, databaseEnabled=false, handlesAlerts=true, browserConnectionEnabled=, webStorageEnabled=true, nativeEvents=true, hasTouchScreen=false, applicationCacheEnabled=false, takesScreenshot=true}]
        (Session info: chrome=47.0.2526.106)
        (Driver info: chromedriver=2.19.346078 (6f1f0cde889532d48ce8242342d0b84f94b114a1),platform=Windows NT 6.1 SP1 x86_64) (NG: The server did not provide any stacktrace information)
      Command duration or timeout: 11 milliseconds
      For documentation on this error, please visit: http://seleniumhq.org/exceptions/stale_element_reference.html
      Build info: version: '2.47.1', revision: '411b314', time: '2015-07-30 03:03:16'
      System info: host: 'M47Z0V52', ip: '156.45.40.192', os.name: 'Windows 7', os.arch: 'amd64', os.version: '6.1', java.versi1.7.0_55'
      Session ID: b72f7159f81314064081c035b5906f57
      Driver info: org.openqa.selenium.chrome.ChromeDriver
      Capabilities [{platform=XP, acceptSslCerts=true, javascriptEnabled=true, browserName=chrome, chrome={userDataDir=C:\Usersmema\AppData\Local\Temp\scoped_dir14320_20577}, rotatable=false, locationContextEnabled=true, mobileEmulationEnabled=false, ver47.0.2526.106, takesHeapSnapshot=true, cssSelectorsEnabled=true, databaseEnabled=false, handlesAlerts=true, browserConnectionEn=false, webStorageEnabled=true, nativeEvents=true, hasTouchScreen=false, applicationCacheEnabled=false, takesScreenshot=true}]
          at new bot.Error (C:\Users\stramema\projects\myProject\node_modules\protractor\node_modules\selenium-webdriver\lib\atoror.js:108:18)
          at Object.bot.response.checkResponse (C:\Users\stramema\projects\myProject\node_modules\protractor\node_modules\selenibdriver\lib\atoms\response.js:109:9)
          at C:\Users\stramema\projects\myProject\node_modules\protractor\node_modules\selenium-webdriver\lib\webdriver\webdrive379:20
          at [object Object].promise.ControlFlow.runInFrame_ (C:/Users/stramema/projects/myProject/node_modules/protractor/node_es/selenium-webdriver/lib/goog/../webdriver/promise.js:1857:20)
          at [object Object].goog.defineClass.notify (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modules/ium-webdriver/lib/goog/../webdriver/promise.js:2448:25)
          at [object Object].promise.Promise.notify_ (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modules/ium-webdriver/lib/goog/../webdriver/promise.js:564:12)
          at Array.forEach (native)
          at [object Object].promise.Promise.notifyAll_ (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modullenium-webdriver/lib/goog/../webdriver/promise.js:553:15)
          at goog.async.run.processWorkQueue (C:\Users\stramema\projects\myProject\node_modules\protractor\node_modules\seleniumriver\lib\goog\async\run.js:130:15)
          at runMicrotasksCallback (node.js:337:7)
      Error
          at [object Object].ElementArrayFinder.applyAction_ (C:\Users\stramema\projects\myProject\node_modules\protractor\lib\et.js:392:21)
          at [object Object].self.(anonymous function) [as getAttribute] (C:\Users\stramema\projects\myProject\node_modules\protr\lib\element.js:76:19)
          at [object Object].self.(anonymous function) [as getAttribute] (C:\Users\stramema\projects\myProject\node_modules\protr\lib\element.js:721:11)
          at C:\Users\stramema\projects\myProject\e2e\admin\programs-route\new-program-setup\program-description\program-descripview.po.js:57:32
          at C:\Users\stramema\projects\myProject\node_modules\protractor\node_modules\selenium-webdriver\lib\webdriver\webdrive720:12
          at [object Object].promise.ControlFlow.runInFrame_ (C:/Users/stramema/projects/myProject/node_modules/protractor/node_es/selenium-webdriver/lib/goog/../webdriver/promise.js:1857:20)
          at [object Object].promise.ControlFlow.runEventLoop_ (C:/Users/stramema/projects/myProject/node_modules/protractor/nodules/selenium-webdriver/lib/goog/../webdriver/promise.js:1729:8)
          at [object Object].eval (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modules/selenium-webdriver/oog/../webdriver/promise.js:2043:12)
          at goog.async.run.processWorkQueue (C:\Users\stramema\projects\myProject\node_modules\protractor\node_modules\seleniumriver\lib\goog\async\run.js:130:15)
      From: Task: <anonymous>
          at [object Object].pollCondition [as _onTimeout] (C:/Users/stramema/projects/myProject/node_modules/protractor/node_mo/selenium-webdriver/lib/goog/../webdriver/promise.js:1614:14)
      From: Task: <anonymous wait>
          at [object Object].promise.ControlFlow.wait (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modulesnium-webdriver/lib/goog/../webdriver/promise.js:1606:15)
          at [object Object].webdriver.WebDriver.wait (C:\Users\stramema\projects\myProject\node_modules\protractor\node_modulesnium-webdriver\lib\webdriver\webdriver.js:716:21)
          at [object Object].to.(anonymous function) [as wait] (C:\Users\stramema\projects\myProject\node_modules\protractor\libractor.js:65:25)
          at [object Object].that.waitForPhotoUpdate (C:\Users\stramema\projects\myProject\e2e\admin\programs-route\new-program-\program-description\program-description-view.po.js:56:20)
          at C:\Users\stramema\projects\myProject\e2e\admin\programs-route\new-program-setup\program-description\program-descripview.spec.js:74:12
          at [object Object].promise.ControlFlow.runInFrame_ (C:/Users/stramema/projects/myProject/node_modules/protractor/node_es/selenium-webdriver/lib/goog/../webdriver/promise.js:1857:20)
          at [object Object].goog.defineClass.notify (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modules/ium-webdriver/lib/goog/../webdriver/promise.js:2448:25)
          at [object Object].promise.Promise.notify_ (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modules/ium-webdriver/lib/goog/../webdriver/promise.js:564:12)
          at Array.forEach (native)
          at [object Object].promise.Promise.notifyAll_ (C:/Users/stramema/projects/myProject/node_modules/protractor/node_modullenium-webdriver/lib/goog/../webdriver/promise.js:553:15)
      From: Task: Run it("should upload program image, verify upload, and delete program image") in control flow
      From asynchronous test:
      Error
          at Suite.<anonymous> (C:\Users\stramema\projects\myProject\e2e\admin\programs-route\new-program-setup\program-descriptrogram-description-view.spec.js:64:3)
          at Object.<anonymous> (C:\Users\stramema\projects\myProject\e2e\admin\programs-route\new-program-setup\program-descripprogram-description-view.spec.js:3:1)
          at Module._compile (module.js:460:26)
          at Object.Module._extensions..js (module.js:478:10)
          at Module.load (module.js:355:32)
          at Function.Module._load (module.js:310:12)
// Relevant test code

// program-description-view.spec.js
it('should upload program image, verify upload, and delete program image', function (done) {
    var fileToUpload = '../../../../data/testProgramImage.jpg';

    var absolutePath = path.resolve(__dirname, fileToUpload);

    browser.executeScript('$(\'.program-image-container .upload-btn input[type="file"]\').attr("style", "");').then(function () {
      view.fileSelector.sendKeys(absolutePath);

      view.assertUserPhoto(view.defaultProgramImageUrlFragment);

      view.waitForPhotoUpdate(view.uploadedProgramImageUrlFragment).then(function () {
        view.assertUserPhoto(view.uploadedProgramImageUrlFragment);

        view.programImageDeleteBtn.click();

        view.waitForPhotoUpdate(view.defaultProgramImageUrlFragment);

        view.assertUserPhoto(view.defaultProgramImageUrlFragment);

        done();
      });
    });
  });

//program-description-view.po.js
that.assertUserPhoto = function (imageUrlFragment) {
    expect(that.programImage.getAttribute('src')).toContain(imageUrlFragment);
  };
that.waitForPhotoUpdate = function (expectedPhoto) {
    browser.ignoreSynchronization = true;

    return browser.wait(function () {
      return that.programImage.getAttribute('src').then(function (src) {
        return (src.indexOf(expectedPhoto) > 0);
      });
    }, 30000).thenFinally(function () {
      browser.ignoreSynchronization = false;
    });
  };

Parsers do not validate if a testrunner's output is parsable

The parsers might need some type of smoke test/check to determine whether a test runner's output is parsable. If the smoke test fails, we could show a warning that the parser might not be able to accurately parse the output.

I ran into a case where the test runner output would partially match the multi parser's regex. The parser was able to extract the spec filenames but was unable to accurately determine whether a spec passed or failed. Because of this, the parser would mark all the spec files as 'failed' and rerun every spec file.

If you think this might be a useful feature, I can work towards a PR for it.
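As a rough sketch of such a check (smokeCheck is a hypothetical helper; the parse(output) contract and the launcher-failure regex are assumptions for illustration):

```javascript
// Hypothetical smoke check: warn when the launcher clearly reported
// failures but the parser could not attribute them to any spec file.
function smokeCheck (parser, output) {
  const failedSpecs = parser.parse(output)
  // Illustrative pattern for protractor launcher failure lines.
  const launcherSawFailures = /failed \d+ test\(s\)|failed with exit code/.test(output)
  if (launcherSawFailures && failedSpecs.length === 0) {
    console.warn('Parser "' + parser.name + '" found no failed specs; output may be unparsable and all specs may be rerun.')
    return false
  }
  return true
}
```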

Protractor-flake: duplicate tests in Allure report

Hi Nick,

I was wondering if you have any experience with sending the results of rerun flaky tests to the Allure tool. After rerunning my tests 3 times, Allure generates a report that counts all the failed tests plus the tests that passed in the first run. I would like to report only the tests that failed in the last re-run. Maybe you have ideas on how to do so?

Thanks,
Tatiana

Provide verbose mode

Hi,

it would be nice to have the possibility to start protractor-flake in verbose mode.
When I, for example, install protractor and forget to download the Selenium server binaries, I get a standard, pretty understandable error message when I start my e2e tests directly. But when I start my tests with protractor-flake instead, this message is swallowed and the user is stuck. I'm also not able to pass the --verbose flag through to protractor when executing protractor-flake, or am I missing something here?

protractor-flake --max-attempts=3 -- protractor-config.js --verbose

Thanks!

Message re-running test file, but still re-runs my other files

screenshot_rerun

Because a timeout error does not give a stack trace with a filename to work with, I've added a console.error(error.stack) in the afterEach() of every e2e file. The error log appears when currentSpec.failedExpectations is filled.

Before, it would report that it had a timeout and could not find a spec, because the stack trace didn't provide one...

But now, as seen in the screenshot, it finds the file, yet still runs my other e2e files AFTER finishing the one that was found. It starts my workout-overview.e2e and after that continues with another e2e file that was not mentioned in the re-run list.

Does anyone have any idea how I can inspect the specs that protractor-flake has chosen to re-run? Or any other idea how to debug/solve this problem?

My spec glob is [../src/test/integration/**/*.e2e.js]
Is it maybe adding the found spec to this list without removing the glob, or something like that?
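The suspicion in the last line can be stated as a tiny sketch (effectiveSpecs is a hypothetical illustration, not protractor's actual resolution logic): if the rerun's --specs were merged with the config-level glob instead of replacing it, the observed behavior would follow.

```javascript
// Illustration only: how --specs should interact with a config-level
// spec glob on a rerun, under the assumption that CLI specs replace
// (rather than extend) the configured glob.
function effectiveSpecs (configSpecs, cliSpecs) {
  return cliSpecs.length ? cliSpecs : configSpecs
}
```

If the observed behavior instead corresponds to configSpecs.concat(cliSpecs), that would explain the extra files being run after the re-run list.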

FR: parse jasmine logging based on a console.log instead of framework log

Current situation:
protractor-flake currently offers two parsers for Jasmine: standard.js and multi.js. Both are based on the framework's logging and use complex regular expressions because the framework's log format has changed over time.

Desired situation:
Replace both parsers with a new parser that reads the log based on a console.log users add to their framework. This is already the advised approach for CucumberJS 1.
The advantages of parsing protractor's log based on a console.log are that:

  • it will be more stable and framework-independent than the current parsing, which depends on the framework's own logging
  • it will be less complex
  • a single logging format can be used for both single and multiple instances

To do:

  • [ ] create a new parser called jasmine; standard and multi will be removed, and this will result in a new major release
  • [ ] test this for Jasmine and make sure the current functionality won't break (support for suites and so on)
  • [ ] create new unit tests
  • [ ] create new docs for Jasmine with examples of how to implement this

Does not work for sharded setup

The current project I am on has passing builds even though some protractor tests fail.

After some investigation I found that if I remove parallelism, everything is consistent.

Assuming

  • I have 10 tests
  • test 10 is flaky
  • test 5 fails
  • protractor is configured to shard the tests
    shardTestFiles: true
    maxInstances: 2

I suspect the following happens

  • test 5 fails
  • test 10 fails (overriding the recorded failure for test 5?)
  • flake reruns test 10
  • test 10 passes the second time

==> everything seems fine but shouldn't because test 5 did not pass.

Is it something I am doing wrong in the configuration?

TypeError: list.split is not a function

Version protractor-flake: 2.0.1
Version protractor: 3.2.2

Situation:
We use a custom grunt task manager to create protractor tasks. In those tasks we provide the specs that need to be run.
When protractor-flake provides the array of failed specs and protractor is run the second time, protractor fails with this error:

Re-running tests: test attempt 2
Re-running the following test files:
/Users/wswebcreation/test/e2e/features/functional/another.flakey.feature
/Users/wswebcreation/protractor-flake/node_modules/protractor/lib/cli.js:113
  var patterns = list.split(',');
                      ^

TypeError: list.split is not a function
    at processFilePatterns_ (/Users/wswebcreation/protractor-flake/node_modules/protractor/lib/cli.js:113:23)
    at Object.<anonymous> (/Users/wswebcreation/protractor-flake/node_modules/protractor/lib/cli.js:121:16)
    at Module._compile (module.js:425:26)
    at Object.Module._extensions..js (module.js:432:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:311:12)
    at Module.require (module.js:366:17)
    at require (module.js:385:17)
    at Object.<anonymous> (/Users/wswebcreation/protractor-flake/node_modules/protractor/bin/protractor:5:1)
    at Module._compile (module.js:425:26)

Cause:
The protractorArgs array (see the index.js file) holds two --specs flags and two spec paths: the original --specs from the task manager and the new --specs added by protractor-flake (protractorArgs.push('--specs', specFiles.join(','))). protractor only allows multiple specs when they are provided as a single comma-separated list of files.

Possible solution:
In my opinion the initially provided set of specs is no longer needed; only the failed set of specs should be run.

// this piece of code in index.js
    if (specFiles.length) {
      protractorArgs = protractorArgs.filter((arg) => !/^--suite=/.test(arg))
      protractorArgs.push('--specs', specFiles.join(','))
    }
// should be changed to this
    if (specFiles.length) {
      protractorArgs = protractorArgs.filter((arg) => !/^--suite=/.test(arg))
      var specIndex = protractorArgs.indexOf('--specs')
      if (specIndex > -1) {
        protractorArgs.splice(specIndex, 2)
      }
      protractorArgs.push('--specs', specFiles.join(','))
    }

If agreed I can provide a PR for this
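For illustration, the proposed change can be exercised in isolation as a standalone function (buildRetryArgs is a hypothetical name; the real code mutates protractorArgs inline in index.js):

```javascript
// Standalone sketch of the proposed fix: drop any --suite args and any
// pre-existing --specs pair before appending the failed specs.
function buildRetryArgs (protractorArgs, specFiles) {
  let args = protractorArgs.filter((arg) => !/^--suite=/.test(arg))
  const specIndex = args.indexOf('--specs')
  if (specIndex > -1) {
    // Remove the flag and its comma-separated value.
    args.splice(specIndex, 2)
  }
  if (specFiles.length) {
    args.push('--specs', specFiles.join(','))
  }
  return args
}
```

With this, only one --specs flag reaches protractor's cli.js, so list.split(',') receives a string as expected.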

protractor-flake does not retry failed specs if the expectation is in a promise

It seems that when you place an expectation inside a promise in your test, the failed spec is not registered by protractor-flake as a spec it should retry.

Example:

it('should eventually do something', function (done) {
  var aPromise = somePage.somePromise();
  aPromise.then(function (value) {
    expect(value).toBe('something');
    done();
  });
});

When value is not 'something', the test fails and is counted in the number of failed tests at the end of the test run, but it is not retried.

Example of a console log of such a situation (3 tests have expectations in promises):

...
11:07:56 [11:07:56] I/launcher - chrome #01-71 passed
11:07:56 [11:07:56] I/launcher - chrome #01-73 passed
11:07:56 [11:07:56] I/launcher - chrome #01-48 failed 3 test(s)
11:07:56 [11:07:56] I/launcher - chrome #01-68 passed
11:07:56 [11:07:56] I/launcher - chrome #01-77 passed
11:07:56 [11:07:56] I/launcher - chrome #01-79 passed
11:07:56 [11:07:56] I/launcher - chrome #01-81 failed 1 test(s)
11:07:56 [11:07:56] I/launcher - chrome #01-78 passed
11:07:56 [11:07:56] I/launcher - overall: 4 failed spec(s)
11:07:56 Closing report
11:07:56 [11:07:56] E/launcher - Process exited with error code 1
11:07:56 
11:07:56 Using standard to parse output
11:07:56 Re-running tests: test attempt 2
11:07:56 Re-running the following test files:
11:07:56 /home/jenkins/workspace/TA-3-test-chrome/viewtest/tests/referral/supplyPageSpec.js

It seems that when it is the only test that fails, all files are retried, because protractor-flake somehow notices there was a failure but does not have a list of failed tests. When another test fails (like the 'supplyPageSpec' above), only supplyPageSpec is retried and it 'forgets' about the other 3.

I solved this by moving the promises to a page object so I didn't have to place an expectation inside a promise, but it seems to be a bug that the test is not retried, am I right?

Receiving error "SyntaxError: Invalid or unexpected token"

I have tried to pass protractor's suites and params arguments through the command line using protractor-flake and am receiving the following error.

(function (exports, require, module, __filename, __dirname) { @IF EXIST "%~dp0\n
ode.exe" (
                                                              ^
SyntaxError: Invalid or unexpected token
    at createScript (vm.js:56:10)
    at Object.runInThisContext (vm.js:97:10)
    at Module._compile (module.js:542:28)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.runMain (module.js:604:10)
    at run (bootstrap_node.js:390:7)
    at startup (bootstrap_node.js:150:9)

I have tried various combinations of command line arguments, as below, and receive similar errors for all of them.

node_modules\.bin\protractor-flake.cmd --protractor-path=node_modules\.bin\protractor.cmd --parser standard --node-bin node --max-attempts=3 --color=magenta -- suite=synthetic params.env syn_sat protractor.conf.js

node_modules\.bin\protractor-flake.cmd --protractor-path=node_modules\.bin\protractor.cmd --parser standard --node-bin node --max-attempts=3 --color=magenta protractor.conf.js -- suite=synthetic params.env syn_sat

node_modules\.bin\protractor-flake.cmd --protractor-path=node_modules\.bin\protractor.cmd --parser standard --node-bin node --max-attempts=3 --color=magenta -- protractor.config.js

node_modules\.bin\protractor-flake.cmd --protractor-path=node_modules\.bin\protractor.cmd --parser standard --node-bin node --max-attempts=3 --color=magenta protractor.conf.js -- suite=synthetic params.env syn_sat --

Not able to use grunt run to run protractor tests with protractor-flake

I created the following target on grunt run:

"protractor-flake": {
                    cmd: './node_modules/.bin/protractor-flake',
                    args: [
                        '--protractor-path=./node_modules/.bin/protractor',
                        '--max-attempts=2',
                        '-- test/protractor-e2e/protractor.conf.js'
                    ]
                }

But when running grunt run:protractor-flake, I'm facing the following error:

**you must either specify a configuration file or at least 3 options. See below for the options:

Usage: protractor [configFile] [options]
configFile defaults to protractor.conf.js
The [options] object will override values from the config file.
See the reference config for a full list of options.

Options:
  --help                                 Print Protractor help menu
  --version                              Print Protractor version
  --browser, --capabilities.browserName  Browsername, e.g. chrome or firefox
  --seleniumAddress                      A running selenium address to use
  --seleniumSessionId                    Attaching an existing session id
  --seleniumServerJar                    Location of the standalone selenium jar file
  --seleniumPort                         Optional port for the selenium standalone server
  --baseUrl                              URL to prepend to all relative paths
  --rootElement                          Element housing ng-app, if not html or body
  --specs                                Comma-separated list of files to test
  --exclude                              Comma-separated list of files to exclude
  --verbose                              Print full spec names
  --stackTrace                           Print stack trace on error
  --params                               Param object to be passed to the tests
  --framework                            Test framework to use: jasmine, mocha, or custom
  --resultJsonOutputFile                 Path to save JSON test result
  --troubleshoot                         Turn on troubleshooting output
  --elementExplorer                      Interactively test Protractor commands
  --debuggerServerPort                   Start a debugger server at specified port instead of repl

And if I execute the following in the command line, my tests are correctly executed:

./node_modules/.bin/protractor-flake --protractor-path=./node_modules/.bin/protractor --max-attempts=2 -- test/protractor-e2e/protractor.conf.js

Note: here is a reference for grunt-run: https://www.npmjs.com/package/grunt-run

Fix to handle lowercase drive letters on Windows

When I run protractor-flake on Jenkins it does not find failed specs, because the log lists the Windows drive letter in lowercase,
e.g. d:\workspace\blah\blah.js

Can you please update the FAILED_LINES regex to be case-insensitive?
let FAILED_LINES = /at (?:\[object Object\]|Object)\.<anonymous> \((([A-Z]:\\)?.*?):.*\)/gi

Or to include lowercase drive letters?
let FAILED_LINES = /at (?:\[object Object\]|Object)\.<anonymous> \((([A-Za-z]:\\)?.*?):.*\)/g

https://github.com/NickTomlin/protractor-flake/blob/master/src/failed-spec-parser.js#L12

Thanks

Re run whole suite / not failed ones

Hi there!

  • Operating system and version
    Jasmine version: 2.4.1
    Browser name: chrome
    Browser version: 53.0.2785.143
    Platform: LINUX
    Javascript enabled: true
    Css selectors enabled: true
  • Node.js version - v8.1.2
  • Protractor version - ~4.0.10
  • Protractor flake version : "^3.0.1"
  • Protractor configuration file :
    flake :
    #!/usr/bin/env node

/**
 * flake is a node script that uses protractor-flake to re-run failed tests. Note
 * that it reruns tests at the file level, so if one test fails, it will rerun all
 * the tests in that file. Still... awesome.
 *
 * usage:
 * ./flake conf.js [other protractor args]
 */
//const rerun = require('./module');
const protractorFlake = require('protractor-flake');

// skip first two passed args (node and self)
let protractorArgs = process.argv.splice(2);

protractorFlake({
  protractorPath: 'node_modules/.bin/protractor',
  maxAttempts: 2,
  parser: 'standard', // even tried with multi
  nodeBin: 'node',
  color: 'magenta',
  protractorArgs: protractorArgs
}, function (status, output) {
  process.exit(status);
});

Package.json :

{
  "version": "0.0.0",
  "private": true,
  "name": "weardex-app-bazooka-test",
  "description": "weardex",
  "license": "MIT",
  "devDependencies": {
    "jasmine-reporters": "~1.0.1",
    "jasmine-spec-reporter": "^4.2.1",
    "moment": "~2.9.0",
    "mysql": "~2.8.0",
    "protractor": "~4.0.10",
    "protractor-console": "^3.0.0",
    "protractor-html-screenshot-reporter": "0.0.19",
    "protractor-jasmine2-screenshot-reporter": "^0.4.0"
  },
  "scripts": {
    "postinstall": "node_modules/protractor/bin/webdriver-manager update",
    "prestart": "npm install",
    "start": "node_modules/protractor/bin/webdriver-manager start",
    "pretest": "npm install",
    "test": "./flake conf.js",
    "debug": "npm install;node_modules/protractor/bin/protractor debug config.js",
    "update-index-async": "node -e "require('shelljs/global'); sed('-i', /\/\/@@NG_LOADER_START@@[\s\S]*\/\/@@NG_LOADER_END@@/, '//@@NG_LOADER_START@@\n' + cat('bower_components/angular-loader/angular-loader.min.js') + '\n//@@NG_LOADER_END@@', 'app/index-async.html');""
  },
  "dependencies": {
    "node-redshift": "0.0.6",
    "protractor-flake": "^3.0.1"
  }
}
conf.js :

/**
 * Protractor config file for running LIVE test cases
 */
var path = require('path');
var SpecReporter = require('jasmine-spec-reporter').SpecReporter;
var reporter = new SpecReporter({
  spec: {
    displayStackTrace: true
  }
})
// var HtmlScreenshotReporter = require('protractor-jasmine2-screenshot-reporter');
// var reporter = new HtmlScreenshotReporter({
//   dest: 'screenshots',
//   filename: 'Dashboard_E2E_Results.html',
//   showConfiguration: true,
//   reportFailedUrl: true,
//   inlineImages: true,
//   captureOnlyFailedSpecs: true,
//   reportOnlyFailedSpecs: false,
//   showSummary: true,
//   showQuickLinks: true,
//   reportTitle: "SUMMARY_OF_FAILED_SPECS_E2E_RESULTS",
//   metadataBuilder: function(currentSpec, suites, browserCapabilities) {
//     return { id: currentSpec.id, os: browserCapabilities.get('browserName') };
//   },
//   pathBuilder: function(currentSpec, suites, browserCapabilities) {
//     // will return chrome/your-spec-name.png
//     return browserCapabilities.get('browserName') + '/' + currentSpec.fullName;
//   }
// });

exports.config = {
  // seleniumAddress: 'http://0.0.0.0:4444/wd/hub',
  // seleniumAddress: 'http://localhost:4444/wd/hub',
  // CSS Selector for the element housing the angular app - this defaults to
  // body, but is necessary if ng-app is on a descendant of <body>.
  rootElement: 'body',
  allScriptsTimeout: 60000,
  framework: 'jasmine',
  jasmineNodeOpts: {
    // If true, display spec names.
    isVerbose: true,
    // If true, print colors to the terminal.
    showColors: true,
    // If true, include stack traces in failures.
    includeStackTrace: true,
    // Default time to wait in ms before a test fails.
    defaultTimeoutInterval: 60000,
    print: function () {}
  },
  params: {
    // lang: 'zh_CN'
    lang: 'en'
  },

    baseUrl: 'https://dashboard.visenze.com',
    multiCapabilities: [
        //{'browserName': 'firefox'},
         {'specs': ['tests/regression/ftpUpload.js'],
         'browserName': 'chrome',
         },
         // {'specs': ['tests/regression/uploadAndTag.js'],
         // 'browserName': 'chrome',
         // },
         {'specs':['tests/regression/forgetPassword.js','tests/regression/changePassword.js','tests/regression/updateProfile.js','tests/regression/application.js'],
         'browserName': 'chrome',
         },
         {'specs':['tests/regression/teamMember/adminOperations.js','tests/regression/teamMember/powerUserOperations.js','tests/regression/teamMember/readOnlyUserOperations.js'],
         'browserName': 'chrome',
         },
         {'specs':['tests/regression/permission.js','tests/regression/overview.js','tests/regression/apiIntegration.js','tests/regression/clientSideIntegration.js'],
         'browserName': 'chrome',
         },
         {'specs': ['tests/regression/systemManagement2.js','tests/regression/urlRouting/userOperations.js'],
          'browserName': 'chrome'
         },
         {'specs': ['tests/regression/uploadDatafeed.js','tests/regression/schemaEdit.js','tests/regression/search.js'],
           'browserName': 'chrome'
         },
//        {'specs': ['tests/regression/gettingStarted.js'],
//                  'browserName': 'chrome'
//        },
    ],
     plugins: [{
     package: 'protractor-console',
     logLevels: ['severe']
    }],
    // Maximum number of total browser sessions to run. Tests are queued in
    // sequence if number of browser sessions is limited by this parameter.
    // Use a number less than 1 to denote unlimited. Default is unlimited.
//    maxSessions: 1,


    // onPrepare: function(){
    //     // Add a screenshot reporter and set location for storing screenshots
    //     var now = new Date();
    //     var timeStamp = now.getFullYear() + '-' + (now.getMonth() + 1) + '-' + now.getDate() + '-' + now.getHours() + 'h' + now.getMinutes() + 'm' + now.getSeconds()+'s';
    //     var HtmlReporter = require('protractor-html-screenshot-reporter');
//
//        jasmine.getEnv().addReporter(new HtmlReporter({
//            baseDirectory: './test_results/screenshot_reports',
//            pathBuilder: function pathBuilder(spec, descriptions, results, capabilities) {
//                return path.join(capabilities.caps_.browserName, descriptions.join('_'));
//            },
//            docName: 'html_screenshots_test_report_' + timeStamp + '.html',
//            takeScreenShotsOnlyForFailedSpecs: true
//        }));
        // require('jasmine-reporters');
        // jasmine.getEnv().addReporter(new jasmine.JUnitXmlReporter('./test_results/xml_reports', true, true, ''));
 
  //  }
  //  Setup the report before any tests start
// beforeLaunch: function () {
//     return new Promise(function (resolve) {
//         reporter.beforeLaunch(resolve);
//     });
// },


// // Close the report after all tests finish
// afterLaunch: function (exitCode) {
//     return new Promise(function (resolve) {
//         reporter.afterLaunch(resolve.bind(this, exitCode));
//     });
// },
// onPrepare function
onPrepare: function () {
    var width = 1300;
    var height = 1200;
    browser.driver.manage().window().maximize();
    browser.driver.manage().timeouts().implicitlyWait(20000);
    browser.driver.manage().window().setSize(width, height);
    jasmine.getEnv().addReporter(reporter);
    afterAll(function (done) {
        process.nextTick(done);
    });
}


};

You can collect the first few items easily with the trouble CLI: `npx @nicktomlin/trouble protractor protractor-flake` (or `npm i -g @nicktomlin/trouble && trouble protractor protractor-flake`).

Custom parser integration: does that require a rebuild?

@NickTomlin How do I integrate a custom parser, and add some print statements to it to check that it is working as expected? I created the custom parser as below:

module.exports = {
  name: 'custom parser',
  parse (output) {
    let failedSpecs = []
    let match = null
    // escape the dot, and use the m flag so ^/$ match each line of output
    let FAILED_LINES = /^(.*Spec\.js.*)$/gm
    while (match = FAILED_LINES.exec(output)) { // eslint-disable-line no-cond-assign
      if (failedSpecs.indexOf(match[1]) === -1) {
        failedSpecs.push(match[1]) // arrays use push, not add
      }
    }

    return failedSpecs
  }
}

The only change from the default custom parser in /test/unit/support is the regex: `let FAILED_LINES = /^(.*(Spec.js).*)$/g`.

I am not sure if it is picking up the changes. Is there a way to add some console logging in the parser?
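One way to confirm the parser is actually being loaded is to log from inside `parse()` itself. A minimal sketch (the `console.error` calls and the sample output string are illustrative, not part of protractor-flake; note the `m` flag so `^`/`$` match per line, and `push` rather than `add` on the array):

```javascript
// Custom parser sketch with debug logging; protractor-flake only calls
// parse(output) and expects an array of failed spec paths back.
const parser = {
  name: 'custom parser',
  parse (output) {
    const failedSpecs = []
    const FAILED_LINES = /^(.*Spec\.js.*)$/gm
    let match = null
    while ((match = FAILED_LINES.exec(output)) !== null) {
      console.error('[custom-parser] matched:', match[1])
      if (failedSpecs.indexOf(match[1]) === -1) {
        failedSpecs.push(match[1]) // arrays use push, not add
      }
    }
    console.error('[custom-parser] failed specs:', failedSpecs)
    return failedSpecs
  }
}

// Feed it two lines of fake protractor output: only the line containing
// "Spec.js" should be collected.
const result = parser.parse('at /tmp/fooSpec.js:43:40\nSome other line\n')
```

Logging to stderr (`console.error`) keeps the debug output out of the stdout stream that protractor-flake re-parses between attempts.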

Running it via the following command:

protractor-flake --parser /usr/local/lib/node_modules/protractor-flake/test/unit/support/custom-parser.js  --max-attempts=3 -- conf.js 
(node:25005) [DEP0022] DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
[21:23:13] I/launcher - Running 1 instances of WebDriver
[21:23:13] I/local - Starting selenium standalone server...
[21:23:13] I/local - Selenium standalone server started at http://192.168.0.101:51511/wd/hub
no params received
beta
Running on Environment::::beta
ConsoleReporter is deprecated and will be removed in a future version.
Error: Failed expectation

at /Users/poojaag/workspace_pooja/src/ShipWithAmazonUIAutomation/src/specs/CountShipmentFromDashboardSpec.js:43:40

Started
Started


problem trying to remove a folder:build/brazil-integ-tests/


No specs found
Finished in 0.004 seconds



No specs found


Finished in 0 seconds

Improve README.md with protractor.conf.js

The current README.md install & usage instructions aren't exactly clear: do you require & use this within your current protractor.conf.js? Initially I'm inclined to think there is a bit more to it, since:

protractorFlake({
  protractorPath: '/path/to/protractor',

protractorFlake wants to know where protractor lives. True?
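For what it's worth, protractor-flake is invoked from its own runner script (or the CLI), not from inside protractor.conf.js, and `protractorPath` only matters when protractor is not on `$PATH`. A minimal options object as a sketch (the paths here are placeholders):

```javascript
// Options passed to protractorFlake(); nothing needs to change inside
// protractor.conf.js itself.
const options = {
  protractorPath: 'node_modules/.bin/protractor', // where protractor lives
  maxAttempts: 3,
  protractorArgs: ['protractor.conf.js']          // forwarded to protractor
};
```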

About making flake message more visible

The messages/logs from protractor-flake are hard to notice in a long stream of logs. Is there a way to make the messages more visible?

Some of the messages that I am talking about are:
'Re-running tests: test attempt ' + testAttempt + '\n'
'Re-running the following test files:\n'

I am thinking we could have an option to add colors to various log levels? Most of the messages generated by flake are 'info' messages. We could add an option to make 'info' yellow? Or underline?

Let me know if you have any other ideas. I can work on creating a PR for adding color if you like it.
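As a sketch of the idea, raw ANSI escape codes are enough to make the rerun banners stand out (this is illustrative, not an existing protractor-flake API; 33 is the standard ANSI code for yellow, 4 for underline):

```javascript
// Wrap a log message in ANSI codes so info-level flake messages stand
// out in a long stream of protractor output.
function highlightInfo (message) {
  return '\u001b[33m\u001b[4m' + message + '\u001b[0m'; // yellow + underline, then reset
}

const testAttempt = 2;
const banner = highlightInfo('Re-running tests: test attempt ' + testAttempt);
console.log(banner);
```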

Not working with latest version of protractor - 2.3.0

Hi

We have been using your plugin for a few weeks now with version 2.2.0 of protractor, and it has been working perfectly. After upgrading to the latest version, we see an issue when the tests try to re-run: we get errors like the ones below. I'm not sure what is causing it.

WARNING - pattern C:\GIT\titan-ui\test\e2e\integration\data-management\project-workspace\projectAssetsSpec.js did not match any files.
WARNING - pattern C:\GIT\titan-ui\test\e2e\integration\dashboard\dashboardSpec.js did not match any files.
WARNING - pattern C:\GIT\titan-ui\test\e2e\integration\data-management\project-workspace\projectUploadSpec.js did not match any files.
WARNING - pattern C:\GIT\titan-ui\test\e2e\integration\data-management\projects\projectsDashboardSpec.js did not match any files.

protractor-flake re-running only the last failed spec

I am seeing protractor-flake re-run only the last failed spec file. Also interesting: if there is only one failed spec, it re-runs all specs...

OS- Window 7, Node.js - 6.9.5 , Protractor- 5.1.2, Protractor-flake - 3.0.0
----- protractor configuration file -----

var HtmlScreenshotReporter = require('protractor-jasmine2-screenshot-reporter');
var htmlReporter = new HtmlScreenshotReporter({
    dest: 'e2e/testreports',
    filename: 'e2eReport.html',
    reportOnlyFailedSpecs: false,
    captureOnlyFailedSpecs: true,
    showSummary: true,
    showQuickLinks: true
});

exports.config = {
    seleniumServerJar: '../../node_modules/protractor/node_modules/webdriver-manager/selenium/selenium-server-standalone-3.4.0.jar',
    specs: [some specs],
    framework:'jasmine2',
    baseUrl: some_url,
    multiCapabilities: [
        {browserName: 'chrome'}
    ],
    directConnect: true,

    onPrepare: function() {
        browser.driver.manage().window().maximize();
        browser.driver.manage().timeouts().setScriptTimeout(60000);
        jasmine.getEnv().addReporter(htmlReporter);
        var jasmineReporters = require('jasmine-reporters');
        jasmine.getEnv().addReporter(new jasmineReporters.JUnitXmlReporter({
            consolidateAll: true,
            savePath: 'e2e/testreports',
            filePrefix: 'e2eReport'
        }));
    },
    
    jasmineNodeOpts:{
        onComplete: null,
        isVerbose: false,
        showColors:true,
        includeStackTrace:true,
        defaultTimeoutInterval: 120000,
        restartBrowserBetweenTests:true
    },

    // Setup the report before any tests start
    beforeLaunch: function() {
        return new Promise(function(resolve){
        	htmlReporter.beforeLaunch(resolve);
        });
    },

    // Close the report after all tests finish
    afterLaunch: function(exitCode) {
        return new Promise(function(resolve){
        	htmlReporter.afterLaunch(resolve.bind(this, exitCode));
        });
    } 
};

protractor.conf.txt

----- log file ------

[16:30:47] Using gulpfile C:\mypro\gulpfile.js
[16:30:47] Starting 'RUN:E2E'...
[16:30:47] Starting 'webdriver_update'...
[16:30:49] I/http_utils - ignoring SSL certificate
[16:30:49] I/http_utils - ignoring SSL certificate
[16:30:49] I/http_utils - ignoring SSL certificate
[16:30:49] I/http_utils - ignoring SSL certificate
[16:30:49] I/http_utils - ignoring SSL certificate
[16:30:49] I/http_utils - ignoring SSL certificate
[16:30:49] I/update - chromedriver: file exists C:\mypro\node_modules\protractor\node_modules\webdriver-manager\selenium\chromedriver_2.32.zip
[16:30:49] I/update - chromedriver: unzipping chromedriver_2.32.zip
[16:30:50] I/update - chromedriver: chromedriver_2.32.exe up to date
[16:30:50] I/update - selenium standalone: file exists C:\mypro\node_modules\protractor\node_modules\webdriver-manager\selenium\selenium-server-standalone-3.4.0.jar
[16:30:50] I/update - selenium standalone: selenium-server-standalone-3.4.0.jar up to date
[16:30:51] I/update - geckodriver: file exists C:\mypro\node_modules\protractor\node_modules\webdriver-manager\selenium\geckodriver-v0.19.0.zip
[16:30:51] I/update - geckodriver: unzipping geckodriver-v0.19.0.zip
[16:30:51] I/update - geckodriver: geckodriver-v0.19.0.exe up to date
[16:30:51] Finished 'webdriver_update' after 3.96 s
[16:30:51] Starting 'e2e'...
[16:30:51] Finished 'e2e' after 17 ms
[16:30:51] Finished 'RUN:E2E' after 3.99 s
Report destination:   e2e\testreports\e2eReport.html
[16:30:53] W/driverProviders - Using driver provider directConnect, but also found extra driver provider parameter(s): seleniumServerJar
[16:30:53] I/launcher - Running 1 instances of WebDriver
[16:30:53] I/direct - Using ChromeDriver directly...

Failures:
1) #National Data Filter Category - [national-data-filter.spec.js] should have zeros data
  Message:
    Expected 'actual (52,243)' to contain '(0)', 'expecting total has 0 data'.
  Stack:
    Error: Failed expectation
        at C:\mypro\e2e\spec\national-data-filter.spec.js:44:17
        at elementArrayFinder_.then (C:\mypro\node_modules\protractor\lib\element.ts:840:22)
        at ManagedPromise.invokeCallback_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:1366:14)
        at TaskQueue.execute_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2970:14)
        at TaskQueue.executeNext_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2953:27)
        at asyncRun (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2813:27)
        at C:\mypro\node_modules\selenium-webdriver\lib\promise.js:676:7
        at process._tickCallback (internal/process/next_tick.js:103:7)
  Message:
    Expected 'actual (52,243)' to contain '(0)', 'expecting total has 0 data'.
  Stack:
    Error: Failed expectation
        at C:\mypro\e2e\spec\national-data-filter.spec.js:48:17
        at elementArrayFinder_.then (C:\mypro\node_modules\protractor\lib\element.ts:840:22)
        at ManagedPromise.invokeCallback_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:1366:14)
        at TaskQueue.execute_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2970:14)
        at TaskQueue.executeNext_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2953:27)
        at asyncRun (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2813:27)
        at C:\mypro\node_modules\selenium-webdriver\lib\promise.js:676:7
        at process._tickCallback (internal/process/next_tick.js:103:7)

2) #Deep Link Test - [pf-deeplink.spec.js] should display the search result when clicking on different deeplink
  Message:
    Failed: No element found using locator: By(css selector, *[id="contentSearchBodyPopularSearch"])
  Stack:
    NoSuchElementError: No element found using locator: By(css selector, *[id="contentSearchBodyPopularSearch"])
        at WebDriverError (C:\mypro\node_modules\selenium-webdriver\lib\error.js:27:5)
        at NoSuchElementError (C:\mypro\node_modules\selenium-webdriver\lib\error.js:168:5)
        at elementArrayFinder.getWebElements.then (C:\mypro\node_modules\protractor\lib\element.ts:851:17)
        at ManagedPromise.invokeCallback_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:1366:14)
        at TaskQueue.execute_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2970:14)
        at TaskQueue.executeNext_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2953:27)
        at asyncRun (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2813:27)
        at C:\mypro\node_modules\selenium-webdriver\lib\promise.js:676:7
        at process._tickCallback (internal/process/next_tick.js:103:7)
    From: Task: Run it("should display the search result when clicking on different deeplink for local data") in control flow
        at Object.<anonymous> (C:\mypro\node_modules\jasminewd2\index.js:94:19)
        at C:\mypro\node_modules\jasminewd2\index.js:64:48
        at ControlFlow.emit (C:\mypro\node_modules\selenium-webdriver\lib\events.js:62:21)
        at ControlFlow.shutdown_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2565:10)
        at shutdownTask_.MicroTask (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2490:53)
        at MicroTask.asyncRun (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2619:9)
    From asynchronous test:
    Error
        at Suite.<anonymous> (C:\mypro\e2e\spec\pf-deeplink.spec.js:15:6)
        at Object.<anonymous> (C:\mypro\e2e\spec\pf-deeplink.spec.js:7:1)
        at Module._compile (module.js:570:32)
        at Object.Module._extensions..js (module.js:579:10)
        at Module.load (module.js:487:32)
        at tryModuleLoad (module.js:446:12)

3) #Deep Link Test - [pf-deeplink.spec.js] should display the search result when clicking on different deeplink for other
  Message:
    Failed: No element found using locator: By(css selector, *[id="contentSearchBodyPopularSearch"])
  Stack:
    NoSuchElementError: No element found using locator: By(css selector, *[id="contentSearchBodyPopularSearch"])
        at WebDriverError (C:\mypro\node_modules\selenium-webdriver\lib\error.js:27:5)
        at NoSuchElementError (C:\mypro\node_modules\selenium-webdriver\lib\error.js:168:5)
        at elementArrayFinder.getWebElements.then (C:\mypro\node_modules\protractor\lib\element.ts:851:17)
        at ManagedPromise.invokeCallback_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:1366:14)
        at TaskQueue.execute_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2970:14)
        at TaskQueue.executeNext_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2953:27)
        at asyncRun (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2813:27)
        at C:\mypro\node_modules\selenium-webdriver\lib\promise.js:676:7
        at process._tickCallback (internal/process/next_tick.js:103:7)
    From: Task: Run it("should display the search result when clicking on different deeplink for national data") in control flow
        at Object.<anonymous> (C:\mypro\node_modules\jasminewd2\index.js:94:19)
        at C:\mypro\node_modules\jasminewd2\index.js:64:48
        at ControlFlow.emit (C:\mypro\node_modules\selenium-webdriver\lib\events.js:62:21)
        at ControlFlow.shutdown_ (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2565:10)
        at shutdownTask_.MicroTask (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2490:53)
        at MicroTask.asyncRun (C:\mypro\node_modules\selenium-webdriver\lib\promise.js:2619:9)
    From asynchronous test:
    Error
        at Suite.<anonymous> (C:\mypro\e2e\spec\pf-deeplink.spec.js:30:6)
        at Object.<anonymous> (C:\mypro\e2e\spec\pf-deeplink.spec.js:7:1)
        at Module._compile (module.js:570:32)
        at Object.Module._extensions..js (module.js:579:10)
        at Module.load (module.js:487:32)
        at tryModuleLoad (module.js:446:12)

12 specs, 3 failures
Finished in 41.129 seconds

[16:31:38] I/launcher - 0 instance(s) of WebDriver still running
[16:31:38] I/launcher - chrome #01 failed 3 test(s)
[16:31:38] I/launcher - overall: 3 failed spec(s)
Closing report
[16:31:38] E/launcher - Process exited with error code 1

Using standard to parse output
Re-running tests: test attempt 2
Re-running the following test files:
C:\mypro\e2e\spec\pf-deeplink.spec.js
Report destination:   e2e\testreports\e2eReport.html
[16:31:41] W/driverProviders - Using driver provider directConnect, but also found extra driver provider parameter(s): seleniumServerJar
[16:31:41] I/launcher - Running 1 instances of WebDriver
[16:31:41] I/direct - Using ChromeDriver directly...

log.txt

I am not sure what's happening. Would you please look into it? Thanks

Unable to Switch Conf File

@NickTomlin - I'm seeing an issue where I'm not able to pass in the config file as a build arg. Is flake configured to allow a conf file other than the regular protractor.conf.js? This is my gulp task:

gulp.task('e2e-testEnv', ['build-e2e'], function(cb) {

  var protractorFlake = require('protractor-flake');

  protractorFlake({
    configFile: 'protractor.conf.testEnv.js',
    maxAttempts: 3,
    protractorArgs: [
    ]
  }, function (status, output) {
    process.exit(status);
  });
});
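One thing to check: `configFile` is not an option protractor-flake itself recognizes; judging by the README, the conf path reaches protractor through `protractorArgs`, the same way the CLI forwards everything after `--`. A hedged sketch of the options object (the filename is taken from the gulp task above):

```javascript
// protractor-flake forwards protractorArgs to the protractor CLI, and
// protractor accepts the config file as its positional argument.
const flakeOptions = {
  maxAttempts: 3,
  protractorArgs: ['protractor.conf.testEnv.js'] // conf file goes here
};
```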
