
bigtest's Issues

Convergence always does not update default timeout

By default, using .always() sets its timeout to be one-tenth of the convergence timeout. When the convergence timeout is updated using .timeout(), the old one-tenth timeout is still used for that .always convergence in the stack.

new Convergence()
  .timeout(5000).always(() => true) // 50ms default timeout
  .timeout(4000).always(() => true) // 40ms default timeout
  .timeout(3000).always(() => true) // 30ms default timeout
  .do(() => {}) // ensures the previous always uses its own timeout
  .timeout(100).run() // takes at least 120ms, fails to be within 100ms

When no timeout is provided to the .always() method, we should calculate the default one-tenth timeout when .run() is called. In this example, the minimum 20ms timeout would be used and the whole convergence would take 60ms.
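
A sketch of that fix against a simplified convergence (none of these internals are taken from the actual implementation):

// simplified sketch: defer the default `.always()` timeout until run()
class Convergence {
  constructor(timeout = 2000, queue = []) {
    this._timeout = timeout;
    this._queue = queue;
  }

  timeout(ms) {
    return new Convergence(ms, this._queue);
  }

  always(assertion, timeout) {
    // leave `timeout` undefined instead of eagerly computing one-tenth
    // of whatever the convergence timeout happens to be right now
    return new Convergence(this._timeout, [
      ...this._queue,
      { always: true, assertion, timeout }
    ]);
  }

  run() {
    let queue = this._queue.map(node => {
      if (node.always && node.timeout == null) {
        // one-tenth of the *final* timeout, with a 20ms minimum
        return { ...node, timeout: Math.max(this._timeout / 10, 20) };
      }
      return node;
    });
    // ...execute `queue` (omitted in this sketch)
    return queue;
  }
}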

Add `typeable` property

Add a property to @bigtest/interaction which enters text into an input letter by letter, as if a user were typing.
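
A sketch of what such a helper might do under the hood; the name typeable, its signature, and the event details are assumptions for illustration, not the library's API:

// illustrative sketch only: fire a key event cycle per character
// instead of setting the input's value all at once
function typeable(selector) {
  return async function type(text) {
    let input = document.querySelector(selector);
    for (let char of text) {
      input.dispatchEvent(new KeyboardEvent('keydown', { key: char, bubbles: true }));
      input.value += char;
      input.dispatchEvent(new Event('input', { bubbles: true }));
      input.dispatchEvent(new KeyboardEvent('keyup', { key: char, bubbles: true }));
    }
    input.dispatchEvent(new Event('change', { bubbles: true }));
  };
}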

Side-effects in hooks not contributing to individual test time

The @bigtest/mocha readme instructs you to put side-effects in the hooks and assertions in the `it`s. This causes slow tests not to be flagged in the test results, because the slowness is not in the test proper but rather in a before hook. Total suite time appears to be counted properly. Here is a test one can run to demonstrate what I'm talking about:

describe('slow side effects', function() {
  beforeEach(function() {
    return new Promise(resolve => {
      setTimeout(resolve, 200);
    });
  });

  it('reports the slow side effects', function() {
    expect(true).to.be.ok;
  });
});

When you run this test the results will not flag it as a slow test by appending the milliseconds to the test result:

[screenshot: test output without the slow-test duration shown]

I would like to see this:

[screenshot: test output with the slow test flagged with its duration]

Yes, this seems more like a mocha API issue, but it is exacerbated by putting all side-effects in hooks as slow tests will almost never get flagged as far as I can tell. I am wondering if there are any changes we can make here to resolve this in some way.

Archive repo

We should archive this repo now that we’re fully moved over to the bigtestjs org

Add contributing.md

We should add a contributing.md file detailing how to get the project set up, how to run the tests, and possibly how the tests work.

Fill out README

It would be nice to have a small getting started / how-to-use section, plus an explanation of why this repo exists (what problem is it solving?).

Create a command line interface

The CLI should be able to perform various functions such as running the entire test suite, running a particular test, or starting the orchestrator.

Add validations to Interactions

After watching @jgwhite's talk at EmberConf I want to add validations to all of our interaction methods. For example, when clickable is called on an element we should ensure that it:

  • has a proper role (button, link)
  • can be tabbed to
  • etc.

Basically we should validate that what the user wants to interact with is fully interactable for everyone (screen reader, keyboard, and mouse users).

I expect this issue to be broken down into smaller issues when work starts.

Support async/await syntax

Based on the popularity of async/await usage in tests, supporting this without .run() would reduce the boilerplate required to use this pattern with @bigtest/interaction page objects.

With .run():

it('does things', async () => {
  await page.fillName('name namerson').run();

  expect(page.name).to.equal('name namerson');

  await page.clickSubmit().run();

  expect(page.hasSuccessMessage).to.be.true;
});

Proposed:

it('does things', async () => {
  await page.fillName('name namerson');

  expect(page.name).to.equal('name namerson');

  await page.clickSubmit();

  expect(page.hasSuccessMessage).to.be.true;
});
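
One possible way to support this, sketched below: make the interaction object thenable so that awaiting it implicitly calls run(). The class is a simplified stand-in, not the actual implementation:

// simplified stand-in for the real interaction class
class Interaction {
  constructor(queue = []) {
    this.queue = queue;
  }

  do(fn) {
    return new Interaction([...this.queue, fn]);
  }

  async run() {
    // existing behavior: execute the queued interactions in order
    for (let fn of this.queue) await fn();
  }

  then(onFulfilled, onRejected) {
    // being thenable makes `await page.clickSubmit()` behave
    // exactly like `await page.clickSubmit().run()`
    return this.run().then(onFulfilled, onRejected);
  }
}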

Create a convention for a data directory

There are a number of tasks that need to happen on the file system. These are things like concatenating the manifest.js or writing the command port or the pid of the server. All of these need a directory in which to work or record artifacts.

We need to come up with a convention for where this directory will live and how to find it.

For example, one popular strategy would be to look upwards for this directory, similar to the way tools look for a .git directory or a package.json. We'll also want to provide a wrapper library for interacting with this directory from both the CLI and the server. That way, no matter what directory they're running from, the multiple processes can agree on which project they're attached to.
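
For illustration, the upward search could be as small as this (the directory name .bigtest is a placeholder, not a decision):

import fs from 'fs';
import path from 'path';

// walk up from `start` until a `.bigtest` directory is found,
// the same way tools locate `.git` or `package.json`
function findDataDir(start = process.cwd()) {
  let dir = start;
  while (true) {
    let candidate = path.join(dir, '.bigtest');
    if (fs.existsSync(candidate)) return candidate;

    let parent = path.dirname(dir);
    if (parent === dir) return null; // hit the filesystem root
    dir = parent;
  }
}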

Questions:

  • How does this directory get initialized?
  • What happens when it cannot be found?
  • What files get put in it and what format will they have? Is that even in scope for this discussion?

Pre-compile bigtest server sources and package for NPM distribution

We used ts-node to bootstrap development, but ultimately in production we won't want to pay the startup cost of the TS compiler. We need a strategy to distribute the @bigtest/server package pre-built (along with source maps) so that when we invoke the bigtest command, it begins running immediately.

Optionally allow orchestrator to serve agent pre-compiled assets

The agent is the master application running inside of connected browsers; it uses the harness (which runs inside of an iframe) to run tests against an application. Currently, we are using parcel to build and serve the agent to the browser. This is convenient for development when the agent code is under heavy development. However, in production, the agent is never going to change, so we don't want to pay the startup time and resources to fire up a parcel server. Instead, let's add the ability to serve the agent not from parcel, but from a simple static file server.
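
As a sketch of what the production path could look like (the express dependency, directory name, and port are assumptions for illustration):

import express from 'express';

// production sketch: serve the pre-built agent bundle from disk
// instead of booting a parcel dev server
function serveAgentAssets({ dir = 'dist/agent', port = 24000 } = {}) {
  return express()
    .use(express.static(dir))
    .listen(port);
}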

Use ESLint

I'm sure a few stylistic things have already slipped by. To make sure everything remains consistent across packages now and in the future, we should implement ESLint across the entire repo and choose a good starting point for our rules.

I recommend eslint-config-standard as the starting point we can expand on, but I'm open to other suggestions as well.
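
For illustration, a shared root-level config that every package inherits could look something like this (the env settings are placeholders):

// .eslintrc.js at the repository root (sketch)
module.exports = {
  root: true,
  extends: ['standard'],
  env: { browser: true, node: true, mocha: true },
  rules: {
    // package- or project-specific overrides would go here
  }
};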

Define and implement `@bigtest/suite` expression format for declaring tests

We believe that the entire way of thinking about tests as fundamentally a collection of imperative scripts that may or may not have shared blobs of setup code critically limits the capabilities of working with those tests at runtime. Instead we propose enforcing a more functional separation of actions and assertions.

What does this look like?

When writing tests in the conventional style established by unit test frameworks, or even in the more expressive BDD-style frameworks inspired by RSpec, like Jest and Mocha, we often come up against a problem of expressiveness and good practices vs. performance. This is especially problematic in acceptance tests, where run time is generally slow.

Let’s take a look at a common way of writing a test in the traditional way to explore the problem.

describe("logging in", () => {
  it("can successfully logs in and log out", async () => {
    await createUser({ email: "[email protected]" });
    await fillIn("Email", { with: "[email protected]" });
    await clickButton("Log in");

    expect(headline("Welcome to the App")).to.exist();
    expect(text("Signed in as Jonas")).to.exist();

    await clickButton("Log out");
    
    expect(headline("Welcome to the App")).to.exist();
    expect(text("Signed in")).not.to.exist();
  });
});

Even in this relatively simple example, there are a few problems.

  1. It is good practice to write tests in a setup->action->assert style, but we are not following this practice here. While it would be “purer” to write the second part of the test (logging out) as a separate test, it would also be much slower, since we’d need to repeat the first part.

  2. The describe/it terminology does nothing to make this test more understandable. While it works well, and makes sense for unit tests, in this case there really isn’t anything to describe, and “it successfully logging in” doesn’t make any sense.

  3. The steps in the example could really use some elaboration. What happens on the first line? We are setting up a user for the example, but this is not explicitly stated. While in this simple example it is fairly easy to discern a logic to the flow, this will not always be the case if the examples become more complicated.

  4. A good practice is to write a single assertion per example. The main purpose of this is so that the assertion can be adequately described, and a failed assertion be shown with its proper context. However, writing examples this way is prohibitively slow, since we’d have to rerun the whole example for each assertion.

Completely Rethinking it

We propose a completely new style of writing examples which solves these problems by imposing more structure onto the way tests are expressed. We are calling these tree style tests, because the structure of the examples forms a tree. Branches of the tree represent actions to be taken and the leaves represent assertions.

In this model, assertions must be side-effect free; that is, an assertion cannot change the state of the test in any way. This makes it possible to execute multiple assertions after a single action, and even to proceed with further actions after executing an assertion.

To further break down the examples into smaller units, the action taken by a branch node in the tree may be broken down into multiple steps, and there are two distinct kinds of steps: setup and action steps.

To get away from the technical jargon somewhat, we have chosen to call setup steps “Given” steps, action steps “When” steps, and assertion steps “Then” steps. This follows the terminology of tools such as Cucumber, rspec-given, and similar BDD-style frameworks.

If we imagine the example as a data structure, it could look somewhat like this:

Example {
  givens: GivenStep[],
  whens: WhenStep[],
  thens: ThenStep[],
  children: Example[]
}

Let’s look at the example above and imagine what it could look like as a tree:

[diagram: the logging in/out example expressed as a tree of given/when/then nodes]

In pseudo-code this could look as follows:

example("logging in", () => {
  given("a valid user", async () => {
    await createUser({ email: "[email protected]", password: "1234"});
  });
  when("I fill in valid credentials", async () => {
    await fillIn("Email", { with: "[email protected]" });
    await fillIn("Password", { with: "1234" });
  });
  when("I press the 'Log in' button", async () => {
    await clickButton("Log in");
  });
  then("I should be on the home page of the application", () => {
    expect(headline("Welcome to the App")).to.exist();
  });
  then("I should be signed in", () => {
    expect(text("Signed in as Jonas")).to.exist();
  });

  example("logging out", () => {
    when("I log out", async () => {
      await clickButton("Log out");
    });

    then("I should be on the home page of the application", () => {
      expect(headline("Welcome to the App")).to.exist();
    });
    then("I should not be signed in", () => {
      expect(text("Signed in")).not.to.exist();
    });
  });
});

While this example is significantly longer than the original, it is much clearer about the steps being performed, the assertions being executed and the hierarchy of the examples.

By making the actions of a test an explicit, first-class entity, we can make several game-changing optimizations when actually running the tests.

Don't run redundant actions (setup).

Under classic runners, we need to run the entire setup chain for every single assertion. In this hypothetical example, that would be a total of ten actions: one set for each then declaration.

before

assertion 1: logging in / then I see the headline "Welcome"

  1. fill in credentials
  2. when I push the "login button"

assertion 2: logging in / then I see the text "Signed In"

  1. fill in credentials
  2. when I push the "login button"

assertion 3: logging in / logging out / then I see the headline "Welcome"

  1. fill in credentials
  2. when I push the "login button"
  3. when I push the "log out button"

assertion 4: logging in / logging out / then I do not see the text "Signed In"

  1. fill in credentials
  2. when I push the "login button"
  3. when I push the "log out button"

However, because we are now orienting our tests around a specific sequence of actions contained in separate code, we only need to run each sequence of actions one time to put an application into a particular state, and then make any number of pure assertions against that state. In this case, it means we only have to run five actions.

after

Sequence 1

actions

  1. fill in credentials
  2. when I push the "login button"

assertions

  • I see the headline "Welcome"
  • I see the text "Signed In"

Sequence 2

actions

  1. fill in credentials
  2. when I push the "login button"
  3. when I push the "logout button"

assertions

  • I see the headline "Welcome"
  • I do not see the text "Signed In"

Even with this trivial test suite, we've cut the number of actions (slow code) that need to be run in half. Basically, it's a geometric level of savings that will yield gigantic cost reductions for larger, more complicated test suites.

Fail super fast

Another advantage of this approach is that failures at a high level in a tree will automatically fail the rest of the tree, rather than the test suite attempting to execute the same bound-to-fail code again and again. This shortens the feedback cycle in case of errors. This is especially valuable with acceptance tests, where due to synchronization issues, we will often have to wait a long time before deciding to fail an assertion.

Requirements

The pseudo-code outlined above is one example of how we might declare a suite, but it's the tree nature of the suite and the hard separation of side-effects and assertions that gives us the key runtime capabilities. What it actually looks like is still up in the air, but nailing down a draft of it is the goal of this issue.

Given that, there are still some high-level constraints to observe:

No side effects

Classic runners all express test suites by mutating a shared global "root test case". E.g.:

describe('a context', () => {
  beforeEach(() => {
  });
  it('does stuff', () => {
    expect(thing).to.beACertainWay();
  });
});

@bigtest/suite is a test suite description language which will declare test suites as immutable data structures. E.g.

import { describe } from '@bigtest/suite';

export default describe('context', () => {
});

Decoupled from runtime semantics

Classic test runners treat the test suite declaration also as the runtime data about executing a test trial. @bigtest/suite will be a separate package that contains only the suite declaration syntax. (It can live in the server for the time being though)

Specifically, it will not impose any constraints on how a test is actually run. At the highest level, test modules are just functions that return a value (the suite)

Among other things, if we are just working with a tree value, then we can experiment with other syntaxes in the future on how to build that tree value.

Gracefully pass values from actions to assertions

In Mocha and Jest, if you want to pass data from the actions down to child actions or child assertions, you need to declare a mutable variable, assign to it during the action, and then read it during your assertion. For example:

describe('view profile', () => {
  let user = null;
  beforeEach(() => { 
    user = createUser();
  });
  beforeEach(async () => {
    await visit(`/users/${user.id}`);
  });
});

Rather, we need a way for data to flow naturally down the tree. One way might be to pass the return value of each action downwards:

describe('view profile', () => {
  beforeEach(() => { 
    return { user: createUser() };
  });
  beforeEach(async ({ user }) => {
    await visit(`/users/${user.id}`);
  });
});

Convert command server to GraphQL endpoint

Rather than implement our own command interpreter, we'll get one for free using GraphQL. This will allow us to not only get a command and query language (and response payload) with little effort, but also to use all of the tooling surrounding the GraphQL ecosystem (like the interactive GraphiQL shell).

Right now, the command server doesn't do anything except echo the text "Your wish is my command" in every response. Instead, it should be a full GraphQL endpoint. The GraphQL endpoint should only serve responses of type application/json so that later we can serve the web UI over text/html.

The goal here is to get the GraphQL interpreter in place, not to actually do anything with it yet, so we'll define and implement a minimal schema that just contains an echo query that can be used to verify that the HTTP mechanics are up and running:

type Query {
  echo(text: String!): String
}

thus
query: { echo(text: "hello world") }
response: { "echo": "hello world" }
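
A minimal way this could be stood up, as a sketch (express and express-graphql are assumptions here, not a decision; the port echoes the MVP write-up elsewhere in this repo):

import express from 'express';
import { graphqlHTTP } from 'express-graphql';
import { buildSchema } from 'graphql';

// just enough schema to prove the HTTP mechanics work end to end
const schema = buildSchema(`
  type Query {
    echo(text: String!): String
  }
`);

// the resolver reflects its input straight back
const rootValue = {
  echo: ({ text }) => text
};

express()
  .use('/', graphqlHTTP({ schema, rootValue, graphiql: true }))
  .listen(24001);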

Let's remove page objects from @bigtest/interaction...

@bigtest/interaction is named after the Interaction class, but its primary usage is with page objects. The page objects themselves are just thin wrappers around interactions. They only add property getters that may throw on certain conditions; otherwise their methods just get forwarded to the interaction. The term "page object" is actually a misnomer within our library because we can make much smaller, composable, component-based interaction objects... in fact we should just call them that: Interaction Objects.

Then again, do we need the thin wrapper objects to begin with? @interaction (formerly @page) already builds custom Interaction subclasses. These custom subclasses are returned when page-object property methods are called. We would just need to move the other getter properties down to the custom interaction subclasses, and page objects could be removed completely.

If we do add the property getters to interactions, that adds a potential pitfall:

new CustomInteraction()
  .fillName('name')
  .click('.submit')
  .someProperty 
// ^ this is just a getter that returns a value;
// the above interactions never actually happen.

However, this actually helps when composing interactions, and can be converged on using the interaction's own convergence methods

// when composing interactions ...
let interaction = new CustomInteraction()
  .fillName('name')
  .click('.submit')

// ... this works ...
expect(interaction.someProperty).to.equal('something')

// ... and you can converge on values via convergence methods
let afterProp = await interaction
  .do(() => interaction.someProperty)

// or write the assertions within the interaction convergence
new CustomInteraction()
  .fillName('name')
  .click('.submit')
  .once(function() {
    expect(this.someProperty).to.equal('something')
  })

What does this mean for existing page objects?

Well, nothing. Page objects are such a thin wrapper that by just moving the getters down to the underlying interaction, the API stays exactly the same. We would just alias @page to @interaction, deprecate @page, and remove it for the v1 release.

Technically, any use of page.interaction would need to be replaced with just page since the interaction would no longer be wrapped. But that usage is pretty minimal.

What are the responsibilities of the Agent?

Right now, our agent successfully logs "Hello World" into the host application. Given the steps required to get there, this is actually more awesome than it seems. Ultimately though, it will need to become more complex than that.

This quest is to document how the agent will carry out its work of controlling the application under test, and communicating with / responding to requests from the server.

Key questions:

  • What use-cases do we need to cover, and what would complete sequence diagrams of those use-cases look like?
  • What are some hypothetical examples?
  • What server features will be necessary in order to support the agent?
  • What are the major exception conditions that we foresee?

Quest: Define and implement new interactor syntax

Part of #142

After two years of learning, it's time to re-think the interactor API to be more intuitive based on users' existing knowledge of the DOM and the principles of component oriented design.

To do this we'll model it as a queryable, continuous graph of strongly typed relationships rather than an object instance created out of thin air (every interactor is ultimately traceable to the root). This will make interactors more understandable and, ultimately, more composable.

Get all the buttons on the page:

import { Button } from '@bigtest/interactor';

let buttons = Button.all(); //=> Iterable<Button>

let first = buttons.first();
class Form extends Interactor {
  cancel = Button.first();
  submit = Button.last();

  input = TextField.first('[data-test-input]');

  errors = Validation
      .where('[data-validation-error]')
      .where(':visible');
}

In this way, it takes on the characteristics of a Link relationship or an ActiveRecord relationship (for those familiar with those technologies).

These query methods do not actually look up the button when the method is called; they only define the relationship of the button to the structure of the document or sub-document. For example, the only time the DOM is actually queried for the existence of the button is when an action method like click() is invoked. Even in that situation, however, the actions are convergent, which means that they will perform themselves whenever the button becomes available, and not fail if it isn't present at the exact moment the action is invoked.

This is only a straw-man proposal; the definition of the Interactor syntax is part of the scope of this issue.

Action items

`it.always` is sometimes flaky in CI

We're relying on mocha's .timeout() to set the convergence timeout, and sometimes we see this error: Timeout of 100ms exceeded. The convergence should still finish or throw before then, but there may be some timing issue where the convergence doesn't start right away. We should maybe not rely on mocha's timeout for always convergences.

See folio-org/ui-eholdings#231 (comment)

Deprecate `once` in favor of `when`

We like the API of `when` better than `once`, so let's deprecate `once` and remove it in v1.0.0.

When we tackle this we should answer: does always need to be renamed too?
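
The deprecation itself could be a thin warning alias, roughly (sketch, not the actual source):

class Convergence {
  when(assertion, timeout) {
    // existing convergence behavior lives here; stubbed for the sketch
    return this;
  }

  // deprecated alias, removed in v1.0.0
  once(assertion, timeout) {
    console.warn('Convergence#once is deprecated; use Convergence#when instead');
    return this.when(assertion, timeout);
  }
}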

Group common property helpers

While working with interaction page objects, I'm seeing stuff similar to this a lot:

hasField = isPresent('input[data-test-field]');
focusField = focusable('input[data-test-field]');
fillField = fillable('input[data-test-field]');
blurField = blurable('input[data-test-field]');

What if we simplified that with a single property helper?

field = input('input[data-test-field]');

That has a predetermined set of properties...

page
  .field.focus()
  .field.fill('value')
  .field.blur()

We could do something similar with other properties as well.

error = element('[data-test-error]');

...

expect(page.error.exists).to.be.true
expect(page.error.text).to.equal('some error')

And maybe even a way to compose these common properties?

field = element('[data-some-thing]').focusable().blurable().fillable()

Include Sinon & chai by default

We want to be able to import both of these packages (sinon & chai) directly from @bigtest/mocha:

import { expect, sinon } from '@bigtest/mocha'
// ....
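
Inside @bigtest/mocha this could be a couple of re-exports, something like the following sketch (the exact interop depends on the build setup):

// @bigtest/mocha entry point (sketch)
import * as sinon from 'sinon';

export { expect } from 'chai';
export { sinon };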

Building a distributable NPM package, with binaries.

We're currently using ts-node to run the files dynamically from the source root via yarn start. This is good for running the server in isolation while developing, but that's not how folks will be using bigtest in production, and certainly not how we'll be using it in the MVP (#28).

There, we'll be invoking bigtest from within the project directory, where we won't have access to the project scripts. We'll want to have a bigtest executable that can be launched as described in the MVP ticket:

$ yarn bigtest --start-with 'yarn start' --app http://localhost:3000

When this runs, we don't want to pay the startup cost of compiling all the TypeScript source at runtime; we just want to boot the JavaScript and get going. The same goes for the agent server and the harness server. There's no need to start up parcel for those processes since they won't be changed. Instead, we'll need to run servers that serve the pre-compiled agent application (agent.html) and test harness (harness.js) so that they can be loaded into connected browsers.
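
In package.json terms this likely means a bin entry pointing at compiled output plus a build step before publishing; the paths and script below are placeholders:

{
  "name": "@bigtest/server",
  "bin": {
    "bigtest": "./dist/bin/bigtest.js"
  },
  "files": ["dist"],
  "scripts": {
    "prepublishOnly": "tsc --build"
  }
}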

Using `hasClass` with CSS Modules

On an app using CSS Modules, what should be the strategy for using the hasClass() page object property?

  1. Import the CSS file with the style into the page object as styles so you can check for hasClass(styles.hasError, '#idThatWouldHaveError').

  2. Modify hasClass() to accept a regex as the className - that way a class .hasError--{hash} can be found with hasClass('^hasError--.*$', '#idThatWouldHaveError').

Option 1 seems more bulletproof, but should page objects be dealing with CSS like that? Option 2 would be nice, even though it wouldn't work for apps whose CSS Modules setup doesn't keep any part of the original class name.
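
Option 1 in code would look something like the sketch below, assuming the test build can import the component's stylesheet and that hasClass is imported the same way as the other property helpers (the paths are illustrative):

// page object for a component styled with CSS Modules
import { hasClass } from '@bigtest/interaction';
// the CSS Modules import hands back the hashed class names
import styles from '../../src/components/field/field.css';

class FieldPage {
  hasError = hasClass(styles.hasError, '#idThatWouldHaveError');
}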

Helper for checking every member of a collection against a predicate

It would be nice to have a way to check every member of a collection against a predicate and get the results &&-ed together.

Given a collection such as:

  items = collection($selector, {
    isSelected: hasClass('selected', $child)
  });

The ability to add something like the following property would be boss:

  allItemsSelected = all(this.items(), item => item.isSelected);
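
Until such a helper exists, an explicit getter computes the same &&-ed result by hand (sketch reusing the $selector and $child placeholders from the snippet above, and assuming items() returns a plain array):

class ItemsPage {
  items = collection($selector, {
    isSelected: hasClass('selected', $child)
  });

  // true only when every item in the collection is selected
  get allItemsSelected() {
    return this.items().every(item => item.isSelected);
  }
}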

A way to unpause when using the pause helper

The pause helper on @bigtest/mocha & in @bigtest/interaction is really helpful for debugging tests. But when you pause a test with Karma the runner hangs and you need to kill the test server and restart it to rerun any tests.

I'm not sure how we handle this. The first thought is to save the resolve method of the never-ending promise to a global function so you can resolve it at any time. But this pollutes the global namespace 🤔
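
The global-resolve idea, sketched (the global name is a placeholder):

// sketch: `pause` parks the test on a promise whose resolver is
// stashed on window so it can be released from the devtools console
function pause() {
  return new Promise(resolve => {
    window.__bigtestUnpause = () => {
      delete window.__bigtestUnpause;
      resolve();
    };
  });
}

// later, from the browser console:
// > __bigtestUnpause()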

ES modules not configured correctly

It is common in a webpack environment when using babel to not transform modules within the node_modules directory. Our package's "module" field points directly to the src files, which are typically compiled with @babel/preset-env. When babel is configured to not compile our packages within node_modules, bundles that use our packages can end up broken.

This can be seen in @bigtest/interaction tests by removing the (!?/@bigtest) regexp within the babel-loader exclude option in the karma.config.js file. This causes the package to fail to properly extend the convergence class and thereby causes every test to fail.
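
Until the published "module" entry points at pre-compiled output, consumers end up carving our scope out of the babel-loader exclude, along these lines (config fragment for illustration):

// webpack/karma config fragment (sketch): keep transpiling @bigtest
// packages even though the rest of node_modules is excluded
module.exports = {
  module: {
    rules: [{
      test: /\.js$/,
      exclude: /node_modules\/(?!@bigtest\/)/,
      use: 'babel-loader'
    }]
  }
};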

BigTest Alpha

Now that we have an implementation and architecture forming, it's time to laser focus our requirements towards the smallest possible usable system; one that puts together all of the pieces that we currently have lying on the workshop floor. This issue lays out the experience of the MVP, and references the remaining work needed to implement that experience.

Implementation

This section is currently more of a brain dump, but will involve links to other issues eventually.

Experience

We want to drive as close to the usage of bigtest as it will actually happen in the wild. That means being able to drop it into the middle of an existing project and start writing tests against that project immediately. To simulate that experience, we'll use create-react-app to generate a TodoMVC implementation and then run the TodoMVC test suite against it, implemented as a BigTest suite.

So, given that we have this existing TodoMVC app, we should be able to execute the following commands:

# add bigtest to the project
$ yarn add -D @bigtest/server

Now, we can write our tests in bigtest/**/*.test.ts files. For our example application, we'll have a todomvc.test.ts file that contains the TodoMVC test suite expressed as a BigTest suite.

With our application in hand, and the test suite to boot, we'll fire up the BigTest server:

$ yarn bigtestd --app http://localhost:3000 --start-with 'yarn start'
bigtest server v1.5.6
[info] - connect browsers to http://localhost:24000
[info] - run commands at http://localhost:24001
[info] - 25 tests found in `~/project/todomvc/bigtest`

The --app parameter is the url where the application under test is going to be served by its own build system and the --start-with parameter is the command to run to start the application at that url. It's by completely abstracting the application behind http like this that BigTest is able to completely avoid any special hooking into the build system. The test files will be built and maintained completely separately.

Let's look at the output: bigtestd will automatically find the tests living in bigtest/todomvc.test.ts, build them automatically with TypeScript, and gather basic metadata about them, like how many tests there are.

Note that TypeScript support is optional, but we'll be using it in our example just to show that it can be done.

The other thing it will do is print out the port to which browsers that we want to test can be connected. We can connect any number of browsers this way just by pointing them at http://localhost:24000. For our MVP we're going to require that browsers be connected manually, but eventually we'll have commands to launch them automatically as part of the test suite.

$ open -a Safari http://localhost:24000

This will point the Safari browser against our proxy server which will load the agent, the harness, and pull the test suite into the harness. Now it is a connected browser, and we can issue commands that involve it. Which is a convenient segue into our command server. The other thing that got echoed to the console when we started bigtestd was the url where it can accept commands: http://localhost:24001. When you navigate your browser to this url, you'll come to an instance of GraphiQL where you can query the state of the bigtest server, or you can issue commands. In our case, we could issue a query to get the name of all the browsers currently connected:

browsers {
  name
  status
}

resolves to

{
  "browsers": [{
    "name": "Safari",
    "status": "ready"
  }]
}

Or to get metadata about the test suite:

suite {
  length
  tests {
    context {
      description
    }
    description
  }
}

might return something like:

{
  "suite": {
    "length": 25,
    "tests": [{
      "context": {
        "description": "New Todo",
      },
      "description": "clears the input field when item is added"
    }]
  }
}

Now that we have a connected browser, we can actually create a trial with it:

run(browser: "Safari") {
  id
  status
  passed {
    length
  }
  failed {
    length
  }
}

This runs the test suite against the "Safari" browser and collects results at the Trial object referenced by id:

{
  "run": {
    "id": "Safari/1",
    "status": "running",
    "passed": {
      "length": 0
    },
    "failed": {
      "length": 0
     }
  }
}

Now that we have the id, we can query for this trial directly:

trials(id: "Safari/1") {
  status
  passed {
    length
  }
  failed {
    length
  }  
}

which we can use to post updates anywhere:

{
  "trial": {
    "status": "complete",
    "passed": {
      "length": 24
    },
    "failed": {
      "length": 1
     }
  }
}

Optionally have the orchestrator serve test harness.js from pre-compiled assets

The harness is the code actually running inside the document with the application under test. In development, we're building this file (harness.js) using a parcel server so that when we make changes to it, it can be restarted. However, this file will not change when actually used in production, and so we need to have the orchestrator fire up a simple server to serve the static file in production. By contrast, in development, harness.js will be served and dynamically built by parcel.

A way to reference the original mocha method

Currently you can reference window.it or global.it to get the original mocha method, but we should probably have a nicer way of going about this.

I need this for running a11y tests. The a11y checker looks at the rendered dom and reports violations back. This can't be run in a convergence.

Autotag releases

Whenever we merge a new version of a package to master, it automatically publishes that package to npm. This is 🔥

To make it easier to get back to the branch points in the event that we have to work with older versions of the code, or need to make patch releases, we should also have a postpublish script that creates a tag for the newly published package and pushes it back to the origin repo.
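
A postpublish script along these lines would do it (sketch; the tag format is a placeholder and assumes npm's package env vars plus a POSIX shell):

{
  "scripts": {
    "postpublish": "git tag \"${npm_package_name}@${npm_package_version}\" && git push origin --tags"
  }
}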

Update Mirage DevDependencies

I noticed that mirage is using babel-* dependencies. The rest of this repo uses babel 7 (now under the babel org @babel/*). Babel 7 is required for module resolution to work correctly in yarn workspaces / monorepos. It also uses babel-preset-es2016 which has been deprecated in favor of @babel/env.

Mirage also uses an older karma. The current version is 2.0.0 while mirage specifies 1.7.0. This is only an issue because when we yarn we will end up with multiple versions of karma. They will be correctly placed in the corresponding package's node_modules, but having all packages utilize the same versions allows us to hoist dependencies into the root node_modules directory.

Using `isPresent` without a selector

When the selector is omitted from any of the page object properties, the property uses the page object's root element instead. isPresent without a selector will throw an error when the root does not exist. It should catch this error and return false when either the root or the selector within the root does not exist.
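
The proposed semantics, sketched; the $root lookup below is a stand-in for however the page object actually resolves its root element:

// sketch of the proposed behavior for `isPresent`
function isPresent(selector) {
  return function() {
    try {
      // resolving the root throws when the page object's root element
      // is missing; treat that the same as "not present"
      let root = this.$root();
      return selector ? !!root.querySelector(selector) : !!root;
    } catch (error) {
      return false;
    }
  };
}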

Run test suite using harness.

Once a browser is connected (via an agent), a trial can be initiated from the server. The proposed sequence is:

  1. Browser loads the agent url, which calls back home; the browser is now considered connected.
  2. Server receives a command to begin a trial run.
  3. Server sends a message to the agent to initiate the trial run.
  4. Agent loads the suite definition and begins a trial.
  5. For each action sequence, an iframe is created in the harness pointing to the proxy.
  6. Proxy injects both the test suite module and the harness into the source application.
  7. Harness connects to the agent in the parent frame.
  8. Agent tells the harness to run the test.
  9. Harness runs the test and posts the result back.

Eventually, we'll pursue a strategy where the same iframe can be recycled again and again, but this will require putting hooks into the source application on how to tear itself down completely.
