
Comments (17)

pygy commented on May 31, 2024

I thought a bit about this.

Agreed re. o.define(name, handler)

I think the handler signature should be (actual, expected) => string.

The result is used as the pass() message. If the handler throws, we fail the assertion.

For test clarity, it would be nice if the extensions could be scoped per spec using prototypal inheritance, and possibly if global extensions were disallowed entirely. We could also disallow shadowing existing assertions.

Regarding stack traces, currently we report only the first non-framework line of the stack trace. We could use the same logic to filter the whole stack.
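
For illustration, that kind of filtering could look roughly like this (a sketch only; the path check below is a placeholder, not ospec's actual logic):

// Rough sketch: keep only the stack frames that don't point into the framework itself.
// The "/ospec/ospec.js" check is a placeholder for whatever ospec actually matches on.
function filterFrameworkFrames(error) {
    return error.stack
        .split("\n")
        .filter(line => !line.includes("/ospec/ospec.js"))
        .join("\n")
}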


pygy commented on May 31, 2024

The lack of extensibility is actually deliberate. Leo wanted the API surface to be as small as possible, to keep the learning curve low for would-be Mithril contributors...

You can also define helpers that defer to the basic assertions... The trouble is that, currently, we only report the first non-framework line of the stack trace in the report (which would point at the helper function, not very useful). If we printed all non-framework lines, it would make this more usable.


dead-claudia commented on May 31, 2024

@pygy My proposal right here is meant to replace the very idiom o(!!cond(a, b)).equals(true), a variation of which is often used in Mithril itself. The custom assertions defined this way effectively desugar to that, just with potentially better error messages, so in effect it does delegate to the basic assertions - I'm explicitly not proposing the complex mess that is Chai's assertions.

Hypothetically, mithril-query could integrate via this, as a concrete example of its simplicity and usage:

function test(sel, cond) {
    return Array.isArray(sel) ? sel.some(cond) : !!cond(sel)
}

o.define("has", (o, s) => test(s, o.has))
o.define("notHas", (o, s) => !test(s, o.has))
o.define("contains", (o, s) => test(s, o. contains))
o.define("notContains", (o, s) => !test(s, o. contains))
o.define("hasAtLeast", (o, n, s) => test(s, s => o.find(s).length >= n))

You can also define helpers that defer to the basic assertions... The trouble is that, currently, we only report the first non-framework line of the stack trace in the report (which would point at the helper function, not very useful). If we printed all non-framework lines, it would make this more usable.

While I agree this behavior is an issue, it's separate from my proposal here, and so I'd rather discuss it in a separate issue. (Stack trace parsing can get very complicated very quick, because of async traces and the sheer wild west it is right now. I know because I've done it.)


dead-claudia commented on May 31, 2024

Scoped assertions are a bit more complicated to implement with the current architecture than I'd prefer, and few test frameworks with built-in assertions have them.

I have ideas on how to enable them while also simplifying the framework, but they're very semver-major.

I think the handler signature should be (actual, expected) => string.

The result is used as the pass() message. If the handler throws, we fail the assertion.

This is too limiting:

  • Some require 3 arguments (float equality, mithril-query's hasAtLeast, etc.)
  • Most assertions aren't direct equality, and need a way to generate a custom error message. (My proposals here also have this issue.)
    • Something that interpolates %0, %1, etc. could suffice for the message.

Regarding stack traces, currently we report only the first non-framework line of the stack trace. We could use the same logic to filter the whole stack.

👍


pygy commented on May 31, 2024

This is too limiting:

* Some require 3 arguments (float equality, mithril-query's `hasAtLeast`, etc.)

* Most assertions aren't direct equality, and need a way to generate a custom error message. (My proposals here also have this issue.)
  
  * Something that interpolates `%0`, `%1`, etc. could suffice for the message.

The handlers can do whatever they want with the values, and validate the input however they like. If they need several values as input, they can accept a structured object, or an array...

We could also accept (actual, ...references), mapping to o(value).myThing(a, b, c). Structured objects are probably better for readability, but worse for typeability...
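
For instance, a three-argument float comparison under that shape might look like this (hypothetical; o.define and closeTo don't exist today):

// Hypothetical: the handler gets the actual value plus any reference arguments,
// returns the pass() message, and throws to fail the assertion.
o.define("closeTo", (actual, expected, delta) => {
    if (Math.abs(actual - expected) > delta) {
        throw new Error(`${actual} is not within ${delta} of ${expected}`)
    }
    return `${actual} is within ${delta} of ${expected}`
})

// Would map to:
o(0.1 + 0.2).closeTo(0.3, 1e-9)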

I have something in mind for scoped extensions as a semver minor addition. I don't like the idea of global extensions, actually, I'd have to pinch my nose hard to add them.


pygy commented on May 31, 2024

Here's a preliminary version: 1c52581


pygy commented on May 31, 2024

Now that I think of it, you can get the same benefits by using this (variations of) pattern:

o({success: true}).deepEquals(myAssertion(x,y,z))(optionalContext)

... where myAssertion may return {problems: [...]} on failure.

You lose the subject-verb-object feel though.
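
A quick sketch of that pattern (hasChildren is a hypothetical helper, not part of any library):

const o = require("ospec")
const m = require("mithril")

// Returns the shape deepEquals compares against on success,
// or a descriptive {problems: [...]} object on failure.
function hasChildren(vnode, n) {
    const count = (vnode.children || []).length
    return count === n
        ? {success: true}
        : {problems: [`expected ${n} children, found ${count}`]}
}

o("renders two children", function () {
    const vnode = m("div", [m("span"), m("span")])
    o({success: true}).deepEquals(hasChildren(vnode, 2))("child count")
})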

I didn't explain why I don't like global extensions...

I hate it when I dive into a test suite, only to find mysterious globals defined by either the testing framework or extensions thereof, with no local indication of where to find info about them.

I don't want to inflict more of that on others.

With scoped extensions, you can track what's happening at the source file level (optionally by following import or require invocations).


dead-claudia commented on May 31, 2024

@pygy

I didn't explain why I don't like global extensions...

I hate it when I dive into a test suite, only to find mysterious globals defined by either the testing framework or extensions thereof, with no local indication of where to find info about them.

I don't want to inflict more of that on others.

With scoped extensions, you can track what's happening at the source file level (optionally by following import or require invocations).

I have a semver-major idea to resolve this:

  • o(a).equals(b)(message) → o.equals(a, b)
  • o(a).notEquals(b)(message) → o.notEquals(a, b)
  • o(a).deepEquals(b)(message) → o.matches(a, b)
  • o(a).notDeepEquals(b)(message) → o.notMatches(a, b)
  • Each of these would throw an error that would be caught and handled accordingly
  • New method: o.fail(template, {...params}) always throws, where params.actual and params.expected are set on the error itself and where template is a string with %{key} interpolations (%{ can be escaped by doubling the percent) - see the sketch after this list
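
A minimal sketch of that template format, assuming the proposed semantics (fail and interpolate below are illustrative, not an existing API):

// Interpolates %{key} from params; "%%{" escapes to a literal "%{".
function interpolate(template, params) {
    return template.replace(/%%\{|%\{([^}]*)\}/g, (match, key) =>
        match === "%%{" ? "%{" : JSON.stringify(params[key], null, 2)
    )
}

function fail(template, params) {
    const err = new Error(interpolate(template, params))
    err.actual = params.actual
    err.expected = params.expected
    throw err
}

// fail("Expected %{expected}, found %{actual}", {expected: 1, actual: 2})
// would throw an Error with message 'Expected 1, found 2' and both values attached.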

This would have the added benefit of doing away with the mess of assertion methods that currently exists, and coding the new ones is dead simple. The new forms could be added in a minor release with the old forms runtime-deprecated, and then we could remove the old forms in a later major. Removing that assertion object would also strip a significant chunk of code from the framework, both in error tracking and in simply not needing a class.

As for why I'd remove support for message: in most cases you should be defining child tests instead, and the messages have never reliably worked in the first place.


dead-claudia commented on May 31, 2024

As for custom assertions? They're just functions. Nothing special required on our end, and it's infinitely scalable with literally zero effort by us.


pygy commented on May 31, 2024

I don't understand how the whole thing would work...

  • o(a).equals(b)(message) → o.equals(a, b)

What happens to the message?

In what scenario would you call o.throw?

Also, I don't understand how it relates to mysterious globals that you can only learn about by scouring the project tree looking for details about the testing setup, vs o.addExtension('name', require(myExtension)), or even myExtension.decorate(o)... In the latter case, you can easily look at npm.im/my-extension (or follow a local path) and understand what happens...


I have been thinking about ospec, and there are pain points that I would like to address.
Leo optimized ospec for the simplicity of test writing and good test presentation (hence the o(actual).equals(expected)(withContext) API that makes assertions look like bullet points with English sentences).

The API was also very small, and it is an important feature for newcomers.

There are two drawbacks when using ospec though:

  • since assertions are so simple, you can easily end up with tons of repetitive tests. A simple but backwards incompatible code change can become daunting because you must also update the test suite, sometimes extensively, even though better testing abstractions (i.e. custom assertions) could have made the transition easier. Some may see it as a feature not a bug, but you can value stability in general without liking the pain of having to redo tons of tests when needed. In practice, you end up with sclerotic code bases that are hard to change.

  • in many places you end up with tests that look like this:

o("renders plain text", function() {
  render(root, "a")
  o(root.childNodes.length).equals(1)
  o(root.childNodes[0].nodeValue).equals("a")
})

where a failure of the first assertion can entail an exception while trying to evaluate the second one. In the report, the error thrown is often more prominent than the previous assertion failure, and is thus distracting.

I wish I could do

o("renders plain text", function() {
  render(root, "a")

  o(root.childNodes.length).equals(1)
  && o(root.childNodes[0].nodeValue).equals("a")
})

If there are several dependent assertions, one can use parentheses and the comma operator

  o(root.childNodes.length).equals(2)
  && (
    o(root.childNodes[0].nodeValue).equals("a"),
    o(root.childNodes[1].nodeValue).equals("b")
  )

Similarly, if a set of simple assertions fails at the start of a spec, you know the whole spec is going to explode, and you'll have to parse screens of errors to get to the salient info.

Combined with assertions that return booleans, o.bail() would skip the next tests in the suite.

o("simple test", () => {
  o(foo).equals(bar) || o.bail()
})

Returning booleans would make us lose the ability to add a context message though, so we'd have to introduce another mechanism. o(foo).equals(bar, message) could work, but you were mentioning assertions that require more than one parameter (e.g. how many decimal places should be considered for float equality), which conflicts with that...

Edit: wrote under unrelated pressure, ended up with many small but important errors.


dead-claudia commented on May 31, 2024

@pygy

I don't understand how the whole thing would work...

  • o(a).equals(b)(message) → o.equals(a, b)

What happens to the message?

Already answered: (copied from my previous comment)

As for why I'd remove support for message: in most cases you should be defining child tests instead, and the messages have never reliably worked in the first place.

In my experience, I've not seen a single case of an explicit message being used where wrapping the assertion in its own test wasn't more appropriate. And that's across every code base I've looked at ever, regardless of source language. Even in loops like this, separate tests have always been more appropriate:

// Avoid
for (let i = 0; i < 0x10FFFF; i++) {
  assert.equal(getHex(i), whatever, `hex ${i.toString(16).padStart(4, "0")}`)
}

// Prefer
function testHex(i) {
  o(`hex ${i.toString(16).padStart(4, "0")}`, () => {
    assert.equal(getHex(i), whatever)
  })
}

for (let i = 0; i < 0x10FFFF; i++) testHex(i)

In what scenario would you call o.throw?

The general format of all the assertions would be like this:

function assertFoo(expected, actual, ...rest) {
  if (!cond(expected, actual, ...rest)) {
    o.fail("Expected %{expected}, found %{actual}", {expected, actual})
  }
}

For a heavier example, my clean-assert library is built on this core pattern, with assert.is(Type, value) following it fairly directly. Here's that translated to this proposal:

export function is(Type, object) {
    if (object == null || !isType(Type, object)) {
        if (typeof Type === "string") {
            o.fail(
                "Expected {actual} to be " +
                (/^[aeiou]/.test(Type) ? "a " : "an ") + Type,
                {actual: object}
            )
        } else {
            o.fail(
                "Expected {object} to be an instance of {expected}",
                {actual: object.constructor, expected: Type, object: object}
            )
        }
    }
}

function isType(Type, value) {
    if (typeof Type === "string") {
        switch (Type) {
        case "array":
            return Array.isArray(value)

        case "iterable":
            return typeof value[Symbol.iterator] === "function"

        case "array-like":
            return typeof value.length === "number"

        case "reference":
            return typeof value === "function" || typeof value === "object"

        default:
            return typeof value === Type
        }
    } else if (Type != null && (
        typeof Type === "function" || typeof Type === "object"
    )) {
        if (Symbol.hasInstance in Type) return !!Type[Symbol.hasInstance](value)
        if (typeof Type === "function") return value instanceof Type
    }

    throw new TypeError(
        "`Type` must be a string, function, or object with `Symbol.hasInstance`"
    )
}
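
Hypothetical usage in a test, assuming the is() above and ospec's o in scope:

o("validates types", function () {
    is("array", [1, 2, 3])   // passes: Array.isArray matches
    is(Date, new Date())     // passes: instanceof via Symbol.hasInstance
    // is("array", "nope")   // would call o.fail(...) and fail the test
})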

Also, I don't understand how it relates to mysterious globals that you can only learn about by scouring the project tree looking for details about the testing setup, vs o.addExtension('name', require(myExtension)), or even myExtension.decorate(o)... In the latter case, you can easily look at npm.im/my-extension (or follow a local path) and understand what happens...

If custom assertions are simple functions you import, you can just look up the module, whether it be in your project or from npm. It's just a matter of knowing the module system - you don't need to know much else.

I have been thinking about ospec, and there are pain points that I would like to address.
Leo optimized ospec for the simplicity of test writing and good test presentation (hence the o(actual).equals(expected)(withContext) API that makes assertions look like bullet points with English sentences).

I agree, and I do share that concern. However, ospec isn't Cucumber, and in all but the simplest of cases it doesn't really resemble that bullet-point style anyway.

The API was also very small, and it is an important feature for newcomers.

BTW, my proposal would result in an equally simple API in the end.

There are two drawbacks when using ospec though:

  • since assertions are so simple, you can easily end up with tons of repetitive tests. A simple but backwards incompatible code change can become daunting because you must also update the test suite, sometimes extensively, even though better testing abstractions (i.e. custom assertions) could have made the transition easier. Some may see it as a feature not a bug, but you can value stability in general without liking the pain of having to redo tons of tests when needed. In practice, you end up with sclerotic code bases that are hard to change.

FWIW I've hit this point many times just within Mithril, and that's where ospec really struggles. It's kind of like Tape in that respect - it's nice for simpler stuff but the moment you need flexibility, you're fighting the testing framework. And Mithril has quite a few places where it needs that extra flexibility, particularly around DOM testing.

  • in many places you end up with tests that look like this:
o("renders plain text", function() {
  render(root, "a")
  o(root.childNodes.length).equals(1)
  o(root.childNodes[0].nodeValue).equals("a")
})

where a failure of the first assertion can entail an exception while trying to evaluate the second one. In the report, the error thrown is often more prominent than the previous assertion failure, and is thus distracting.

I wish I could do

o("renders plain text", function() {
  render(root, "a")

  o(root.childNodes.length).equals(1)
  && o(root.childNodes[0].nodeValue).equals("a")
})

If there are several dependent assertions, one can use parentheses and the comma operator

  o(root.childNodes.length).equals(2)
  && (
    o(root.childNodes[0].nodeValue).equals("a"),
    o(root.childNodes[1].nodeValue).equals("b")
  )

Similarly, if a set of simple assertions fails at the start of a spec, you know the whole spec is going to explode, and you'll have to parse screens of errors to get to the salient info.

Combined with assertions that return booleans, o.bail() would skip the next tests in the suite.

o("simple test", () => {
  o(foo).equals(bar) || o.bail()
})

Returning booleans would make us lose the ability to add a context message though, so we'd have to introduce another mechanism. o(foo).equals(bar, message) could work, but you were mentioning assertions that require more than one parameter (e.g. how many decimal places should be considered for float equality), which conflicts with that...

My plan here the whole time was to have o.isEqual and friends throwing on failure, so that gets avoided entirely. And yes, I know that issue all too well.

Edit: wrote under unrelated pressure, ended up with many small but important errors.

No problem!


pygy commented on May 31, 2024

Ah, I had missed your last paragraph.

Here's an example of context messages that would be trickier with tests. You'd need a wrapping spec, and then you'd have to pass state between the before hook and the tests by using closure variables in the spec scope.

The general format of all the assertions would be like this:

Now your API plan starts to make sense, but I don't necessarily agree...

By having assertions just throw, you're conflating two information channels: one that goes to the report, and one that goes to the test. You can catch the error from the test suite, but then the failure goes unreported unless you rethrow it or fail it manually. try/catch is also more cumbersome to use than booleans.

I'm not a fan of the o.fail proposal either: 1) because you're re-inventing template strings (I know that the Mithril tests should run on IE11, but still... we can use a string formatter as test-util, I wouldn't bake this into ospec) and 2) because of the negative valence of the word.

With the system I propose, assertions can use return/throw to send data to the report, and the engine converts that to booleans that can be used in test space, optionally to bail() if appropriate.

Suppose that we had

o.addExtension({satisfies(value, validator) { return validator(value) }})

const equalsWithPrecision = delta => ref => value => {
  // format() is some value formatter, e.g. JSON.stringify
  if (typeof value !== "number") throw `${format(value)} isn't a number`
  if (Math.abs(ref - value) > delta) {
    throw `${value} out of range ${ref} +/- ${delta}`
  } else {
    return `${value} in range ${ref} +/- ${delta}`
  }
}

we could then do

const toTheMillionth = equalsWithPrecision(0.000001)

o(myValue).satisfies(toTheMillionth(0.2))

We could probably drop the general extension mechanism, and replace it with satisfies. Not even sure notSatisfies would be of use, vs not(fn)

o.not = f => value => {
  var threw = false, result
  try { result = f(value) } catch (e) { threw = true; result = e }
  if (threw) return `expected: ${result instanceof Error ? result.message : result}`
  else throw `unexpected: ${result}`
}

... but this is bikeshedding...

.satisfies has the advantage that it is backwards compatible (I do like the bullet points, personally, and returning booleans opens the door to multi-level bullet points, taking the metaphor further). It also makes extensions plain functions as you proposed, and it works without needing o.fail.

I'd like to take this as far as possible without breaking backward compat, release a v4, and then perhaps investigate boolean return values.

A backward-compatible alternative to boolean return values would be o.k, which would be set to the success value of the last assertion...

Edit: in general, I think that the use of wholesome language in APIs is important (when possible, of course).


dead-claudia commented on May 31, 2024

@pygy

  1. because you're re-inventing template strings (I know that the Mithril tests should run on IE11, but still... we can use a string formatter as test-util, I wouldn't bake this into ospec)

Yes, they're templates, but it's IE-compatible and it's more like inventing a specific type of tagged template. Also, BTW, that formatter could be as simple as JSON.stringify(value, null, 2) to start.

  2. because of the negative valence of the word.

The word was meant to be negative, as that's kind of the point: it's literally to report a failed assertion.


I still don't want to block the next release on this - we have enough for a good minor/patch release, and this is already pretty contentious in and of itself.


pygy commented on May 31, 2024

@isiahmeadows re. bail/fail and o.k

The first release of ospec used to crash on the first error thrown, and it was problematic when working on the Mithril core router, because the router API test suite was running before the core router test suite. When changing the core router, you'd get API crashes, and wouldn't get proper info from the core test suite.

So I changed ospec to treat exceptions as test failures. It is an improvement because the relevant info now gets out there in the log, but it gets drowned in the failures.

So I now propose that errors thrown in test suites (as well as done(error) calls and errors thrown in promises) interrupt the current spec (except for the after hook).

With this, there's no need for o.bail, and we still give test authors some control for corner cases (say, a test suite that depends on a DB connection can throw in the o.before hook if its preconditions are unmet).

I've pushed a patch in #17, the exact semantics may have to be tweaked.

Currently, throwing in o.before() cancels the whole spec and nested specs, with the exception of the corresponding o.after().

Throwing in a test or an o.*Each hook cancels the following tests and nested specs as well, but it doesn't interrupt the current series (so if you throw in beforeEach, the test and afterEach still run).

We may have to discuss the exact semantics, but I think that this is superior to what we have now.
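
To make that concrete, here's roughly how such a suite might behave under the proposed semantics (a sketch only; connectToDb, disconnectFromDb, and runQuery are hypothetical helpers):

const o = require("ospec")

o.spec("db-backed suite", function () {
    o.before(function () {
        // If this throws (say, no DB available), the whole spec and any nested
        // specs would be cancelled - but the o.after below would still run.
        connectToDb()
    })
    o.after(function () {
        disconnectFromDb()
    })
    o("query works", function () {
        o(runQuery("SELECT 1")).equals(1)
    })
})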


dead-claudia commented on May 31, 2024

@pygy I just remembered another use for o.fail: asserting rejections. When you want to verify something rejects, o.throws isn't really an option, but a .then(() => o.fail("expected a rejection"), e => { ... }) where you then assert something about the rejection is actually pretty useful in practice.
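
A sketch of that pattern with the proposed o.fail (doAsyncThing is a hypothetical promise-returning function):

o("rejects bad input", function () {
    return doAsyncThing("bad input").then(
        () => o.fail("expected a rejection", {}),
        (e) => o(e instanceof TypeError).equals(true)
    )
})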


pygy commented on May 31, 2024

@dead-claudia can I close this?


dead-claudia commented on May 31, 2024

Yes, it can be closed.
