proof's Issues

Suggest test naming conventions.

Suggest using proof._coffee as the harness name instead of harness._coffee. Of course, a harness can be regular CoffeeScript, JavaScript or Streamlined JavaScript. Suggest test.t as a test name convention, because syntax highlighting can be discovered through the shebang line on most rational editors. Bring the examples in proof into line with the conventions.
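
As a sketch of how the convention might read, assuming a per-directory harness that simply wraps the proof module and hands back assertion helpers, a test file could look like the following; the shebang line, not the .t extension, is what the editor keys its highlighting off of. The file name, harness shape, and invocation here are assumptions for illustration, not settled conventions.

```javascript
#!/usr/bin/env node
// t/arithmetic/addition.t -- hypothetical test file. Editors that inspect the
// shebang will highlight this as JavaScript despite the .t extension.
// `./proof` is an assumed harness in the same directory that wraps the proof
// module and supplies assertion helpers such as `equal`.
require('./proof')(1, function (equal) {
  equal(1 + 1, 2, 'addition')
})
```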

Ensure next step executes after previous step completes.

Currently, it might be possible for a callback to be called back immediately, causing the next step to execute immediately. We need to wait until the step function itself completes. There may be a case where multiple callbacks are being requested, but the first one returns immediately, so that the next step executes while the previous step is still registering callbacks.
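
A minimal sketch of the sequencing fix, outside of Proof's actual runner: the hypothetical `createCallback` counts outstanding callbacks, but the runner refuses to advance until the step function has returned, so a callback that fires synchronously cannot race ahead of callbacks registered later in the same step.

```javascript
// Not Proof's code; a sketch of deferring the next step until the current
// step function has returned and every requested callback has fired.
function runStep (step, next) {
  var outstanding = 0, stepReturned = false

  function createCallback () {
    outstanding++
    return function () {
      outstanding--
      advance()
    }
  }

  function advance () {
    // A synchronous callback decrements `outstanding` before the step has
    // returned, but `stepReturned` is still false, so we do not advance yet.
    if (stepReturned && outstanding === 0) next()
  }

  step(createCallback)
  stepReturned = true
  advance()
}
```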

Nicer error reporting.

Give some consideration to the difference between an exception and an error of usage. As it stands, it is not pretty to barf with a stack trace when the user specifies a test program twice in a test run. A simple error message plus a usage message ought to be enough.

Maybe error messages can go to: http://bigeasy.github.com/proof/errors#doubles where you can find more about this message.
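
A sketch of the distinction, with hypothetical names: a usage mistake such as the doubled test program gets a one-line message, the usage text, and a link to further documentation, while a genuine exception still gets its stack trace.

```javascript
// `UsageError`, `usage`, and `run` are hypothetical; the point is only the
// branch between a usage message and a rethrown exception.
function UsageError (code, message) {
  this.code = code
  this.message = message
}

var usage = 'usage: proof <test>...'

function run (argv) {
  var seen = {}
  argv.forEach(function (program) {
    if (seen[program]) throw new UsageError('doubles', 'test given twice: ' + program)
    seen[program] = true
  })
}

try {
  run(process.argv.slice(2))
} catch (e) {
  if (e instanceof UsageError) {
    console.error('error: ' + e.message)
    console.error(usage)
    console.error('see: http://bigeasy.github.com/proof/errors#' + e.code)
    process.exit(1)
  } else {
    throw e
  }
}
```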

Create arguments to adjust time formatting.

It's nice to have the decimal points line up. If you know you're going to have a long running test, you can run your Proof test glob with a wider column for test run time.

Add process id to unified runner output.

Add the process id of the test to the unified runner output, so we can report what might have been failure notes, such as exceptions. We don't want to get into the habit of parsing output. That falls into the trap of complexity.

Perhaps, if we're running at Travis CI, or running against the progress runner, we can emit JSON instead of using util.inspect. Maybe we just use JSON because it can be parsed.
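
A sketch of what line-oriented JSON output might look like; the record fields are assumptions, but the idea is that the child's process id travels with every event, so exceptions and failure notes can be attributed without parsing free-form text.

```javascript
// Hypothetical emitter: one JSON record per line, each carrying the pid.
function emit (type, fields) {
  var record = { type: type, pid: process.pid, time: Date.now() }
  Object.keys(fields).forEach(function (key) { record[key] = fields[key] })
  process.stdout.write(JSON.stringify(record) + '\n')
}

emit('plan', { expected: 3 })
emit('assertion', { ok: true, message: 'one equals one' })
```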

Implement `$cleanup`.

Implement cleanup according to my README.md, where I claimed that it ran before and after each test. Currently, there is no such thing. I believe I have a start on some form of teardown, but I'm not actually doing teardown.
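
A sketch of the claimed behavior rather than an implementation, with hypothetical names throughout: if the context carries a `$cleanup` function, the harness would invoke it before and after each test, so debris left by a previous crashed run is cleared as well.

```javascript
// Hypothetical harness helper: run `$cleanup` around the test body.
function runTest (context, test, done) {
  function cleanup (next) {
    if (context.$cleanup) context.$cleanup(next)
    else next()
  }
  cleanup(function () {            // cleanup before, in case a prior run crashed
    test(context, function () {
      cleanup(done)                // cleanup after the test completes
    })
  })
}
```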

Develop a strategy for testing exceptions.

I've yet to test that exceptions are correctly thrown. It can be tedious to write a try/catch block that also needs to test that the exception was not thrown. Suppose you simply put a fail call underneath the point in the block where the exception is thrown, but that doesn't fit with the logic of Proof. How do you tie the fail call to the equality test for the expected exception?

Do you have a method that returns an exception? It would be nice to be able to do a detailed inspection of an exception that has properties other than a message.
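
One possible strategy, sketched outside of Proof: a small helper that runs a block and hands back whatever it throws, so the exception can be fed to ordinary equality assertions, including assertions on properties other than the message. Returning null when nothing is thrown lets the missing exception fail the same assertion instead of requiring a separate fail call. The names here are hypothetical.

```javascript
// Run a block and return the exception it throws, or null if it doesn't.
function trap (block) {
  try {
    block()
    return null
  } catch (e) {
    return e
  }
}

// Hypothetical usage inside a test body:
// var e = trap(function () { parseConfig('') })
// equal(e && e.code, 'EEMPTY', 'empty configuration rejected')
```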

Hide `Test` private members with custom prefix.

Because the context is a place for the user to store things, we need to hide our private members with more than a single underscore. Already it collides with the Streamline.js callback. We need to add a prefix like __proof_, to go all out with the hiding.

I'd do this as part of the async commit, but it is going to be a noisy diff, so I'd like to isolate it in a single commit.

Cull and color README.md.

Remove the last of the README.md stuff that you've kept around in case you needed inspiration. Remove any petulance or defensiveness. Add syntax highlighting for JSON and CoffeeScript examples.

Detect compilation errors.

Some tests fail due to compilation errors. They fail immediately and are marked as successful runs, but with an undefined number of tests. They need to be marked as failures and the entire test run needs to be marked as a failure.

To reproduce, create a test program that will not compile.

A passed test when no tests are expected is reported as success.

A passed test when no tests are expected is reported as success. It should be reported as a failure. Tests are failures when their expected assertions do not match their actual assertions. I might only be checking to see if expected matches passed. We need to have all three be equal.
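
A sketch of the corrected check, with assumed field names: a run only counts as a pass when the planned count, the count of assertions actually run, and the count of passing assertions all agree.

```javascript
// Hypothetical summary check: expected, actual, and passed must all match.
function passed (summary) {
  return summary.expected === summary.actual &&
         summary.actual === summary.passed
}
```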

Use a name besides `flow`.

We already have the ability to pluck a function bound to the Test object out of the Test object. Let's think of a name for generating a callback. What about this.stash or this.store or this.cubbyhole?

Add pipeline scaffolding.

This is a euphemism. What we're really talking about here is creating an interface to bash to do the rigamarole of testing: an extensible mechanism, like the one in git, that allows users to extend the proof executable to suit the needs of their testing environment. That way, proof can continue to stay simple, providing the runner and some formatters. Pipelines in Node.js are not as easy as they are in bash, nor are they as easy as Node.js people say they are.

This makes proof less conducive to Windows. I'm okay with that. Windows is not a development platform.

Actually, it changes nothing, because piping was always part of Proof. This will only make the pipelines reusable.
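
A sketch of the git-style lookup, not a commitment to an interface: when the subcommand is not a built-in, look for an executable named `proof-<subcommand>` on the PATH and hand it the remaining arguments.

```javascript
// Hypothetical dispatcher; `builtins` maps built-in subcommand names to
// functions, anything else is delegated to a `proof-<name>` executable.
var spawn = require('child_process').spawn

function dispatch (builtins, argv) {
  var command = argv[0], remainder = argv.slice(1)
  if (builtins[command]) return builtins[command](remainder)
  var child = spawn('proof-' + command, remainder, { stdio: 'inherit' })
  child.on('exit', function (code) { process.exit(code) })
}
```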

Print summary when the plan arrives in `proof progress`.

Currently, if a test hangs before the first assertion, there is no indication of which test has hung. The progress runner will have received a plan, but will not print to the console until it gets its first assertion. It should print a summary when it gets the plan.

Display compilation error output.

Gather up standard error and standard out prior to printing the plan. If the plan never comes and the test program instead exits with a non-zero value, then display the gathered output. This is the nature of test programs that failed to compile.
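
A sketch of the buffering, with an assumed plan format: hold on to anything the child writes before the plan arrives, and if the child exits non-zero without ever printing a plan, hand the buffered output to whoever reports the compilation failure.

```javascript
// Hypothetical watcher; the plan detection is a placeholder assumption.
function watch (child, onCompileError) {
  var buffered = [], planned = false
  child.stdout.on('data', function (chunk) {
    if (/^\d+\n$/.test(String(chunk))) planned = true   // assumed plan format
    else if (!planned) buffered.push(chunk)
  })
  child.stderr.on('data', function (chunk) {
    if (!planned) buffered.push(chunk)
  })
  child.on('exit', function (code) {
    if (!planned && code !== 0) onCompileError(Buffer.concat(buffered), code)
  })
}
```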

Rename `store` to `callback`.

Create a single callback function that does it all. It stores, returns an error-only callback, or acts as a callback.
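
A sketch of how the overloading might read; the triggers for each behavior are guesses, made only to see whether one function can plausibly carry all three duties.

```javascript
// Hypothetical `callback` created per test context.
function createCallback (context) {
  return function callback (name) {
    if (typeof name === 'string') {
      // callback('value') stores the eventual result on the context.
      return function (error, result) {
        if (error) throw error
        context[name] = result
      }
    }
    if (arguments.length === 0) {
      // callback() returns an error-only callback.
      return function (error) { if (error) throw error }
    }
    // Passed directly as a callback: rethrow an error, ignore a result.
    if (name instanceof Error) throw name
  }
}
```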

Display error messages after a failed assertion.

This could be way too verbose for a long running test like the ones in Timezone, but for a handful of failed tests at Travis CI, this is going to be the shortest path to a working build.

If a test is not okay, all output that is not test runner output is printed until the next test runner output.

Print antecedents to an assertion failure or bailout.

When a test fails, we only see what came after the failed test, so if we add debugging output to see what led up to the test, the proof errors display won't show it. This is a problem when trying to figure out what's going on remotely, like at Travis CI.

JavaScript tests.

Support JavaScript tests with optional positional parameters. The problems with converting from CoffeeScript to JavaScript are the lack of a `this` shortcut, so we do not have a shorthand to get to our assertion functions, plus no destructuring assignment, so we cannot simply unpack our context.

To deal with the assertions, I'm going to grab the `$` variable and use that for Proof state. The `$` variable is passed in as the first argument, if we detect it in the function signature. That acts as `this`. In fact, it can simply be a synonym for `this`.

Yes, I said detected in the function signature. Everything after `$` and before either the end of the function signature or a `_` is considered a parameter to unpack.
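
A sketch of the signature inspection, not Proof's parser: pull the parameter names out of `Function.prototype.toString`, and if the first one is `$`, treat everything after it, up to the end of the list or a `_`, as context properties to unpack and pass positionally.

```javascript
// Return the context properties named between `$` and `_` (or the end) in
// the function's parameter list, or null if the test doesn't opt in with `$`.
function parameters (fn) {
  var source = fn.toString()
  var names = source.slice(source.indexOf('(') + 1, source.indexOf(')'))
                    .split(',').map(function (s) { return s.trim() })
                    .filter(function (s) { return s.length })
  if (names[0] !== '$') return null
  var end = names.indexOf('_')
  return names.slice(1, end === -1 ? names.length : end)
}

// parameters(function ($, host, port, _) {}) => [ 'host', 'port' ]
```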

Display failed assertions from progress formatter.

When running at the command line, it is probably enough to know that the test failed. You can rerun it directly. When running from Travis CI, you're going to probably want to know which assertions failed, especially if tests have been passing on your development machine.
