
Capti

Capti is a lightweight end-to-end testing framework for REST APIs. Define your requests and expected response values in an intuitive YAML format, and streamline your endpoint testing.

  - test: Get recipe
    description: "Should be able to get recipe information"
    request:
      method: GET
      url: ${BASE_URL}/recipes/${RECIPE_ID}
    expect:
      status: 2xx
      body:
        id: ${RECIPE_ID}
        name: Guacamole
        ingredients: $exists

Features

  • Define test suites to model the behavior of your users.
  • Write HTTP endpoint tests that make HTTP requests and assert expected responses.
  • Use various matchers in your tests to make pattern-based assertions.
  • Define variables, and extract them from responses, to reuse repeated values and to test authentication and stateful resource flows.
  • Provide setup and teardown scripts for your tests and test suites, enabling CI workflows.

Next Steps

Please visit the documentation to learn more about Capti and how you can use it in your projects.

Planned Development

Capti is under active development and is not production ready. If you want to contribute, feel free to reach out (or just start opening issues and PRs, whatever).

Upcoming Features

  1. More matchers - such as "$key_exists some_key" for objects, "$starts_with some_prefix", "$contains some_value", etc.
  2. Testing endpoints under load, testing endpoint throttling or API limits.
  3. Support for specifying a local .env file for loading variables.
  4. Support for writing more detailed test results to local files, as well as verbose log levels for more information.

Stretch Features

  1. Support for other frameworks?
  2. Coverage reports?
  3. Plugin API for custom matchers?
  4. Whatever you suggest or require for your project.

Contributing

What would you find useful in a tool like this? Feel free to create an issue or just jump right in and fork/clone/code something up.

To run the app, ensure you have Rust installed and a REST API project you can test against (or use the included test-app, a simple Express REST API). Clone the repo locally and run cargo build to create the project binary, located at ./target/debug/capti.

Run this binary in a project containing tests you've written, following the guidance above, and pass your test directory as an argument to the binary.

Note: If the above step is confusing, take a look at the "test" script in the test_app package.json file.


capti's Issues

Test Output

There should be progress output of some kind while the tests are running; currently, information is only conveyed once the tests finish. A verbose command-line argument could additionally be included for more detailed output.

Consider the log and fern crates for this.
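
A minimal sketch of what that setup could look like with fern - the format and level choices here are assumptions, not settled decisions:

use log::LevelFilter;

fn setup_logger(verbose: bool) -> Result<(), fern::InitError> {
    let level = if verbose { LevelFilter::Debug } else { LevelFilter::Info };
    fern::Dispatch::new()
        // Prefix each line with its level, e.g. "[INFO] running suite...".
        .format(|out, message, record| {
            out.finish(format_args!("[{}] {}", record.level(), message))
        })
        .level(level)
        .chain(std::io::stdout())
        .apply()?; // fails if a logger was already set
    Ok(())
}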

Additionally, test failure output needs to be formatted better. Sometimes it provides too much information, and for some reason it wraps to the next line after about 20 characters.

Proposed architecture update (custom matchers)

Warning: 2am can't-sleep idea

  • Create a custom Value variant, akin to serde_json::Value and serde_yaml::Value: MValue (short for match value)
  • Include an additional MValue::Matcher variant alongside the usual ones (Array, Object, String, etc.)
  • Write a custom serde deserialization visitor for MValue to interpret plain values and any matchers
  • Instead of hard-coding a Matcher enum, keep a global map of matcher strings to Matcher trait objects (MatcherMap)
  • Types implementing the Matcher trait must provide a method that takes the proposed MValue and returns true or false

The goal of the above architecture change is ultimately to support externally defined custom matcher plugins. A custom plugin, written in any language, would consist of an executable that Capti can call: initially with a --registration argument to get the information needed to register the matcher in the MatcherMap, and later with the proposed match, expecting a true or false value back (or perhaps a more complex response, for cases where the matching engine should continue evaluating).

Custom matchers could then be written in any language. Wrapper APIs for TypeScript and Rust could be provided initially, allowing the developer to define just two functions; the wrapper would handle converting the results of those functions into the interface Capti needs to use the matcher.
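
A rough sketch of the core types described above - everything here is illustrative, not final API:

use std::collections::HashMap;

// Stand-in for the proposed match value, akin to serde_json::Value but with
// an extra variant for matcher tokens like "$exists".
enum MValue {
    Null,
    Bool(bool),
    Number(f64),
    String(String),
    Array(Vec<MValue>),
    Object(HashMap<String, MValue>),
    Matcher(String),
}

// The single required method: does the proposed value satisfy this matcher?
trait Matcher {
    fn is_match(&self, value: &MValue) -> bool;
}

struct Exists;
impl Matcher for Exists {
    fn is_match(&self, _value: &MValue) -> bool {
        true // any value that is present satisfies $exists
    }
}

// Global map of matcher strings to trait objects; plugins would register here.
type MatcherMap = HashMap<&'static str, Box<dyn Matcher>>;

fn default_matchers() -> MatcherMap {
    let mut map: MatcherMap = HashMap::new();
    map.insert("$exists", Box::new(Exists));
    map
}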

Params not URL Encoded

Just realized that I never implemented URL encoding for the request params provided. This should be handled with the urlencoded crate.
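
As a sketch, the url crate (an assumption here - the crate choice is not settled) can percent-encode params while building the query string:

use url::Url;

fn build_url(base: &str, params: &[(&str, &str)]) -> Result<Url, url::ParseError> {
    let mut url = Url::parse(base)?;
    // query_pairs_mut form-encodes keys and values automatically.
    url.query_pairs_mut().extend_pairs(params);
    Ok(url)
}

fn main() {
    let url = build_url("http://localhost:3000/recipes", &[("name", "chips & salsa")]).unwrap();
    assert_eq!(url.as_str(), "http://localhost:3000/recipes?name=chips+%26+salsa");
}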

Wait until server running option

Currently, for scripts, wait_until can be either seconds or finished. For servers, neither of these is ideal: a server takes a variable amount of time to spin up and never finishes.

A third option, potentially called port_open, could be added. Internally, this would poll the port in question until the server has opened it.

setup:
  before_all:
    - description: "start the app server"
      script: "npm start"
      wait_until: port_open 3000

This will require special deserialization logic, since serde won't be able to differentiate this string from the finished string as untagged enum variants. A sketch of one approach follows.
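
Here the string is parsed manually and could then be hooked into serde via deserialize_with; the names are assumptions:

use std::net::TcpStream;
use std::str::FromStr;
use std::time::Duration;

enum WaitUntil {
    Seconds(u64),
    Finished,
    PortOpen(u16),
}

// Parse manually, since an untagged enum can't tell "finished" apart
// from "port_open 3000".
impl FromStr for WaitUntil {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s == "finished" {
            return Ok(WaitUntil::Finished);
        }
        if let Some(port) = s.strip_prefix("port_open ") {
            return port.trim().parse().map(WaitUntil::PortOpen).map_err(|e| e.to_string());
        }
        s.parse().map(WaitUntil::Seconds).map_err(|_| format!("unrecognized wait_until: {s}"))
    }
}

// Poll until the server has opened the port and accepts a connection.
fn wait_for_port(port: u16) {
    while TcpStream::connect(("127.0.0.1", port)).is_err() {
        std::thread::sleep(Duration::from_millis(250));
    }
}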

Read env variables

Environment variables should be read when referenced in the following format:

tests:
  - test: "get hello"
    request:
      method: GET
      url: "http://localhost:3000/hello"
      headers:
        Authorization: Bearer {API_SECRET}
    expect:
      status: 200
      body:
        message: "Hello, world!"

By default, the system should first check whether the environment variable already exists. Optionally, a .env file could be specified as a top-level option for importing variables into the tests. Under the hood, this would just read the .env file and store the values as temporary local variables associated with the test suite.
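
A sketch of that lookup order, using a bare-bones .env parser (a real implementation might lean on an existing dotenv crate instead):

use std::collections::HashMap;

// Parse simple KEY=VALUE lines, skipping comments and blanks.
fn load_env_file(path: &str) -> std::io::Result<HashMap<String, String>> {
    let mut vars = HashMap::new();
    for line in std::fs::read_to_string(path)?.lines() {
        let line = line.trim();
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        if let Some((key, value)) = line.split_once('=') {
            vars.insert(key.trim().to_string(), value.trim().to_string());
        }
    }
    Ok(vars)
}

// Existing environment variables take precedence over .env entries.
fn resolve(name: &str, file_vars: &HashMap<String, String>) -> Option<String> {
    std::env::var(name).ok().or_else(|| file_vars.get(name).cloned())
}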

Note that environment variables should be usable anywhere text appears in the file: in requests, in descriptions, or even in matchers for expected responses.

Concurrent Setup

One issue that may come up is having multiple suites, ideally running in parallel, that depend on the same setup scripts. For example, if all suites rely on npm start, then each one will try to run the command and all but one will fail.

With port listening, this may not be an issue - since each run of the command can check the port first to see if it's already open. However, for something like starting a test database in a Docker container for example - it could get messy.

Ideas I've had so far:

  1. A centralized config where before_all scripts can be defined for all test runs
  2. A way to detect duplicate scripts across suites and only run them once, maybe keyed by the script/command itself or by some user-defined naming convention (sketched below)
  3. Rely on port checking, as mentioned above
  4. An app config section in each suite that specifies a start script and a port number, reserving before_all scripts for suite-level setup
  5. Figure out some way to execute suites in pseudo-isolated environments, or with port mapping, without required dependencies like Docker
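
Idea 2 could look something like this - deduplicating by the literal command string (a hypothetical approach, and Unix-only as written):

use std::collections::HashSet;
use std::process::Command;

// Run each distinct before_all script only once across all suites.
fn run_setup_scripts(scripts: &[String], already_run: &mut HashSet<String>) {
    for script in scripts {
        // insert returns false if this exact command was already run.
        if already_run.insert(script.clone()) {
            let _ = Command::new("sh").arg("-c").arg(script).status();
        }
    }
}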

Concurrency Options

The level of concurrency within a test suite, or across a collection of suites, should be customizable in some manner.

Sequential testing may be desired in cases where requests may follow a logical pattern - for example:

  1. Authenticate
  2. POST a new resource
  3. GET the new resource that was created
  4. PATCH the resource
  5. GET the resource again and verify the updates
  6. DELETE the resource
  7. GET the resource again to verify it no longer exists

This workflow would not work with concurrency. Conversely, a series of unrelated GET requests to different endpoints can run concurrently, which would speed up testing.

Perhaps a suite-level option, something like an optional "parallel: true". Sequential should be the default for all tests within a suite, since that is the most intuitive and the least likely to cause issues when the option is not specified. At the suite level, however, suites should run in parallel by default, and users should be encouraged to write their suites in isolation. Optionally, a command-line argument could run them one at a time.
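
A sketch of those defaults with scoped threads - Suite, run_test, and the parallel field are illustrative stand-ins:

struct Test;
struct Suite {
    parallel: bool, // suite-level option for tests within the suite
    tests: Vec<Test>,
}

fn run_test(_test: &Test) { /* execute request, check expectations */ }

fn run_suite(suite: &Suite) {
    if suite.parallel {
        std::thread::scope(|s| {
            for test in &suite.tests {
                s.spawn(move || run_test(test));
            }
        });
    } else {
        // Sequential is the default: tests often depend on earlier state.
        suite.tests.iter().for_each(run_test);
    }
}

fn run_all(suites: &[Suite]) {
    // Suites run in parallel by default; write suites in isolation.
    std::thread::scope(|s| {
        for suite in suites {
            s.spawn(move || run_suite(suite));
        }
    });
}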

Static env loaded variables

Currently, env variables are only loaded when the test suite runs. Ideally, they could also be loaded and used within static variable definitions, even redefined under the same name.

Variables as values

Currently variables are defined as strings only.

There's no reason they couldn't be defined as values instead (provided the user doesn't try to expand them in the middle of a string). This could enable entire requests to be captured as variables, eliminating more repeated work.

suite: Recipe Test
variables:
  RECIPE_REQUEST:
    method: GET
    url: http://localhost:3000/recipes

tests:
  - test: Gets a recipe
    request: ${RECIPE_REQUEST}
    expect:
      status: 2xx

Initially, deserialization would interpret the request as a Value::String("${RECIPE_REQUEST}"). Variable population could then take in a Value and return a Value - in this case, taking a Value::String and returning a Value::Object.

There would have to be handling for cases where the variable appears inside a larger string. What should happen, for example, if the user defines some_field: "Request info: ${RECIPE_REQUEST}"? Ideally the app should never exit early, so this needs to be handled - perhaps by serializing the request object as JSON and inserting it into the string.
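
A sketch of that population pass over serde_yaml::Value - whole-string references substitute the full value, and embedded references fall back to spliced-in JSON (the function and its behavior here are assumptions):

use std::collections::HashMap;

fn populate(value: serde_yaml::Value, vars: &HashMap<String, serde_yaml::Value>) -> serde_yaml::Value {
    match value {
        serde_yaml::Value::String(s) => {
            // Whole-string reference: substitute the full value, object or not.
            if let Some(name) = s.strip_prefix("${").and_then(|r| r.strip_suffix('}')) {
                if let Some(v) = vars.get(name) {
                    return v.clone();
                }
            }
            // Embedded reference inside a larger string: splice in JSON text
            // rather than failing, so the run never aborts early.
            let mut out = s;
            for (name, v) in vars {
                let needle = format!("${{{name}}}");
                if out.contains(&needle) {
                    let json = serde_json::to_string(v).unwrap_or_default();
                    out = out.replace(&needle, &json);
                }
            }
            serde_yaml::Value::String(out)
        }
        serde_yaml::Value::Sequence(seq) => {
            serde_yaml::Value::Sequence(seq.into_iter().map(|v| populate(v, vars)).collect())
        }
        serde_yaml::Value::Mapping(map) => serde_yaml::Value::Mapping(
            map.into_iter().map(|(k, v)| (k, populate(v, vars))).collect(),
        ),
        other => other,
    }
}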

Non-test requests / Repeat option

The user may want to define a test that first makes multiple other requests before making an assertion. One example would be checking that the user is properly limited in the number of resources they can create. These could be defined inline in a test under some kind of "before" option, maybe an array. Additionally, a "repeat" option could indicate how many times the request should be made.

Here is an example of what I'm thinking:

  - test: Max recipes allowed
    description: The user should only be allowed to create 10 recipes, any more should return a 400
    before: # an array of actions to perform before making assertions
      - request:
          method: POST
          body: ${RECIPE_BODY}
        continue_if: # a condition that must be met for the test to continue without failing
          status: 2xx
        repeat: 10 # repeat this item 10 times
    request:
      method: POST
      body: ${RECIPE_BODY}
    expect:
      status: 4xx

In this particular case, each action in the before array is executed sequentially and repeated where indicated. The format is identical to a TestDefinition; continue_if is just another wording for expect (and perhaps it should still be called expect). It doesn't produce a "passed test" when it passes, but if it fails, the entire test is marked as failed.

Consideration could be given to defining an "after" array as well, for any cleanup requests (but should those fail the test if they fail?)

Local variable extraction

Alongside the request and expect sections of a test, there should be an extract section as well. This section would work much like the expect section, but any variables used here are populated rather than read.

tests:
  - test: "sign in"
    description: "sign in test user for subsequent tests"
    request:
      method: POST
      url: "http://localhost:3000/signin"
      body:
        username: {TEST_USER_USERNAME} # these are read from existing env variables
        password: {TEST_USER_PASSWORD}
    expect:
      status: 2xx
    extract:
      headers:
        Authorization: Bearer {JWT_TOKEN} # this gets populated instead of read, can be used in subsequent requests

Extraction logic is subject to change - the above would be the most ideal format if it is feasible. Conceptually, a parser could read the text before and after the variable and match the response character by character. However, it could get tricky if the extracted variable contains the same text that follows it: if an extract matcher reads message: {GREETING} How are you? and the extracted GREETING happens to contain the word "How", how can the logic avoid the collision? Maybe two pointers, reading characters forward from the start and backward from the end, then grabbing the remainder as a trimmed slice.
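
A sketch of that prefix/suffix idea: match the literal text before the placeholder from the front and the literal text after it from the back, then take the middle as the variable. Matching the suffix from the end sidesteps the collision.

fn extract<'a>(pattern: &str, actual: &'a str) -> Option<&'a str> {
    let start = pattern.find('{')?;
    let end = pattern.find('}')?;
    let (prefix, suffix) = (&pattern[..start], &pattern[end + 1..]);
    // Two "pointers": consume the prefix from the front, the suffix from the
    // back, and keep the remainder as a trimmed slice.
    let rest = actual.strip_prefix(prefix)?;
    let value = rest.strip_suffix(suffix)?;
    Some(value.trim())
}

fn main() {
    // Even though the extracted value contains "How", the suffix is matched
    // from the end, so there is no collision.
    let v = extract("message: {GREETING} How are you?", "message: How now brown cow How are you?");
    assert_eq!(v, Some("How now brown cow"));
}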

Reporting

There should be a command-line option to output a more detailed report of the tests. These detailed reports can print out the full test name and description, the request and response, any additional logs (which may require setting up layered logging), and the results.

Support other HTTP methods

Currently only GET is supported. It should be trivial to add the other request methods, and then build example suites for each method.
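
A sketch of dispatching on the method string, assuming the HTTP client is reqwest (an assumption - the issue doesn't name the client):

use reqwest::blocking::Client;
use reqwest::Method;

fn send(client: &Client, method: &str, url: &str) -> reqwest::Result<reqwest::blocking::Response> {
    // Method::from_bytes accepts any standard method name (GET, POST, PATCH, ...).
    let method = Method::from_bytes(method.as_bytes()).unwrap_or(Method::GET);
    client.request(method, url).send()
}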
