wvaviator / capti
A lightweight, YAML-based framework for end-to-end REST API endpoint testing.
License: MIT License
Currently variables are defined as strings only.
There's no reason they couldn't be defined as values instead (provided the user doesn't try to expand them in the middle of a string). This could enable entire requests to be captured as variables, eliminating more repeat work.
```yaml
suite: Recipe Test
variables:
  RECIPE_REQUEST:
    method: GET
    url: http://localhost:3000/recipes
tests:
  - test: Gets a recipe
    request: ${RECIPE_REQUEST}
    expect:
      status: 2xx
```
Initially, deserialization would interpret the request as a Value::String("${RECIPE_REQUEST}"). Variable population could then take in a Value and return a Value - in this case taking a Value::String and returning a Value::Object.
There would have to be handling of cases where the variable appears inside a larger string - what should happen, for example, if the user defines `some_field: "Request info: ${RECIPE_REQUEST}"`? Ideally the app should never exit early, so this will need to be handled - perhaps by serializing the request object as JSON and inserting it into the string.
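A rough sketch of what that population step could look like, assuming a serde_json-style Value and a plain map as the variable store (the function name and structure are illustrative, not Capti's actual internals):

```rust
use std::collections::HashMap;
use serde_json::Value;

/// Resolve variable references in a Value tree - a sketch only.
fn populate_variables(value: Value, variables: &HashMap<String, Value>) -> Value {
    match value {
        // A string that is exactly "${NAME}" is replaced by the whole stored value,
        // so an entire request object can be captured in a single variable.
        Value::String(s) => {
            if let Some(name) = s.strip_prefix("${").and_then(|rest| rest.strip_suffix('}')) {
                if let Some(stored) = variables.get(name) {
                    return stored.clone();
                }
            }
            // A reference embedded in a larger string falls back to text substitution,
            // serializing non-string values as JSON rather than failing.
            let mut expanded = s;
            for (name, stored) in variables {
                let pattern = format!("${{{}}}", name);
                if expanded.contains(pattern.as_str()) {
                    let replacement = match stored {
                        Value::String(text) => text.clone(),
                        other => other.to_string(), // objects/arrays serialized as JSON
                    };
                    expanded = expanded.replace(pattern.as_str(), &replacement);
                }
            }
            Value::String(expanded)
        }
        // Recurse into nested structures so references anywhere in the tree resolve.
        Value::Object(map) => Value::Object(
            map.into_iter()
                .map(|(k, v)| (k, populate_variables(v, variables)))
                .collect(),
        ),
        Value::Array(items) => Value::Array(
            items.into_iter().map(|v| populate_variables(v, variables)).collect(),
        ),
        other => other,
    }
}
```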
One issue that may come up is having multiple suites, ideally running in parallel, that depend on the same setup functions. For example, if all suites rely on npm start, then each one will try to run the command and all but one will fail.
With port listening, this may not be an issue, since each run of the command can check the port first to see if it's already open. However, for something like starting a test database in a Docker container, it could get messy.
Ideas I've had so far:
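One of them, building on the port-listening note above, is to check the port before running the setup command at all. A minimal sketch, with hypothetical function names and a hardcoded localhost assumption:

```rust
use std::net::TcpStream;
use std::process::Command;
use std::time::Duration;

/// If something is already listening on the port, assume another suite (or an
/// earlier run) started the server and skip the command. Illustrative only.
fn run_setup_if_needed(script: &str, port: u16) -> std::io::Result<()> {
    let addr = format!("127.0.0.1:{port}");
    let already_running = TcpStream::connect_timeout(
        &addr.parse().expect("valid socket address"),
        Duration::from_millis(250),
    )
    .is_ok();

    if already_running {
        return Ok(());
    }

    // No listener yet - this suite is responsible for starting it.
    Command::new("sh").arg("-c").arg(script).spawn()?;
    Ok(())
}
```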
There should be a command-line option to output a more detailed report of the tests. These detailed reports can print out the full test name and description, the request and response, any additional logs (which may require setting up layered logging), and the results.
Warning: 2am can't-sleep idea
The goal of the above architecture change is to ultimately be able to support externally defined custom matcher extension plugins. In the case of a custom plugin written in any language, it would consist of an executable that Capti can call, initially with a --registration argument to get information needed to register the matcher in the MatcherMap, and later be able to call the executable with the proposed match and expect a true or false value back (or perhaps a more complex response, in the case where the matching engine should continue evaluating).
Custom matchers could then be written in any language - wrapper APIs for TypeScript and Rust can be provided initially that allow the developer to define just two functions, and the wrapper would handle converting the result of those functions to the interface needed by Capti to utilize the matcher.
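A very rough sketch of what invoking such a plugin executable might look like. The --registration call is from the idea above, while the --match argument and the stdout true/false protocol are assumptions made purely for illustration:

```rust
use std::process::Command;

/// Ask the plugin to describe itself (e.g. the matcher name to put in the MatcherMap).
fn register_plugin(executable: &str) -> std::io::Result<String> {
    let output = Command::new(executable).arg("--registration").output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}

/// Pass the proposed match to the plugin and read back true/false on stdout.
fn run_plugin_match(executable: &str, matcher_arg: &str, actual_value: &str) -> std::io::Result<bool> {
    let output = Command::new(executable)
        .arg("--match")
        .arg(matcher_arg)
        .arg(actual_value)
        .output()?;
    Ok(String::from_utf8_lossy(&output.stdout).trim() == "true")
}
```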
Currently, for scripts, wait_until can be either seconds or finished. For servers, neither of these is ideal, because servers can take a variable amount of time to spin up and will never finish.
A third option, potentially called port_open or otherwise, could be specified. Internally, this would poll the port in question until it has been opened by the server.
```yaml
setup:
  before_all:
    - description: "start the app server"
      script: "npm start"
      wait_until: port_open 3000
```
This will require special deserialization logic for serde since it won't be able to differentiate between this string and the 'finished' string as untagged enum variants.
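One way around that could be a hand-written Deserialize impl that reads the raw string and parses it manually rather than relying on untagged enum resolution. This is only a sketch - the variant names, and the exact spelling of the existing seconds form, are assumptions here:

```rust
use serde::{Deserialize, Deserializer};

/// Possible shape for wait_until - illustrative, not Capti's actual type.
#[derive(Debug, PartialEq)]
enum WaitUntil {
    Seconds(u64),
    Finished,
    PortOpen(u16),
}

impl<'de> Deserialize<'de> for WaitUntil {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
    where
        D: Deserializer<'de>,
    {
        // Pull the raw string out first, then inspect it ourselves, since serde
        // cannot usefully tell "finished" and "port_open 3000" apart as untagged variants.
        let raw = String::deserialize(deserializer)?;
        let raw = raw.trim();

        if raw == "finished" {
            return Ok(WaitUntil::Finished);
        }
        if let Some(rest) = raw.strip_prefix("port_open") {
            return match rest.trim().parse::<u16>() {
                Ok(port) => Ok(WaitUntil::PortOpen(port)),
                Err(_) => Err(serde::de::Error::custom("port_open requires a port number")),
            };
        }
        if let Some(rest) = raw.strip_prefix("seconds") {
            return match rest.trim().parse::<u64>() {
                Ok(secs) => Ok(WaitUntil::Seconds(secs)),
                Err(_) => Err(serde::de::Error::custom("seconds requires a number")),
            };
        }
        Err(serde::de::Error::custom(format!(
            "unrecognized wait_until value: {raw}"
        )))
    }
}
```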
The user may want to define a test that involves first making multiple other requests before making an assertion. One example would be to check whether the user is properly limited in the number of resources they can create. I think these should be defined inline in a test as some type of "before" option, maybe an array. Additionally, a "repeat" option could be included to indicate how many times the request should be made.
Here is an example of what I'm thinking:
```yaml
- test: Max recipes allowed
  description: The user should only be allowed to create 10 recipes, any more should return a 400
  before: # an array of actions to perform before making assertions
    - request:
        method: POST
        body: ${RECIPE_BODY}
      continue_if: # a condition that must be met for the test to continue without failing
        status: 2xx
      repeat: 10 # repeat this item 10 times
  request:
    method: POST
    body: ${RECIPE_BODY}
  expect:
    status: 4xx
```
In this particular case, each action in the before array is executed sequentially and repeated where indicated. The format is identical to a TestDefinition - continue_if is just another wording for expect (and perhaps it should still be called expect). It just doesn't result in a "passed test" when it passes, but if it doesn't pass, the entire test is marked as failed.
Consideration could be given to defining an "after" array as well, for any cleanup requests (but should those fail the test if they fail?)
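For reference, execution of the before array could be roughly shaped like the sketch below. The types are illustrative stand-ins, not Capti's actual request/response/matcher structs:

```rust
// Stand-in types for the sketch only.
struct Request;
struct Response { status: u16 }

struct Expectation { status_prefix: char } // e.g. '2' to mean "2xx"

impl Expectation {
    fn matches(&self, response: &Response) -> bool {
        response.status.to_string().starts_with(self.status_prefix)
    }
}

struct BeforeAction {
    request: Request,
    continue_if: Option<Expectation>,
    repeat: Option<u32>,
}

/// Run every before action in order, honoring repeat, and fail the whole test
/// as soon as a continue_if condition is not met.
fn run_before_actions(
    actions: &[BeforeAction],
    send: impl Fn(&Request) -> Response,
) -> Result<(), String> {
    for action in actions {
        let times = action.repeat.unwrap_or(1);
        for attempt in 1..=times {
            let response = send(&action.request);
            if let Some(condition) = &action.continue_if {
                // continue_if behaves like expect, but a failure here marks the
                // whole test as failed instead of producing a separate result.
                if !condition.matches(&response) {
                    return Err(format!(
                        "before action failed its continue_if condition on attempt {attempt}"
                    ));
                }
            }
        }
    }
    Ok(())
}
```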
Currently, env variables are only loaded when the test suite is run. Ideally, they should be able to be loaded and used within static variable definitions, even redefining them with the same name.
The level of concurrency with a test suite or a collection of suites should be customizable in some manner.
Sequential testing may be desired in cases where requests follow a logical pattern - for example, signing in and then using the returned token in subsequent requests. This workflow would not work with concurrency. Conversely, making a series of unrelated GET requests to different endpoints can be done concurrently, and doing so would speed up testing.
Perhaps a suite-level option would work, something like an optional "parallel: true". I do think that sequential should be the default for all tests within a suite, since it is probably the most intuitive and unlikely to cause issues if the option is not specified. At the suite level, however, I think suites should run in parallel by default, and the user should be encouraged to write their test suites in isolation. Optionally, a command-line argument could be supplied to run them one at a time.
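A sketch of how a per-suite parallel flag plus a sequential command-line override might be wired together (Suite and run_suite are placeholders; the real runner may be structured very differently):

```rust
use std::thread;

struct Suite {
    name: String,
    parallel: bool, // suites default to parallel unless overridden
}

fn run_suite(suite: &Suite) {
    println!("running suite: {}", suite.name);
    // ... execute the suite's tests sequentially here ...
}

fn run_all(suites: Vec<Suite>, force_sequential: bool) {
    // A command-line flag could force one-at-a-time execution regardless of
    // each suite's own setting.
    let (parallel, sequential): (Vec<_>, Vec<_>) = if force_sequential {
        (Vec::new(), suites)
    } else {
        suites.into_iter().partition(|s| s.parallel)
    };

    // Parallel suites each get their own thread; sequential ones run in order.
    let handles: Vec<_> = parallel
        .into_iter()
        .map(|suite| thread::spawn(move || run_suite(&suite)))
        .collect();

    for suite in &sequential {
        run_suite(suite);
    }
    for handle in handles {
        handle.join().expect("suite thread panicked");
    }
}
```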
Alongside the request and expect sections of a test, there should be an extract section as well. This section should work much like the expect section, but any variables used here will be populated rather than read.
```yaml
tests:
  - test: "sign in"
    description: "sign in test user for subsequent tests"
    request:
      method: POST
      url: "http://localhost:3000/signin"
      body:
        username: {TEST_USER_USERNAME} # these are read from existing env variables
        password: {TEST_USER_PASSWORD}
    expect:
      status: 2xx
    extract:
      headers:
        Authorization: Bearer {JWT_TOKEN} # this gets populated instead of read, can be used in subsequent requests
```
Extraction logic is subject to change - the above would be the most ideal if it is feasible. Conceptually, a parser could read the text before and after the variable and match the response char by char. However, it could get tricky if the extracted variable contains the same text as the text that comes after it - for example, if an extract matcher reads `message: {GREETING} How are you?` and for some reason the extracted variable GREETING contains the word "How", how can the logic be arranged to avoid the collision? Maybe two pointers, reading chars forwards and backwards, and then grabbing the remainder as a trimmed slice?
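Here is a sketch of that two-pointer idea: match the literal text before the placeholder from the front, match the literal text after it from the back, and take whatever is left in between as the extracted value. It illustrates the approach, not Capti's actual extraction code:

```rust
/// Extract the value standing in for `placeholder` within `pattern` from the
/// actual response text. Returns None if the literal prefix or suffix doesn't match.
fn extract_variable<'a>(pattern: &str, placeholder: &str, actual: &'a str) -> Option<&'a str> {
    let start = pattern.find(placeholder)?;
    let prefix = &pattern[..start];
    let suffix = &pattern[start + placeholder.len()..];

    // Forward pointer: the actual text must begin with the literal prefix.
    let rest = actual.strip_prefix(prefix)?;
    // Backward pointer: it must end with the literal suffix, even if the
    // captured value itself contains words from that suffix.
    let value = rest.strip_suffix(suffix)?;

    Some(value.trim())
}

fn main() {
    let value = extract_variable(
        "message: {GREETING} How are you?",
        "{GREETING}",
        "message: Howdy partner! How are you?",
    );
    assert_eq!(value, Some("Howdy partner!"));
    println!("extracted: {:?}", value);
}
```

Because the suffix is only matched from the end of the response text, the GREETING collision above resolves correctly even though the extracted value contains the word "How".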
Just realized that I never did any URL encoding for request params provided. This should be implemented using the urlencoded crate.
Currently only GET is supported. It should be trivial to add the other request methods, and then build example suites for each method.
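If the methods end up modeled as a small enum deserialized straight from the YAML, it could be as simple as the sketch below (the exact type, and how it maps onto the HTTP client, is an assumption):

```rust
use serde::Deserialize;

/// One possible way to model the supported methods in the YAML definitions.
#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(rename_all = "UPPERCASE")]
enum Method {
    Get,
    Post,
    Put,
    Patch,
    Delete,
    Head,
    Options,
}
```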
Environment variables should be read from the following format:
```yaml
tests:
  - test: "get hello"
    request:
      method: GET
      url: "http://localhost:3000/hello"
      headers:
        Authorization: Bearer {API_SECRET}
    expect:
      status: 200
      body:
        message: "Hello, world!"
```
By default, the system should first check if the environment variable already exists. Optionally, a .env file should be able to be specified as a top-level option for importing variables into the tests. Under the hood, this should just read the .env file and store the values as temporary local variables associated with the test suite.
Note that environment variables should be able to be used anywhere text appears in the file - in requests, in descriptions, or even as matchers for expected responses.
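A sketch of the lookup order described above - real environment variables first, then values from an optional .env file kept as suite-local fallbacks. The manual .env parsing is deliberately naive (a crate such as dotenvy would normally handle it), and the function names are illustrative:

```rust
use std::collections::HashMap;
use std::fs;

/// Parse a .env file into a simple map, ignoring blank lines and comments.
fn load_dotenv(path: &str) -> HashMap<String, String> {
    let mut values = HashMap::new();
    if let Ok(contents) = fs::read_to_string(path) {
        for line in contents.lines() {
            let line = line.trim();
            if line.is_empty() || line.starts_with('#') {
                continue;
            }
            if let Some((key, value)) = line.split_once('=') {
                values.insert(key.trim().to_string(), value.trim().to_string());
            }
        }
    }
    values
}

/// Real environment variables win; .env values act as suite-local fallbacks.
fn resolve_env(name: &str, dotenv: &HashMap<String, String>) -> Option<String> {
    std::env::var(name).ok().or_else(|| dotenv.get(name).cloned())
}
```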
There should be progress output of some kind while the tests are running. Currently, information is only conveyed once the tests finish. Perhaps a verbose command-line argument could also be included for more detailed output.
Consider the log and fern crates for this.
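Wiring those two crates up could look roughly like this, with a hypothetical --verbose flag bumping the level; the format string is just a placeholder:

```rust
/// Set up a basic fern logger to stdout; verbose mode raises the level to Debug.
fn setup_logging(verbose: bool) -> Result<(), log::SetLoggerError> {
    let level = if verbose { log::LevelFilter::Debug } else { log::LevelFilter::Info };

    fern::Dispatch::new()
        .format(|out, message, record| {
            out.finish(format_args!("[{}] {}", record.level(), message))
        })
        .level(level)
        .chain(std::io::stdout())
        .apply()
}
```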
Additionally, test failure output needs to be formatted better. Sometimes it provides too much information, and for some reason it wraps to the next line after about 20 characters.