
nodejs-integration-tests-best-practices's People

Contributors

danielgluskin, divekjohns, goldbergyoni, lirantal, marcobiedermann, mikicho, mingo023, pawda, rluvaton, rubengmurray, tanguyantoine, wajeht, wrumsby


nodejs-integration-tests-best-practices's Issues

Lint our code, including test-specific lint rules

Here is a recommended eslint config:


{
  "plugins": ["@typescript-eslint/eslint-plugin", "jest"],
  "extends": [
    "plugin:security/recommended",
    "plugin:promise/recommended",
    "plugin:@typescript-eslint/eslint-recommended",
    "plugin:@typescript-eslint/recommended",
    "plugin:import/errors",
    "plugin:import/warnings",
    "plugin:import/typescript",
    "prettier",
    "prettier/@typescript-eslint"
  ],
  "root": true,
  "env": {
    "node": true,
    "jest": true
  },
  "rules": {
    "jest/no-disabled-tests": "error",
    "jest/no-focused-tests": "error",
    "jest/no-identical-title": "error",
    "jest/prefer-to-have-length": "warn",
    "jest/valid-title": "error",
    "jest/valid-expect": "error",
    "jest/no-if": "error",
    "jest/require-top-level-describe": "error",
    "jest/no-test-prefixes": "error",
    "jest/prefer-todo": "warn",
    "jest/expect-expect": "error",
    "jest/no-deprecated-functions": "error"
  }
}

Add and test a GET route

Currently the example app shows only POST tests; we should add GET and, overall, ensure we have tests for the basic integration KATA:

For each scenario, one should test at least the following:

  1. API response ✅
  2. The after state - Whether the system now has the right data, usually via a GET request. We don't have this ❗️
  3. External call was made - For example, when an email/SMS should get sent. Do we have this? ❓
  4. Invalid input - Do we have this? ❓
  5. 3rd party failure simulation - If interacting with an external service, ensure we react well to failure. Do we have this? ❓

Handling authorization when testing against API

@jhenaoz @Thormod @mikicho

I picked this topic as my first challenge. Before I code anything, I'd like to hear your thoughts 🔥

The challenge:

  • The API is accessible only to authorized users
  • The auth module is outside the scope of the tests; it might even be a 3rd party service
  • Performing a real login demands adding a real user to the auth system (which might be outside the scope of the test or of the microservice we approach)
  • Logging in in every test might be slow, as it demands one more HTTP request per test
  • As always, we wish to change/mock as little code as possible
  • Login should also work against a remote environment where stubbing is not possible (courtesy of @mikicho)

Solutions (not all of them are good, just stating the options):

  • Stub the auth function/module/middleware and instruct it to authorize the request
  • Intercept the network call to the users/auth service and trick it into responding with a positive result
  • Place a back-door: if config.allowUnAuthorizedRequests, then handle requests without a token
  • If it's a monolith, seed a real user, log in once, then store and reuse the token across all requests
  • Other?

Speed database for local env setup

@jhenaoz @Thormod @goldbergyoni

I want to focus on MySQL and Postgres, which are the most popular database engines these days.
We can add more databases later (MongoDB is a good candidate IMO).

Also, because of WSL2, I think we can recommend Linux practices for Windows as well, WDYT?

The challenge:

  • In-memory database setup (as much as possible)
  • Minimize disk writes.
  • Database specific configuration
  • Docker for Mac performance issues (if any)

Optional Solutions

  • RAM Disk (folder)
  • Disable logging
  • Disable durability

Postgres:

Tasks

  • Define a benchmark scenario
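For Postgres, the RAM-disk and durability ideas above might translate into a docker-compose along these lines. This is a sketch: the image tag, port, and credentials are placeholders, and the fsync/synchronous_commit/full_page_writes switches trade durability for speed, so they must never reach production:

```yaml
version: "3.6"
services:
  database:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=secretpass   # placeholder credentials, tests only
    ports:
      - "54310:5432"
    # RAM folder: the data directory never touches the disk
    tmpfs: /var/lib/postgresql/data
    # Disable durability - acceptable for throwaway test databases only
    command: postgres -c fsync=off -c synchronous_commit=off -c full_page_writes=off
```

A benchmark scenario could then compare the same test suite with and without the tmpfs mount and the durability flags.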

Write example application main tests

  • When adding a valid order, then the response is 200 @goldbergyoni
  • When adding a valid order, then the new order is retrievable @Thormod
  • When adding a valid order, then an email is sent to the user @jhenaoz
  • When the user doesn't exist, then the response is 400

This is about simulating 3rd party behaviour - the users service - using nock @mikicho

Best practices ideas

Example here:
#43

YO, Two strategic topics:

  1. Repo name and concept - Seems like we're going with the repo name 'Integration Tests Best Practices'. Any counter-thoughts? Maybe 'Node.js Tests Best Practices', which makes it even more generic and opens the door for more topics, but is also less focused?

  2. Best practices sketch - I'm including a first draft of the best-practices ideas and categorization below (there will be more, and each will include a longer explanation with a code example). Does this feel like the right taxonomy, and will it end up as interesting content?

The golden principles

Super simple, declarative and short (7 loc) tests

Web server setup (3)

  • Random port - Done
  • Return the full address (maybe)
  • Expose open and close methods - Done
  • Mind remote environment (maybe)
  • Same process - Done

Infrastructure setup (6)

  • Use docker-compose - Done
  • In global-setup - Done
  • Optimize speed - Done
  • Keep up in dev env - Done
  • Use production migration to build-up - Done
    - RAM folder - Not yet

Basic tests - (5)

  • Axios, not supertest (configure a global instance that doesn't throw when the HTTP status !== 200)
  • Generate JWT secret for authentication -
  • Assert for objects including the status
  • Structure Describes by route and stories
  • Keep unit tests good practices (AAA, name)

Tests isolation (8)

  • Intercept any outside calls, isolation
  • Define network interception in beforeEach, cleanup in afterEach
  • Disable all network requests except those which were explicitly allowed (nock.disableNetConnect, plus nock.enableNetConnect with an allowlist)
  • Once you have a default but need to override it in a specific test - create a unique request pattern or remove the global scope
  • Simulate collaborator failures
  • Explicitly define request schema to test the outgoing call (how explicit to be?)
  • Record requests to discover various integration patterns and collaborator to build the tests upon

Dealing with data (7)

  • Each test act on its own records - Avoid coupling
  • Seed only metadata -
  • Clean-up only in the end -
  • Use randomizing factory (e.g. Rosie like) -
  • Check response schema -
  • Test large responses -
    - More than single page in paging (Should write)
    - Insert two in parallel (Should write)

Error handling and metrics

  • Test various error handling flows and outcome
    - Test metrics
  • Test OpenAPI documentation
  • Test the contracts
  • Test for memory leaks
  • Tag pure black-box tests to reuse against remote environment
  • Test DB/ORM migrations

Message queue related testing (8)

  • Flatten the test, get rid of callbacks
  • Thoughtful decision about real vs fake
  • Test a poisoned message
  • Test idempotency
  • Test DLQ
  • E2E (Retry, DLQ, names)
  • Test ACK
  • Test start failure, Zombie
  • Test for metadata, JWT

Workflow (6)

  • Tune a test runner for ongoing testing
  • Start with integration tests
  • Focus on feature coverage
  • Test various error handling
  • The KATA
  • Slow test
  • When unit

Other ideas

  • Parallelize requests

More will come here. Suggest more?

Kicking off 🎉

@jhenaoz @mikicho @Thormod

I'm excited to kick off this project, which I find to be super-important. We all know how tricky perfecting the environment and the integration tests themselves can be. This can be a great source for many and our playground to master this technique. We will receive feedback and validation from the community, and there is no better way to improve.

I suggest the following workflow:

  1. Put a meeting on the calendar for next week. It's always nice to see the people you work with at least once. When might be a good time?
  2. Pick your favorite feature below to nurture: push it at your convenience and share your thoughts and progress in our meeting. This is the OSS world, so obviously progress at your own pace, whether that means just reading 5 minutes about the feature or coding 100 lines. Accomplish the feature in 3 days or in a year; dedicate 5 minutes a month or 20 hours to this lib - whatever works for you!

List of features

  • Tests against API
  • Isolating 3rd party services
  • Stubbing the backend behavior to simulate corner cases
  • Authentication/Login ✨
  • Database setup with speedy RAM folder that supports both Linux, Mac & Windows ✨
  • Local env setup for speedy and convenient tests ✨
  • Documentation based contract tests (validating Swagger correctness) ✨
  • Consumer-driven contract tests (with PACT) ✨
  • Tests with message queues ✨
  • Schema migration and seeding ✨
  • Data seeding ✨
  • Data cleanup ✨
  • Error handling tests ✨
  • Testing for proper logging and metrics ✨
  • Debug configuration and other dev tooling ✨
  • Frameworks examples: Serverless, Nest, Fastify, Koa ✨

A TypeScript recipe

Create a recipe with the same code as the example-app, only with TypeScript, so both the SUT and the test code are typed.

Before picking this issue, it's recommended to have a short tech-planning chat here first to sync up and get up to speed.

Enrich our DB model

The example app DB schema is too simplistic and might not tell the truth when conducting performance benchmarks. It would be great to add a few more fields to the main table and also one more relation.

Before picking this issue, it's recommended to have a short tech-planning chat here first to sync up and get up to speed.

🔥 Challenge: Migration tests recipe

Integration tests are powerful because they allow testing the dark parts of the engine. Migrations are one of the darkest corners.

Two proposed tests:

  • Undo all migrations without failures - This proves that we can undo, but it doesn't promise anything about the logic
  • Ensure that the migration is logically correct - Assume that in v0.1 the field order.hasSupportTickets is optional and for some records it is null. Then in v0.2 the field becomes mandatory and new logic is introduced: if the field is true, an order cannot be deleted. The migration should set all NULL fields to true. A typical test would insert records where this field is set (not null); it isn't even possible to insert an empty hasSupportTickets column, because v0.2 doesn't allow it. But in production, some rows are null because they were inserted previously!

We can test this with:

test('When an older Order record exists with an empty hasSupportTickets and we try to delete it, then deletion fails with HTTP 409', () => {
  // Arrange
  // migrate the DB to v0.1
  // insert an order with an empty hasSupportTickets
  // migrate the DB to v0.2

  // Act
  // try to delete the order

  // Assert
  // ...
});

How to test if a delete request actually deletes the resource

We have a user who has a few comments.
When we delete the user, we should delete all their comments as well.
How can we test that the comments get deleted?
We can't do this through the API, because the user doesn't exist anymore and:

GET /api/users/3/comments

will return a 404 error code because user 3 doesn't exist, not because the user has 0 comments.

Nock enable specific localhost port

Do we want to demonstrate how to enable requests to a specific port on localhost with nock?

nock.disableNetConnect()
nock.enableNetConnect('127.0.0.1:43050')

Maybe this is too nock-specific and has nothing to do with testing

Apply nock best practices

  • Clean on afterEach
  • Override using unique request properties (1st option) or by global scope
  • nock.enableNetConnect on afterAll

Nock setup file

I'm new to nock.
Is it common to create a set of nocks in the global setup and override them, where needed, in individual tests?
For example, for URLs that get called in a lot of requests (authorization, permissions, SQS...)

Reuse tests against a remote environment

Some tests are grey-box tests (e.g. they stub some code object) and can only be executed against the local env. Others are pure black-box: they approach the API and don't assume the api-under-test is within the same process - these tests can be executed against a remote environment like staging. To achieve this, one should tag those tests and then grep for them during execution

Deferring our weekendthon a bit

@jhenaoz @mikicho @Thormod

I have a suggestion to defer the weekendthon by 3 weeks; here's why:

  • I have a work vacation in the first week of January; I can then spend 1-2 full days bringing this really close to release, and then in our weekendthon we can finalize a version in hours
  • A new developer, Daniel @DanielGluskin, is joining our forces 💪, welcome Daniel! It will probably take him 1-2 weeks to get up to speed

I would propose Friday, January 7th or 14th, plus Sunday the 9th or 16th, each meeting 2 hours - gonna be fun and insightful.

WDYT?

Create MQ examples

My 2 cents - a stub has better ROI than a fake (live MQ); Sebas makes the call

Recipe idea: Stateful factory as a good pattern

It's too easy for integration tests to step on each other's toes by adding identical records where there is a unique constraint. For example, test 11 tries to add the user {name: 'messi'}, but test 47 also adds the same user. One of them will fail because 'name' is unique.

Solution 1, what I've used until now - each test adds a unique suffix:

const userToAdd = { name: `messi-${Math.random()}` };

Solution 2, use a stateful factory - the test just calls a factory that manages the state and always provides fresh records:
dataFactory.getUser('messi') // returns 'messi-55', or any unique suffix

A better approach might be to use some factory lib like rosie:
https://www.npmjs.com/package/rosie

Which one is better?

[Based on comments from @mikicho]

A recipe with Mocha

Clone the example app, only use Mocha instead of Jest

Before picking this issue, it's recommended to have a short tech planning chat here and only then upon sync to get up to speed.

Isolate 3rd party services using nock

  • Show how, by default, tests don't approach the external service
  • Show how a specific test overrides the default and SIMULATES some scenario
  • Show how to revert to the default after each test, so a test won't leave a dirty state

📰 October - Catch-up

@jhenaoz @Thormod @mikicho

Dear collaborator, This is just a catch-up summary to help busy people follow and chime-in:

  • We have a tasks list for everyone, see the issues
  • @mikicho created amazing performance research on the impact of a RAM folder. On Linux and Windows we see a 3x improvement; on Mac it's slower, now investigating why
  • @goldbergyoni created the first recipe under 'various-receipes/authentication'. I suggest you take a 5-minute look to see what a recipe can look like and how it approaches the main example instead of duplicating the API
  • How to progress - We can just work async, each at its own pace. Maybe having some weekend coding party, 2-3 hours, can bring a great boost? ❓
  • Small tasks that you might pick:
  • Create a simple CI for this repo
  • Create a Swagger for this API. Later on, or even now, we can create a recipe that tests our Swagger - a kind of minimal contract test. See jest-openapi, it's really cool and a nice drill
  • Create a recipe for memory leakage, see leakage
  • Seed two large tables, like a countries list. This will make our examples more realistic, as real-world projects sometimes have to seed large data
  • Clean up data on global afterAll - There are various approaches for this, some are faster than others (e.g. purge vs drop schema vs cascading delete)

Test error handling recipe

I'm working on the error handling tests recipe, what do you think this should cover? My thoughts below

We assume that the app has its own CustomError, which can tell whether the error is catastrophic (the process should exit), and a dedicated ErrorHandler object

Core Flows

  • When a typical non-catastrophic error is thrown inside an API request, then it gets logged and a metric is fired
  • When a typical catastrophic error is thrown inside an API request, then it is logged, a metric is fired, and the process exits
  • When a typical non-catastrophic error is thrown on startup, then it is logged and a metric is fired
  • When an invalid request arrives (400), then it is logged and a metric is fired

Different Error objects

  • When a CustomError is thrown, then it is handled (log + metric)
  • When a JS Error object is thrown, then it is handled (log + metric)
  • When a string is thrown, then it is handled (log + metric)
  • When null is thrown, then it is handled (log + metric)

Different throwing code location

  • When an error is thrown during an API request, then it is handled (log + metric)
  • When an error is thrown during message queue processing, then it is handled (log + metric)
  • When an error is thrown during warmup, then it is handled (log + metric)
  • When an error is thrown by a middleware, then it is handled (log + metric)
  • When an error is thrown by a domain service, then it is handled (log + metric)
  • When an error is thrown by an async timer, then it is handled (log + metric)
  • When an error is thrown by a global event emitter (e.g. the DB connection), then it is handled (log + metric)
  • When there is an uncaught exception, then it is handled (log + metric)
  • When there is an unhandled rejection, then it is handled (log + metric)

CI that is based on GitHub actions

Configure GitHub Actions for this repo so that anytime we PR/push, all the tests will run. Since most of the tests include a DB, we should take care to configure the CI with docker-compose.

Before picking this issue, it's recommended to have a short tech-planning chat here first to sync up and get up to speed.

Proper cleaning with before/after hooks

Hey,

Some thoughts regarding cleaning up after nock (and in general),

some tests have:

nock.disableNetConnect();
nock.enableNetConnect('127.0.0.1');

inside the beforeAll hook, but proper cleanup is never done:
nock.enableNetConnect();

We also use nock(...).persist() on a few occasions and never clean it up, so the interceptors persist between different tests. A good nock cleanup would be:

nock.cleanAll();
nock.enableNetConnect();

Maybe add a new rule such as: “Each test should clean up after itself”? (the same goes for sinon and the rest). If we use nock inside the beforeAll hook, we need to clean up inside afterAll; the same goes for beforeEach and afterEach.

Therefore, we shouldn’t have calls like this:

beforeEach(() => {
    nock.cleanAll();
});

Because we assume that if nock was used in another test, it was already cleaned up by that test.

Another small thing: we use async (done) inside most of the before/after hooks - isn't it redundant to use both async and done at the same time?

Rule: test independency

Golden rule - A test visitor should always find the failure reason within the test itself, or at worst in the nearest hook.

Practically:

  • If the intercepted request affects the test result (e.g. the user service returns that 404) - it must be part of the test!
  • If the intercepted request is needed for all the tests in the file - put it in beforeEach
  • If there are many intercepted requests and the definition becomes verbose - extract it to a helper file that is called from the test file's hooks

Makes sense?

Originally posted by @goldbergyoni in #19 (comment)

More isolation scenarios

We could show even better how to simulate the real-world chaos in a lab by showing the following tests:

// ❌ Anti-Pattern: We didn't test the scenario where the mailer does not reply (timeout)
// ❌ Anti-Pattern: We didn't test the scenario where the mailer replies slowly (delay)
// ❌ Anti-Pattern: We didn't test the scenario of an occasional one-time response failure, which can be mitigated with a retry

Optimize DB configuration

This doesn't include the entire DB interaction, rather just the docker-compose config of the DB, tuned to be as performant as possible. If we include benchmark graphs - it will be amazing

We already have one for PG; the next two that are needed are Mongo and MySQL

There's no need to change code, just share an example docker-compose with optimizations for testing

See PG recipe here:
#86

Discussion: environment setting

As we arrange some tests with:
process.env.SEND_MAILS = "true";

We need to clean this value up at the end, but we cannot simply set it to “false”, as it might originally have been “true”.

Instead we might restore the environment to its initial state after each run (note the copies - storing process.env by reference would mean our saved copy mutates along with it):

beforeAll(() => { defaultEnv = { ...process.env }; });
afterEach(() => { process.env = { ...defaultEnv }; });

On the other hand, let’s assume my local env.SEND_MAILS is set to “false”. When I block all external calls in a test, it will pass on my machine. But if the same test runs on another machine where env.SEND_MAILS is set to “true”, it will fail.

Should each test declare a SEND_MAILS value in its setup? That stands in contrast to the previous point.

What should be considered good practice here? Maybe tests should be aware of whether they test a predefined behaviour, in which case we set SEND_MAILS at the beginning. Others should test a concrete deployment's behaviour, in which case we do not change the environment - and if we do, we use the first practice and restore the default afterwards.

Your opinion?

Stub inside beforeAll

Here we use sinon.stub in beforeAll. This way we provide only one stub for the whole suite, and it is never reset between tests.

Should we always stub inside beforeEach or the Arrange phase, but not in beforeAll?

Release plan

@jhenaoz @Thormod @mikicho

Trying to focus on releasing the great material that we have here. How about the following simple plan:

  1. From now until Dec 18th - At your convenience, pick issues or any topics you'd like to become familiar with
  2. Weekendthon, Dec 18th-20th - Over this weekend we can have 2 sessions of 2 hours each to finalize stuff. For example, one session on Friday morning EST (afternoon in Israel) and one on Sunday morning (afternoon in Israel)

This is just an optional idea, don't feel obliged to opt in. Just let me know whether this works for you.

  3. Release 🥳

Arrange should be through API as well?

@goldbergyoni @Thormod @jhenaoz

We talked before about needing to use the API as much as possible.
For example, we can do:

it('should get a user by id', async () => {
  // Arrange
  const newUser = await httpClient.post('/users', userDetails);

  // Act
  const user = await httpClient.get(`/users/${newUser.id}`);

  // Assert
  ...
});

My concern with this approach is that we are implicitly testing the POST /users endpoint; if it breaks, plenty of other tests will fail as well and the failure reason won't be clear.
A possible solution (not sure I'm comfortable with it) is to talk to the DB directly in the Arrange step.

WDYT?

sinon.sandbox is obsolete, we can use sinon as-is

Our tests currently use sinon.createSandbox to isolate test doubles between tests. However, since sinon v5 the default sinon instance is itself a sandbox, so we can use it as-is and remove the createSandbox usage.

Before picking this issue, it's recommended to have a short tech-planning chat here first to sync up and get up to speed.
