
Test Automation Strategy

Sergii Diachenko edited this page Oct 3, 2019 · 4 revisions


This page describes our approach to test automation. The main questions we aim to answer here:

  • Which tests to write?
  • Which tests not to write?
  • How much testing is enough?

The overall approach is inspired by the Practical Test Pyramid article.

The main driver for investing heavily in test automation is that we don't have dedicated QA/QC resources to perform manual testing.

System overview

Note: Refer to Solution Architecture for a more in-depth understanding of the different parts.

Test automation should support all Deployment Configurations we have.

Components which require testing:

  1. Domain logic
  2. Persistence interaction: IRepository and IQuery implementations
  3. Web API layer
  4. Angular SPA:
    1. UI/Presentation logic
    2. Backend interaction
  5. 3rd party systems integration: Google OAuth, reCaptcha, SendGrid, Auth0, etc.

Unit tests

Principles

  • Cover as many cases as possible. This gives faster feedback.

  • Do not break encapsulation. Do not test internal structures and services that are not directly exposed through use cases (commands and queries).

Implementation

We use unit tests to cover core business logic with all possible scenarios, in isolation from external systems, to shorten the feedback loop.

We fully cover all areas in Domain logic and Angular components.

A TDD approach is highly advised here.

Tests are written in the form of executable specifications to serve as a knowledge base and traceability matrix.

This applies to the backend and to any client being developed.
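As a minimal sketch of a unit test written as an executable specification, consider the following. The domain rule, names, and shapes here are hypothetical illustrations, not the real model; the idea is that pure domain logic can be covered exhaustively and read scenario-by-scenario:

```typescript
// Hypothetical domain rule: a registration is accepted only while seats remain.
// The Conference shape and canRegister function are illustrative assumptions.
interface Conference {
  capacity: number;
  registered: number;
}

// Pure domain logic: no external systems, so every case is cheap to cover.
function canRegister(c: Conference): boolean {
  return c.registered < c.capacity;
}

// Specification-style runner: one named scenario per assertion.
function check(name: string, actual: boolean, expected: boolean): void {
  if (actual !== expected) {
    throw new Error(`FAIL: ${name}`);
  }
  console.log(`PASS: ${name}`);
}

check("accepts while seats remain", canRegister({ capacity: 2, registered: 1 }), true);
check("rejects when full", canRegister({ capacity: 2, registered: 2 }), false);
check("rejects when over capacity", canRegister({ capacity: 2, registered: 3 }), false);
```

In a real suite the scenarios would live in a test framework (or in feature files, as described below), but the structure is the same: each case names a behavior and asserts one outcome.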

Integration tests

Principles

  • Test use cases that are not covered by unit tests. Some overlap is expected, though.

  • Consider the running time and performance of tests.

Implementation

The main focus of integration tests is to verify Persistence interaction.

Most of the use cases repeat those covered by unit tests. The only difference is that unit tests may have more test cases for the same feature, while an integration test covers only a few scenarios.

Additionally, we have unit tests written in specification format, so the potential for reuse is high. We just need a way to set up a real DB instead of mocking interfaces; everything else (feature files and step definitions) remains mostly the same.

To do so, we can leverage the tag feature and run tests selectively. The next step is to detect which tag is running and create the appropriate fixture. Unfortunately, there is no reliable way to detect the test category being run.

Some research on this issue led us to using app configuration (files + environment variables) to tell a test which fixture to use. See the setup instructions.
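The configuration-driven fixture selection can be sketched as follows. The `TEST_FIXTURE` variable name and the repository shapes are assumptions for illustration, not the project's actual configuration keys:

```typescript
// Hypothetical repository abstraction shared by both fixtures.
interface UserRepository {
  add(name: string): void;
  count(): number;
}

// In-memory fake used by specification-style unit tests.
class InMemoryUserRepository implements UserRepository {
  private users: string[] = [];
  add(name: string): void {
    this.users.push(name);
  }
  count(): number {
    return this.users.length;
  }
}

// Stand-in for a repository backed by a real database; the actual
// implementation would talk to the DB instead of reusing the fake.
class SqlUserRepository extends InMemoryUserRepository {}

// Fixture factory: app configuration (environment variables here)
// decides which implementation the running test suite gets.
function createFixture(env: Record<string, string | undefined>): UserRepository {
  return env["TEST_FIXTURE"] === "database"
    ? new SqlUserRepository()
    : new InMemoryUserRepository();
}

// Step definitions ask the factory for a repository and stay unchanged
// regardless of which fixture the configuration selected.
const repository = createFixture({ TEST_FIXTURE: "in-memory" });
repository.add("alice");
console.log(repository.count()); // → 1
```

This keeps feature files and step definitions identical across both runs; only the factory's answer changes.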

API Tests

Principles

  • Focus tests on the response structure.

  • Make assertions that tolerate non-breaking changes, so fewer tests need fixing when a new feature is introduced.

  • Tests cover controller logic.

Implementation

We rely on the Karate DSL testing framework.
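To illustrate the principles above, a Karate scenario might look like the following. The endpoint, the response fields, and the `baseUrl` variable are assumptions for illustration, not the real API; the point is the `contains` match, which ignores extra fields so additive, non-breaking changes don't fail the test:

```gherkin
Feature: user resource (hypothetical endpoint for illustration)

  Background:
    * url baseUrl

  Scenario: fetching a user returns the expected structure
    Given path 'users', '42'
    When method get
    Then status 200
    # 'contains' with fuzzy markers asserts structure and types,
    # while tolerating new fields added to the response later
    And match response contains { id: '#number', email: '#string' }
```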

End-to-end tests

To decouple development of the API and the Web UI, we will employ consumer-driven contract testing. This approach allows the API and the UI to change independently.

Principles

  • Test a few scenarios that ensure the contract is not broken, e.g. response structure and response codes.

  • These are not integration tests.

API end-to-end

The consumer (Angular only at this moment) will provide contracts to test, and the API CI will run those tests as part of its test suite.
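The consumer-driven contract idea can be sketched conceptually as follows. This is not a specific tool's API; the contract shape and field names are assumptions. The consumer publishes the fields it relies on, and the provider's CI verifies its actual responses satisfy that shape, with extra fields allowed so the API can evolve without breaking the consumer:

```typescript
// Field types the consumer can require of the provider's response.
type FieldType = "number" | "string" | "boolean";

// Contract the Angular consumer would publish (fields are illustrative).
const userContract: Record<string, FieldType> = {
  id: "number",
  email: "string",
};

// Provider-side verification: every contracted field must exist with
// the right type; fields outside the contract are ignored, so additive
// changes to the API remain non-breaking.
function satisfiesContract(
  response: Record<string, unknown>,
  contract: Record<string, FieldType>
): boolean {
  return Object.entries(contract).every(
    ([field, type]) => typeof response[field] === type
  );
}

// An extra 'nickname' field does not break the contract...
console.log(satisfiesContract({ id: 1, email: "a@example.com", nickname: "a" }, userContract)); // → true
// ...but a missing or mistyped contracted field does.
console.log(satisfiesContract({ id: "1", email: "a@example.com" }, userContract)); // → false
```

In practice a contract-testing tool would generate and verify these expectations from the consumer's recorded interactions, but the pass/fail rule is the same.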

Angular end-to-end

Angular will have e2e testing configured against a mocked backend, which will serve as the basis for contract tests. CI will run those tests.
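The mocked-backend setup can be sketched as follows. The route and payload are assumptions, not the real API; the point is that the e2e run swaps the real HTTP layer for canned responses, and those same canned responses double as the fixtures the contract tests verify against the real API:

```typescript
// A mock handler returns a canned status and body for one route.
type Handler = () => { status: number; body: unknown };

// Canned responses keyed by "METHOD path" (route is illustrative).
const mockBackend: Map<string, Handler> = new Map([
  [
    "GET /api/users/42",
    () => ({ status: 200, body: { id: 42, email: "a@example.com" } }),
  ],
]);

// Stand-in for the HTTP layer during e2e runs: known routes get their
// canned response, unknown routes get a 404 so missing mocks surface fast.
function mockFetch(method: string, path: string): { status: number; body: unknown } {
  const handler = mockBackend.get(`${method} ${path}`);
  return handler ? handler() : { status: 404, body: null };
}

console.log(mockFetch("GET", "/api/users/42").status); // → 200
console.log(mockFetch("GET", "/api/unknown").status); // → 404
```

Because the UI's e2e suite only ever sees these canned shapes, any shape it depends on is, by construction, part of the contract the API CI must uphold.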

Additional notes

We intentionally do not test 3rd-party integrations at the moment, because we have only a few of them and we haven't yet suffered from the lack of test automation in this area. Note that some integrations are exercised by API tests, though.

Performance tests

At this moment, performance/load/stress testing is not a priority for us, as we don't have high demand.