Test Automation Strategy
This page describes our approach to test automation. The main questions we are trying to address here:
- What tests to write?
- What tests not to write?
- How much is enough?
The overall approach is inspired by the Practical Test Pyramid article.
The main driver for a big investment of time in test automation is that we don't have dedicated QA/QC resources to perform manual testing.
Note: Refer to Solution Architecture for a more in-depth understanding of the different parts.
Test automation should support all Deployment Configurations we have.
Components which require testing:
- Domain logic
- Persistence interaction: IRepository and IQuery implementations
- Web API layer
- Angular SPA:
  - UI/Presentation logic
  - Backend interaction
- 3rd party systems integration: Google OAuth, reCaptcha, SendGrid, Auth0, etc.
- Cover as many cases as possible. This gives faster feedback.
- Do not break encapsulation. Do not test internal structures and services that are not directly exposed to use cases (commands and queries).
We use unit tests to cover core business logic with all possible scenarios, in isolation from external systems, to shorten the feedback loop.
We fully cover all areas in Domain logic and Angular components.
The TDD approach is highly advised here.
Tests are written in the form of executable specifications to serve as a knowledge base and traceability matrix.
This applies to the backend and any client being developed.
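As an illustration, below is a minimal sketch of a specification-style unit test for presentation logic in an Angular component. The component class and its fields are hypothetical, not the project's actual code; the real specs cover the project's Domain logic and components.

```typescript
// Hypothetical component class (not from the project) tested directly,
// in isolation from the backend, with a behaviour-focused description.
class ActiveItemsComponent {
  items: { name: string; isArchived: boolean }[] = [];

  // Presentation logic under test: only non-archived items are shown.
  get visibleItems(): { name: string; isArchived: boolean }[] {
    return this.items.filter(item => !item.isArchived);
  }
}

describe('Active items list', () => {
  it('hides archived items', () => {
    const component = new ActiveItemsComponent();
    component.items = [
      { name: 'Current report', isArchived: false },
      { name: 'Old report', isArchived: true },
    ];

    expect(component.visibleItems.map(i => i.name)).toEqual(['Current report']);
  });
});
```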
- Test use cases which are not covered by unit tests. Some overlap is expected, though.
- Consider the time and performance of tests.
The main focus of integration tests is to test Persistence interaction.
Most of the use cases repeat those used in unit tests. The only difference is that unit tests might have more test cases for the same feature, while an integration test covers only a few scenarios.
Additionally, we have unit tests written in specification format. The potential for reuse is pretty high: we just need a way to set up a real DB instead of mocking interfaces. Everything else (feature files and step definitions) remains mostly the same.
To do so we can leverage the tag feature and run tests selectively. The next step is to detect which tag is running and create the appropriate fixture. Unfortunately, there is no way to reliably detect the test category being run.
Some research on this issue led to using app configuration (files + environment variables) to tell the test which fixture to use. See setup instructions.
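The backend uses .NET and its own configuration mechanism; the sketch below only illustrates the general idea in TypeScript with hypothetical names: the step definitions ask a factory for a fixture, and configuration (an environment variable here) decides whether they get an in-memory fake or a real database.

```typescript
// Hypothetical repository abstraction used by the step definitions.
interface Repository<T> {
  save(entity: T): Promise<void>;
}

// Fast in-memory fake used by unit-level runs of the specifications.
class InMemoryRepository<T> implements Repository<T> {
  readonly saved: T[] = [];
  async save(entity: T): Promise<void> {
    this.saved.push(entity);
  }
}

// Fixture backed by a real database, used by integration-level runs.
class DatabaseRepository<T> implements Repository<T> {
  constructor(private readonly connectionString: string) {}
  async save(entity: T): Promise<void> {
    // A real persistence call would go here.
  }
}

// Configuration decides which fixture the shared feature files
// and step definitions run against.
export function createRepositoryFixture<T>(): Repository<T> {
  return process.env.TEST_CATEGORY === 'integration'
    ? new DatabaseRepository<T>(process.env.DB_CONNECTION ?? 'localhost')
    : new InMemoryRepository<T>();
}
```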
- Focus tests on response structure.
- Make assertions that allow non-breaking changes, so that fewer tests have to be fixed when a new feature is introduced (illustrated below).
- Tests cover controller logic.
We rely on the Karate DSL testing framework.
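The actual API tests are written in Karate DSL; the TypeScript/Jasmine sketch below (endpoint, helper, and field names are assumptions) only illustrates the assertion style from the list above: assert on the response structure, and only on the fields the test relies on, so that adding new fields to a response is a non-breaking change.

```typescript
// Hypothetical helper calling an assumed endpoint; for illustration only.
async function getUser(id: string): Promise<{ status: number; body: unknown }> {
  const response = await fetch(`/api/users/${id}`);
  return { status: response.status, body: await response.json() };
}

describe('GET /api/users/{id}', () => {
  it('returns the agreed response structure', async () => {
    const { status, body } = await getUser('42');

    expect(status).toBe(200);
    // Assert only the fields this test cares about; extra fields added
    // by a new feature do not break the test.
    expect(body).toEqual(
      jasmine.objectContaining({
        id: jasmine.any(String),
        email: jasmine.any(String),
      })
    );
  });
});
```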
To separate development of the API and the Web UI we will employ Consumer-Driven Contract testing. This approach allows changing the API and the UI independently.
- Test a few scenarios which ensure the contract is not broken, e.g. response structure and response codes.
- It is not an integration test.
The consumer (Angular only at this moment) will provide contracts to test, and the API CI will run those tests as part of the test suite.
Angular will have e2e testing configured against a mocked backend, which will serve as the basis for contract tests. CI will run those tests.
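Below is a minimal sketch of what a consumer-side check could look like, using Angular's standard HttpClientTestingModule. The endpoint and field names are assumptions, and the actual contract-test tooling may differ; the point is that the mocked response encodes the contract the API must keep honouring.

```typescript
import { TestBed } from '@angular/core/testing';
import { HttpClient } from '@angular/common/http';
import {
  HttpClientTestingModule,
  HttpTestingController,
} from '@angular/common/http/testing';

describe('Users API contract (consumer side)', () => {
  let http: HttpClient;
  let backend: HttpTestingController;

  beforeEach(() => {
    TestBed.configureTestingModule({ imports: [HttpClientTestingModule] });
    http = TestBed.inject(HttpClient);
    backend = TestBed.inject(HttpTestingController);
  });

  afterEach(() => backend.verify());

  it('expects GET /api/users to return objects with id and email', () => {
    let users: { id: string; email: string }[] | undefined;
    http
      .get<{ id: string; email: string }[]>('/api/users')
      .subscribe(result => (users = result));

    // The mocked response below is the contract the backend must honour.
    const request = backend.expectOne('/api/users');
    expect(request.request.method).toBe('GET');
    request.flush([{ id: '1', email: 'user@example.com' }]);

    expect(users).toEqual([{ id: '1', email: 'user@example.com' }]);
  });
});
```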
We intentionally do not test 3rd party integrations at the moment because we have only a few of them and we haven't experienced a lack of test automation in this area. Note that some integrations are covered by API tests, though.
At this moment, Performance/Load/Stress testing is not a priority for us, as we don't have high demand.