In prior articles (1, 2, 3, 4), we’ve implemented isolated tests which offer us precise and reliable feedback – and are more or less fast depending on whether we’re running unit tests or component tests which need to load a Spring context. But these tests have their limits, precisely because they’re isolated. In this article we’ll deal with tests that are even higher in the pyramid: integration and end-to-end tests.
This article originally appeared on our French Language Blog.
What to test?
Despite the tests we’ve put in place, there are still scenarios that can slip through the cracks. Indeed, so far we have isolated all our tests, but what will happen when we connect everything together? Are we sure that our journey-booking component correctly calls the connection-lookup? Is the database properly configured and accessible?
The objective of integration tests is to verify that the “wiring” is done correctly, and that our component communicates with its dependencies, no more and no less. We won’t test different business cases (e.g. input parameters, behaviours, etc.), because this has already been done in the unit tests.
The following diagram summarizes the integration tests we will implement:
At this point, the ease of writing tests is inversely proportional to the complexity of their execution. We’ll remain in the JUnit / Spring environment, and the objective is to validate integration, not the business logic. We could almost just call the method without asserting anything other than that it successfully called the external component.
Without any specific configuration, the Repository test allows us to validate the database connection which we configured in application.yml. The test is therefore very simple (gitlab):
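As a sketch of what such a test can look like (the `JourneyRepository` bean and the SQL script names are assumptions made for illustration; the real code lives in the gitlab project):

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.jdbc.Sql;

@SpringBootTest
@Sql("/insert-journeys.sql") // test data setup, maintained alongside the data model
@Sql(scripts = "/clean-journeys.sql",
     executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD) // cleanup so the test is replayable
class JourneyRepositoryIntegrationTest {

    @Autowired
    private JourneyRepository journeyRepository; // hypothetical repository name

    @Test
    void should_read_journeys_from_the_database() {
        // We only validate the wiring to the database configured in application.yml,
        // not business rules: any non-empty result proves the connection works.
        assertThat(journeyRepository.findAll()).isNotEmpty();
    }
}
```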
The real complexity lies in the data insertion scripts, which have to be maintained as the data model evolves and be kept as fast as possible. Here they are implemented by way of the @Sql annotation. By the same token, the cleanup script is critical to the ability to replay the test as many times as necessary and to avoid bleeding state into other tests, which would then fail and require investigation time.
Testing the Lookup service connection is even more straightforward (gitlab link):
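A minimal sketch of this connection test, assuming a hypothetical `ConnectionLookupClient` bean wrapping the HTTP call (the real client and method names are in the gitlab project):

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class ConnectionLookupClientIntegrationTest {

    @Autowired
    private ConnectionLookupClient lookupClient; // hypothetical client bean

    @Test
    void should_reach_the_lookup_service() {
        // One real call is enough: we assert that the service answered, nothing more.
        // Business cases were already covered by unit and component tests.
        assertThat(lookupClient.findConnections("Paris", "Lyon")).isNotNull();
    }
}
```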
What about running the tests?
The tests are easy to write, but we need an entire environment to run them. And with more microservices and links between them comes more complexity in standing up an environment.
Docker can be our friend when addressing this issue, but we won’t go into the details here. Our code on gitlab contains the necessary Maven configuration, Dockerfiles, and docker-compose.yml to start a PostgreSQL database and the two microservices. There’s an important point to keep in mind: integration tests are not executed by the standard Maven build cycle; instead, they live in a separate project. We’ll need to set up a build pipeline (on GitLab CI, Jenkins, or some other CI tool) to start the services before we can run these tests. See the sample project’s readme for more details.
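For illustration only, a docker-compose sketch of such an environment; the service names, images, and ports here are assumptions, and the real file is in the gitlab repository:

```yaml
version: "3"
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_DB: journeys
  connection-lookup:
    build: ./connection-lookup
    depends_on: [db]
  journey-booking:
    build: ./journey-booking
    depends_on: [connection-lookup]
    ports:
      - "8080:8080"
```

The pipeline brings this stack up, runs the integration test project against it, and tears it down afterwards.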
What to test?
We finally reach the top of the pyramid, with the end-to-end tests mentioned earlier. As their name suggests, the purpose of these tests is to validate the entire application chain: the integration of all components. In the diagram below, we see that we’ll test our two services as well as the database and the connection to the transport Open API:
Like an integration test, the objective of an end-to-end test is to validate the “wiring”, not the business rules.
End-to-end testing usually involves GUI testing when it comes to web or mobile applications, or testing programmatic interfaces such as REST APIs, as in our case.
Our component’s entry point helps us determine how to test it. When testing a web interface, we’ll likely use Selenium or something equivalent (e.g. Protractor for Angular). When testing an API, we recommend the rest-assured framework, which is what we’ll use for our example (gitlab link):
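A sketch of such a test; the `/journeys` endpoint, the request payload, and the response fields are assumptions for illustration, and the real test is in the gitlab project:

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

import io.restassured.http.ContentType;
import org.junit.jupiter.api.Test;

class BookJourneyEndToEndTest {

    @Test
    void should_book_a_journey_through_the_whole_stack() {
        given()
            .baseUri("http://localhost:8080") // URL of the deployed service (assumption)
            .contentType(ContentType.JSON)
            .body("{\"origin\": \"Paris\", \"destination\": \"Lyon\"}")
        .when()
            .post("/journeys")
        .then()
            .statusCode(200)
            .body("origin", equalTo("Paris")); // JSON path check on the response body
    }
}
```

One happy-path request through the real stack is enough here: the business rules were already validated lower down the pyramid.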
After configuring the API’s URL, rest-assured lets us use a BDD syntax to request the different endpoints. It’s thus possible to configure the request headers, parameters, body, etc. using given, to execute the request with when, and finally to check the response’s return code, headers, body, etc. via then. JSON path expressions allow us to check the contents of a JSON response.
As with integration testing, the runtime environment must be in place before these tests can be run. We therefore put end-to-end testing alongside integration testing in a dedicated project. These are then executed at the same time by the build pipeline after spinning up an environment or deploying the services on existing servers.
The pyramid as initially described didn’t include other types of tests. But we’d be remiss not to mention a few others that are important to implement and automate as much as possible.
We haven’t implemented any application security in our example, but know that Spring and the other libraries used provide the necessary building blocks (authentication and permission handling), which can be exercised by the component, integration and end-to-end tests we’ve developed.
Moreover, OWASP is an endless source of free information and best practices for web security (e.g. the Top 10 2017). They provide a comprehensive guide to testing and a list of tools to help automate security testing.
Performance testing often happens very late in the development cycle, which goes against the principle of rapid feedback. However, it is possible to automate some of these tests, run them regularly in the build pipeline, and get feedback on how the application’s performance evolves. Tools such as Gatling or JMeter are the tools of choice for continuously testing performance.
Acceptance tests, or functional tests, are the tests that validate the application from the end-user’s point of view, and are often called User Acceptance Tests (UAT).
We’re divided on these types of tests: they generally take the same form as end-to-end tests (typically GUI tests), to which we add a BDD layer (Cucumber) for clarity for the business and testers. For all the reasons explained above, we’d like to limit them, but at the same time the business would like all of the rules, use cases, etc. to be validated.
Our advice is to limit yourself to a few smoke tests, which traverse the entire application via the famous happy path. Once again, business rules are validated lower down the pyramid. This involves building a relationship of trust between developers and the business regarding the developers’ ability to test the application’s functionality and prevent regressions through the tests already developed.
It’s time to conclude this series of articles. In the end, we have 26 unit tests, 15 component tests (+ 1 contract test that could likely replace a component test), 2 integration tests and 2 end-to-end tests. We thus respect the testing pyramid, and our component is well covered.
To summarize, and if you had to remember only a few points, remember these:
- Favor tests that provide accurate, fast and reliable feedback.
- Invest heavily in unit tests, which are cheap to write and provide the best feedback on your business code.
- Don’t neglect component tests that validate configuration and glue.
- If you’re building web services / API / micro services, no matter what name you give them, seriously consider contract testing to ensure backwards compatibility.
- Limit the scope of more complex integration and end-to-end tests to validating the plumbing between your application and the rest of the system.
- Automate all of these tests and run them as often as possible on a continuous integration server.
- Do not duplicate tests at each level of the pyramid: each type of test has its own objective. It is a pyramid, not a cube; a pyramid I’ve completed in the following diagram to synthesize this article.