I was asked by a colleague on the bus (hi Ian!) about systems testing. After bending his ear for about an hour I promised to write some notes, so here we are.
Before going further, I must admit that what I will describe here is something of my Platonic ideal. I have never had the opportunity to implement all the measures described below. It seems testing is often a victim of schedule and budget pressures. I persist!
Purpose of Testing
The primary purpose of systems testing is to verify that the system under consideration does what it is supposed to, the way it’s supposed to, in a satisfactory way.
This is a bit nebulous, so let’s explore what that means.
Requirements and Testing
After project initiation, most projects start with requirements gathering.
After development and implementation, most projects start their testing.
Despite the time that passes between requirements gathering and test execution, test preparation starts at the same time as requirements. A requirement identifies not only what the stakeholder wants, but how to determine whether the requirement has been met.
- A requirement that is not measurable, that does not explicitly identify how to determine whether it has or has not been met, is not a requirement; it is only a wish.
The initial expression of a requirement might lack specific detail, but by the time the requirements are finalized the success indicators must be known. For instance, “must run fast” is not sufficient, but “must run in less than one minute” is. “Must run in half the time it does now” is incomplete and needs metrics to be gathered before the requirement can be completed (find out how long it takes now, then halve that time).
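To illustrate, here is a minimal sketch of what such a requirement can become once it reaches a test suite. Everything in it is assumed for illustration (the generate_report function merely stands in for whatever the requirement covers); only the sixty-second threshold comes from the requirement itself.

import time
import unittest

def generate_report():
    # Stand-in for the real operation covered by the requirement;
    # assumed here purely for illustration.
    time.sleep(0.1)

class PerformanceRequirementTest(unittest.TestCase):
    def test_runs_in_under_one_minute(self):
        # "Must run in less than one minute" is now an objective,
        # machine-checkable pass/fail criterion.
        start = time.monotonic()
        generate_report()
        elapsed = time.monotonic() - start
        self.assertLess(elapsed, 60.0)

if __name__ == "__main__":
    unittest.main()

The point is that “fast enough” has become an assertion a machine can check, not a judgment call made during testing.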
It is appropriate to have a test coordinator present while gathering requirements, challenging each one with questions such as “How do you know this is done? How do you know it has failed?”
Component Testing
Every component has requirements. That means you can determine whether or not it has been implemented correctly. Do so.
Do so at each and every layer. In a three-tier web application (web interface, application layer, database) you can test each component. Testing from the lowest-level implementation to the highest, at every layer, helps limit the scope of troubleshooting needed when errors are found.
- Test your database layer by calling your CreateWidget() procedure and looking in the database to see that the widget is present. Call your CreateWidget() procedure with invalid data and look in the database to see that it is not present (a scripted sketch of this pattern follows this list).
- Very few things have only a single requirement. A business rule that a widget must have a name means that you must not be able to insert a widget that has no name.
- Test your application layer using tests much like those for the stored procedures. Create a widget, make sure it’s in the database, fetch it back and make sure it looks like it should. Do your negative tests.
- Test your web interface layer using tests much like those for the application layer. Create a widget and fetch it back and look at it (at this point you probably can’t see the database directly, but you have reason to expect the lower layers work).
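Here is a minimal sketch of the database-layer pattern from the first bullet above. It uses Python’s sqlite3 module, with a plain table and a NOT NULL constraint standing in for the real CreateWidget() procedure and its business rules; the table name, column names, and test data are all assumptions for illustration.

import sqlite3
import unittest

class WidgetDatabaseTest(unittest.TestCase):
    def setUp(self):
        # In-memory database standing in for the real schema; a real test
        # would call the actual CreateWidget() procedure instead.
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE widget (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
        )

    def test_create_widget_with_valid_data(self):
        # Positive test: create a widget, then look in the database for it.
        self.db.execute("INSERT INTO widget (name) VALUES (?)", ("spanner",))
        count = self.db.execute(
            "SELECT COUNT(*) FROM widget WHERE name = ?", ("spanner",)
        ).fetchone()[0]
        self.assertEqual(count, 1)

    def test_create_widget_without_name_is_rejected(self):
        # Negative test: the business rule "a widget must have a name"
        # means this insert must fail and leave nothing behind.
        with self.assertRaises(sqlite3.IntegrityError):
            self.db.execute("INSERT INTO widget (name) VALUES (?)", (None,))
        count = self.db.execute("SELECT COUNT(*) FROM widget").fetchone()[0]
        self.assertEqual(count, 0)

if __name__ == "__main__":
    unittest.main()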
Depending on ‘User Acceptance Testing’ alone is insufficient, and inefficient. If a failure is found you have to examine the entire application stack to find the problem. If the user can’t create a widget, the problem could lie in any of the three layers. If testing shows that the database and application layers work correctly, then you can be confident the problem lies in the web interface layer.
Also, in this particular case the first two layers (database and application) can probably have their tests automated, or at least scripted. This can mean having much more thorough (and more importantly, repeatable) tests.
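As a sketch of what a scripted application-layer test might look like, the example below drives a hypothetical HTTP API using the requests library. The /widgets endpoint, base URL, field names, and status codes are all assumptions; a real test would use whatever interface your application layer actually exposes.

import requests

BASE_URL = "http://localhost:8080"  # hypothetical application-layer endpoint

def test_create_and_fetch_widget():
    # Positive test: create a widget, fetch it back, compare.
    created = requests.post(f"{BASE_URL}/widgets", json={"name": "spanner"})
    assert created.status_code == 201
    widget_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/widgets/{widget_id}")
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "spanner"

def test_create_widget_without_name_is_rejected():
    # Negative test: the "must have a name" rule should hold at this layer too.
    response = requests.post(f"{BASE_URL}/widgets", json={})
    assert response.status_code == 400

Because these run without a person clicking through a browser, they can be re-run after every change at essentially no cost.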
Interface Testing
Building a system often includes software development, but modern systems also interface with many other systems. Test every interface. As above, start at the lowest level possible in order to minimize the scope of troubleshooting.
Network Tests
Test all network interfaces to verify line of sight on the necessary protocols. Most enterprise software has test modes or can be used in a ‘smoke test’ to verify access… but even simple tools can do the job.
- Modern systems often disable ICMP (used by ping(1) and traceroute(1) to test network connectivity), but where ICMP is available these are some good basic tools.
- Telnet to the target ports is a good next step (below I verify line of sight to Google on port 80):
[kjdavies@dev ~]$ telnet www.google.com 80
Trying 74.125.202.147...
Connected to www.google.com.
Escape character is '^]'.
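The same line-of-sight check can be scripted so that it is repeatable. Here is a minimal sketch using Python’s standard socket module, with the host and port taken from the telnet session above.

import socket

def port_is_reachable(host, port, timeout=5.0):
    # Equivalent of the telnet check above: can we open a TCP
    # connection to the target host and port?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(port_is_reachable("www.google.com", 80))

A loop over every host/port pair the system depends on makes a handy smoke test for a new environment.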
Configuration Tests
I worked with a program that involved connecting several financial systems. The software was well-tested and proven, but each account (business object) had to be configured with several options that controlled how transactions were processed, and several key identifiers that controlled how the funds were routed after processing.
Testing at the lowest level possible, initiating payments and refunds via a tool provided by the service provider, let us verify that the merchant accounts were configured correctly so the funds would be routed correctly. Testing with a tool we had created specifically for this purpose let us confirm, from outside the service provider’s system, that the merchant account was configured so the transactions would be processed correctly.
In this case the service provider’s web interface had data validation that prevented the user from even trying certain invalid transactions. Our tool called the same API used by the service provider’s web interface, but without those safeguards, so we could confirm that a correctly configured merchant account rejected the invalid transaction request out of hand… or catch a misconfigured account accepting and processing it (oops! Good catch!).
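The pattern is worth sketching, although the service provider’s actual API is not reproduced here. The idea is to call the back end directly, bypassing the front-end validation, and confirm that the account configuration itself rejects the request. The endpoint, payload fields, account identifier, and expected status codes below are all hypothetical.

import requests

API_URL = "https://api.example-provider.test/transactions"  # hypothetical endpoint

def test_disallowed_transaction_type_is_rejected():
    # The provider's web interface would block this request before it was
    # ever sent; calling the API directly checks that the merchant account
    # configuration itself rejects it.
    payload = {
        "merchant_account": "TEST-ACCOUNT",  # hypothetical identifier
        "type": "refund",                    # a type this account should not accept
        "amount": "10.00",
    }
    response = requests.post(API_URL, json=payload, timeout=10)
    assert response.status_code in (400, 403)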
Closing Comments
These are really just thoughts off the top of my head. A full treatment of testing would be an immense work: not just a book, an entire library could be created. To summarize, though:
- Test preparation starts with requirements: not with the requirements document, but with information gained during requirements gathering.
- A requirement that doesn’t indicate how to verify success and failure is not a requirement. A requirement must have objective, measurable success criteria.
- Test every component and every interface in the system. End-to-end and integration testing are important, but testing each component means that when you find a failure you can minimize the scope of troubleshooting.