Software Test Design

By: Simon Amey

Overview of this book

Software Test Design details best practices for testing software applications and writing comprehensive test plans. Written by an expert with over twenty years of experience in the high-tech industry, this guide provides training and practical examples to improve your testing skills.

Thorough testing requires a thorough understanding of the functionality under test, informed by exploratory testing and described by a detailed functional specification. This book is divided into three sections, the first of which describes how best to complete those tasks so that you start testing from a solid foundation.

Armed with the feature specification, the second section covers functional testing, which verifies the visible behavior of features by identifying equivalence partitions, boundary values, and other key test conditions. It explores techniques such as black- and white-box testing, trying error cases, finding security weaknesses, improving the user experience, and maintaining your product in the long term.

The final section describes how best to test the limits of your application. How does it behave under failure conditions, and can it recover? What is the maximum load it can sustain, and how does it respond when overloaded?

By the end of this book, you will know how to write detailed test plans that improve the quality of your software applications.
Table of Contents (21 chapters)

Part 1 – Preparing to Test
Part 2 – Functional Testing
Part 3 – Non-Functional Testing
Conclusion
Appendix – Example Feature Specification

The spiral model of test improvement

Developing tests from the initial specification into detailed, completed test plans can be thought of as a spiral looping through four repeated stages. Of course, it is more complex in practice, and there is extensive back and forth between the different stages. This simplification illustrates the main milestones required to generate a test plan and the main flow between them. It is similar to Barry Boehm’s spiral model of software development. However, this model only considers the development of the test plan, rather than the entire software development cycle, and doesn’t spiral outwards but instead inwards toward test perfection:

Figure 1.8 – The spiral model of test development


The four stages you go through when iterating a test plan are as follows:

  • Preparing specifications and plans
  • Discussions and review
  • Performing testing
  • Analyzing and feeding back the result

Software development begins with an initial specification from the product owner, which is a vital start but needs several iterations before it is complete. The product owner then introduces and discusses that feature. Based on that initial specification, the development team can prepare an initial implementation, and you can generate ideas for exploratory testing.

Once an initial implementation is complete, you can start improving the specification, the test plan, and the code itself. That improvement begins with exploratory testing, which is step 3 in the preceding diagram. By trying the code for real, you will understand it better and can prepare further tests, as described in this chapter.

Armed with the exploratory test results in step 4, you can then write a feature specification, as shown in step 5 in the preceding diagram. This will be covered in more detail in Chapter 2, Writing Great Feature Specifications. This specification then needs a review – a formal discussion to step through its details to improve them. That review is step 6 and is described in Chapter 3, How to Run Successful Specification Reviews.

When that review is complete, you can perform detailed testing of the feature. That one small box – step 7 in the preceding diagram – is the subject of most of this book and is covered in Chapter 4 to Chapter 13.

Preparing the test plan isn’t the end, however. Based on the detailed testing results, you can refine the specification, discuss it, and perform further targeted testing. That may be to verify the bugs you raised, for example, or to expand the test plan in areas with clusters of bugs. The results of the testing should inform future test tasks. That feedback improves the testing in this and subsequent cycles, asymptotically trending toward, though never quite reaching, test perfection.
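One practical way to find those clusters is simply to count the bugs raised per feature area and rank the areas. The sketch below is illustrative only: the bug list, the area tags, and the threshold are assumptions made for the example, not data or tooling from this book.

from collections import Counter

# Hypothetical bug reports, each tagged with the feature area where it was found.
bugs = [
    {"id": "BUG-101", "area": "login"},
    {"id": "BUG-102", "area": "login"},
    {"id": "BUG-103", "area": "billing"},
    {"id": "BUG-104", "area": "login"},
    {"id": "BUG-105", "area": "reports"},
]

counts = Counter(bug["area"] for bug in bugs)
CLUSTER_THRESHOLD = 3  # assumed cut-off; tune this for your project

# Rank areas by bug count and flag clusters that deserve extra test coverage.
for area, count in counts.most_common():
    flag = "  <- expand the test plan here" if count >= CLUSTER_THRESHOLD else ""
    print(f"{area}: {count} bug(s){flag}")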

Behind this spiral, the code is also going through cycles of improved documentation and quality as its functions are checked and its bugs are fixed.

The preceding diagram shows how the theoretical descriptions of the feature, from the specification and other parts of the test basis, must be combined with practical results from testing the code itself to give comprehensive test coverage. Relying only on the documentation means you miss the chance to react to issues in the code. Testing without documentation relies on your assumptions about what the code should do, rather than its intended behavior.

By looping through this cycle, you can get to know the features you are working on thoroughly and test them to a high standard. While it is just one point on the cycle, we begin that process with a description of exploratory testing, starting with the first important question: is this feature ready to test yet?

Identifying if a feature is ready for testing

It is very easy to waste time testing a feature that is not ready. There is no point in raising a bug when the developers already know they haven’t implemented that function yet. On the other hand, testing should start as early as possible to quickly flag up issues while the code is still fresh in the developers’ minds.

The way to reconcile those conflicting aims is through communication. Testing should start as early as possible, but the developers should be clear about what is testable and what is not working yet. If you are working from a detailed, numbered specification (see Chapter 2, Writing Great Feature Specifications), then they can specify which build fulfills which requirements. It may be that even the developers don’t know if a particular function is working yet – for instance, if they believe a new function will just work but haven’t tried it for themselves. There’s no need to spend a lot of time gathering that information, so long as the developers are clear that they are unsure about the behavior so that you can try it out.
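As a sketch of what that communication can look like inside a test suite, the following pytest example skips any test whose requirement the current build does not yet claim to fulfill. The requirement IDs, the IMPLEMENTED set, and the stubbed login function are all invented for illustration; they are not from this book.

import pytest

# Requirement IDs the developers say the current build fulfills (assumed data).
IMPLEMENTED = {"REQ-1", "REQ-2"}

def requires(req_id: str):
    """Skip a test until the build claims to fulfill its requirement."""
    return pytest.mark.skipif(
        req_id not in IMPLEMENTED,
        reason=f"{req_id} is not implemented in this build yet",
    )

# Hypothetical feature code, stubbed here so the example runs on its own.
def login(user: str, password: str) -> bool:
    return user == "alice" and password == "secret"

@requires("REQ-1")
def test_user_can_log_in():
    assert login("alice", "secret")

@requires("REQ-3")  # REQ-3 is still in development, so this test is skipped
def test_login_rejects_wrong_password():
    assert not login("alice", "wrong")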

Also, be wary of testing code that is rapidly changing or subject to extensive architectural alterations. If the code you test today will be rewritten tomorrow, you have wasted your time. While it's good to start testing as soon as possible, that doesn't mean as soon as there's working code; that code has to be stable and part of the proposed release. Unit tests written by the developers can indicate that code is stable enough to be worth testing, but if it isn't ready yet, find something else to do with your time.
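If you want to make that check routine, one option is a small gate script that runs the developers' unit tests before you invest in deeper testing. This is only a sketch under assumptions: the tests/unit path and the idea of gating on a passing suite are mine, not a prescription from this book.

import subprocess
import sys

# Path to the developers' unit tests; an assumed project layout.
UNIT_TEST_PATH = "tests/unit"

def code_looks_stable() -> bool:
    """Treat a passing unit-test suite as a minimal signal of stability."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", UNIT_TEST_PATH, "-q"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if code_looks_stable():
        print("Unit tests pass - worth starting detailed testing.")
    else:
        print("Unit tests failing - find something else to do with your time.")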

Real-world example – The magical disappearing interface

I was once part of a test team for a new hardware project that would perform video conferencing. There would be two products – one that would handle the media processing and another that would handle the calls and user interface, with a detailed API between the two. The test team was very organized and started testing early in the development cycle, implementing a suite of tests on the API between the two products.

Then, the architecture changed. For simplicity, the two products would be combined, and we would always sell them together. The API wouldn’t be exposed to customers and would be substantially changed to work internally instead. All our testing had been a waste of time.

It sounds obvious that you shouldn’t start testing too early. However, in the middle of a project, it can be hard to notice that a product isn’t ready – the developers are busy coding, and you are testing and finding serious bugs. But look out for repeated testing in the same area, significant architectural changes, and confusion over which parts of a feature have been implemented and which are still under development. That shows you need better communication with the development team on which areas of code they have finished, and which are genuinely ready for testing.

When the development team has finalized the architecture and completed the initial implementation, then you should start testing. Getting a new feature working for the first time is a challenge, though, so the next section describes how to make that process as smooth as possible.