How To Write Your Test Cases - Introduction

How to prepare Test Cases from Requirements will be discussed later, but let's start by explaining the structure of a Test Case.

Before writing Test Cases, let's look at the definition of a Test Case. A Test Case is a set of steps to carry out, along with the expected correct result of those steps, in order to test some functionality. With that definition in mind, you're ready to start writing Test Cases.

The format is generally:
Test Step Number - to keep your Test Steps in order and to give a point of reference when you find a defect, e.g. "It happened in Test Step 8"
Action - this is the action to be taken by the Tester executing the Test Case.
Expected Result - this is what should happen when the Tester takes the action specified.

Each Test Case would normally have a Test Case ID and a title.
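If it helps to see the shape of this as code, here is a minimal sketch of the structure just described. The names (TestStep, TestCase) and the example values are purely illustrative, not taken from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    number: int           # point of reference, e.g. "It happened in Test Step 8"
    action: str           # the action the Tester should take
    expected_result: str  # what should happen when they take it

@dataclass
class TestCase:
    case_id: str          # Test Case ID
    title: str
    steps: list           # the Test Steps, in execution order

# Illustrative usage
tc = TestCase(
    case_id="TC-001",
    title="Log into the application",
    steps=[
        TestStep(1, "Open the login page", "The login page is displayed"),
        TestStep(2, "Enter valid credentials and click Login", "The home page is displayed"),
    ],
)
```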

Each Test Step should be a logical continuation of the previous Test Step. This is a very basic example, and generally Test Cases are much more involved, but this is only to illustrate a point:

Test Step | Action | Expected Result
1 | Open the application's login page | The login page is displayed
2 | Enter a valid username in the Username field | The username is displayed in the field
3 | Enter the corresponding password in the Password field | The password is displayed masked
4 | Click the Login button | The User is logged in and the home page is displayed
There are two principal schools of thought with regard to the level of detail that Test Steps should have. The first is similar to the example I just gave, where each step of the login process is specified, with the result of that step also specified. The other is that only a general instruction needs to be given, for example:

Test Step | Action | Expected Result
1 | Log into the application | The User is logged in and the home page is displayed
Both approaches have their advantages and their disadvantages. The more detailed approach is more time-consuming both to write and to execute. But it can be executed by someone who has little or no prior knowledge of the application. In the instance of logging in to an application, this is perhaps a trite example, as a login process is fairly standardised. But say, for example, you started a new job and were faced with a Test Step that said simply:
Raise an order
or
Create a contract
or
Issue a refund
then you wouldn't know what to do. How do you raise an order, create a contract, or issue a refund? And you'll notice that these Test Steps don't have an Expected Result - just an Action.

Other Testers, who are familiar with the application, can probably do these things without even thinking about it, but you can't. In this situation, step-by-step instructions would be very useful to you.

In their absence, you would need instruction in how to do these things. One option is to ask someone else. The upside of this is that when you start a new job as a Tester, talking to people helps you develop good relationships with your new colleagues. The downside is that they will probably be busy, and since most people are not natural teachers, they will probably not teach you very well. This is another factor that contributes to the impression that Testers don't know much.

The other option is to read the company documentation on the procedure. The only drawback with this is that, in my experience, most companies don't have it - "we haven't got around to producing that documentation", "we've been meaning to do that", "we're always so busy", "maybe that's something that you could do", etc. - or sometimes the documentation does exist, but it is out of date.

And the final option - which is also the most common - is to just battle through until you have worked it out by yourself.

When you become more familiar with the application, you, too, will be able to raise an order, create a contract, or issue a refund without detailed instructions.

When you reach this stage, you would probably find the step-by-step instructions laborious and time-consuming, and would just automatically give each of them a Pass until you reached the part of the application that you needed to test.

Detailed instructions, though, allow the Test Case to be executed by anyone - not just by someone who is starting a new job. If one of the Test Team is off sick, or you just have a lot of tests to get through to meet a deadline, you can borrow a Tester from another team, and they will be able to start executing tests immediately.

One of my clients had a particular issue with Test Cases. They had acquired another company, which had an application for which general tests had been written. No-one in the acquired company knew how to execute the tests - they had all been written by temporary contract Testers who had moved on (just to add to the issue, the Developers were also temporary contract Developers who had moved on, and nobody knew where the Specifications were). My client wanted to integrate this application with its own systems, but to do so, they needed their Developers and Testers to know how it worked.

The only option open to them was to get some frontline users to demonstrate the various functions of the application.

To cut a long story short, it cost my client a lot of time and money, just to get to the point where they were ready to start the integration.

Shortly afterwards, they introduced a new policy on Test Case clarity: all Test Cases should be sufficiently clear to allow an unfamiliar Tester to execute them. "Sufficiently clear" is subjective - what is clear to one person isn't clear to someone else - so the aim when writing a Test Case becomes one of striking a happy medium, keeping a balance between being excessively detailed and giving next to no detail.

Now that you have begun to get an idea of the 'how', the next question is 'where?'.

Most of the time, you will find yourself using a Test Management Tool. This allows you to enter your Testable Conditions and ensure that each of them is covered by at least one Test Case. It also allows you (or your Test Manager) to produce reports on Test Coverage, how many tests have been executed, how many passed, how many failed, on which version of the application, etc, etc.

When executing the tests, each Test Step will be presented to you with the Action to take and the Expected Result. You will then mark each Test Step as a Pass or as a Fail (there are other possibilities, but we won't go into them here).

Sometimes, though, you won't have a Test Management Tool. In these instances, you will probably find yourself writing (and executing) your tests in a word processor or a spreadsheet (in the examples above, I have used a spreadsheet). This is a very rare occurrence, but it does still happen from time to time.

Of these two options, the spreadsheet is the easiest, as you can make columns for Test Step, Action, Expected Result, Pass/Fail, and then write out the Test Steps in each row.

If you find yourself using a word processor, then you are best off inserting template tables with fields to specify Test Step, Action, Expected Result, Pass/Fail.

For the spreadsheet and word processor options, you should always keep your originals in a separate folder so that you can quickly make a usable copy whenever you have to re-execute a test. Each Test Execution should also specify the version of the application that was tested.
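If you do end up in a spreadsheet, the columns described above are easy to generate. Here is a minimal sketch that builds such a template as CSV (the example steps are illustrative):

```python
import csv
import io

# Build a minimal spreadsheet-style test sheet in memory.
# Column names follow the text: Test Step, Action, Expected Result, Pass/Fail.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Test Step", "Action", "Expected Result", "Pass/Fail"])
writer.writerow([1, "Open the login page", "The login page is displayed", ""])
writer.writerow([2, "Enter valid credentials and click Login", "The home page is displayed", ""])

# Read it back, as a spreadsheet application would.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
print(rows[0])  # the header row
```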

How To Write Test Cases - Part 4, Negative Testing (Boundaries)

Boundary Tests

Boundary tests verify that the application works correctly around boundaries.

So we had better start by defining what a boundary is.

There are many times in the operation of an application where it will take a different action depending on the result of a previous action, or on the value of a piece of data.

For example, supposing that you have a specification for a bank account which pays you £10 whenever you make a deposit of more than £1000, with up to 3 deposits of more than £1000 in a month.

This small scenario has two boundaries in it. The first one is that you must pay in more than £1000 to get the £10. So what happens when you pay in £1000? Do you get the £10 credit or not? According to the specifications, the answer is "No", because the requirement is that you must pay in more than £1000 - and £1000 is not more than £1000.

A deposit of £1000.01, on the other hand, is more than £1000, and so should get the £10 credit.

If the Developer has written in their code
'if deposit > 1000 then credit = 10'
the code will work correctly. If, however, the Developer has written
'if deposit >= 1000 then credit = 10'
the code will not work correctly.

How about
'if deposit >= 1000.01 then credit = 10'
or
'if deposit >= 999.99 then credit = 10'
Assuming deposits are made in whole pennies, one will work ('>= 1000.01' is equivalent to '> 1000'), and one won't ('>= 999.99' wrongly credits a deposit of exactly £1000).

But don't let the pseudocode scare you. You don't need to know what is going on inside the code (as long as you are doing black box testing, that is - but more on that later).
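To show what the black-box view looks like in practice, here is a minimal sketch of the boundary tests themselves. The function credit_for_deposit is an illustrative stand-in for the behaviour under test, not anyone's real code:

```python
def credit_for_deposit(deposit):
    """Stand-in for the application logic under test:
    a £10 credit for deposits of MORE than £1000."""
    return 10 if deposit > 1000 else 0

# Boundary tests: exactly on, just above, and just below the £1000 boundary.
assert credit_for_deposit(1000.00) == 0    # not more than £1000 -> no credit
assert credit_for_deposit(1000.01) == 10   # more than £1000 -> credit
assert credit_for_deposit(999.99) == 0     # below the boundary -> no credit
```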

The other boundary is how many times per month you can do this - the specification says up to 3 times. This is something that needs clarifying with the BA because, strictly speaking, "up to 3" does not include 3 - it means up to and including 2.999 recurring (and you can't really make 0.999 of a deposit, can you?).

It is possible that the BA meant "up to and including 3" when writing the Specifications, meaning that you can make a deposit of more than £1000 three times in a month and get a £10 credit. It is also possible that they meant "no more than 2".

While you are clarifying this (and a bunch of other questions) with the BA, the Developers will have gone ahead and already written large amounts of code. So if the Developer understood that "up to 3" meant "no more than 2", they will already have written code to reflect that understanding. By the time the BA gets back to you and says that it means "up to and including 3", the Developers will be working on other things, and may not even realise that the Specification has been clarified in a way that affects code they have written.
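Once the BA confirms the intended meaning, the boundary test is straightforward. Here is a minimal sketch, assuming the clarified rule is "up to and including 3" qualifying deposits per month - monthly_credit is an illustrative stand-in, not real application code:

```python
def monthly_credit(deposits, cap=3):
    """Stand-in for the clarified rule: £10 per deposit of more than £1000,
    for up to and including `cap` qualifying deposits per month."""
    qualifying = [d for d in deposits if d > 1000]
    return 10 * min(len(qualifying), cap)

# Boundary tests on the monthly cap.
assert monthly_credit([1500, 2000, 1200]) == 30        # 3rd deposit still credited
assert monthly_credit([1500, 2000, 1200, 5000]) == 30  # 4th is not
assert monthly_credit([1500, 1000]) == 10              # £1000 exactly does not qualify
```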

But now you know exactly what it is that you have to test - and you will test that. Sometimes, your tests will show that the application behaves as expected - and sometimes, they will show that it behaves differently.

Boundaries appear in all sorts of instances. This bank deposit illustration is just one of many possibilities. Some others are:

* An extra discount on car insurance if you are aged over 25 (does that start at 26 years, or at 25 years and 1 day?)
* Car insurance costs more if you make more than 2 claims totalling more than £1000 in a year (same principle as the bank deposit illustration)
* Free postage if you spend over £25 (does a spend of £25 get free postage, or does it need to be at least £25.01?)
* After 10 years at 7% growth, your investment will be ... (best make sure the counter counts 10 times - sometimes Developers start a counter at 0, and sometimes at 1, and that can make a big difference to the end result)
You'll find boundaries all over the place, so be on the lookout for them.
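The counter example from the list above is worth seeing in code. Here is a minimal sketch of the correct loop alongside the off-by-one version a Developer might accidentally write (the function names are illustrative):

```python
def growth(principal, rate, years):
    """Apply `rate` growth once per year for `years` years."""
    value = principal
    for _ in range(years):  # range(10) iterates exactly 10 times
        value *= 1 + rate
    return value

def growth_off_by_one(principal, rate, years):
    """The accidental version: the counter starts at 1, so it grows one year short."""
    value = principal
    for _ in range(1, years):  # iterates only 9 times for years=10
        value *= 1 + rate
    return value

print(round(growth(1000, 0.07, 10), 2))             # 1967.15 - ten years of growth
print(round(growth_off_by_one(1000, 0.07, 10), 2))  # 1838.46 - only nine
```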

How To Write Your Test Cases - Part 3 - Negative Testing (More on fields)

Having looked at testing the alphanumeric attributes of the fields in your application, we can now cover some other small, but important, tests.

One of these is to ensure that password fields, regardless of the characters entered, are masked.

This means that each of the characters which comprise the password is shown as another character, usually *, so the password 'mypass1234%' would be shown as '***********', for example. Strictly speaking, this is a Positive Test, and it is something that either works or it doesn't. This means that the Developer has either enabled masking, or they haven't. If they have enabled it, then no amount of different ways of entering characters will disable it - so this is really a test to ensure that it has been done.
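The masking rule can be expressed very simply. A minimal sketch, assuming * is the mask character (as it usually is):

```python
MASK_CHAR = "*"

def masked_display(password):
    """What a masked password field should show:
    one mask character per character typed, whatever the characters are."""
    return MASK_CHAR * len(password)

assert masked_display("mypass1234%") == "***********"  # 11 characters, 11 masks
```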

Another one is the tab order. Normally, when you enter an application, or a page/window of an application, you will find that the cursor is already placed in the first field that you need to fill.

However, during development this is often not the case - it is quite common for the cursor to start in another field - so this, of course, is your first test. Behind the scenes, not visible to you, each of the fields is normally numbered: on opening the page/window, the cursor is placed in field #1. If the first field you need to fill has not been defined as field #1, then the cursor will not be in it.

When you press the tab key on your keyboard, the cursor moves to field #2, and then to field #3, etc. For the cursor to start in the correct field and tab through in the correct order, each of the fields needs to be numbered accordingly. You will frequently find that this has not been done - and the usual reason is that it has not been written into the Specifications.

This is clearly an issue that could be addressed by you during your review of the Specifications - but if you didn't do so at that time, then now is a good time to raise the question.

If the cursor is jumping around the screen in no logical or coherent order (quite a common scenario), you will not normally find any opposition to numbering the fields logically, even when it is not defined as an application behaviour in the specifications.
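A minimal sketch of the tab-numbering idea - the field names and numbers here are illustrative assumptions, not from any real application:

```python
# Each field carries a tab number; on opening the page the cursor should sit
# in field #1, and the tab key should walk the fields in numeric order.
fields = [
    ("surname", 2),
    ("first_name", 1),
    ("email", 3),
]

# The order a user would actually tab through the fields.
tab_order = [name for name, number in sorted(fields, key=lambda f: f[1])]

expected_order = ["first_name", "surname", "email"]
assert tab_order == expected_order
print(tab_order[0])  # the field the cursor should start in
```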

Copying and Pasting

Specifications are often missing a definition of how an application should handle copying and pasting, so tests in this area often give rise to a lot of debate. From a Tester's viewpoint, this is good, as it clarifies a vagueness about the way that the application works.

For example, you have a Registration Form in which the applicant is asked to enter their email address, and is then asked to confirm it by entering it again. The intention of asking someone to enter their email address twice is to ensure that it is correct: if it is mis-typed the first time, the applicant is unlikely to make the same mistake the second time. So should it be possible to copy the email address from the first field and paste it into the second? Looked at from the point of view of the intention of entering details twice, it is very clear that copying and pasting should not be allowed by the application. But you will still come across the argument that it makes it easier for the user to complete the form! Perhaps you could suggest at this point that it would be even easier not to have the second field at all - which, of course, leaves open the possibility that the user will mis-type their email address.

How To Write Your Test Cases Part 2 - Negative Testing (Alphanumeric fields)

The second type of test on the list is to ensure that the application does not do anything it is not supposed to do. This is Negative Testing.

Negative Testing can get complex, but is not necessarily so. In fact, this area of testing provides some "low-hanging fruit" - defects that are there for the finding with not a lot of effort.

One of these types of defect is concerned with alphanumeric tests.

To explain - there are three types of characters:

* alphabetic characters
* numeric characters
* special characters

The first two are self explanatory. The third is simply any character that is neither alphabetic nor numeric - characters such as @ or %, for example.

Alphanumeric tests simply ensure that all the fields under test do not accept the wrong types of characters - an alpha field should not accept non-alphabetical characters, and a numeric field should not accept non-numeric characters. A field for entering an email address should accept alpha, numeric, and special characters, e.g. john.smith32@website.com

These may seem like facile issues, but you would be surprised at how often you will find that a numeric field accepts letters or special characters, or vice versa - these faults are quite common.

By default, these tests also verify that the fields in the application do accept the correct types of characters.
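These checks map naturally onto simple validators. Here is a minimal sketch, assuming deliberately strict character rules for illustration (real fields will usually allow more, e.g. spaces in names):

```python
import re

# Illustrative validators for the field types described in the text.
def is_alpha(value):
    return re.fullmatch(r"[A-Za-z]+", value) is not None

def is_numeric(value):
    return re.fullmatch(r"[0-9]+", value) is not None

def is_alphanumeric(value):
    return re.fullmatch(r"[A-Za-z0-9]+", value) is not None

# Positive tests: the correct characters are accepted.
assert is_alpha("John")
assert is_numeric("12345")

# Negative tests: the wrong characters are rejected.
assert not is_alpha("John32")      # numerics in an alpha field
assert not is_numeric("123a5")     # a letter in a numeric field
assert not is_alphanumeric("j@x")  # a special character in an alphanumeric field
```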

In general, it is best to divide all the fields on a particular page or screen of the application into the type of field that they are, and test them together - so you would write tests to cover:

- alphanumeric + special character fields (e.g. email address, password)
- alphanumeric fields (e.g. member ID, postcode)
- alpha + special character fields (e.g. name with a hyphen or apostrophe)
- alpha fields (e.g. first name)
- numeric fields (e.g. member number, date)
- numeric + special character fields (e.g. date, member ID with a hyphen or slash)

Fields which only allow special characters are extremely uncommon, but if they do exist in your application, then you would, of course, include a test for them.

You will not normally find that there are too many fields of each type to include in one test, but if there are, then you will have to break your tests down to cover the fields by logical area.

How To Write Your Test Cases Part 1 - Positive Testing

Once you have produced all your Testable Conditions and allocated them to the various functional areas of the application, you are ready to write your Test Cases. You will write these from a number of angles.

These will include:
1) Ensuring that the product does what it is supposed to do;
2) Ensuring that the product does not do anything that it is not supposed to do;
3) Ensuring that each of the component areas can be integrated together and still work correctly;
4) Ensuring that the whole product works from the beginning to the end.

Here, we will just consider the first of these, which is to ensure that the product does what it is supposed to do. This is Positive Testing, and also goes under names such as Happy Path or Golden Path, indicating that everything is just fine when you do what you are expected to do.

This involves using each of your positive Testable Conditions in one or more Test Cases. When each of these has been used in at least one Test Case, you will have covered all of the requirements for the product, and will therefore have written Test Scripts to test that it does everything it is supposed to do.

A good Test Case should, in principle, be broken into three parts, which for the sake of this illustration, I shall refer to as Test Start, Test Middle, and Test End. These would generally move the User to the area to be tested, then test one or more functions in that area, and finally move the User out of the area that has been tested.

I'll give some examples, but you need to bear in mind that testing is an art rather than a science, so there is not a single a = b + c approach.

Example 1
Test Start
Log onto the application

Test Middle
Navigate to the xyz area of the application
Verify that Testable Condition 1 is met

Test End
Log off of the application


If you had a number of different functions that you needed to test in area xyz, then you could of course put them all into one Test Case, but this would not be good practice. Supposing for example that you have five functions that you need to test in the xyz area, and you put them all into the same Test Case. When you come to execute the test, the first two functions pass, but the third function fails. Now your whole test fails because of one defective function, although two of the functions are actually working correctly, and another two have not been tested because you stopped when you reached a point of failure.

If you had written five separate Test Cases instead, you would have had two Test Cases passed, one failed, and two not run.
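The arithmetic of "one big Test Case versus five small ones" can be sketched as follows (the function names and pass/fail results here are illustrative):

```python
def run_test_case(steps):
    """Execute steps in order; stop at the first failure, as a manual Tester would."""
    results = []
    for name, passed in steps:
        results.append((name, "Pass" if passed else "Fail"))
        if not passed:
            break
    # Anything after the failure was never reached.
    executed = {name for name, _ in results}
    results += [(name, "Not Run") for name, _ in steps if name not in executed]
    return results

functions = [("f1", True), ("f2", True), ("f3", False), ("f4", True), ("f5", True)]

# One big Test Case: a single failure buries two passes and leaves two functions untested.
print(run_test_case(functions))
# Five separate Test Cases: each function gets its own verdict.
print([run_test_case([f]) for f in functions])
```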

But if you have five separate Test Cases, then you can quickly see a problem with Example 1, above. The first Test Case would start with logging the User onto the application, and end with logging the User off of the application - and then you would move onto the second Test Case, which would start with logging the User onto the application.

This is clearly a waste of time, as you were already logged into the application, and you were already in the correct area of the application for your second test. In real life, you wouldn't log off of the application so that you could log in again a minute later - you would just stay logged into the application. The answer is to create your Test Case with a prerequisite. So let's rewrite our example:

Example 2
Test Start
Prerequisite: Logged into application

Test Middle
Navigate to the xyz area of the application
Verify that Testable Condition 2 is met

Test End
Navigate away from xyz area of application


So we still meet our principle of Start, Middle, End - and we've avoided logging off so that we can log back on a minute later, but you may be asking why we are navigating away from xyz area so that we can navigate back to it a minute later. Depending on the application you are testing, you may not need to - but many applications need to close a form or a page before anything new can be done on them. This is a call that you will need to make, based on your own knowledge of the application you are testing - and of course, you will have gained a lot of that knowledge through constantly reading and re-reading the Specifications.
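The prerequisite idea maps naturally onto shared setup in automated tests, too. Here is a minimal sketch, with the application calls standing in for a real system (everything here - session, navigate, the area names - is illustrative):

```python
from contextlib import contextmanager

@contextmanager
def logged_in_session():
    """Prerequisite: one logged-in session shared by several Test Cases,
    so each test does not have to log on and off again."""
    session = {"logged_in": True, "area": "home"}
    try:
        yield session
    finally:
        session["logged_in"] = False  # Test End for the whole run: a single log-off

def navigate(session, area):
    """Stand-in for moving around the application."""
    session["area"] = area

results = []
with logged_in_session() as session:
    # Test Case 1: Test Middle only - the prerequisite supplied the login.
    navigate(session, "xyz")
    results.append(("TC-1", session["area"] == "xyz"))
    navigate(session, "home")  # Test End: navigate away from the xyz area

    # Test Case 2 reuses the same session instead of logging in again.
    navigate(session, "xyz")
    results.append(("TC-2", session["area"] == "xyz"))

print(results)
```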

How to Prepare Your Testable Conditions for Scripting

Once you have all your Testable Conditions, you are ready to prepare for writing your Test Cases/Scripts. Depending on the size of the application, you could have a small number of Testable Conditions, or a very large number of Testable Conditions.

If it is large, then you will need to break your tests into a number of different functional areas relevant to the application.

The first step in doing this is, of course, to determine the functional areas that you are going to test.

After doing this, you should go through each of your Testable Conditions fitting them into your functional areas. In a similar fashion to how you were going through the Specifications to get your Testable Conditions, you will now go through your Testable Conditions time and again until you have fitted each of them into a relevant functional area. This is an aspect of testing that many Testers find boring, but it is also something that will enable you to have a deeper knowledge of the application than the Developers, and will prepare you for the time when you come to write and execute the tests.

Some of your conditions may fit into more than one functional area, so don't be constrained by thinking you can only use a particular condition in only one functional area. In fact, as systems and applications become more complex, there are more and more areas of overlap, but this is touching on Integration Testing, which will be discussed in another article.

Once you have determined the functional areas to test, and the relevant Testable Conditions for each area, you are finally ready to start writing your Test Cases.