DowdTec Ltd

Information Engineering

Test Design Principles

Effective test management requires both a framework and a process. The framework rests on some established design principles, such as:

  • Positive and Negative Testing
  • Simplicity and Repeatability
  • Maximum Coverage with Minimum Effort
  • Risk-Based Testing
  • Owning the Outcome
  • Granularity

Positive and Negative Testing

Many design factors specify what should happen and when. Sometimes these are Boolean in nature, such as “when you press this button, this happens”. There is an implied negative in this statement, which is “when you don’t press it, this doesn’t happen”. Both states should be tested.

More complex design factors introduce a range of valid values. Each range has implied validity, defined by format and boundaries. Formats can be generic, such as dates, or specific, such as UK telephone numbers or the maximum length of a text field, whilst boundaries relate to value- or set-based validation. Each of these may result in three or four specific tests designed to verify the validation routine and the application’s response to failure.
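
For example (a minimal sketch, using a hypothetical numeric field valid from 1 to 100 rather than any real application rule), the boundary and format cases can be enumerated mechanically, each with its expected outcome:

use strict;
use warnings;

# Hypothetical validation rule: an integer field valid from 1 to 100 inclusive
my ($min, $max) = (1, 100);

# Boundary cases just inside and just outside each limit, plus a format failure
my @cases = (
   { value => $min - 1, expect => 'reject' },   # below the lower boundary
   { value => $min,     expect => 'accept' },   # on the lower boundary
   { value => $max,     expect => 'accept' },   # on the upper boundary
   { value => $max + 1, expect => 'reject' },   # above the upper boundary
   { value => 'abc',    expect => 'reject' },   # wrong format entirely
);

for my $case (@cases) {
   print "Input '$case->{value}' => expected outcome: $case->{expect}\n";
}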

Providing a comprehensive suite of tests with known outcomes reduces the level of experience required of the tester, and therefore the dependence upon key individuals.

Simplicity and Repeatability

The value of simple tests is that they can be executed quickly, with minimal specialist knowledge and with easily verifiable outcomes. For this reason, the starting point of each test “pack” should be clearly defined, including the state and content of any supporting database. The benefit is that the tests are repeatable with minimal effort and elapsed time, reducing the overall cost. Repeatable tests are critical to successful testing, and simplicity reduces the cost of repeatability.

Maximum Coverage with Minimum Effort

The most rigorous testing would involve exercising every step in every Business Process at least once for every possible set of data. Clearly this is impractical, so we must find a way of maximising the value of the testing effort.

With the modular programming techniques offered by modern development environments, specific routines and modules are re-used in many places. By constructing a matrix of which routines and modules are used in which business processes, it may be possible to select a relatively small number of business processes which exercise all routines and modules.
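
As a sketch of the idea (with invented process and module names), the matrix can be held as a hash of processes to the modules they exercise, and a simple greedy pass then picks the process covering the most still-untested modules until everything is covered:

use strict;
use warnings;

# Hypothetical matrix: which modules each business process exercises
my %process_modules = (
   'Create Order'   => [ 'Login', 'CustomerLookup', 'Pricing', 'OrderEntry' ],
   'Amend Order'    => [ 'Login', 'OrderLookup', 'Pricing', 'OrderEntry' ],
   'Cancel Order'   => [ 'Login', 'OrderLookup', 'Refund' ],
   'Monthly Report' => [ 'Login', 'Reporting' ],
);

# Start with every module uncovered
my %uncovered = map { $_ => 1 } map { @$_ } values %process_modules;
my @selected;

while (%uncovered) {
   # Pick the process covering the most modules that are still uncovered
   my ($best) = sort {
      scalar(grep { $uncovered{$_} } @{ $process_modules{$b} })
         <=>
      scalar(grep { $uncovered{$_} } @{ $process_modules{$a} })
   } keys %process_modules;
   push @selected, $best;
   delete @uncovered{ @{ $process_modules{$best} } };
}

print "Processes to test: @selected\n";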

Risk-Based Testing

Having reduced the testing effort to a minimum, there may still be too much to do in the time/budget available. Each test component must therefore be subjected to a critical risk assessment, where:

  • Risk = Probability x Impact

Probability is difficult to define and predict because it usually involves the complex interaction of many factors. Reducing the impact, or cost, of failure is therefore critical to reducing risk.

The cost of fixing problems grows exponentially the later they are detected in the product lifecycle. These costs come in three forms:

  • Financial – costs such as application rework, providing short-term alternative facilities and contractual penalties, such as those attached to SLAs
  • Opportunity – missing delivery deadlines (which may affect your competitive position), market cycles and external deadlines such as legislation or B2B commitments
  • Credibility – missing customer expectations, which have usually been set well in advance of delivery

Some of these costs may be general whilst others are usage-based. Understanding who uses or will use which features can provide a weighting factor in determining the risk-cost.

The tests to be performed can then be selected based on comparing risk-cost against budget.
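
For example (a minimal sketch with invented probabilities, impacts and execution costs, the impact figures already weighted for feature usage), candidate tests can be scored and selected in descending risk order while budget remains:

use strict;
use warnings;

# Hypothetical candidates: probability of failure, impact (cost of failure,
# weighted by feature usage) and the cost of executing the test
my @candidates = (
   { name => 'Order entry boundaries', prob => 0.3, impact => 50_000, cost => 3 },
   { name => 'Refund processing',      prob => 0.1, impact => 80_000, cost => 5 },
   { name => 'Monthly report layout',  prob => 0.4, impact =>  2_000, cost => 2 },
);

my $budget = 6;   # e.g. person-days available for testing

# Risk = Probability x Impact; take the riskiest tests first while budget remains
for my $test (sort { $b->{prob} * $b->{impact} <=> $a->{prob} * $a->{impact} } @candidates) {
   next if $test->{cost} > $budget;
   $budget -= $test->{cost};
   printf "Selected: %s (risk-cost %.0f)\n", $test->{name}, $test->{prob} * $test->{impact};
}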

Owning the Outcome

Once the test plan is agreed, some tests will inevitably be left out, each having a risk-cost. Someone needs to own this residual risk-cost, potentially amplified by the number of customers using the application. This can be mitigated by having a clear understanding of which customers use which features of the application.

Granularity

Some application components lend themselves to a small number of test scripts, each comprising a long sequence of test cases. The problem this poses is that once one test case fails, the remaining test cases are potentially invalid. For this reason, a larger number of tests, each smaller in scope, should be defined.

Certain tests have prerequisites which are provided by earlier tests. These prerequisites are often defined in terms of the state of the underlying data. Where possible, halt the overall process once the prerequisite has been satisfied and take a backup of the database. This provides the entry point for repeating the dependent test.
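
How the checkpoint is taken depends entirely on the environment; the sketch below simply assumes a hypothetical MySQL test database and mysqldump available on the path, with the dump file acting as the entry point for re-running the dependent test:

use strict;
use warnings;

# Hypothetical checkpoint: dump the test database once the prerequisite test has
# left it in the required state, so the dependent test can be repeated from here
my $db       = 'testdb';
my $snapshot = 'after_prereq.sql';

system("mysqldump $db > $snapshot") == 0
   or die "Checkpoint failed: $?";

print "Checkpoint saved to $snapshot\n";

# Later, to repeat the dependent test, restore the snapshot first:
#    mysql testdb < after_prereq.sql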

Using PERL to regenerate a LoadRunner SOAP request

This program takes two files as parameters, an extract from a LoadRunner script and a subset of the VuGen output log, and reconstructs the original XML request.

#------------------------------------------------------------------------------------------
# This routine takes two file parameters
# - LoadRunner request containing parameterised XML (web_custom_request or soap_request)
# - LoadRunner output containing log extract with actual substitutions
#
# It outputs the reconstructed XML request
#------------------------------------------------------------------------------------------
use strict;
use warnings;

my $in;
my $request;
my $xml;

# Load request (web_custom_request or soap_request) containing XML
open $in, '<', $ARGV[0] or die "Can't read request file: $!";
$request = do { local $/; <$in> };         # Slurp the whole file
close $in;
chomp $request;

# Extract the XML and clean it up
$request =~ s/",?\n\s+"//g;                # Join the quoted fragments by removing
                                           # intervening quotes, commas and linefeeds
$request =~ /(<\?xml.*>)/                  # Extract the XML...
   or die "No XML found in request file";
$xml = $1;                                 # ...into a local variable
$xml =~ s/\\"/"/g;                         # De-escape the quotes
$xml =~ s/\\r\\n/\n/g;                     # Convert literal "\r\n" into a newline

# Substitute parameters
open my $params, '<', $ARGV[1] or die "Can't read params file: $!";

# For every line in the parameter file
for (<$params>) {
   if (/"(.+)" =  "(.*)"/) {               # Is it a real parameter line?
      my $key   = $1;                      # Save key name
      my $value = $2;                      # Save value to substitute
      $xml =~ s/\{\Q$key\E\}/$value/;      # Replace the {parameter} placeholder
   }
}
close $params;

print $xml;
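
For example, if the script is saved as rebuild_request.pl (a name chosen here purely for illustration) and the two extracts are saved as request.txt and params.txt, the reconstructed XML can be written to a file with:

perl rebuild_request.pl request.txt params.txt > request.xml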

Using PERL to generate test data

PERL is a programming language optimised for manipulating data. The PERL acronym stands for Practical Extraction and Report Language, which provides a clue to its strengths. This can be very useful for performance testers, because we all know that extracting valid test data, in the correct format, is one of the biggest hurdles in the race to test completion.

At the heart of the language’s power is the “regex” (Regular Expression). Using regexes we can define a data format template to be applied to a list of values for pattern matching and extraction of sub-strings. PERL syntax implements regexes between slashes using tokens and modifiers, of which the following are examples:

  • /.*/ matches zero or more of any character except a newline
  • /\d+-\d+/ matches two blocks of numeric digits separated by a dash

To unpack this, the . in the first example is a token meaning “any character except a newline” and \d in the second is a token meaning “any decimal digit” (careful, because \D means “any non-digit”, which is not the same as “any alphabetic character”!). The * modifies the preceding token to match “zero or more” occurrences and the + modifies it to match “one or more”.

Here’s a brief example program (a # character outside a string or pattern means that the remainder of the line is a comment):

use strict;             # strict syntax checking
for (<>) {              # for every line in the input stream
   chomp;               # remove the trailing newline
   if (/(\d+-\d+)/) {   # match two blocks of one or more digits separated by a dash "-", saving everything between "()" as $1
      print "$1\n";     # output the saved item ($1) followed by a newline
   }
}

Using the following input stream:

  1. garbage1324-255more garbage
  2. garbage 1324-255 more garbage
  3. garbage 1324- 255 more garbage
  4. garbage 1324-a255 more garbage

The first and second lines would both match and print 1324-255 (the surrounding text is ignored), but lines 3 and 4 would not, as there are other characters within the pattern: a space on line 3 and the letter “a” on line 4.
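
Going the other way, the same pattern idea can drive generation. The sketch below (using an invented format, not tied to any real system) prints a small batch of values matching the \d+-\d+ shape, ready to be fed in as test input:

use strict;                                   # strict syntax checking
use warnings;                                 # warn about dubious constructs

for my $i (1 .. 5) {                          # generate five test values
   my $value = sprintf "%04d-%03d", 1000 + $i, int(rand(1000));
   print "$value\n";                          # e.g. "1001-042"
}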

Have a play. Download and install PERL and the optional Komodo Edit (you can use any text editor) from http://activestate.com.