Chapter 4: Software Testing

Revision as of 01:27, 24 August 2015 by Daniel Robbins (Talk | contribs) (Software Testing Fundamentals)


Software testing consists of the dynamic verification that a program provides expected behaviors on a finite set of test cases, suitably selected from the usually infinite execution domain. In this definition, the emphasized terms (dynamic, finite, selected, and expected) correspond to key issues in describing the Software Testing knowledge area (KA):

  • Dynamic: This term means that testing always implies executing the program on selected inputs. To be precise, the input value alone is not always sufficient to specify a test, since a complex, nondeterministic system might react to the same input with different behaviors, depending on the system state. In this KA, however, the term “input” will be maintained, with the implied convention that its meaning also includes a specified input state in those cases for which it is important. Static techniques are different from and complementary to dynamic testing. Static techniques are covered in the Software Quality KA. It is worth noting that terminology is not uniform among different communities and some use the term “testing” also in reference to static techniques.
  • Finite: Even in simple programs, so many test cases are theoretically possible that exhaustive testing could require months or years to execute. This is why, in practice, a complete set of tests can generally be considered infinite, and testing is conducted on a subset of all possible tests, which is determined by risk and prioritization criteria. Testing always implies a tradeoff between limited resources and schedules on the one hand and inherently unlimited test requirements on the other.
  • Selected: The many proposed test techniques differ essentially in how the test set is selected, and software engineers must be aware that different selection criteria may yield vastly different degrees of effectiveness. How to identify the most suitable selection criterion under given conditions is a complex problem; in practice, risk analysis techniques and software engineering expertise are applied.
  • Expected: It must be possible, although not always easy, to decide whether the observed outcomes of program testing are acceptable or not; otherwise, the testing effort is useless. The observed behavior may be checked against user needs (commonly referred to as testing for validation), against a specification (testing for verification), or, perhaps, against the anticipated behavior from implicit requirements or expectations (see Acceptance Tests in the Software Requirements KA).
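The four key issues above can be made concrete with a small sketch. The function under test below is hypothetical (not part of the KA text), but it illustrates two of the points: the same input may require a specified state to determine the expected behavior (the "dynamic" issue), and each test must carry an expected outcome against which the observed behavior is checked (the "expected" issue).

```python
# A minimal sketch of dynamic testing, using a hypothetical function.
# Note that the price list alone does not determine the outcome: the
# "state" (membership) must be specified as part of the test case.

def discounted_total(prices, member):
    """Return the cart total; members get 10% off orders over 100."""
    total = sum(prices)
    if member and total > 100:
        total *= 0.9
    return round(total, 2)

# Each test case pairs an input (plus state) with an expected outcome,
# so the observed behavior can be judged acceptable or not.
assert discounted_total([60, 60], member=True) == 108.0   # discount applies
assert discounted_total([60, 60], member=False) == 120    # full price
assert discounted_total([10], member=True) == 10          # below threshold
```

These three cases are, of course, only a tiny selection from the effectively infinite input domain, which is precisely the "finite" and "selected" issues raised above.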

In recent years, the view of software testing has matured into a constructive one. Testing is no longer seen as an activity that starts only after the coding phase is complete with the limited purpose of detecting failures. Software testing is, or should be, pervasive throughout the entire development and maintenance life cycle. Indeed, planning for software testing should start with the early stages of the software requirements process, and test plans and procedures should be systematically and continuously developed—and possibly refined—as software development proceeds. These test planning and test designing activities provide useful input for software designers and help to highlight potential weaknesses, such as design oversights/contradictions, or omissions/ambiguities in the documentation.

For many organizations, the approach to software quality is one of prevention: it is obviously much better to prevent problems than to correct them. Testing can be seen, then, as a means for providing information about the functionality and quality attributes of the software and also for identifying faults in those cases where error prevention has not been effective. It is perhaps obvious but worth recognizing that software can still contain faults, even after completion of an extensive testing activity. Software failures experienced after delivery are addressed by corrective maintenance. Software maintenance topics are covered in the Software Maintenance KA.

In the Software Quality KA (see Software Quality Management Techniques), software quality management techniques are notably categorized into static techniques (no code execution) and dynamic techniques (code execution). Both categories are useful. This KA focuses on dynamic techniques.

Software testing is also related to software construction (see Construction Testing in the Software Construction KA). In particular, unit and integration testing are intimately related to software construction, if not part of it.

Breakdown of Topics for Software Testing

The breakdown of topics for the Software Testing KA is shown in Figure 4.1. A more detailed breakdown is provided in the Matrix of Topics vs. Reference Material at the end of this KA.

The first topic describes Software Testing Fundamentals. It covers the basic definitions in the field of software testing, the basic terminology and key issues, and software testing’s relationship with other activities.

The second topic, Test Levels, consists of two (orthogonal) subtopics: the first lists the levels into which the testing of large software systems is traditionally subdivided, and the second considers testing for specific conditions or properties, referred to as Objectives of Testing. Not all types of testing apply to every software product, nor has every possible type been listed.

The test target and test objective together determine how the test set is identified, with regard both to its consistency (how much testing is enough to achieve the stated objective?) and to its composition (which test cases should be selected to achieve the stated objective?), although usually "to achieve the stated objective" remains implicit and only the first part of each question is posed. Criteria for addressing the first question are referred to as test adequacy criteria, while those addressing the second question are test selection criteria.
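The distinction between the two kinds of criteria can be sketched as follows. The function and its equivalence classes are illustrative assumptions, not part of the KA text: a selection criterion dictates which test cases to pick (here, one representative per equivalence class), while an adequacy criterion acts as a stopping rule (here, every class has been exercised at least once).

```python
# A sketch of selection vs. adequacy criteria, assuming a hypothetical
# function whose input domain splits into three equivalence classes.

def classify_age(age):
    """Hypothetical function under test: map an age to a category."""
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

# Selection criterion: choose one input per equivalence class.
partitions = {"invalid": -5, "minor": 10, "adult": 40}
selected_tests = [(inp, expected) for expected, inp in partitions.items()]

# Execute the tests, recording which classes were exercised.
covered = set()
for inp, expected in selected_tests:
    assert classify_age(inp) == expected
    covered.add(expected)

# Adequacy criterion (stopping rule): testing is "enough" once
# every equivalence class has been covered.
assert covered == set(partitions), "test set not yet adequate"
```

Coverage-based criteria of this kind (partition coverage here; statement or branch coverage in code-based techniques) are the most common way of making "how much testing is enough" operational.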

Several Test Techniques have been developed in the past few decades, and new ones are still being proposed. Generally accepted techniques are covered in the third topic.

Test-Related Measures are dealt with in the fourth topic, while issues related to the Test Process are covered in the fifth. Finally, Software Testing Tools are presented in the sixth topic.

1 Software Testing Fundamentals

1.1 Testing-Related Terminology

1.1.1 Definitions of Testing and Related Terminology

1.1.2 Faults vs. Failures

1.2 Key Issues

1.2.1 Test Selection Criteria / Test Adequacy Criteria (Stopping Rules)

1.2.2 Test Effectiveness / Objectives for Testing

1.2.3 Testing for Defect Discovery

1.2.4 The Oracle Problem

1.2.5 Theoretical and Practical Limitations of Testing

1.2.6 The Problem of Infeasible Paths

1.2.7 Testability

1.3 Relationship of Testing to Other Activities

2 Test Levels

2.1 The Target of the Test

2.1.1 Unit Testing

2.1.2 Integration Testing

2.1.3 System Testing

2.2 Objectives of Testing

2.2.1 Acceptance / Qualification Testing

2.2.2 Installation Testing

2.2.3 Alpha and Beta Testing

2.2.4 Reliability Achievement and Evaluation

2.2.5 Regression Testing

2.2.6 Performance Testing

2.2.7 Security Testing

2.2.8 Stress Testing

2.2.9 Back-to-Back Testing

2.2.10 Recovery Testing

2.2.11 Interface Testing

2.2.12 Configuration Testing

2.2.13 Usability and Human Computer Interaction Testing

3 Test Techniques

3.1 Based on the Software Engineer's Intuition and Experience

3.1.1 Ad Hoc

3.1.2 Exploratory Testing

3.2 Input Domain-Based Techniques

3.2.1 Equivalence Partitioning

3.2.2 Pairwise Testing

3.2.3 Boundary-Value Analysis

3.2.4 Random Testing

3.3 Code-Based Techniques