Chapter 4: Software Testing


Acronyms

  • API: Application Program Interface
  • TDD: Test-Driven Development
  • TTCN3: Testing and Test Control Notation Version 3
  • XP: Extreme Programming

Introduction

Software testing consists of the dynamic verification that a program provides expected behaviors on a finite set of test cases, suitably selected from the usually infinite execution domain. In this definition, the terms dynamic, finite, selected, and expected correspond to key issues in describing the Software Testing knowledge area (KA):

  • Dynamic: This term means that testing always implies executing the program on selected inputs. To be precise, the input value alone is not always sufficient to specify a test, since a complex, nondeterministic system might react to the same input with different behaviors, depending on the system state. In this KA, however, the term “input” will be maintained, with the implied convention that its meaning also includes a specified input state in those cases for which it is important. Static techniques are different from and complementary to dynamic testing. Static techniques are covered in the Software Quality KA. It is worth noting that terminology is not uniform among different communities and some use the term “testing” also in reference to static techniques.
  • Finite: Even in simple programs, so many test cases are theoretically possible that exhaustive testing could require months or years to execute. This is why, in practice, a complete set of tests can generally be considered infinite, and testing is conducted on a subset of all possible tests, which is determined by risk and prioritization criteria. Testing always implies a tradeoff between limited resources and schedules on the one hand and inherently unlimited test requirements on the other.
  • Selected: The many proposed test techniques differ essentially in how the test set is selected, and software engineers must be aware that different selection criteria may yield vastly different degrees of effectiveness. How to identify the most suitable selection criterion under given conditions is a complex problem; in practice, risk analysis techniques and software engineering expertise are applied.
  • Expected: It must be possible, although not always easy, to decide whether the observed outcomes of program testing are acceptable or not; otherwise, the testing effort is useless. The observed behavior may be checked against user needs (commonly referred to as testing for validation), against a specification (testing for verification), or, perhaps, against the anticipated behavior from implicit requirements or expectations (see Acceptance Tests in the Software Requirements KA).

In recent years, the view of software testing has matured into a constructive one. Testing is no longer seen as an activity that starts only after the coding phase is complete with the limited purpose of detecting failures. Software testing is, or should be, pervasive throughout the entire development and maintenance life cycle. Indeed, planning for software testing should start with the early stages of the software requirements process, and test plans and procedures should be systematically and continuously developed—and possibly refined—as software development proceeds. These test planning and test designing activities provide useful input for software designers and help to highlight potential weaknesses, such as design oversights/contradictions, or omissions/ambiguities in the documentation.

For many organizations, the approach to software quality is one of prevention: it is obviously much better to prevent problems than to correct them. Testing can be seen, then, as a means for providing information about the functionality and quality attributes of the software and also for identifying faults in those cases where error prevention has not been effective. It is perhaps obvious but worth recognizing that software can still contain faults, even after completion of an extensive testing activity. Software failures experienced after delivery are addressed by corrective maintenance. Software maintenance topics are covered in the Software Maintenance KA.

In the Software Quality KA (see Software Quality Management Techniques), software quality management techniques are notably categorized into static techniques (no code execution) and dynamic techniques (code execution). Both categories are useful. This KA focuses on dynamic techniques.

Software testing is also related to software construction (see Construction Testing in the Software Construction KA). In particular, unit and integration testing are intimately related to software construction, if not part of it.

Breakdown of Topics for Software Testing

The breakdown of topics for the Software Testing KA is shown in Figure 4.1. A more detailed breakdown is provided in the Matrix of Topics vs. Reference Material at the end of this KA.

The first topic describes Software Testing Fundamentals. It covers the basic definitions in the field of software testing, the basic terminology and key issues, and software testing’s relationship with other activities.

The second topic, Test Levels, consists of two (orthogonal) subtopics: the first subtopic lists the levels in which the testing of large software is traditionally subdivided, and the second subtopic considers testing for specific conditions or properties and is referred to as Objectives of Testing. Not all types of testing apply to every software product, nor has every possible type been listed.

The test target and test objective together determine how the test set is identified, both with regard to its consistency—how much testing is enough for achieving the stated objective—and to its composition—which test cases should be selected for achieving the stated objective (although usually “for achieving the stated objective” remains implicit and only the first part of each of the two questions above is posed). Criteria for addressing the first question are referred to as test adequacy criteria, while those addressing the second question are the test selection criteria.

Several Test Techniques have been developed in the past few decades, and new ones are still being proposed. Generally accepted techniques are covered in the third topic.

Test-Related Measures are dealt with in the fourth topic, while the issues relative to Test Process are covered in the fifth. Finally, Software Testing Tools are presented in topic six.

1 Software Testing Fundamentals

1.1 Testing-Related Terminology

1.1.1 Definitions of Testing and Related Terminology

Definitions of testing and testing-related terminology are provided in the cited references and summarized as follows.

1.1.2 Faults vs. Failures

Many terms are used in the software engineering literature to describe a malfunction: notably fault, failure, and error, among others. This terminology is precisely defined in [3, c2]. It is essential to clearly distinguish between the cause of a malfunction (for which the term fault will be used here) and an undesired effect observed in the system’s delivered service (which will be called a failure). Indeed there may well be faults in the software that never manifest themselves as failures (see Theoretical and Practical Limitations of Testing in section 1.2, Key Issues). Thus testing can reveal failures, but it is the faults that can and must be removed [3]. The more generic term defect can be used to refer to either a fault or a failure, when the distinction is not important [3].

However, it should be recognized that the cause of a failure cannot always be unequivocally identified. No theoretical criteria exist to definitively determine, in general, the fault that caused an observed failure. It might be said that it was the fault that had to be modified to remove the failure, but other modifications might have worked just as well. To avoid ambiguity, one could refer to failure-causing inputs instead of faults—that is, those sets of inputs that cause a failure to appear.
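
To make the distinction concrete, the following minimal Python sketch (an invented example, not taken from the cited references) plants a fault that is observed as a failure only for some inputs; the average() function and its inputs are purely illustrative.

  # Illustrative only: a fault that does not always manifest as a failure.
  def average(values):
      # Fault: floor division truncates the fractional part of the mean.
      return sum(values) // len(values)

  # No failure observed: the truncation happens to be harmless here.
  assert average([2, 4, 6]) == 4

  # Failure observed: the same fault now yields a visibly wrong result.
  print(average([1, 2]))  # prints 1, but the expected mean is 1.5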

1.2 Key Issues

1.2.1 Test Selection Criteria / Test Adequacy Criteria (Stopping Rules)

A test selection criterion is a means of selecting test cases or determining that a set of test cases is sufficient for a specified purpose. Test adequacy criteria can be used to decide when sufficient testing will be, or has been, accomplished [4] (see Termination in section 5.1, Practical Considerations).
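
As a minimal illustration, the sketch below (a hypothetical example, not from the cited references) treats “at least one test case per equivalence class of the input domain” as an adequacy criterion and uses it as a stopping rule; the classify() function and the three classes are assumptions.

  # Hypothetical adequacy criterion: every equivalence class is exercised
  # by at least one test case.
  def classify(n):
      """Partition the (assumed) input domain into equivalence classes."""
      if n < 0:
          return "negative"
      if n == 0:
          return "zero"
      return "positive"

  def is_adequate(test_cases):
      """Return True when every equivalence class is covered."""
      required = {"negative", "zero", "positive"}
      covered = {classify(tc) for tc in test_cases}
      return required <= covered

  print(is_adequate([5, 7]))        # False: keep testing
  print(is_adequate([-3, 0, 12]))   # True: the criterion is satisfied

A real adequacy criterion would typically be based on coverage of the specification or of the code rather than on such a hand-made partition.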

1.2.2 Test Effectiveness / Objectives for Testing

Testing effectiveness is determined by analyzing a set of program executions. Selection of tests to be executed can be guided by different objectives: it is only in light of the objective pursued that the effectiveness of the test set can be evaluated.

1.2.3 Testing for Defect Discovery

In testing for defect discovery, a successful test is one that causes the system to fail. This is quite different from testing to demonstrate that the software meets its specifications or other desired properties, in which case testing is successful if no failures are observed under realistic test cases and test environments.

1.2.4 The Oracle Problem

An oracle is any human or mechanical agent that decides whether a program behaved correctly in a given test and accordingly results in a verdict of “pass” or “fail.” There exist many different kinds of oracles; for example, unambiguous requirements specifications, behavioral models, and code annotations. Automating oracles can be difficult and expensive.
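
The sketch below illustrates two simple oracle styles in Python; fast_sqrt(), the tolerance, and the inputs are illustrative assumptions rather than anything prescribed here. One oracle compares against a trusted reference implementation, the other checks a defining property of the result (a partial oracle).

  import math

  def fast_sqrt(x):
      return x ** 0.5  # placeholder for the implementation under test

  def reference_oracle(x, observed, tol=1e-9):
      """Verdict obtained by comparing against a trusted reference."""
      return "pass" if abs(observed - math.sqrt(x)) <= tol else "fail"

  def property_oracle(x, observed, tol=1e-9):
      """Partial oracle: check a defining property, not the exact value."""
      return "pass" if abs(observed * observed - x) <= tol else "fail"

  for x in (0.0, 2.0, 1e6):
      y = fast_sqrt(x)
      print(x, reference_oracle(x, y), property_oracle(x, y))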

1.2.5 Theoretical and Practical Limitations of Testing

Testing theory warns against ascribing an unjustified level of confidence to a series of successful tests. Unfortunately, most established results of testing theory are negative ones, in that they state what testing can never achieve as opposed to what is actually achieved. The most famous quotation in this regard is the Dijkstra aphorism that “program testing can be used to show the presence of bugs, but never to show their absence” [5]. The obvious reason for this is that complete testing is not feasible in realistic software. Because of this, testing must be driven based on risk [6, part 1] and can be seen as a risk management strategy.

1.2.6 The Problem of Infeasible Paths

Infeasible paths are control flow paths that cannot be exercised by any input data. They are a significant problem in path-based testing, particularly in automated derivation of test inputs to exercise control flow paths.
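
The hand-made example below (illustrative only) shows why such paths arise: the path that enters both conditional bodies requires x > 10 and x < 5 at the same time, so no test input can exercise it.

  def classify(x):
      result = []
      if x > 10:
          result.append("large")
      if x < 5:
          result.append("small")
      # The path that appends both "large" and "small" is infeasible:
      # it would require x > 10 and x < 5 simultaneously.
      return result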

1.2.7 Testability

The term “software testability” has two related but different meanings: on the one hand, it refers to the ease with which a given test coverage criterion can be satisfied; on the other hand, it is defined as the likelihood, possibly measured statistically, that a set of test cases will expose a failure if the software is faulty. Both meanings are important.

1.3 Relationship of Testing to Other Activities

Software testing is related to, but different from, static software quality management techniques, proofs of correctness, debugging, and program construction. However, it is informative to consider testing from the point of view of software quality analysts and of certifiers.

  • Testing vs. Static Software Quality Management Techniques (see Software Quality Management Techniques in the Software Quality KA [1*, c12]).
  • Testing vs. Correctness Proofs and Formal Verification (see the Software Engineering Models and Methods KA [1*, c17s2]).
  • Testing vs. Debugging (see Construction Testing in the Software Construction KA and Debugging Tools and Techniques in the Computing Foundations KA [1*, c3s6]).
  • Testing vs. Program Construction (see Construction Testing in the Software Construction KA [1*, c3s2]).

2 Test Levels

Software testing is usually performed at different levels throughout the development and maintenance processes. Levels can be distinguished based on the object of testing, which is called the target, or on the purpose, which is called the objective (of the test level).

2.1 The Target of the Test

The target of the test can vary: a single module, a group of such modules (related by purpose, use, behavior, or structure), or an entire system. Three test stages can be distinguished: unit, integration, and system. These three test stages do not imply any process model, nor is any one of them assumed to be more important than the other two.

2.1.1 Unit Testing

Unit testing verifies the functioning in isolation of software elements that are separately testable. Depending on the context, these could be the individual subprograms or a larger component made of highly cohesive units. Typically, unit testing occurs with access to the code being tested and with the support of debugging tools. The programmers who wrote the code typically, but not always, conduct unit testing.
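
A minimal sketch of a unit test using Python's unittest module is shown below; leap_year() stands in for a separately testable software element and is purely illustrative.

  import unittest

  def leap_year(year):
      """Unit under test: Gregorian leap-year rule."""
      return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

  class LeapYearTest(unittest.TestCase):
      def test_typical_leap_year(self):
          self.assertTrue(leap_year(2024))

      def test_century_is_not_leap(self):
          self.assertFalse(leap_year(1900))

      def test_every_fourth_century_is_leap(self):
          self.assertTrue(leap_year(2000))

  if __name__ == "__main__":
      unittest.main()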

2.1.2 Integration Testing

Integration testing is the process of verifying the interactions among software components. Classical integration testing strategies, such as top-down and bottom-up, are often used with hierarchically structured software.

Modern, systematic integration strategies are typically architecture-driven, which involves incrementally integrating the software components or subsystems based on identified functional threads. Integration testing is often an ongoing activity at each stage of development during which software engineers abstract away lower-level perspectives and concentrate on the perspectives of the level at which they are integrating. For other than small, simple software, incremental integration testing strategies are usually preferred to putting all of the components together at once—which is often called “big bang” testing.
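
The sketch below illustrates one incremental (top-down) integration step in Python: a higher-level ReportService is first exercised against a stubbed repository and then against the real one. The component names are hypothetical, and unittest.mock is used only as one possible way to build the stub.

  import unittest
  from unittest import mock

  class OrderRepository:
      """Real lower-level component (illustrative)."""
      def totals(self):
          return [10.0, 20.0, 12.5]

  class ReportService:
      """Higher-level component under integration (illustrative)."""
      def __init__(self, repository):
          self.repository = repository

      def grand_total(self):
          return sum(self.repository.totals())

  class IntegrationTest(unittest.TestCase):
      def test_with_stubbed_repository(self):
          stub = mock.Mock()
          stub.totals.return_value = [1.0, 2.0]
          self.assertEqual(ReportService(stub).grand_total(), 3.0)

      def test_with_real_repository(self):
          self.assertEqual(ReportService(OrderRepository()).grand_total(), 42.5)

  if __name__ == "__main__":
      unittest.main()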

2.1.3 System Testing

System testing is concerned with testing the behavior of an entire system. Effective unit and integration testing will have identified many of the software defects. System testing is usually considered appropriate for assessing the nonfunctional system requirements—such as security, speed, accuracy, and reliability (see Functional and Non-Functional Requirements in the Software Requirements KA and Software Quality Requirements in the Software Quality KA). External interfaces to other applications, utilities, hardware devices, or the operating environments are also usually evaluated at this level.

2.2 Objectives of Testing

Testing is conducted in view of specific objectives, which are stated more or less explicitly and with varying degrees of precision. Stating the objectives of testing in precise, quantitative terms supports measurement and control of the test process.

Testing can be aimed at verifying different properties. Test cases can be designed to check that the functional specifications are correctly implemented, which is variously referred to in the literature as conformance testing, correctness testing, or functional testing. However, several other nonfunctional properties may be tested as well—including performance, reliability, and usability, among many others (see Models and Quality Characteristics in the Software Quality KA).

Other important objectives for testing include but are not limited to reliability measurement, identification of security vulnerabilities, usability evaluation, and software acceptance, for which different approaches would be taken. Note that, in general, the test objectives vary with the test target; different purposes are addressed at different levels of testing.

The subtopics listed below are those most often cited in the literature. Note that some kinds of testing are more appropriate for custom-made software packages—installation testing, for example—and others for consumer products, like beta testing.

2.2.1 Acceptance / Qualification Testing

Acceptance / qualification testing determines whether a system satisfies its acceptance criteria, usually by checking desired system behaviors against the customer’s requirements. The customer or a customer’s representative thus specifies or directly undertakes activities to check that their requirements have been met, or in the case of a consumer product, that the organization has satisfied the stated requirements for the target market. This testing activity may or may not involve the developers of the system.

2.2.2 Installation Testing

Often, after completion of system and acceptance testing, the software is verified upon installation in the target environment. Installation testing can be viewed as system testing conducted in the operational environment of hardware configurations and other operational constraints. Installation procedures may also be verified.

2.2.3 Alpha and Beta Testing

Before software is released, it is sometimes given to a small, selected group of potential users for trial use (alpha testing) and/or to a larger set of representative users (beta testing). These users report problems with the product. Alpha and beta testing are often uncontrolled and are not always referred to in a test plan.

2.2.4 Reliability Achievement and Evaluation

Testing improves reliability by identifying and correcting faults. In addition, statistical measures of reliability can be derived by randomly generating test cases according to the operational profile of the software (see Operational Profile in section 3.5, Usage-Based Techniques). The latter approach is called operational testing. Using reliability growth models, both objectives can be pursued together [3] (see Life Test, Reliability Evaluation in section 4.1, Evaluation of the Program under Test).
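
A minimal sketch of generating test cases according to an operational profile is shown below; the operation names and probabilities are invented for illustration.

  import random

  # Assumed operational profile: relative frequency of each operation.
  operational_profile = {"browse": 0.70, "search": 0.25, "checkout": 0.05}

  def generate_test_cases(n, seed=0):
      """Draw n operations at random, weighted by the profile."""
      rng = random.Random(seed)
      operations = list(operational_profile)
      weights = [operational_profile[op] for op in operations]
      return rng.choices(operations, weights=weights, k=n)

  print(generate_test_cases(10))

Failures observed while executing such a profile-driven test series are what feed the statistical reliability measures mentioned above.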

2.2.5 Regression Testing

According to [7], regression testing is the “selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements.” In practice, the approach is to show that software still passes previously passed tests in a test suite (in fact, it is also sometimes referred to as nonregression testing). For incremental development, the purpose of regression testing is to show that software behavior is unchanged by incremental changes to the software, except where change is intended. In some cases, a tradeoff must be made between the assurance given by regression testing every time a change is made and the resources required to perform the regression tests, which can be quite time consuming due to the large number of tests that may be executed. Regression testing involves selecting, minimizing, and/or prioritizing a subset of the test cases in an existing test suite [8]. Regression testing can be conducted at each of the test levels described in section 2.1, The Target of the Test, and may apply to functional and nonfunctional testing.
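
As a minimal illustration of test selection, the sketch below reruns only the test cases whose recorded coverage intersects the changed modules; the coverage map, module names, and test names are all hypothetical.

  # Hypothetical mapping from test case to the modules it covers.
  coverage_map = {
      "test_login":    {"auth.py", "session.py"},
      "test_checkout": {"cart.py", "payment.py"},
      "test_profile":  {"auth.py", "profile.py"},
  }

  def select_regression_tests(changed_modules):
      """Select only the tests whose coverage touches a changed module."""
      return sorted(test for test, covered in coverage_map.items()
                    if covered & changed_modules)

  print(select_regression_tests({"auth.py"}))
  # ['test_login', 'test_profile'] -- the checkout test can be skipped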

2.2.6 Performance Testing

Performance testing verifies that the software meets the specified performance requirements and assesses performance characteristics—for instance, capacity and response time.
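
A minimal sketch of checking a response-time characteristic is shown below; the operation under test, the number of runs, and the 50 ms requirement on the 95th percentile are illustrative assumptions.

  import statistics
  import time

  def operation_under_test():
      sum(range(10_000))          # placeholder for the real operation

  def measure_response_times(runs=100):
      """Collect response-time samples in milliseconds."""
      samples = []
      for _ in range(runs):
          start = time.perf_counter()
          operation_under_test()
          samples.append((time.perf_counter() - start) * 1000.0)
      return samples

  samples = measure_response_times()
  p95 = statistics.quantiles(samples, n=20)[18]   # 95th percentile
  print(f"95th percentile: {p95:.2f} ms, requirement: <= 50 ms, "
        f"{'pass' if p95 <= 50 else 'fail'}")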

2.2.7 Security Testing

Security testing focuses on verifying that the software is protected from external attacks. In particular, security testing verifies the confidentiality, integrity, and availability of the system and its data. Usually, security testing includes verification against misuse and abuse of the software or system (negative testing).

2.2.8 Stress Testing

Stress testing exercises software at the maximum design load, as well as beyond it, with the goals of determining the behavioral limits and of testing defense mechanisms in critical systems.

2.2.9 Back-to-Back Testing

IEEE/ISO/IEC Standard 24765 defines back-to-back testing as “testing in which two or more variants of a program are executed with the same inputs, the outputs are compared, and errors are analyzed in case of discrepancies.”
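
The sketch below illustrates the idea: two hypothetical variants of a median computation are executed with the same inputs, and any discrepancies are reported for analysis.

  def variant_a(values):
      """Variant A: upper median of the sorted values."""
      return sorted(values)[len(values) // 2]

  def variant_b(values):
      """Variant B: averages the two middle values for even-length input."""
      s = sorted(values)
      mid = len(s) // 2
      return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

  inputs = [[1, 2, 3], [4, 1, 7, 2], [5], [9, 9, 1, 1]]
  for data in inputs:
      a, b = variant_a(data), variant_b(data)
      if a != b:
          print(f"discrepancy for {data}: variant_a={a}, variant_b={b}")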

2.2.10 Recovery Testing

Recovery testing is aimed at verifying software restart capabilities after a system crash or other “disaster.”

2.2.11 Interface Testing

Interface defects are common in complex systems. Interface testing aims at verifying whether the components interface correctly to provide the correct exchange of data and control information. Usually the test cases are generated from the interface specification. A specific objective of interface testing is to simulate the use of APIs by end-user applications. This involves the generation of parameters of the API calls, the setting of external environment conditions, and the definition of internal data that affect the API.
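
A minimal sketch of exercising an API from the perspective of a calling application is shown below; transfer() and its parameter values are hypothetical and merely illustrate generating call parameters from an assumed interface specification.

  import itertools

  def transfer(amount, currency):
      """Hypothetical API under test."""
      if currency not in ("EUR", "USD"):
          raise ValueError("unsupported currency")
      if amount <= 0:
          raise ValueError("amount must be positive")
      return {"status": "ok", "amount": amount, "currency": currency}

  amounts = [0, 1, 10_000]               # boundary-oriented parameter values
  currencies = ["EUR", "USD", "XXX"]     # valid and invalid per the spec

  for amount, currency in itertools.product(amounts, currencies):
      try:
          response = transfer(amount, currency)
          outcome = response["status"]
      except ValueError as exc:
          outcome = f"rejected ({exc})"
      print(f"transfer({amount!r}, {currency!r}) -> {outcome}")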

2.2.12 Configuration Testing

In cases where software is built to serve different users, configuration testing verifies the software under different specified configurations.
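
The sketch below illustrates verifying the same behavior under two specified configurations using unittest subtests; the configuration entries and format_price() are illustrative assumptions.

  import unittest

  CONFIGURATIONS = [
      {"locale": "en_US", "currency_symbol": "$", "decimal_sep": "."},
      {"locale": "de_DE", "currency_symbol": "€", "decimal_sep": ","},
  ]

  def format_price(value, config):
      """Format a price according to the given configuration."""
      text = f"{value:.2f}".replace(".", config["decimal_sep"])
      return f"{config['currency_symbol']}{text}"

  class ConfigurationTest(unittest.TestCase):
      def test_price_formatting_under_each_configuration(self):
          expected = {"en_US": "$9.99", "de_DE": "€9,99"}
          for config in CONFIGURATIONS:
              with self.subTest(locale=config["locale"]):
                  self.assertEqual(format_price(9.99, config),
                                   expected[config["locale"]])

  if __name__ == "__main__":
      unittest.main()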

2.2.13 Usability and Human Computer Interaction Testing

The main task of usability and human computer interaction testing is to evaluate how easy it is for end users to learn and to use the software. In general, it may involve testing the software functions that support user tasks, documentation that aids users, and the ability of the system to recover from user errors (see User Interface Design in the Software Design KA).

3 Test Techniques

3.1 Based on the Software Engineer's Intuition and Experience

3.1.1 Ad Hoc

3.1.2 Exploratory Testing

3.2 Input Domain-Based Techniques

3.2.1 Equivalence Partitioning

3.2.2 Pairwise Testing

3.2.3 Boundary-Value Analysis

3.2.4 Random Testing

3.3 Code-Based Techniques

3.3.1 Control Flow-Based Criteria

3.3.2 Data Flow-Based Criteria

3.3.3 Reference Models for Code-Based Testing

3.4 Fault-Based Techniques

3.4.1 Error Guessing

3.4.2 Mutation Testing

3.5 Usage-Based Techniques

3.5.1 Operational Profile

3.5.2 User Observation Heuristics

3.6 Model-Based Testing Techniques

3.6.1 Decision Tables

3.6.2 Finite-State Machines

3.6.3 Formal Specifications

3.6.4 Workflow Models

3.7 Techniques Based on the Nature of the Application