I Fundamentals of Test and Analysis
1 Software Test and Analysis in a Nutshell
Before considering individual aspects and techniques of software analysis and testing, it is useful to view the “big picture” of software quality in the context of a software development project and organization. The objective of this chapter is to introduce the range of software verification and validation (V&V) activities and a rationale for selecting and combining them within a software development process.
2 A Framework for Test and Analysis
There are no perfect test or analysis techniques, nor a single “best” technique for all circumstances. Rather, techniques exist in a complex space of trade-offs, and often have complementary strengths and weaknesses. This chapter describes the nature of those trade-offs and some of their consequences, and thereby a conceptual framework for understanding and better integrating material from later chapters on individual techniques.
3 Basic Principles
Mature engineering disciplines are characterized by basic principles. Principles provide a rationale for defining, selecting, and applying techniques and methods. They are valid beyond a single technique and over a time span in which techniques come and go, and can help engineers study, define, evaluate, and apply new techniques. This chapter advocates six principles that characterize various approaches and techniques for analysis and testing: sensitivity, redundancy, restriction, partition, visibility, and feedback. Some of these principles, such as partition, visibility, and feedback, are quite general in engineering. Others, notably sensitivity, redundancy, and restriction, are specific to A&T and contribute to characterizing A&T as a discipline.
4 Test and Analysis Activities Within a Software Process
Testing and analysis activities occur throughout the development and evolution of software systems, from early in requirements engineering through delivery and subsequent evolution. Quality depends on every part of the software process, not only on software analysis and testing; no amount of testing and analysis can make up for poor quality arising from other activities. On the other hand, an essential feature of software processes that produce high-quality products is that software test and analysis is thoroughly integrated and not an afterthought.
II Basic Techniques
5 Finite Models
This chapter presents some basic concepts in models of software and some families of models that are used in a wide variety of testing and analysis techniques. Several of the analysis and testing techniques described in subsequent chapters use and specialize these basic models. A grasp of the fundamental concepts and trade-offs in the design of models is necessary for a full understanding of those test and analysis techniques, and is a foundation for devising new techniques and models to solve domain-specific problems.
6 Dependence and Data Flow Models
The control flow graph and state machine models introduced in the previous chapter capture one aspect of the dependencies among parts of a program. They explicitly represent control flow but deemphasize transmission of information through program variables. Data flow models provide a complementary view, emphasizing and making explicit relations involving transmission of information. Models of data flow and dependence in software have many applications in software engineering, from testing to refactoring to reverse engineering. In test and analysis, applications range from selecting test cases based on dependence information to detecting anomalous patterns that indicate probable programming errors, such as uses of potentially uninitialized values. Moreover, the basic algorithms used to construct data flow models have even wider application and are of particular interest because they can often be quite efficient in time and space.
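One of the basic data flow algorithms the chapter alludes to can be sketched concretely. The following is a minimal illustration (the CFG encoding and node numbering are our own, not the book's) of an iterative reaching-definitions analysis, used here to flag the kind of anomaly mentioned above: a use of a potentially uninitialized value.

```python
def reaching_definitions(succ, defs, entry):
    """For each node, compute the set of (var, def_node) pairs that may
    reach it. The pseudo-definition (v, None) at entry models
    'v not yet initialized'."""
    nodes = set(succ)
    all_vars = {v for ds in defs.values() for v in ds}
    IN = {n: set() for n in nodes}
    OUT = {n: set() for n in nodes}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for n in nodes:
            # IN[n] = union of OUT[p] over predecessors p (plus pseudo-defs at entry)
            inset = {(v, None) for v in all_vars} if n == entry else set()
            for p in nodes:
                if n in succ[p]:
                    inset |= OUT[p]
            # kill definitions of variables redefined at n, generate (v, n)
            outset = {(v, d) for (v, d) in inset if v not in defs[n]}
            outset |= {(v, n) for v in defs[n]}
            if inset != IN[n] or outset != OUT[n]:
                IN[n], OUT[n] = inset, outset
                changed = True
    return IN

# CFG for:  1: if cond:   2: x = 1
#           3: print(x)   <- x is uninitialized along the edge skipping node 2
succ = {1: [2, 3], 2: [3], 3: []}
defs = {1: set(), 2: {"x"}, 3: set()}
uses = {3: {"x"}}
IN = reaching_definitions(succ, defs, 1)
warnings = [(n, v) for n, us in uses.items()
            for v in us if (v, None) in IN[n]]
print(warnings)  # the use of x at node 3 may see an uninitialized value
```

The fixed-point iteration terminates because the sets only grow and are bounded, which is one reason such analyses can be efficient in practice.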
7 Symbolic Execution and Proof of Properties
Symbolic execution builds predicates that characterize the conditions under which execution paths can be taken and the effect of the execution on program state. Extracting predicates through symbolic execution is the essential bridge from the complexity of program behavior to the simpler and more orderly world of logic. It finds important applications in program analysis, in generating test data, and in formal verification (proofs) of program correctness. It is fundamental to generating test data to execute particular parts and paths in a program.
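The bridge from paths to predicates to test data can be illustrated in miniature. In this sketch (the function and its path conditions are our own hand-derived example; a real symbolic executor would derive the predicates automatically and discharge them with a constraint solver), each execution path of a tiny program is paired with its path condition, and a brute-force "solver" finds a concrete input driving execution down that path.

```python
def paths_of_abs_then_halve():
    """Hand-derived path conditions of:
        if x < 0: x = -x
        if x % 2 == 0: return x // 2
        else: return x
    Each entry: (readable condition, predicate on a concrete input)."""
    return [
        ("x < 0 and -x % 2 == 0", lambda x: x < 0 and (-x) % 2 == 0),
        ("x < 0 and -x % 2 != 0", lambda x: x < 0 and (-x) % 2 != 0),
        ("x >= 0 and x % 2 == 0", lambda x: x >= 0 and x % 2 == 0),
        ("x >= 0 and x % 2 != 0", lambda x: x >= 0 and x % 2 != 0),
    ]

def find_test_data(pred, domain=range(-10, 11)):
    """Stand-in for a constraint solver: search a small domain for an
    input satisfying the path condition."""
    return next((x for x in domain if pred(x)), None)

for cond, pred in paths_of_abs_then_halve():
    print(cond, "->", find_test_data(pred))
```

Each satisfying input is, by construction, a test datum that exercises exactly that path, which is the sense in which symbolic execution is fundamental to targeted test data generation.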
8 Finite State Verification
Finite state verification techniques are intermediate in power and cost between construction of simple control and data flow models, on the one hand, and reasoning with the full strength of symbolic execution and theorem proving on the other. They automatically explore finite but potentially very large representations of program behavior to address important properties. They are particularly useful for checking properties for which testing is inadequate. For example, synchronization faults in multi-threaded programs may trigger failures very rarely, or under conditions that are nearly impossible to re-create in testing, but finite state verification techniques can detect them by exhaustively considering all possible interleavings.
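The exhaustive-exploration idea can be shown on a deliberately tiny model (our own toy, not a tool from the chapter): two "threads" each perform a non-atomic load/increment/store on a shared counter, and we enumerate every interleaving, checking the property that the final count is 2. Testing rarely hits the bad schedules; exhaustive exploration finds them deterministically.

```python
from itertools import permutations

def run(schedule):
    """Execute one interleaving. Each thread has two steps:
    step 0 loads the shared counter, step 1 stores loaded+1."""
    shared = 0
    local = {0: None, 1: None}
    pc = {0: 0, 1: 0}
    for t in schedule:
        if pc[t] == 0:
            local[t] = shared          # load
        else:
            shared = local[t] + 1      # store incremented value
        pc[t] += 1
    return shared

# All distinct interleavings of thread ids [0,0,1,1] (two steps per thread)
schedules = set(permutations([0, 0, 1, 1]))
violations = [s for s in sorted(schedules) if run(s) != 2]
print(len(schedules), "interleavings,", len(violations), "lose an update")
```

Of the six interleavings, four exhibit the classic lost-update fault, yet a test harness scheduling the threads sequentially would see only the two correct ones.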
III Problems and Methods
9 Test Case Selection and Adequacy
A key problem in software testing is selecting and evaluating test cases. This chapter introduces basic approaches to test case selection and corresponding adequacy criteria. It serves as a general introduction to the problem and provides a conceptual framework for functional and structural approaches described in subsequent chapters.
10 Functional Testing
Functional testing, or more precisely, functional test case design, attempts to answer the question “What test cases shall I use to exercise my program?” considering only the specification of a program and not its design or implementation structure. Being based on program specifications and not on the internals of the code, functional testing is also called specification-based or black-box testing. Functional testing is typically the base-line technique for designing test cases.
11 Combinatorial Testing
Combinatorial approaches to functional testing consist of a manual step of structuring the specification statement into a set of properties or attributes that can be systematically varied, and an automatable step of producing combinations of choices. They identify the variability of elements involved in the execution of a given functionality and select representative combinations of relevant values for test cases. Repetitive activities such as the combination of different values can be easily automated, allowing test designers to focus on more creative and difficult activities.
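The automatable step can be sketched with a greedy pairwise generator (illustrative only; the attribute names are invented, and production covering-array tools are far more sophisticated). Given attributes and their value choices, it adds test cases until every pair of values from two different attributes appears in at least one test.

```python
from itertools import combinations, product

def uncovered_pairs(attrs, tests):
    """All attribute-value pairs not yet covered by the test suite."""
    keys = list(attrs)
    pairs = set()
    for a, b in combinations(keys, 2):
        for va, vb in product(attrs[a], attrs[b]):
            pairs.add(((a, va), (b, vb)))
    for t in tests:
        for a, b in combinations(keys, 2):
            pairs.discard(((a, t[a]), (b, t[b])))
    return pairs

def pairwise(attrs):
    """Greedily pick the complete combination covering the most
    still-uncovered pairs, until none remain."""
    keys = list(attrs)
    tests = []
    remaining = uncovered_pairs(attrs, tests)
    while remaining:
        best = max(product(*attrs.values()),
                   key=lambda vals: sum(
                       ((keys[i], vals[i]), (keys[j], vals[j])) in remaining
                       for i, j in combinations(range(len(keys)), 2)))
        tests.append(dict(zip(keys, best)))
        remaining = uncovered_pairs(attrs, tests)
    return tests

attrs = {"os": ["linux", "mac"],
         "browser": ["ff", "ch", "sa"],
         "net": ["wifi", "lan"]}
suite = pairwise(attrs)
print(len(suite), "tests cover all 16 value pairs")
```

Exhaustive combination would need 2 × 3 × 2 = 12 tests; pairwise coverage typically needs far fewer, which is the practical payoff of selecting representative combinations rather than all of them.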
12 Structural Testing
The structure of the software itself is a valuable source of information for selecting test cases and determining whether a set of test cases has been sufficiently thorough. We can ask whether a test suite has “covered” a control flow graph or other model of the program. Structural information should not be used as the primary answer to the question, “How shall I choose tests?” but it is useful in combination with other test selection criteria (particularly functional testing) to help answer the question “What additional test cases are needed to reveal faults that may not become apparent through black-box testing alone?”
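The “what is still uncovered?” question can be made concrete with a hand-instrumented sketch (real tools instrument automatically; the branch identifiers here are our own). Branch outcomes exercised by a suite are recorded, and the uncovered outcomes point directly at missing test cases.

```python
covered = set()

def branch(bid, outcome):
    """Record which outcome of branch `bid` this execution takes."""
    covered.add((bid, outcome))
    return outcome

def classify(x):
    if branch("b1", x < 0):
        return "negative"
    if branch("b2", x == 0):
        return "zero"
    return "positive"

suite = [5, 7]    # a plausible functional suite that misses two outcomes
for x in suite:
    classify(x)

all_outcomes = {(b, o) for b in ("b1", "b2") for o in (True, False)}
missing = sorted(all_outcomes - covered)
print("uncovered branch outcomes:", missing)
# The report suggests adding tests such as x = -1 and x = 0.
```

Note how the coverage report does not choose tests by itself; it only exposes which structural elements the functionally chosen suite failed to reach.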
13 Data Flow Testing
Exercising every statement or branch with test cases is a practical goal, but exercising every path is impossible. Even the number of simple (that is, loop-free) paths can be exponential in the size of the program. Path-oriented selection and adequacy criteria must therefore select a tiny fraction of control flow paths. Data flow test adequacy criteria improve over pure control flow criteria by selecting paths based on how one syntactic element can affect the computation of another.
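A toy illustration of the underlying criterion (the program encoding and node numbering are our own): enumerate the def-use pairs of a variable, then check whether given execution paths cover each pair via a definition-clear subpath, which is the core of the all-uses adequacy criterion.

```python
def du_pairs(defs, uses, var):
    """All (definition node, use node) pairs for `var`."""
    return [(d, u) for d in defs.get(var, []) for u in uses.get(var, [])]

def node_defs(node, defs):
    return {v for v, nodes in defs.items() if node in nodes}

def covers(path, d, u, var, defs):
    """True if `path` reaches use u from definition d with no
    intervening redefinition of `var` (a definition-clear subpath)."""
    for i, n in enumerate(path):
        if n == d:
            for m in path[i + 1:]:
                if m == u:
                    return True
                if var in node_defs(m, defs):
                    break  # redefined before reaching the use
    return False

# 1: x = input()   2: if x > 0:   3: x = -x   4: print(x)
defs = {"x": [1, 3]}
uses = {"x": [2, 3, 4]}
pairs = du_pairs(defs, uses, "x")
path_true = [1, 2, 3, 4]   # takes the branch
path_false = [1, 2, 4]     # skips it; needed to cover the pair (1, 4)
suite_paths = [path_true, path_false]
covered = {(d, u) for d, u in pairs
           if any(covers(p, d, u, "x", defs) for p in suite_paths)}
print(sorted(covered))
```

Two paths suffice here: the pair (1, 4) is only covered by the path that skips the redefinition at node 3, which is exactly the kind of distinction pure branch coverage does not force.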
14 Model-Based Testing
Models are often used to express requirements, and embed both structure and fault information that can help generate test case specifications. Control flow and data flow testing are based on models extracted from program code. Models can also be extracted from specifications and design, allowing us to make use of additional information about intended behavior. Model-based testing consists in using or deriving models of expected behavior to produce test case specifications that can reveal discrepancies between actual program behavior and the model.
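As a small sketch of deriving tests from a model (the vending-machine states and events are a hypothetical example of ours), the following generates one test sequence per transition of a finite state model, so that every transition is exercised at least once.

```python
from collections import deque

# Model: (state, event) -> next state
model = {
    ("idle", "coin"): "paid",
    ("paid", "coin"): "paid",
    ("paid", "dispense"): "idle",
}

def transition_tours(model, start="idle"):
    """One test sequence per transition: a shortest event path from the
    start state to the transition's source state, then the event itself
    (transition coverage)."""
    # BFS over the model for shortest event paths to each state
    dist = {start: []}
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        for (src, ev), dst in model.items():
            if src == s and dst not in dist:
                dist[dst] = dist[s] + [ev]
                frontier.append(dst)
    return [dist[src] + [ev] for (src, ev) in model]

for seq in transition_tours(model):
    print(seq)
```

Each sequence is a test case specification: running it against the implementation and comparing the observed state or outputs with the model's prediction reveals discrepancies between the two.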
15 Testing Object-Oriented Software
Systematic testing of object-oriented software is fundamentally similar to systematic testing approaches for procedural software: We begin with functional tests based on specification of intended behavior, add selected structural test cases based on the software structure, and work from unit testing and small-scale integration testing toward larger integration and then system testing. Nonetheless, the differences between procedural software and object-oriented software are sufficient to make specialized techniques appropriate. For example, methods in object-oriented software are typically shorter than procedures in other software, so faults in complex intraprocedural logic and control flow occur less often and merit less attention in testing. On the other hand, short methods together with encapsulation of object state suggest greater attention to interactions among method calls, while polymorphism, dynamic binding, generics, and increased use of exception handling introduce new classes of fault that require attention.
16 Fault-Based Testing
A model of potential program faults is a valuable source of information for evaluating and designing test suites. Some fault knowledge is commonly used in functional and structural testing, for example when identifying singleton and error values for parameter characteristics in category-partition testing or when populating catalogs with erroneous values, but a fault model can also be used more directly. Fault-based testing uses a fault model directly to hypothesize potential faults in a program under test, as well as to create or evaluate test suites based on their efficacy in detecting those hypothetical faults.
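A mutation-style sketch makes the idea concrete (the operator substitution is done by hand here, and the pricing function is our own example; mutation tools rewrite the program text automatically). Simple faults are hypothesized as alternative comparison operators, and a suite is scored by how many such mutants it "kills".

```python
import operator

def price(total, threshold, rate, cmp=operator.gt):
    """Discount applies when total compares to threshold (correct op: >)."""
    return total * (1 - rate) if cmp(total, threshold) else total

# Hypothesized faults: plausible slips in the comparison operator
mutants = {"ge": operator.ge, "lt": operator.lt, "eq": operator.eq}

def killed(suite, mutant_cmp):
    """A mutant is killed if some test distinguishes it from the original."""
    return any(price(t, 100, 0.1) != price(t, 100, 0.1, mutant_cmp)
               for t in suite)

weak_suite = [50, 150]           # never exercises total == threshold
strong_suite = [50, 100, 150]    # adds the boundary case

for name, suite in [("weak", weak_suite), ("strong", strong_suite)]:
    score = sum(killed(suite, m) for m in mutants.values())
    print(name, "suite kills", score, "of", len(mutants), "mutants")
```

The weak suite misses the `>` versus `>=` mutant precisely because it never tests the boundary value, showing how a live mutant points at a concrete gap in the suite.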
17 Test Execution
Whereas test design, even when supported by tools, requires insight and ingenuity in similar measure to other facets of software design, test execution must be sufficiently automated for frequent reexecution with little human involvement. This chapter describes approaches for creating the run-time support for generating and managing test data, creating scaffolding for test execution, and automatically distinguishing between correct and incorrect test case executions.
18 Inspection
Software inspections are manual, collaborative reviews that can be applied to any software artifact from requirements documents to source code to test plans. Inspection complements testing by helping check many properties that are hard or impossible to verify dynamically. This flexibility makes inspection particularly valuable when other, more automated analyses are not applicable.
19 Program Analysis
A number of automated analyses can be applied to software specifications and program source code. None of them are capable of showing that the specifications and the code are functionally correct, but they can cost-effectively reveal some common defects, as well as produce auxiliary information useful in inspections and testing.
IV Process
20 Planning and Monitoring the Process
Any complex process requires planning and monitoring. The quality process requires coordination of many different activities over a period that spans a full development cycle and beyond. Planning is necessary to order, provision, and coordinate all the activities that support a quality goal, and monitoring of actual status against a plan is required to steer and adjust the process.
21 Integration and Component-based Software Testing
Problems arise in integration even of well-designed modules and components. Integration testing aims to uncover interaction and compatibility problems as early as possible. This chapter presents integration testing strategies, including the increasingly important problem of testing integration with commercial off-the-shelf (COTS) components, libraries, and frameworks.
22 System, Acceptance, and Regression Testing
The essential characteristics of system testing are that it is comprehensive, based on a specification of observable behavior, and independent of design and implementation decisions. Independence in system testing avoids repeating software design errors in test design. Acceptance testing abandons specifications in favor of users, and measures how well the final system meets users' expectations. Regression testing checks for faults introduced during evolution.
23 Automating Analysis and Test
Automation can improve the efficiency of some quality activities and is a necessity for implementing others. While a greater degree of automation can never substitute for a rational, well-organized quality process, considerations of what can and should be automated play an important part in devising and incrementally improving a process that makes the best use of human resources. This chapter discusses some of the ways that automation can be employed, as well as its costs and limitations, and the maturity of the required technology. The focus is not on choosing one particular set of “best” tools for all times and situations, but on a continuing rational process of identifying and deploying automation to best effect as the organization, process, and available technology evolve.
24 Documenting Analysis and Test
Mature software processes include documentation standards for all the activities of the software process, including test and analysis activities. Documentation can be inspected to verify progress against schedule and quality goals and to identify problems, supporting process visibility, monitoring, and replicability.
Bibliography
Index