IEEE Computer Society

Textbook

Software Quality Engineering: Testing, Quality Assurance, and Quantifiable Improvement

ISBN: 978-0-471-71345-6
440 pages
February 2005, ©2005, Wiley-IEEE Computer Society Press
The one resource needed to create reliable software

This text offers a comprehensive and integrated approach to software quality engineering. By following the author's clear guidance, readers learn how to master the techniques to produce high-quality, reliable software, regardless of the software system's level of complexity.

The first part of the book introduces major topics in software quality engineering and presents quality planning as an integral part of the process. Providing readers with a solid foundation in key concepts and practices, the book moves on to offer in-depth coverage of software testing as a primary means to ensure software quality; alternatives for quality assurance, including defect prevention, process improvement, inspection, formal verification, fault tolerance, safety assurance, and damage control; and measurement and analysis to close the feedback loop for quality assessment and quantifiable improvement.

The text's approach and style evolved from the author's hands-on experience in the classroom. All the pedagogical tools needed to facilitate quick learning are provided:
* Figures and tables that clarify concepts and provide quick topic summaries
* Examples that illustrate how theory is applied in real-world situations
* Comprehensive bibliography that leads to in-depth discussion of specialized topics
* Problem sets at the end of each chapter that test readers' knowledge

This is a superior textbook for software engineering, computer science, information systems, and electrical engineering students, and a dependable reference for software and computer professionals and engineers.
List of Figures.

List of Tables.

Preface.

PART I OVERVIEW AND BASICS.

1 Overview.

1.1 Meeting People's Quality Expectations.

1.2 Book Organization and Chapter Overview.

1.3 Dependency and Suggested Usage.

1.4 Reader Preparation and Background Knowledge.

Problems.

2 What Is Software Quality?

2.1 Quality: Perspectives and Expectations.

2.2 Quality Frameworks and ISO-9126.

2.3 Correctness and Defects: Definitions, Properties, and Measurements.

2.4 A Historical Perspective of Quality.

2.5 So, What Is Software Quality?

Problems.

3 Quality Assurance.

3.1 Classification: QA as Dealing with Defects.

3.2 Defect Prevention.

3.2.1 Education and training.

3.2.2 Formal method.

3.2.3 Other defect prevention techniques.

3.3 Defect Reduction.

3.3.1 Inspection: Direct fault detection and removal.

3.3.2 Testing: Failure observation and fault removal.

3.3.3 Other techniques and risk identification.

3.4 Defect Containment.

3.4.1 Software fault tolerance.

3.4.2 Safety assurance and failure containment.

3.5 Concluding Remarks.

Problems.

4 Quality Assurance in Context.

4.1 Handling Discovered Defects During QA Activities.

4.2 QA Activities in Software Processes.

4.3 Verification and Validation Perspectives.

4.4 Reconciling the Two Views.

4.5 Concluding Remarks.

Problems.

5 Quality Engineering.

5.1 Quality Engineering: Activities and Process.

5.2 Quality Planning: Goal Setting and Strategy Formation.

5.3 Quality Assessment and Improvement.

5.4 Quality Engineering in Software Processes.

5.5 Concluding Remarks.

Problems.

PART II SOFTWARE TESTING.

6 Testing: Concepts, Issues, and Techniques.

6.1 Purposes, Activities, Processes, and Context.

6.2 Questions About Testing.

6.3 Functional vs. Structural Testing: What to Test?

6.4 Coverage-Based vs. Usage-Based Testing: When to Stop Testing?

6.5 Concluding Remarks.

Problems.

7 Test Activities, Management, and Automation.

7.1 Test Planning and Preparation.

7.1.1 Test planning: Goals, strategies, and techniques.

7.1.2 Testing models and test cases.

7.1.3 Test suite preparation and management.

7.1.4 Preparation of test procedure.

7.2 Test Execution, Result Checking, and Measurement.

7.3 Analysis and Follow-up.

7.4 Activities, People, and Management.

7.5 Test Automation.

7.6 Concluding Remarks.

Problems.

8 Coverage and Usage Testing Based on Checklists and Partitions.

8.1 Checklist-Based Testing and Its Limitations.

8.2 Testing for Partition Coverage.

8.2.1 Some motivational examples.

8.2.2 Partition: Concepts and definitions.

8.2.3 Testing decisions and predicates for partition coverage.

8.3 Usage-Based Statistical Testing with Musa's Operational Profiles.

8.3.1 The cases for usage-based statistical testing.

8.3.2 Musa OP: Basic ideas.

8.3.3 Using OPs for statistical testing and other purposes.

8.4 Constructing Operational Profiles.

8.4.1 Generic methods and participants.

8.4.2 OP development procedure: Musa-1.

8.4.3 OP development procedure: Musa-2.

8.5 Case Study: OP for the Cartridge Support Software.

8.5.1 Background and participants.

8.5.2 OP development in five steps.

8.5.3 Metrics collection, result validation, and lessons learned.

8.6 Concluding Remarks.

Problems.

9 Input Domain Partitioning and Boundary Testing.

9.1 Input Domain Partitioning and Testing.

9.1.1 Basic concepts, definitions, and terminology.

9.1.2 Input domain testing for partition and boundary problems.

9.2 Simple Domain Analysis and the Extreme Point Combination Strategy.

9.3 Testing Strategies Based on Boundary Analysis.

9.3.1 Weak N x 1 strategy.

9.3.2 Weak 1 x 1 strategy.

9.4 Other Boundary Test Strategies and Applications.

9.4.1 Strong and approximate strategies.

9.4.2 Other types of boundaries and extensions.

9.4.3 Queuing testing as boundary testing.

9.5 Concluding Remarks.

Problems.

10 Coverage and Usage Testing Based on Finite-State Machines and Markov Chains.

10.1 Finite-State Machines and Testing.

10.1.1 Overcoming limitations of simple processing models.

10.1.2 FSMs: Basic concepts and examples.

10.1.3 Representations of FSMs.

10.2 FSM Testing: State and Transition Coverage.

10.2.1 Some typical problems with systems modeled by FSMs.

10.2.2 Model construction and validation.

10.2.3 Testing for correct states and transitions.

10.2.4 Applications and limitations.

10.3 Case Study: FSM-Based Testing of Web-Based Applications.

10.3.1 Characteristics of web-based applications.

10.3.2 What to test: Characteristics of web problems.

10.3.3 FSMs for web testing.

10.4 Markov Chains and Unified Markov Models for Testing.

10.4.1 Markov chains and operational profiles.

10.4.2 From individual Markov chains to unified Markov models.

10.4.3 UMM construction.

10.5 Using UMMs for Usage-Based Statistical Testing.

10.5.1 Testing based on usage frequencies in UMMs.

10.5.2 Testing based on other criteria and UMM hierarchies.

10.5.3 Implementation, application, and other issues.

10.6 Case Study Continued: Testing Based on Web Usages.

10.6.1 Usage-based web testing: Motivations and basic approach.

10.6.2 Constructing UMMs for statistical web testing.

10.6.3 Statistical web testing: Details and examples.

10.7 Concluding Remarks.

Problems.

11 Control Flow, Data Dependency, and Interaction Testing.

11.1 Basic Control Flow Testing.

11.1.1 General concepts.

11.1.2 Model construction.

11.1.3 Path selection.

11.1.4 Path sensitization and other activities.

11.2 Loop Testing, CFT Usage, and Other Issues.

11.2.1 Different types of loops and corresponding CFGs.

11.2.2 Loop testing: Difficulties and a heuristic strategy.

11.2.3 CFT usage and other issues.

11.3 Data Dependency and Data Flow Testing.

11.3.1 Basic concepts: Operations on data and data dependencies.

11.3.2 Basics of DFT and DDG.

11.3.3 DDG elements and characteristics.

11.3.4 Information sources and generic procedure for DDG construction.

11.3.5 Building DDG indirectly.

11.3.6 Dealing with loops.

11.4 DFT: Coverage and Applications.

11.4.1 Achieving slice and other coverage.

11.4.2 DFT: Applications and other issues.

11.4.3 DFT application in synchronization testing.

11.5 Concluding Remarks.

Problems.

12 Testing Techniques: Adaptation, Specialization, and Integration.

12.1 Testing Sub-phases and Applicable Testing Techniques.

12.2 Specialized Test Tasks and Techniques.

12.3 Test Integration.

12.4 Case Study: Hierarchical Web Testing.

12.5 Concluding Remarks.

Problems.

PART III QUALITY ASSURANCE BEYOND TESTING.

13 Defect Prevention and Process Improvement.

13.1 Basic Concepts and Generic Approaches.

13.2 Root Cause Analysis for Defect Prevention.

13.3 Education and Training for Defect Prevention.

13.4 Other Techniques for Defect Prevention.

13.4.1 Analysis and modeling for defect prevention.

13.4.2 Technologies, standards, and methodologies for defect prevention.

13.4.3 Software tools to block defect injection.

13.5 Focusing on Software Processes.

13.5.1 Process selection, definition, and conformance.

13.5.2 Process maturity.

13.5.3 Process and quality improvement.

13.6 Concluding Remarks.

Problems.

14 Software Inspection.

14.1 Basic Concepts and Generic Process.

14.2 Fagan Inspection.

14.3 Other Inspections and Related Activities.

14.3.1 Inspections of reduced scope or team size.

14.3.2 Inspections of enlarged scope or team size.

14.3.3 Informal desk checks, reviews, and walkthroughs.

14.3.4 Code reading.

14.3.5 Other formal reviews and static analyses.

14.4 Defect Detection Techniques, Tool/Process Support, and Effectiveness.

14.5 Concluding Remarks.

Problems.

15 Formal Verification.

15.1 Basic Concepts: Formal Verification and Formal Specification.

15.2 Formal Verification: Axiomatic Approach.

15.2.1 Formal logic specifications.

15.2.2 Axioms.

15.2.3 Axiomatic proofs and a comprehensive example.

15.3 Other Approaches.

15.3.1 Weakest pre-conditions and backward chaining.

15.3.2 Functional approach and symbolic execution.

15.3.3 Seeking alternatives: Model checking and other approaches.

15.4 Applications, Effectiveness, and Integration Issues.

15.5 Concluding Remarks.

Problems.

16 Fault Tolerance and Failure Containment.

16.1 Basic Ideas and Concepts.

16.2 Fault Tolerance with Recovery Blocks.

16.3 Fault Tolerance with N-Version Programming.

16.3.1 NVP: Basic technique and implementation.

16.3.2 Ensuring version independence.

16.3.3 Applying NVP ideas in other QA activities.

16.4 Failure Containment: Safety Assurance and Damage Control.

16.4.1 Hazard analysis using fault-trees and event-trees.

16.4.2 Hazard resolution for accident prevention.

16.4.3 Accident analysis and post-accident damage control.

16.5 Application in Heterogeneous Systems.

16.5.1 Modeling and analyzing heterogeneous systems.

16.5.2 Prescriptive specifications for safety.

16.6 Concluding Remarks.

Problems.

17 Comparing Quality Assurance Techniques and Activities.

17.1 General Questions: Cost, Benefit, and Environment.

17.2 Applicability to Different Environments.

17.3 Effectiveness Comparison.

17.3.1 Defect perspective.

17.3.2 Problem types.

17.3.3 Defect level and pervasive level.

17.3.4 Result interpretation and constructive information.

17.4 Cost Comparison.

17.5 Comparison Summary and Recommendations.

Problems.

PART IV QUANTIFIABLE QUALITY IMPROVEMENT.

18 Feedback Loop and Activities for Quantifiable Quality Improvement.

18.1 QA Monitoring and Measurement.

18.1.1 Direct vs. indirect quality measurements.

18.1.2 Direct quality measurements: Result and defect measurements.

18.1.3 Indirect quality measurements: Environmental, product internal, and activity measurements.

18.2 Immediate Follow-up Actions and Feedback.

18.3 Analyses and Follow-up Actions.

18.3.1 Analyses for product release decisions.

18.3.2 Analyses for other project management decisions.

18.3.3 Other feedback and follow-up actions.

18.4 Implementation, Integration, and Tool Support.

18.4.1 Feedback loop: Implementation and integration.

18.4.2 A refined quality engineering process.

18.4.3 Tool support: Strategy, implementation, and integration.

18.5 Concluding Remarks.

Problems.

19 Quality Models and Measurements.

19.1 Models for Quality Assessment.

19.2 Generalized Models.

19.3 Product-Specific Models.

19.4 Model Comparison and Interconnections.

19.5 Data Requirements and Measurement.

19.6 Selecting Measurements and Models.

19.7 Concluding Remarks.

Problems.

20 Defect Classification and Analysis.

20.1 General Types of Defect Analyses.

20.1.1 Defect distribution analysis.

20.1.2 Defect trend analysis and defect dynamics model.

20.1.3 Defect causal analysis.

20.2 Defect Classification and ODC.

20.2.1 ODC concepts.

20.2.2 Defect classification using ODC: A comprehensive example.

20.2.3 Adapting ODC to analyze web errors.

20.3 Defect Analysis for Classified Data.

20.3.1 One-way analysis: Analyzing a single defect attribute.

20.3.2 Two-way and multi-way analysis: Examining cross-interactions.

20.4 Concluding Remarks.

Problems.

21 Risk Identification for Quantifiable Quality Improvement.

21.1 Basic Ideas and Concepts.

21.2 Traditional Statistical Analysis Techniques.

21.3 New Techniques for Risk Identification.

21.3.1 Principal component and discriminant analyses.

21.3.2 Artificial neural networks and learning algorithms.

21.3.3 Data partitions and tree-based modeling.

21.3.4 Pattern matching and optimal set reduction.

21.4 Comparisons and Integration.

21.5 Risk Identification for Classified Defect Data.

21.6 Concluding Remarks.

Problems.

22 Software Reliability Engineering.

22.1 SRE: Basic Concepts and General Approaches.

22.2 Large Software Systems and Reliability Analyses.

22.3 Reliability Snapshots Using IDRMs.

22.4 Longer-Term Reliability Analyses Using SRGMs.

22.5 TBRMs for Reliability Analysis and Improvement.

22.5.1 Constructing and using TBRMs.

22.5.2 TBRM applications.

22.5.3 TBRM's impacts on reliability improvement.

22.6 Implementation and Software Tool Support.

22.7 SRE: Summary and Perspectives.

Problems.

Bibliography.

Index.

JEFF TIAN, PHD, is Associate Professor in the Department of Computer Science and Engineering at Southern Methodist University in Dallas, Texas. He previously worked at IBM SWS Toronto Lab. He has an MS from Harvard University and a PhD in computer science from the University of Maryland. Dr. Tian was awarded the title of "Asian American Engineer of the Year" at National Engineers Week 2002.