
Combining Pattern Classifiers: Methods and Algorithms


Ludmila I. Kuncheva

ISBN: 978-0-471-66026-2 August 2004 300 Pages


Covering pattern classification methods, Combining Pattern Classifiers: Methods and Algorithms focuses on the important and widely studied question of how to combine several classifiers so as to achieve improved recognition performance. It is one of the first books to provide unified, coherent, and expansive coverage of the topic, and as such will be welcomed by those working in the area. With case studies that bring the text alive and demonstrate real-world applications, it is destined to become essential reading.


Notations and Acronyms.

1. Fundamentals of Pattern Recognition.

1.1 Basic Concepts: Class, Feature, Data Set.

1.2 Classifier, Discriminant Functions, Classification Regions.

1.3 Classification Error and Classification Accuracy.

1.4 Experimental Comparison of Classifiers.

1.5 Bayes Decision Theory.

1.6 A Taxonomy of Classifier Design Methods.

1.7 Clustering.


2. Base Classifiers.

2.1 Linear and Quadratic Classifiers.

2.2 Nonparametric Classifiers.

2.3 The k-nearest Neighbor Rule.

2.4 Tree Classifiers.

2.5 Neural Networks.


3. Multiple Classifier Systems.

3.1 Philosophy.

3.2 Terminologies and Taxonomies.

3.3 To Train or Not to Train?

3.4 Remarks.

4. Fusion of Label Outputs.

4.1 Types of Classifier Outputs.

4.2 Majority Vote.

4.3 Weighted Majority Vote.

4.4 “Naïve”-Bayes Combination.

4.5 Multinomial Methods.

4.6 Probabilistic Approximation.

4.7 SVD Combination.

4.8 Conclusions.
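To give a flavor of the label-fusion methods surveyed in Chapter 4, a minimal plurality (majority) vote combiner might look like the following sketch. This is an illustration of the general idea, not code from the book; the function name and tie-breaking behavior are choices made here.

```python
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by the most base classifiers
    (plurality vote). Ties go to the label encountered first."""
    counts = Counter(labels)
    return counts.most_common(1)[0][0]

# Three classifiers vote on one object: two say "cat", one says "dog".
print(majority_vote(["cat", "dog", "cat"]))  # -> cat
```

The weighted majority vote of Section 4.3 generalizes this by letting each classifier's vote count with a weight, typically derived from its estimated accuracy.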


5. Fusion of Continuous-Valued Outputs.

5.1 How Do We Get Probability Outputs?

5.2 Class-Conscious Combiners.

5.3 Class-Indifferent Combiners.

5.4 Where Do the Simple Combiners Come From?

5.5 Appendix.

6. Classifier Selection.

6.1 Preliminaries.

6.2 Why Classifier Selection Works.

6.3 Estimating Local Competence Dynamically.

6.4 Pre-estimation of the Competence Regions.

6.5 Selection or Fusion?

6.6 Base Classifiers and Mixture of Experts.

7. Bagging and Boosting.

7.1 Bagging.

7.2 Boosting.

7.3 Bias-Variance Decomposition.

7.4 Which is Better: Bagging or Boosting?
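The bagging procedure of Chapter 7 can be sketched in a few lines: train each base classifier on a bootstrap sample of the data, then combine the label outputs by plurality vote. The sketch below is illustrative only; the `train_base` callable (any function mapping a training sample to a classifier) is an assumption of this example, not an interface from the book.

```python
import random

def bagging_train(data, train_base, n_estimators=10, seed=0):
    """Train n_estimators base classifiers, each on a bootstrap
    sample drawn with replacement, the same size as the data set."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_estimators):
        sample = [rng.choice(data) for _ in data]
        ensemble.append(train_base(sample))
    return ensemble

def bagging_predict(ensemble, x):
    """Combine the base classifiers' label outputs by plurality vote."""
    votes = [clf(x) for clf in ensemble]
    return max(set(votes), key=votes.count)
```

Boosting differs in that the training samples are not drawn independently: each round reweights the data to emphasize objects the current ensemble misclassifies.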


8. Miscellanea.

8.1 Feature Selection.

8.2 Error Correcting Output Codes (ECOC).

8.3 Combining Clustering Results.


9. Theoretical Views and Results.

9.1 Equivalence of Simple Combination Rules.

9.2 Added Error for the Mean Combination Rule.

9.3 Added Error for the Weighted Mean Combination.

9.4 Ensemble Error for Normal and Uniform Distributions.

10. Diversity in Classifier Ensembles.

10.1 What is Diversity?

10.2 Measuring Diversity in Classifier Ensembles.

10.3 Relationship Between Diversity and Accuracy.

10.4 Using Diversity.

10.5 Conclusions: Diversity of Diversity.
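Among the pairwise diversity measures discussed in Chapter 10 is the disagreement measure: the fraction of objects on which two classifiers assign different labels. A minimal sketch (an illustration, not the book's code):

```python
def disagreement(preds_a, preds_b):
    """Pairwise disagreement measure between two classifiers:
    the proportion of objects labeled differently
    (0 = identical outputs, 1 = they always differ)."""
    assert len(preds_a) == len(preds_b)
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

# The two classifiers differ on 2 of 4 objects.
print(disagreement([1, 0, 1, 1], [1, 1, 0, 1]))  # -> 0.5
```

An ensemble-level score is commonly obtained by averaging this measure over all pairs of classifiers, which is the averaged disagreement Dav referred to in Appendix A.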

Appendix A: Equivalence Between the Averaged Disagreement Measure Dav and Kohavi-Wolpert KW.

Appendix B: Matlab Code for Some Overproduce and Select Algorithms.



"The well written 'Combining Pattern Classifiers'…is all about how patterns are to be recognized and interpreted." (Journal of Statistical Computation and Simulation, March 2006)

"In a clear and straightforward manner, the author provides a much-needed road map through a multifaceted and often controversial subject…" (International Journal of General Systems, June 2005)

"The book is very interesting and is written on a high scientific level…" (Mathematical Reviews, 2006c)

"...a unique and perfect book for researchers and practitioners who want to get a grasp of this new exciting area." (Journal of Intelligent & Fuzzy Systems, Vol. 16, No. 2, 2005)

"...a well-written and timely book, the first of its kind...a welcome contribution to pattern recognition...I would keep this book in my library." (Technometrics, November 2005)

"…destined to become essential reading…" (Zentralblatt MATH, Vol. 1066 (17), 2005)