
Data Analysis: What Can Be Learned From the Past 50 Years




Peter J. Huber

ISBN: 978-1-118-01826-2 January 2012 234 Pages


This book explores many provocative questions concerning the fundamentals of data analysis, drawing on the time-tested experience of one of the gurus of the subject. Why should one study data analysis? How should it be taught? What techniques work best, and for whom? How valid are the results? How much data should be tested? Which computer languages should be used, if any? Emphasis on apprenticeship (through hands-on case studies) and anecdotes (through real-life applications) is the approach Peter J. Huber takes in this volume. The concern is not with specific statistical techniques but with questions of strategy – when to use which technique. Central to the discussion is an understanding of the significance of massive (or robust) data sets, the implementation of languages, and the use of models, each illustrated with an ample number of examples and case studies. Personal practices, various pitfalls, and existing controversies are presented where applicable. The book serves as an excellent philosophical and historical companion to any present-day text in data analysis, robust statistics, data mining, statistical learning, or computational statistics.

1 What is Data Analysis?

1.1 Tukey's 1962 paper.

1.2 The Path of Statistics.

2 Strategy Issues in Data Analysis.

2.1 Strategy in Data Analysis.

2.2 Philosophical issues.

2.3 Issues of size.

2.4 Strategic planning.

2.5 The stages of data analysis.

2.6 Tools required for strategy reasons.

3 Massive Data Sets.

3.1 Introduction.

3.2 Disclosure: Personal experiences.

3.3 What is massive? A classification of size.

3.4 Obstacles to scaling.

3.5 On the structure of large data sets.

3.6 Data base management and related issues.

3.7 The stages of a data analysis.

3.8 Examples and some thoughts on strategy.

3.9 Volume reduction.

3.10 Supercomputers and software challenges.

3.11 Summary of conclusions.

4 Languages for Data Analysis.

4.1 Goals and purposes.

4.2 Natural languages and computing languages.

4.3 Interface issues.

4.4 Miscellaneous issues.

4.5 Requirements for a general purpose immediate language.

5 Approximate Models.

5.1 Models.

5.2 Bayesian modeling.

5.3 Mathematical statistics and approximate models.

5.4 Statistical significance and physical relevance.

5.5 Judicious use of a wrong model.

5.6 Composite models.

5.7 Modeling the length of day.

5.8 The role of simulation.

5.9 Summary of conclusions.

6 Pitfalls.

6.1 Simpson's paradox.

6.2 Missing data.

6.3 Regression of Y on X or of X on Y.

7 Create Order in Data.

7.1 General considerations.

7.2 Principal component methods.

7.3 Multidimensional scaling.

7.4 Correspondence analysis.

7.5 Multidimensional scaling vs. correspondence analysis.

8 More Case Studies.

8.1 A nutshell example.

8.2 Shape invariant modeling.

8.3 Comparison of point configurations.

8.4 Notes on numerical optimization.