Textbook
Fundamentals of Statistical Reasoning in Education, 4th Edition
December 2013, ©2014

Description
Fundamentals of Statistical Reasoning in Education, 4th
Edition, like the first three editions, is written largely with
students of education in mind. Accordingly, Theodore Coladarci and
Casey D. Cobb have drawn primarily on examples and issues found in
school settings, such as those having to do with instruction,
learning, motivation, and assessment. The emphasis on educational
applications notwithstanding, the authors are confident that
readers will find the 4th Edition of general relevance to other
disciplines in the behavioral sciences as well.
The 4th Edition of Fundamentals is still designed as a
“one-semester” book. The authors intentionally sidestep
topics that few introductory courses cover (e.g., factorial
analysis of variance, repeated-measures analysis of variance,
multiple regression). At the same time, effect size and confidence
intervals, which today are regarded as essential to good
statistical practice, are incorporated throughout.
Table of Contents
Chapter 1 Introduction 1
1.1 Why Statistics? 1
1.2 Descriptive Statistics 2
1.3 Inferential Statistics 3
1.4 The Role of Statistics in Educational Research 4
1.5 Variables and Their Measurement 5
1.6 Some Tips on Studying Statistics 8
PART 1 DESCRIPTIVE STATISTICS 13
Chapter 2 Frequency Distributions 14
2.1 Why Organize Data? 14
2.2 Frequency Distributions for Quantitative Variables 14
2.3 Grouped Scores 15
2.4 Some Guidelines for Forming Class Intervals 17
2.5 Constructing a Grouped-Data Frequency Distribution 18
2.6 The Relative Frequency Distribution 19
2.7 Exact Limits 21
2.8 The Cumulative Percentage Frequency Distribution 22
2.9 Percentile Ranks 23
2.10 Frequency Distributions for Qualitative Variables 25
2.11 Summary 26
Chapter 3 Graphic Representation 34
3.1 Why Graph Data? 34
3.2 Graphing Qualitative Data: The Bar Chart 34
3.3 Graphing Quantitative Data: The Histogram 35
3.4 Relative Frequency and Proportional Area 39
3.5 Characteristics of Frequency Distributions 41
3.6 The Box Plot 44
3.7 Summary 45
Chapter 4 Central Tendency 52
4.1 The Concept of Central Tendency 52
4.2 The Mode 52
4.3 The Median 53
4.4 The Arithmetic Mean 54
4.5 Central Tendency and Distribution Symmetry 57
4.6 Which Measure of Central Tendency to Use? 59
4.7 Summary 59
Chapter 5 Variability 66
5.1 Central Tendency Is Not Enough: The Importance of Variability 66
5.2 The Range 67
5.3 Variability and Deviations From the Mean 68
5.4 The Variance 69
5.5 The Standard Deviation 70
5.6 The Predominance of the Variance and Standard Deviation 71
5.7 The Standard Deviation and the Normal Distribution 72
5.8 Comparing Means of Two Distributions: The Relevance of Variability 73
5.9 In the Denominator: n Versus n − 1 75
5.10 Summary 76
Chapter 6 Normal Distributions and Standard Scores 81
6.1 A Little History: Sir Francis Galton and the Normal Curve 81
6.2 Properties of the Normal Curve 82
6.3 More on the Standard Deviation and the Normal Distribution 82
6.4 z Scores 84
6.5 The Normal Curve Table 87
6.6 Finding Area When the Score Is Known 88
6.7 Reversing the Process: Finding Scores When the Area Is Known 91
6.8 Comparing Scores From Different Distributions 93
6.9 Interpreting Effect Size 94
6.10 Percentile Ranks and the Normal Distribution 96
6.11 Other Standard Scores 97
6.12 Standard Scores Do Not “Normalize” a Distribution 98
6.13 The Normal Curve and Probability 98
6.14 Summary 99
Chapter 7 Correlation 106
7.1 The Concept of Association 106
7.2 Bivariate Distributions and Scatterplots 106
7.3 The Covariance 111
7.4 The Pearson r 117
7.5 Computation of r: The Calculating Formula 118
7.6 Correlation and Causation 120
7.7 Factors Influencing Pearson r 122
7.8 Judging the Strength of Association: r² 125
7.9 Other Correlation Coefficients 127
7.10 Summary 127
Chapter 8 Regression and Prediction 134
8.1 Correlation Versus Prediction 134
8.2 Determining the Line of Best Fit 135
8.3 The Regression Equation in Terms of Raw Scores 138
8.4 Interpreting the Raw-Score Slope 141
8.5 The Regression Equation in Terms of z Scores 141
8.6 Some Insights Regarding Correlation and Prediction 142
8.7 Regression and Sums of Squares 145
8.8 Residuals and Unexplained Variation 147
8.9 Measuring the Margin of Prediction Error: The Standard Error of Estimate 148
8.10 Correlation and Causality (Revisited) 152
8.11 Summary 153
PART 2 INFERENTIAL STATISTICS 163
Chapter 9 Probability and Probability Distributions 164
9.1 Statistical Inference: Accounting for Chance in Sample Results 164
9.2 Probability: The Study of Chance 165
9.3 Definition of Probability 166
9.4 Probability Distributions 168
9.5 The OR/Addition Rule 169
9.6 The AND/Multiplication Rule 171
9.7 The Normal Curve as a Probability Distribution 172
9.8 “So What?”—Probability Distributions as the Basis for Statistical Inference 174
9.9 Summary 175
Chapter 10 Sampling Distributions 179
10.1 From Coins to Means 179
10.2 Samples and Populations 180
10.3 Statistics and Parameters 181
10.4 Random Sampling Model 181
10.5 Random Sampling in Practice 183
10.6 Sampling Distributions of Means 184
10.7 Characteristics of a Sampling Distribution of Means 185
10.8 Using a Sampling Distribution of Means to Determine Probabilities 188
10.9 The Importance of Sample Size (n) 191
10.10 Generality of the Concept of a Sampling Distribution 193
10.11 Summary 193
Chapter 11 Testing Statistical Hypotheses About μ When σ Is Known: The One-Sample z Test 199
11.1 Testing a Hypothesis About μ: Does “Homeschooling” Make a Difference? 199
11.2 Dr. Meyer’s Problem in a Nutshell 200
11.3 The Statistical Hypotheses: H0 and H1 201
11.4 The Test Statistic z 202
11.5 The Probability of the Test Statistic: The p Value 203
11.6 The Decision Criterion: Level of Significance (α) 204
11.7 The Level of Significance and Decision Error 207
11.8 The Nature and Role of H0 and H1 209
11.9 Rejection Versus Retention of H0 209
11.10 Statistical Significance Versus Importance 210
11.11 Directional and Nondirectional Alternative Hypotheses 212
11.12 The Substantive Versus the Statistical 214
11.13 Summary 215
Chapter 12 Estimation 222
12.1 Hypothesis Testing Versus Estimation 222
12.2 Point Estimation Versus Interval Estimation 223
12.3 Constructing an Interval Estimate of μ 224
12.4 Interval Width and Level of Confidence 226
12.5 Interval Width and Sample Size 227
12.6 Interval Estimation and Hypothesis Testing 228
12.7 Advantages of Interval Estimation 230
12.8 Summary 230
Chapter 13 Testing Statistical Hypotheses About μ When σ Is Not Known: The One-Sample t Test 235
13.1 Reality: σ Often Is Unknown 235
13.2 Estimating the Standard Error of the Mean 236
13.3 The Test Statistic t 237
13.4 Degrees of Freedom 238
13.5 The Sampling Distribution of Student’s t 239
13.6 An Application of Student’s t 242
13.7 Assumption of Population Normality 244
13.8 Levels of Significance Versus p Values 244
13.9 Constructing a Confidence Interval for μ When σ Is Not Known 246
13.10 Summary 247
Chapter 14 Comparing the Means of Two Populations: Independent Samples 253
14.1 From One Mu (μ) to Two 253
14.2 Statistical Hypotheses 254
14.3 The Sampling Distribution of Differences Between Means 255
14.4 Estimating σX̄₁−X̄₂ 257
14.5 The t Test for Two Independent Samples 259
14.6 Testing Hypotheses About Two Independent Means: An Example 260
14.7 Interval Estimation of μ1 − μ2 262
14.8 Appraising the Magnitude of a Difference: Measures of Effect Size for X̄₁ − X̄₂ 264
14.9 How Were Groups Formed? The Role of Randomization 268
14.10 Statistical Inferences and Nonstatistical Generalizations 269
14.11 Summary 270
Chapter 15 Comparing the Means of Dependent Samples 278
15.1 The Meaning of “Dependent” 278
15.2 Standard Error of the Difference Between Dependent Means 279
15.3 Degrees of Freedom 281
15.4 The t Test for Two Dependent Samples 281
15.5 Testing Hypotheses About Two Dependent Means: An Example 283
15.6 Interval Estimation of μD 286
15.7 Summary 287
Chapter 16 Comparing the Means of Three or More Independent Samples: One-Way Analysis of Variance 294
16.1 Comparing More Than Two Groups: Why Not Multiple t Tests? 294
16.2 The Statistical Hypotheses in One-Way ANOVA 295
16.3 The Logic of One-Way ANOVA: An Overview 296
16.4 Alison’s Reply to Gregory 299
16.5 Partitioning the Sums of Squares 300
16.6 Within-Groups and Between-Groups Variance Estimates 303
16.7 The F Test 304
16.8 Tukey’s “HSD” Test 306
16.9 Interval Estimation of μi − μj 308
16.10 One-Way ANOVA: Summarizing the Steps 309
16.11 Estimating the Strength of the Treatment Effect: Effect Size (ω̂²) 311
16.12 ANOVA Assumptions (and Other Considerations) 312
16.13 Summary 313
Chapter 17 Inferences About the Pearson Correlation Coefficient 322
17.1 From μ to ρ 322
17.2 The Sampling Distribution of r When ρ = 0 322
17.3 Testing the Statistical Hypothesis That ρ = 0 324
17.4 An Example 324
17.5 In Brief: Student’s t Distribution and Regression Slope (b) 326
17.6 Table E 326
17.7 The Role of n in the Statistical Significance of r 328
17.8 Statistical Significance Versus Importance (Again) 329
17.9 Testing Hypotheses Other Than ρ = 0 329
17.10 Interval Estimation of ρ 330
17.11 Summary 332
Chapter 18 Making Inferences From Frequency Data 338
18.1 Frequency Data Versus Score Data 338
18.2 A Problem Involving Frequencies: The One-Variable Case 339
18.3 χ2: A Measure of Discrepancy Between Expected and Observed Frequencies 340
18.4 The Sampling Distribution of χ2 341
18.5 Completion of the Voter Survey Problem: The χ2 Goodness-of-Fit Test 343
18.6 The χ2 Test of a Single Proportion 344
18.7 Interval Estimate of a Single Proportion 345
18.8 When There Are Two Variables: The χ2 Test of Independence 347
18.9 Finding Expected Frequencies in the Two-Variable Case 348
18.10 Calculating the Two-Variable χ2 350
18.11 The χ2 Test of Independence: Summarizing the Steps 351
18.12 The 2 × 2 Contingency Table 352
18.13 Testing a Difference Between Two Proportions 353
18.14 The Independence of Observations 353
18.15 χ2 and Quantitative Variables 354
18.16 Other Considerations 355
18.17 Summary 355
Chapter 19 Statistical “Power” (and How to Increase It) 363
19.1 The Power of a Statistical Test 363
19.2 Power and Type II Error 364
19.3 Effect Size (Revisited) 365
19.4 Factors Affecting Power: The Effect Size 366
19.5 Factors Affecting Power: Sample Size 367
19.6 Additional Factors Affecting Power 368
19.7 Significance Versus Importance 369
19.8 Selecting an Appropriate Sample Size 370
19.9 Summary 373
Epilogue A Note on (Almost) Assumption-Free Tests 379
References 380
Appendix A Review of Basic Mathematics 382
A.1 Introduction 382
A.2 Symbols and Their Meaning 382
A.3 Arithmetic Operations Involving Positive and Negative Numbers 383
A.4 Squares and Square Roots 383
A.5 Fractions 384
A.6 Operations Involving Parentheses 385
A.7 Approximate Numbers, Computational Accuracy, and Rounding 386
Appendix B Answers to Selected End-of-Chapter Problems 387
Appendix C Statistical Tables 408
Glossary 421
Index 427
Useful Formulas 433
New To This Edition
• Guided by instructor feedback, Ŷ (instead of Y′) is now used as the symbol for the predicted value of Y in our treatment of regression analysis. Further, the meaning and importance of residuals in regression analysis are explicitly addressed.
• SPSS is no longer included in the text; sample output, with commentary, is now provided on the supporting website, all within the context of the statistical procedures and tests covered in the text. For the student who has access to SPSS and wishes to replicate the results (or simply explore this software further), a link to the data on which our applications are based is provided.
• All chapters have benefited from the careful editing, along with the occasional clarification or elaboration, that one should expect of a new edition.
The Wiley Advantage
• Incorporates a case study approach, which models the process of data analysis, conceptualizes the learning of challenging statistical concepts, and addresses one of the more controversial policy issues in education today: high-stakes testing.
• Focuses on conceptual development, frequently reinforcing concepts, principles, and procedures from earlier chapters and foreshadowing those to come.
• Provides clear, step-by-step illustrations of mathematical operations.
• Integrates statistical hypothesis testing with effect size and confidence intervals throughout.
• Examples, problems, and applications largely focus on the discipline of education.