Ebook
Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook
ISBN: 9781118003169
640 pages
March 2011

Description
The book's primary goal is to discuss model development in sufficient detail so that the reader may design an estimator that meets all application requirements and is robust to modeling assumptions. Since it is sometimes difficult to determine the best model structure a priori, the use of exploratory data analysis to define model structure is discussed. Methods for deciding on the “best” model are also presented.
A second goal is to present little-known extensions of least-squares estimation or Kalman filtering that provide guidance on model structure and parameters, or make the estimator more robust to changes in real-world behavior.
A third goal is to discuss implementation issues that make the estimator more accurate or efficient, or that make it flexible so that model alternatives can be easily compared.
The fourth goal is to provide the designer/analyst with guidance in evaluating estimator performance and in determining/correcting problems.
The final goal is to provide a subroutine library that simplifies implementation, and flexible general-purpose high-level drivers that allow both easy analysis of alternative models and access to extensions of the basic filtering.
Supplemental materials and up-to-date errata are downloadable at http://booksupport.wiley.com.
Table of Contents
PREFACE xv
1 INTRODUCTION 1
1.1 The Forward and Inverse Modeling Problem 2
1.2 A Brief History of Estimation 4
1.3 Filtering, Smoothing, and Prediction 8
1.4 Prerequisites 9
1.5 Notation 9
1.6 Summary 11
2 SYSTEM DYNAMICS AND MODELS 13
2.1 Discrete-Time Models 14
2.2 Continuous-Time Dynamic Models 17
2.2.1 State Transition and Process Noise Covariance Matrices 19
2.2.2 Dynamic Models Using Basis Function Expansions 22
2.2.3 Dynamic Models Derived from First Principles 25
2.2.4 Stochastic (Random) Process Models 31
2.2.5 Linear Regression Models 42
2.2.6 Reduced-Order Modeling 44
2.3 Computation of State Transition and Process Noise Matrices 45
2.3.1 Numeric Computation of Φ 45
2.3.2 Numeric Computation of QD 57
2.4 Measurement Models 58
2.5 Simulating Stochastic Systems 60
2.6 Common Modeling Errors and System Biases 62
2.7 Summary 65
3 MODELING EXAMPLES 67
3.1 Angle-Only Tracking of Linear Target Motion 67
3.2 Maneuvering Vehicle Tracking 69
3.2.1 Maneuvering Tank Tracking Using Multiple Models 69
3.2.2 Aircraft Tracking 73
3.3 Strapdown Inertial Navigation System (INS) Error Model 74
3.4 Spacecraft Orbit Determination (OD) 80
3.4.1 Geopotential Forces 83
3.4.2 Other Gravitational Attractions 86
3.4.3 Solar Radiation Pressure 87
3.4.4 Aerodynamic Drag 88
3.4.5 Thrust Forces 89
3.4.6 Earth Motion 89
3.4.7 Numerical Integration and Computation of Φ 90
3.4.8 Measurements 92
3.4.9 GOES I-P Satellites 96
3.4.10 Global Positioning System (GPS) 97
3.5 Fossil-Fueled Power Plant 99
3.6 Summary 99
4 LINEAR LEAST-SQUARES ESTIMATION: FUNDAMENTALS 101
4.1 Least-Squares Data Fitting 101
4.2 Weighted Least Squares 108
4.3 Bayesian Estimation 115
4.3.1 Bayesian Least Squares 115
4.3.2 Bayes’ Theorem 117
4.3.3 Minimum Variance or Minimum Mean-Squared Error (MMSE) 121
4.3.4 Orthogonal Projections 124
4.4 Probabilistic Approaches—Maximum Likelihood and Maximum A Posteriori 125
4.4.1 Gaussian Random Variables 126
4.4.2 Maximum Likelihood Estimation 128
4.4.3 Maximum A Posteriori 133
4.5 Summary of Linear Estimation Approaches 137
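The weighted least-squares formulation covered in Chapter 4 can be illustrated with a short sketch. This is not code from the book's subroutine library; it is a minimal example, assuming a hypothetical linear fit y = a + b*t with a diagonal weight matrix, solved through the normal equations of Section 4.2.

```python
import numpy as np

# Fit y = a + b*t by weighted least squares, solving the normal
# equations (A^T W A) x = A^T W y. Data and weights are illustrative.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.1, 4.9, 7.2])   # noisy measurements of roughly 1 + 2*t
w = np.array([1.0, 1.0, 1.0, 0.25])  # weights = 1/variance (last point noisier)

A = np.column_stack([np.ones_like(t), t])          # design matrix [1, t]
W = np.diag(w)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # estimate [a_hat, b_hat]
P = np.linalg.inv(A.T @ W @ A)                     # state error covariance
print(x_hat)
```

Down-weighting the noisy last point pulls the fit toward the three well-measured samples, so the estimate lands near the underlying a = 1, b = 2.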
5 LINEAR LEAST-SQUARES ESTIMATION: SOLUTION TECHNIQUES 139
5.1 Matrix Norms, Condition Number, Observability, and the Pseudo-Inverse 139
5.1.1 Vector-Matrix Norms 139
5.1.2 Matrix Pseudo-Inverse 141
5.1.3 Condition Number 141
5.1.4 Observability 145
5.2 Normal Equation Formation and Solution 145
5.2.1 Computation of the Normal Equations 145
5.2.2 Cholesky Decomposition of the Normal Equations 149
5.3 Orthogonal Transformations and the QR Method 156
5.3.1 Givens Rotations 158
5.3.2 Householder Transformations 159
5.3.3 Modified Gram-Schmidt (MGS) Orthogonalization 162
5.3.4 QR Numerical Accuracy 165
5.4 Least-Squares Solution Using the SVD 165
5.5 Iterative Techniques 167
5.5.1 Sparse Array Storage 167
5.5.2 Linear Iteration 168
5.5.3 Least-Squares Solution for Large Sparse Problems Using Krylov Space Methods 169
5.6 Comparison of Methods 175
5.6.1 Solution Accuracy for Polynomial Problem 175
5.6.2 Algorithm Timing 181
5.7 Solution Uniqueness, Observability, and Condition Number 183
5.8 Pseudo-Inverses and the Singular Value Decomposition (SVD) 185
5.9 Summary 190
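As a companion to the solution techniques listed above, the following sketch (not from the book's library; problem data are invented) contrasts the normal-equation approach of Section 5.2 with the QR approach of Section 5.3. The QR route never forms A^T A, whose condition number is the square of cond(A), which is why it is preferred for ill-conditioned problems.

```python
import numpy as np

# Synthetic noiseless problem: both methods should recover x_true exactly.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))      # 20 measurements, 3 states
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true

# Normal-equations solution: solve (A^T A) x = A^T y
x_ne = np.linalg.solve(A.T @ A, A.T @ y)

# QR solution: A = Q R, then solve the triangular system R x = Q^T y
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ y)

print(x_ne, x_qr)
```

On this well-conditioned problem the two answers agree; the accuracy gap appears only as cond(A) grows, as the timing and accuracy comparisons of Section 5.6 quantify.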
6 LEAST-SQUARES ESTIMATION: MODEL ERRORS AND MODEL ORDER 193
6.1 Assessing the Validity of the Solution 194
6.1.1 Residual Sum-of-Squares (SOS) 194
6.1.2 Residual Patterns 195
6.1.3 Subsets of Residuals 196
6.1.4 Measurement Prediction 196
6.1.5 Estimate Comparison 197
6.2 Solution Error Analysis 208
6.2.1 State Error Covariance and Confidence Bounds 208
6.2.2 Model Error Analysis 212
6.3 Regression Analysis for Weighted Least Squares 237
6.3.1 Analysis of Variance 238
6.3.2 Stepwise Regression 239
6.3.3 Prediction and Optimal Data Span 244
6.4 Summary 245
7 LEAST-SQUARES ESTIMATION: CONSTRAINTS, NONLINEAR MODELS, AND ROBUST TECHNIQUES 249
7.1 Constrained Estimates 249
7.1.1 Least-Squares with Linear Equality Constraints (Problem LSE) 249
7.1.2 Least-Squares with Linear Inequality Constraints (Problem LSI) 256
7.2 Recursive Least Squares 257
7.3 Nonlinear Least Squares 259
7.3.1 1-D Nonlinear Least-Squares Solutions 263
7.3.2 Optimization for Multidimensional Unconstrained Nonlinear Least Squares 264
7.3.3 Stopping Criteria and Convergence Tests 269
7.4 Robust Estimation 282
7.4.1 De-Weighting Large Residuals 282
7.4.2 Data Editing 283
7.5 Measurement Preprocessing 285
7.6 Summary 286
8 KALMAN FILTERING 289
8.1 Discrete-Time Kalman Filter 290
8.1.1 Truth Model 290
8.1.2 Discrete-Time Kalman Filter Algorithm 291
8.2 Extensions of the Discrete Filter 303
8.2.1 Correlation between Measurement and Process Noise 303
8.2.2 Time-Correlated (Colored) Measurement Noise 305
8.2.3 Innovations, Model Validation, and Editing 311
8.3 Continuous-Time Kalman-Bucy Filter 314
8.4 Modifications of the Discrete Kalman Filter 321
8.4.1 Friedland Bias-Free/Bias-Restoring Filter 321
8.4.2 Kalman-Schmidt Consider Filter 325
8.5 Steady-State Solution 328
8.6 Wiener Filter 332
8.6.1 Wiener-Hopf Equation 333
8.6.2 Solution for the Optimal Weighting Function 335
8.6.3 Filter Input Covariances 336
8.6.4 Equivalence of Wiener and Steady-State Kalman-Bucy Filters 337
8.7 Summary 341
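The discrete-time Kalman filter of Section 8.1 alternates a time update (prediction through the state transition matrix) with a measurement update (gain-weighted correction). The following one-cycle sketch is not the book's implementation; the constant-velocity model, noise levels, and measurement value are illustrative assumptions.

```python
import numpy as np

# One predict/update cycle for a constant-velocity state [position, velocity].
dt = 1.0
Phi = np.array([[1.0, dt], [0.0, 1.0]])     # state transition matrix
Qd = np.array([[0.01, 0.0], [0.0, 0.01]])   # discrete process noise covariance
H = np.array([[1.0, 0.0]])                  # measure position only
R = np.array([[0.25]])                      # measurement noise variance

x = np.array([0.0, 1.0])                    # prior state estimate
P = np.eye(2)                               # prior error covariance

# Time update (prediction)
x = Phi @ x
P = Phi @ P @ Phi.T + Qd

# Measurement update
z = np.array([1.2])                         # observed position
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
x = x + K @ (z - H @ x)                     # correct state with innovation
P = (np.eye(2) - K @ H) @ P                 # update covariance
print(x)
```

Because only position is measured, the innovation also corrects the velocity state through the off-diagonal covariance term that the prediction step creates.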
9 FILTERING FOR NONLINEAR SYSTEMS, SMOOTHING, ERROR ANALYSIS/MODEL DESIGN, AND MEASUREMENT PREPROCESSING 343
9.1 Nonlinear Filtering 344
9.1.1 Linearized and Extended Kalman Filters 344
9.1.2 Iterated Extended Kalman Filter 349
9.2 Smoothing 352
9.2.1 Fixed-Point Smoother 353
9.2.2 Fixed-Lag Smoother 356
9.2.3 Fixed-Interval Smoother 357
9.3 Filter Error Analysis and Reduced-Order Modeling 370
9.3.1 Linear Analysis of Independent Error Sources 372
9.3.2 Error Analysis for ROM Defined as a Transformed Detailed Model 380
9.3.3 Error Analysis for Different Truth and Filter Models 382
9.4 Measurement Preprocessing 385
9.5 Summary 385
10 FACTORED (SQUARE-ROOT) FILTERING 389
10.1 Filter Numerical Accuracy 390
10.2 UD Filter 392
10.2.1 UD Filter Measurement Update 394
10.2.2 UD Filter Time Update 396
10.2.3 RTS Smoother for UD Filter 401
10.2.4 UD Error Analysis 403
10.3 Square Root Information Filter (SRIF) 404
10.3.1 SRIF Time Update 405
10.3.2 SRIF Measurement Update 407
10.3.3 Square Root Information Smoother (SRIS) 408
10.3.4 Dyer-McReynolds Covariance Smoother (DMCS) 410
10.3.5 SRIF Error Analysis 410
10.4 Inertial Navigation System (INS) Example Using Factored Filters 412
10.5 Large Sparse Systems and the SRIF 417
10.6 Spatial Continuity Constraints and the SRIF Data Equation 419
10.6.1 Flow Model 421
10.6.2 Log Conductivity Spatial Continuity Model 422
10.6.3 Measurement Models 424
10.6.4 SRIF Processing 424
10.6.5 Steady-State Flow Constrained Iterative Solution 425
10.7 Summary 427
11 ADVANCED FILTERING TOPICS 431
11.1 Maximum Likelihood Parameter Estimation 432
11.1.1 Calculation of the State Transition Partial Derivatives 434
11.1.2 Derivatives of the Filter Time Update 438
11.1.3 Derivatives of the Filter Measurement Update 439
11.1.4 Partial Derivatives for Initial Condition Errors 440
11.1.5 Computation of the Log Likelihood and Scoring Step 441
11.2 Adaptive Filtering 449
11.3 Jump Detection and Estimation 450
11.3.1 Jump-Free Filter Equations 452
11.3.2 Stepwise Regression 454
11.3.3 Correction of Jump-Free Filter State 455
11.3.4 Real-Time Jump Detection Using Stepwise Regression 456
11.4 Adaptive Target Tracking Using Multiple Model Hypotheses 461
11.4.1 Weighted Sum of Filter Estimates 462
11.4.2 Maximum Likelihood Filter Selection 463
11.4.3 Dynamic and Interactive Multiple Models 464
11.5 Constrained Estimation 471
11.6 Robust Estimation: H-Infinity Filters 471
11.7 Unscented Kalman Filter (UKF) 474
11.7.1 Unscented Transform 475
11.7.2 UKF Algorithm 478
11.8 Particle Filters 485
11.9 Summary 490
12 EMPIRICAL MODELING 493
12.1 Exploratory Time Series Analysis and System Identification 494
12.2 Spectral Analysis Based on the Fourier Transform 495
12.2.1 Fourier Series for Periodic Functions 497
12.2.2 Fourier Transform of Continuous Energy Signals 498
12.2.3 Fourier Transform of Power Signals 502
12.2.4 Power Spectrum of Stochastic Signals 504
12.2.5 Time-Limiting Window Functions 506
12.2.6 Discrete Fourier Transform 509
12.2.7 Periodogram Computation of Power Spectra 512
12.2.8 Blackman-Tukey (Correlogram) Computation of Power Spectra 514
12.3 Autoregressive Modeling 522
12.3.1 Maximum Entropy Method (MEM) 524
12.3.2 Burg MEM 525
12.3.3 Final Prediction Error (FPE) and Akaike Information Criteria (AIC) 526
12.3.4 Marple AR Spectral Analysis 528
12.3.5 Summary of MEM Modeling Approaches 529
12.4 ARMA Modeling 531
12.4.1 ARMA Parameter Estimation 532
12.5 Canonical Variate Analysis 534
12.5.1 CVA Derivation and Overview 536
12.5.2 Summary of CVA Steps 539
12.5.3 Sample Correlation Matrices 540
12.5.4 Order Selection Using the AIC 541
12.5.5 State-Space Model 543
12.5.6 Measurement Power Spectrum Using the State-Space Model 544
12.6 Conversion from Discrete to Continuous Models 548
12.7 Summary 551
APPENDIX A SUMMARY OF VECTOR-MATRIX OPERATIONS 555
A.1 Definition 555
A.1.1 Vectors 555
A.1.2 Matrices 555
A.2 Elementary Vector-Matrix Operations 557
A.2.1 Transpose 557
A.2.2 Addition 557
A.2.3 Inner (Dot) Product of Vectors 557
A.2.4 Outer Product of Vectors 558
A.2.5 Multiplication 558
A.3 Matrix Functions 558
A.3.1 Matrix Inverse 558
A.3.2 Partitioned Matrix Inversion 559
A.3.3 Matrix Inversion Identity 560
A.3.4 Determinant 561
A.3.5 Matrix Trace 562
A.3.6 Derivatives of Matrix Functions 563
A.3.7 Norms 564
A.4 Matrix Transformations and Factorization 565
A.4.1 LU Decomposition 565
A.4.2 Cholesky Factorization 565
A.4.3 Similarity Transformation 566
A.4.4 Eigen Decomposition 566
A.4.5 Singular Value Decomposition (SVD) 566
A.4.6 Pseudo-Inverse 567
A.4.7 Condition Number 568
APPENDIX B PROBABILITY AND RANDOM VARIABLES 569
B.1 Probability 569
B.1.1 Definitions 569
B.1.2 Joint and Conditional Probability, and Independence 570
B.2 Random Variable 571
B.2.1 Distribution and Density Functions 571
B.2.2 Bayes’ Theorem for Density Functions 572
B.2.3 Moments of Random Variables 573
B.2.4 Gaussian Distribution 574
B.2.5 Chi-Squared Distribution 574
B.3 Stochastic Processes 575
B.3.1 Wiener or Brownian Motion Process 576
B.3.2 Markov Process 576
B.3.3 Differential and Integral Equations with White Noise Inputs 577
BIBLIOGRAPHY 579
INDEX 599