Handbook of Multisensor Data Fusion [Book Review]

A book that includes “dirty secrets in multisensor data fusion,” the “unscented filter,” and the “Bayesian iceberg” cannot be all bad. This handbook consists of 26 chapters, many of which are excellent, including well-written, solid, quantitative, state-of-the-art surveys by Bar-Shalom, Poore, Uhlmann, Julier, Stone, Mahler, and Kirubarajan, among others. The chapters range from tutorial to cutting-edge research, so there is something for everyone. Mary Nichols gives a broad and rapid survey of military multisensor data fusion systems in Chapter 22, and a plethora of websites about data fusion and related subjects is included at the end of the handbook.

The most ambitious and interesting chapter, by far, is “Random Set Theory for Target Tracking and Identification,” by Ron Mahler. Random set theory is clearly a good idea for data fusion, owing to the uncertainty in the number of targets and the uncertain origin of measurements in many practical applications. In fact, random sets are essentially required, implicitly or explicitly, to describe the problem of tracking in a dense multiple-target environment. Mahler’s chapter is edifying but sometimes obscure. In particular, what Mahler calls “turn-the-crank” formulas require derivatives and integrals with respect to sets; these are not the bread and butter of normal engineers, and hence a few simple explicit examples would have been helpful, the sooner the better. Mahler’s chapter is written with much authority and enthusiasm, as witnessed by his “short history of multitarget filtering” (page 24, Chapter 14), which is rather fun to read and should be compared with Larry Stone’s version of history (page 22, Chapter 10).

Two chapters by Julier and Uhlmann, on “covariance intersection” and “nonlinear systems,” are particularly well done, being lucid, practical, and innovative; there is no doubt about how to use this new theory, as MATLAB source code is included. The second chapter describes a novel algorithm for nonlinear filtering that has vastly superior performance compared with the extended Kalman filter (EKF) for certain applications. The EKF uses a simple linearization of the nonlinear equations to approximate the propagation of uncertainty through nonlinear transformations, whereas the new filter uses a more accurate approximation based on sampling the probability density at carefully chosen points to approximate an n-dimensional integral, similar to Gauss-Hermite quadrature. This is called the “unscented filter,” for obvious reasons. Covariance intersection (CI) is a simple method for combining covariance matrices in fusion when there is uncertainty about the statistical correlations involved. Roughly speaking, CI produces the smallest error ellipsoid (within a particular family) that is guaranteed to enclose the intersection of the two error ellipsoids being fused. This is a more conservative approach than the standard Kalman filter equations, which assume perfect knowledge of the statistical correlations. However, when the two error ellipsoids are equal, the intersection is the same as the original ellipsoids, and hence with CI there is no apparent benefit from fusion; some engineers might view this as much too conservative.
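To make these two ideas concrete, here is a minimal sketch (mine, not the MATLAB code included in the handbook) of an unscented transform and a covariance intersection update, written in Python with NumPy. The function names, the sigma-point scaling parameter kappa, and the choice of omega by minimizing the trace of the fused covariance are my own assumptions about typical conventions, not details taken from the chapters.

    import numpy as np

    def unscented_transform(mean, cov, f, kappa=0.0):
        """Propagate (mean, cov) through a nonlinearity f via 2n+1 sigma points."""
        n = len(mean)
        root = np.linalg.cholesky((n + kappa) * cov)  # columns give symmetric offsets
        sigma = ([mean] + [mean + root[:, i] for i in range(n)]
                        + [mean - root[:, i] for i in range(n)])
        weights = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
        weights[0] = kappa / (n + kappa)
        y = np.array([f(s) for s in sigma])           # transformed sigma points
        y_mean = weights @ y
        y_cov = sum(w * np.outer(yi - y_mean, yi - y_mean)
                    for w, yi in zip(weights, y))
        return y_mean, y_cov

    def covariance_intersection(xa, Pa, xb, Pb, omegas=np.linspace(0.01, 0.99, 99)):
        """Fuse two estimates with unknown cross-correlation; pick omega by trace."""
        best = None
        for w in omegas:
            info = w * np.linalg.inv(Pa) + (1.0 - w) * np.linalg.inv(Pb)
            P = np.linalg.inv(info)
            x = P @ (w * np.linalg.inv(Pa) @ xa + (1.0 - w) * np.linalg.inv(Pb) @ xb)
            if best is None or np.trace(P) < best[2]:
                best = (x, P, np.trace(P))
        return best[0], best[1]

Note that if Pa and Pb are equal, every choice of omega returns that same covariance, which is exactly the no-apparent-benefit behavior described above.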
One of the longest and most provocative chapters in this handbook is by Joseph Carl, on Bayesian vs. Dempster-Shafer (D-S) decision algorithms, which have been the subject of heated debate over the last several decades. Carl says that “many would argue that probability theory is not suitable for practical implementation on complex real-world problems,” which is a very interesting assertion, but Carl does not explain how D-S theory improves the situation. In particular, it is well known, and not subject to debate, that given a decision problem with completely defined probability distribution functions (pdfs), Bayesian decision rules are optimal. Therefore, the only way in which D-S could improve performance is the case in which the pdfs are not completely specified. The problem of incomplete pdfs was the original motivation for Dempster’s seminal work published in 1967 and 1968, and it has been the subject of extensive research since then, but without any practical results so far (see [6] and pages 57-61 in [7] for a survey of this research). The best work that I know of on this subject is Chapter 5 of Kharin’s recent book [8], which basically concludes that, for most sensible models of uncertainty in the pdfs, the standard Bayesian decision rule, or a minor modification of it, is the most robust approach. There is no formula or theorem in Carl’s chapter that shows how much D-S improves performance relative to Bayesian methods, nor does Carl assert that D-S is better than Bayes; rather, in the detailed numerical example that is worked out, the D-S and Bayesian decisions are “qualitatively the same.” This agrees with other comparisons of D-S with Bayes reported in the