University College London, 2000.
A number of procedures have been developed for brain morphometry, many of which are essential in functional imaging applications. The theme developed in the thesis is computational neuroanatomy, which draws upon a series of image registration methods, together with image segmentation and statistical methods for characterising brain structure.
The simplest registration is for rigid bodies, and is normally applied within subject. Methods are described for rigid registration of both inter- and intra-modality images. More complex models are required for registering images of different subjects into the same stereotactic space. A coarse but fast method is described, which begins with a 12-parameter affine registration, followed by nonlinear warps modelled by a linear combination of spatial basis functions. The registration proceeds within a Bayesian framework, which is used for penalising unlikely shape changes. A high-dimensional warping method follows for refining the initial registration. In order to estimate more accurate warps, this method emphasises consistency of the deformations, by considering that warping brain A to brain B should not be different, probabilistically, from warping B to A. A Bayesian framework is used again, whereby a prior probability distribution for the warps is assumed that embodies this symmetry.
A new scheme for segmenting grey and white matter from MR images has been developed, which is based on Gaussian mixture model cluster analysis. Registered prior probability maps of different tissue classes are used to make the classification more robust, and MRI intensity nonuniformity correction is also incorporated into this model.
Finally, a taxonomy of morphometric methods is described; these methods characterise either the regional distribution of different tissue types, or the shape differences inherent in the deformations computed by the warping methods.
The initial motivation for this work was to develop improved methods of image registration for functional imaging. Much of it has now been incorporated into the SPM99 package, and is used by several hundred researchers around the world for analysing functional imaging data. The second motivation was to facilitate the development of methods for studying brain shape among different populations. Recently, the term computational neuroanatomy has been coined for this area of research.
Rigid body registration is one of the simplest forms of image registration, so this chapter provides an ideal framework for introducing some of the concepts that will be used by the more complex registration methods described in later chapters. The shape of a human brain changes very little with head movement, so rigid body transformations can be used to model different head positions of the same subject. The registration methods described in this chapter operate either within a single modality, or between different modalities such as PET and MRI. Two images are matched by finding the rotations and translations that optimise some function of the pair. Within-modality registration generally involves minimising the sum of squared differences between the images. For between-modality registration, the matching criterion needs to be more complex. A method for co-registering brain images of the same subject that have been acquired in different modalities is presented. The basic idea is that instead of matching two images directly, one performs intermediate within-modality registrations to two template images that are already in register. A least squares minimisation can be used to determine the affine transformations that map between the templates and the images. By incorporating suitable constraints, a rigid body transformation that directly maps between the images can be extracted from these more general affine transformations. A further refinement capitalises on the implicit normalisation of both images into a standard space, which facilitates partitioning both original images into homologous tissue classes. Once extracted, the partitions are jointly matched, further increasing the accuracy of the co-registration.
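As a toy illustration of the within-modality criterion, the following sketch (illustrative only; the function names are hypothetical, and the method itself optimises full rigid-body rotations and translations with a gradient-based scheme rather than an exhaustive search) recovers an integer translation by minimising the sum of squared differences:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences between two images."""
    return float(np.sum((a - b) ** 2))

def register_translation(source, target, search=3):
    """Integer translation (dy, dx) minimising SSD between the images."""
    best, best_t = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            moved = np.roll(source, (dy, dx), axis=(0, 1))
            cost = ssd(moved, target)
            if cost < best:
                best, best_t = cost, (dy, dx)
    return best_t

# Synthetic example: the target is the source translated by (2, -1).
rng = np.random.default_rng(0)
src = rng.random((32, 32))
tgt = np.roll(src, (2, -1), axis=(0, 1))
print(register_translation(src, tgt))  # (2, -1)
```

The SSD criterion is only appropriate within modality, where homologous structures have comparable intensities; this is exactly why the between-modality method above routes the registration through templates instead.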
This chapter describes the steps involved in registering images of different subjects into roughly the same co-ordinate system, where the co-ordinate system is defined by a template image (or series of images). The method uses only up to a few hundred parameters, so it can only model global brain shape. It works by estimating the optimum coefficients for a set of basis functions, by minimising the sum of squared differences between the template and source image, while simultaneously maximising the smoothness of the transformation using a maximum a posteriori (MAP) approach. In order to adopt the MAP approach, it is necessary to have estimates of the likelihood of obtaining the data given the fitted parameters, which requires prior knowledge of spatial variability, and also knowledge of the variance associated with each observation. Fully Bayesian approaches assume that the variance associated with each voxel is already known, whereas the approach developed here is a type of empirical Bayesian method, which estimates this variance from the residual errors. Because the registration is based on smooth images, correlations between neighbouring voxels are considered when estimating the variance. This makes the same approach suitable for the spatial normalisation of both high quality MR images and low resolution, noisy PET images. A fast algorithm has been developed that utilises Taylor's theorem and the separable nature of the basis functions, meaning that most of the nonlinear spatial variability between images can be automatically corrected within a few minutes. The approach begins by matching the images using an affine transformation. Unlike Chapter 2, where the images to be matched are from the same subject, zooms and shears are needed to register heads of different shapes and sizes. Knowledge of the variability of head sizes is included within a Bayesian framework in order to increase the robustness and accuracy of the method.
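The separable basis can be sketched as follows (a minimal illustration in the spirit of the method, not the SPM implementation; `dct_basis` is a hypothetical helper). A 1D displacement field is a linear combination of a few low-frequency discrete cosine transform columns, and separability means that 2D or 3D bases are built as Kronecker products of such 1D bases, which is what makes the fast algorithm possible:

```python
import numpy as np

def dct_basis(n, k):
    """First k columns of an orthonormal n-point DCT-II basis
    (the k lowest spatial frequencies)."""
    x = (np.arange(n) + 0.5) / n
    B = np.cos(np.pi * np.outer(x, np.arange(k)))
    B[:, 0] /= np.sqrt(2.0)            # standard DCT-II scaling of the DC term
    return B * np.sqrt(2.0 / n)

n, k = 64, 4
B = dct_basis(n, k)                    # (64, 4) basis matrix, B.T @ B = I
coef = np.array([0.0, 1.5, -0.5, 0.2]) # coefficients estimated by the fit
warp = B @ coef                        # smooth 1D displacement field
print(B.shape, warp.shape)             # (64, 4) (64,)
```

Because only low frequencies are retained, any displacement field expressible in this basis is smooth by construction, which complements the explicit regularisation described below.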
Following this step, gross differences in head shape that cannot be accounted for by affine normalisation alone are corrected by a nonlinear spatial normalisation procedure. In order to reduce the number of parameters to be estimated, the nonlinear warps are described by a linear combination of low spatial frequency discrete cosine transform basis functions. Regularisation of the problem involves biasing the warps to be smooth by simultaneously minimising their membrane energy.
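The membrane energy used as the regulariser takes the standard form (the notation here is mine rather than the text's: $\mathbf{u}$ is the displacement field and $\lambda$ a weighting constant that balances smoothness against goodness of fit):

```latex
\mathcal{E}(\mathbf{u}) \;=\; \lambda \int \sum_{i=1}^{3}\sum_{j=1}^{3}
\left( \frac{\partial u_i(\mathbf{x})}{\partial x_j} \right)^{\!2}
\, \mathrm{d}\mathbf{x}
```

Penalising the squared first derivatives of the displacements biases the estimated warps towards smooth deformations, without penalising a uniform translation.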
This chapter is also about warping brain images of different subjects into the same stereotactic space. However, unlike Chapter 3, this method uses thousands or millions of parameters, so it can potentially achieve much greater precision. A high dimensional model is used, whereby a finite element approach is employed to estimate translations at the location of each voxel in the template image. Bayesian statistics are used to obtain a maximum a posteriori (MAP) estimate of the deformation field. The validity of any registration method is largely based upon the prior knowledge about the variability of the estimated parameters. In this approach it is assumed that the priors should have some form of symmetry, in that priors describing the probability distribution of the deformations should be identical to those for the inverses (i.e., warping brain A to brain B should not be different, probabilistically, from warping B to A). The fundamental assumption is that the probability of stretching a voxel by a factor of n is the same as the probability of shrinking n voxels by a factor of 1/n. The penalty function of choice is based upon the singular values of the Jacobian matrices having log-normal distributions, which enforces a continuous one-to-one mapping. A gradient descent algorithm is presented that incorporates the above priors in order to obtain a MAP estimate of the deformations. Further consistency is achieved by registering images to their ``averages'', where this average is one of both intensity and shape.
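The symmetry of this penalty can be made concrete with a small sketch (hypothetical code, not the thesis implementation): if the logarithm of each singular value of the Jacobian is penalised quadratically, a stretch by a factor of 2 and a shrink by a factor of 1/2 incur identical costs, and the penalty diverges as a singular value approaches zero, which is what enforces the one-to-one mapping:

```python
import numpy as np

def jacobian_penalty(J, lam=1.0):
    """Quadratic penalty on the log singular values of a Jacobian matrix.
    Symmetric under inversion: penalty(J) == penalty(inv(J))."""
    s = np.linalg.svd(J, compute_uv=False)
    return lam * float(np.sum(np.log(s) ** 2))

identity = np.eye(2)
stretch = np.diag([2.0, 1.0])   # stretch along x by a factor of 2
shrink = np.diag([0.5, 1.0])    # shrink along x by a factor of 1/2
print(jacobian_penalty(identity))                          # 0.0
print(jacobian_penalty(stretch), jacobian_penalty(shrink)) # equal costs
```

Because log(s) and log(1/s) = -log(s) have the same squared magnitude, the prior assigns the same probability to a deformation and its inverse, which is the consistency property motivated above.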
A tissue classification method was originally developed to be part of the between-modality registration procedure described in Chapter 2, but the classification results are also useful for various types of morphometry, as well as having potential applications in other registration techniques. This chapter describes a method of segmenting MR images into different tissue classes, using a modified Gaussian mixture model. By knowing the prior spatial probability of each voxel being grey matter, white matter or cerebrospinal fluid, it is possible to obtain a more robust classification. In addition, a correction for intensity non-uniformity is incorporated, which makes the method more applicable to images corrupted by smooth intensity variations. Evaluations of the method show that the non-uniformity correction improves the segmentation of images containing this artefact.
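The role of the registered prior probability maps can be sketched for a single voxel (illustrative parameters and function names only; the actual method iterates this within an EM scheme and includes the non-uniformity correction): the posterior class probability combines the Gaussian intensity likelihood with a voxel-specific spatial prior in place of a single global mixing proportion:

```python
import numpy as np

def classify(intensity, priors, means, variances):
    """Posterior class probabilities for one voxel (the E-step of EM)."""
    lik = np.exp(-0.5 * (intensity - means) ** 2 / variances)
    lik /= np.sqrt(2.0 * np.pi * variances)
    post = priors * lik            # spatial prior x intensity likelihood
    return post / post.sum()       # normalise over the classes

# Classes: grey matter, white matter, CSF (purely illustrative values).
means = np.array([0.6, 0.9, 0.2])
variances = np.array([0.01, 0.01, 0.02])
priors = np.array([0.5, 0.3, 0.2])   # from a registered prior probability map
p = classify(0.62, priors, means, variances)
print(p.argmax())  # 0, i.e. grey matter
```

An ambiguous intensity that lies between two class means is resolved towards the class that the prior map says is plausible at that location, which is what makes the classification more robust.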
The chapter on morphometry covers three principal morphometric methods, which will be called voxel-based, deformation-based and tensor-based morphometry. At its simplest, voxel-based morphometry (VBM) involves a voxel-wise comparison of the local concentration of grey matter between two groups of subjects. The procedure is relatively straightforward, and involves spatially normalising high resolution MR images from all the subjects in the study into the same stereotactic space. This is followed by segmenting the grey matter from the spatially normalised images, and smoothing these grey matter segments. Voxel-wise parametric statistical tests are then performed, comparing the smoothed grey matter images from the two groups. Corrections for multiple comparisons are made using the theory of Gaussian random fields. This chapter describes the steps involved in VBM, and provides evaluations of the assumptions made about the statistical distribution of the data.
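The statistics step of VBM can be sketched on synthetic data (illustrative only; the normalisation, segmentation, smoothing and random field correction steps are omitted): a two-sample t-test is computed independently at every voxel of the smoothed grey matter images:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 100
# Smoothed grey-matter values for 12 subjects per group (synthetic).
group_a = rng.normal(0.5, 0.05, size=(12, n_voxels))
group_b = rng.normal(0.5, 0.05, size=(12, n_voxels))
group_b[:, 40] += 0.2   # simulate a focal grey-matter difference at voxel 40

# One t-test per voxel, across the subject axis.
t, p = stats.ttest_ind(group_a, group_b, axis=0)
print(int(np.argmin(p)))  # voxel 40 shows the strongest effect
```

In practice the resulting statistic image contains tens of thousands of correlated tests, which is why the correction based on Gaussian random field theory mentioned above is essential.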
Deformation-based morphometry (DBM) is a method for identifying macroscopic anatomical differences among the brains of different populations of subjects. The method involves spatially normalising the structural MR images of a number of subjects so that they all conform to the same stereotactic space. Multivariate statistics are then applied to the parameters describing the estimated nonlinear deformations that ensue. To illustrate the method, the gross morphometry of male and female subjects is compared. Brain asymmetry, the effect of handedness, and the interactions among these effects are also assessed.
Tensor-based morphometry (TBM) is introduced as a method of identifying regional structural differences from the gradients of deformation fields. Deformation fields encode the relative positions of different brain structures, but local shape properties (such as volumes, lengths and areas) are encoded in their gradients (the Jacobian matrix field). Various functions of these tensor fields can be used to characterise shape differences.
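The most commonly used such function is the determinant of the Jacobian matrix, which measures local volume change (values above 1 indicate expansion, below 1 contraction). A small 2D sketch (hypothetical code, using numerical gradients in place of the analytic derivatives available from a basis function representation):

```python
import numpy as np

def jacobian_determinant(def_y, def_x):
    """Determinant of the Jacobian of a 2D deformation, given the two
    component fields of the mapping (y, x) -> (def_y, def_x)."""
    dyy, dyx = np.gradient(def_y)   # derivatives along axis 0 and axis 1
    dxy, dxx = np.gradient(def_x)
    return dyy * dxx - dyx * dxy

# A deformation that uniformly scales both axes by 1.1.
n = 16
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
det = jacobian_determinant(1.1 * yy, 1.1 * xx)
print(float(det[8, 8]))  # approximately 1.21 = 1.1 * 1.1, a 21% expansion
```

Voxel-wise statistics on such determinant images (or on other invariants of the tensor field) then localise where the volumes of two groups differ.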