Although the look and feel of SPM2 is very similar to SPM99, there have been a number of implementation improvements. The general strategy is to ensure the code is as simple and accessible as possible, yet maintains a degree of robustness and efficiency. Specific changes are detailed below:

A number of the revisions detailed above emphasise that modularity has been
preserved and that we have tried to make the code as simple and readable as
possible. A key change in terms of the architecture is the consolidation of
the .mat files that contain the analysis variables and parameters. These
files have been consolidated in such a way that using SPM in batch mode
should be much easier. We have done this as a prelude to planned work using
mark-up languages (XML) to facilitate the review, specification and
implementation of SPM procedures. In brief, SPM2 sets up a single structure
(SPM) at the beginning of each analysis and, as the analysis proceeds,
fills in sub-fields in a hierarchical fashion. This enables one to fill in
early fields automatically and bypass the user interface prompting. After
the design specification fields have been filled in the design matrix is
computed and placed in a design matrix structure. This, along with a data
structure and a non-sphericity structure, is used by SPM to compute the
parameters and hyperparameters. These are saved (as handles to the
parameter and hyperparameter images) as sub-fields of SPM. A contrast
sub-structure is generated automatically and can be augmented at any stage.
This structure array can be filled in automatically after estimation using
`spm_contrasts.m`.

The hierarchical organisation of the sub-function calls
and the SPM structure means that, after a few specification fields are set
in the SPM structure, an entire analysis, complete with contrasts, can be
implemented automatically. An example of a text file that fills in the
initial fields of SPM and then calls the required functions can be found in
`spm_batch.m`.

Key fields include:

- SPM.xY - Data structure
- SPM.nscan - Vector of scans per session
- SPM.xBF - Basis function structure
- SPM.Sess - Session structure
- SPM.xX - Design matrix structure
- SPM.xGX - Global variate structure
- SPM.xVi - Non-sphericity structure
- SPM.xM - Masking structure
- SPM.xsDes - Design description structure
- SPM.xCon - Contrast structure
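The pattern of pre-setting early fields so that later stages bypass interactive prompting can be illustrated with a small sketch. This is purely illustrative and in Python for brevity; SPM2 itself uses a Matlab struct, and `get_field` is a hypothetical helper standing in for the user interface:

```python
def get_field(spm, path, prompt_default):
    """Return a nested field if it is already set; otherwise 'prompt'
    for it (here just filled with a default, standing in for the GUI)."""
    node = spm
    for key in path[:-1]:
        node = node.setdefault(key, {})
    if path[-1] not in node:
        node[path[-1]] = prompt_default   # would be an interactive prompt
    return node[path[-1]]

# Early fields are filled in up front, as a batch script would do...
SPM = {'xY': {'RT': 2.0}}

rt = get_field(SPM, ('xY', 'RT'), 3.0)             # pre-set: no prompt
units = get_field(SPM, ('xBF', 'UNITS'), 'scans')  # missing: defaulted
```

Each sub-function fills in its own sub-fields as the analysis proceeds, so the structure accumulates the full analysis specification hierarchically.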

Currently, the Neuroimaging Informatics Technology Initiative (NIfTI) is deciding on a strategy to allow different software packages to share data more easily. This may involve the development of a new file format for possible adoption by the neuro-imaging community. No decisions have yet been made, but eventually the recommendations of the group should be incorporated into SPM. Unfortunately, this will not happen in SPM2. There will, however, be slight changes to the SPM file format. By default, images are currently flipped in the left-right direction during spatial normalisation. This is for historical reasons and relates to the co-ordinate system of Talairach and Tournoux being right-handed, whereas the co-ordinate system adopted for storing Analyze images is left-handed. This will be streamlined, but will require each site to adopt a consistent co-ordinate system handedness for their images (specified in a file in the SPM2 distribution). One side effect of this is that images spatially normalised using SPM99 or earlier will need to be spatially normalised again.

Sinc interpolation is a classical interpolation method that locally convolves the image with some form of interpolant. Much more efficient re-sampling can be performed using generalised interpolation (Thévenaz, 2000), where an image is modelled by a linear combination of basis functions. A continuous representation of the image intensities can be obtained by fitting the basis functions through the original intensities of the image. The matrix of basis functions can be considered as square and Toeplitz, and the bases usually have compact support. In particular, spatial interpolation in the realignment and spatial normalisation modules of SPM2 uses B-splines, which are a family of functions of varying degree. Interpolation using B-splines of degree 0 or 1 is almost identical to nearest-neighbour or linear interpolation respectively. Higher-degree interpolation (2 and above) begins with a very efficient deconvolution of the basis functions from the original image, to produce an image of basis-function coefficients. Image intensities at new positions can then be reconstructed using the appropriate linear combination of the basis functions. Much of the interpolation code is based (with permission from Philippe Thévenaz) on algorithms released by the Biomedical Imaging Group (http://bigwww.epfl.ch/) at the Swiss Federal Institute of Technology Lausanne (EPFL).
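The two steps (deconvolve the bases to get coefficients, then reconstruct at new positions) can be sketched with SciPy's B-spline routines. This is a Python illustration of the same scheme, not SPM2's own compiled code:

```python
import numpy as np
from scipy import ndimage

# A 1-D "image" sampled on an integer grid.
signal = np.sin(np.linspace(0, 2 * np.pi, 32))

# Step 1: an efficient deconvolution of the cubic (degree-3) B-spline
# bases from the data yields an image of basis-function coefficients.
coeffs = ndimage.spline_filter1d(signal, order=3, mode='mirror')

# Step 2: intensities at new positions are reconstructed as the
# appropriate linear combination of the bases (prefilter=False because
# the coefficients were already computed in step 1).
new_positions = np.arange(32, dtype=float) - 0.5
shifted = ndimage.map_coordinates(coeffs, [new_positions], order=3,
                                  prefilter=False, mode='mirror')
```

By construction the fitted spline interpolates the data, so evaluating it back at the original (interior) grid positions reproduces the original intensities.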

Image realignment involves estimating a set of 6 rigid-body transformation parameters for each image in the time series. For each image, this is done by finding the parameters that minimise the mean squared difference between it and a reference image. It is not possible to exhaustively search through the whole 6-dimensional (7 if the intensity scaling is included) parameter space, so the algorithm makes an initial parameter estimate (zero rotations and translations), and begins to iteratively search from there. At each iteration, the model is evaluated using the current parameter estimates, and the cost function re-computed. A judgment is then made about how the parameter estimates should be modified, before continuing on to the next iteration. This optimisation is terminated when the cost function stops decreasing.
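The iterative search can be sketched in one dimension. This is a Python toy with a single translation parameter, and SciPy's Powell optimiser standing in for SPM's own optimisation scheme (an assumption; the actual algorithm differs):

```python
import numpy as np
from scipy import ndimage, optimize

rng = np.random.default_rng(0)
# A smooth 1-D "reference image".
reference = ndimage.gaussian_filter(rng.standard_normal(64), 3)

true_shift = 2.3  # the "motion" we want to recover
moved = ndimage.shift(reference, true_shift, order=3, mode='mirror')

def cost(params):
    # Evaluate the model at the current parameter estimate and return
    # the mean squared difference from the reference image.
    resliced = ndimage.shift(moved, -params[0], order=3, mode='mirror')
    return np.mean((resliced - reference) ** 2)

# Start from a zero estimate and iteratively search from there; the
# optimiser terminates when the cost function stops decreasing.
result = optimize.minimize(cost, x0=[0.0], method='Powell')
estimated_shift = float(np.atleast_1d(result.x)[0])
```

In SPM2 the same idea is applied over all six rigid-body parameters (plus an optional intensity-scaling parameter) in 3-D.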

In order to be successful, the cost function needs to be smooth. Interpolation artefacts are one reason why the cost function may not be smooth. Using trilinear interpolation, sampling in the centre of 8 voxels effectively involves taking the weighted average of these voxels, which introduces smoothing. Therefore, an image translated by half of a voxel in three directions will be smoother than an image that has been translated by a whole number of voxels. The mean squared difference between smoothed images tends to be slightly lower than that for un-smoothed ones, which has the effect of introducing unwanted "texture" into the cost function landscape. Dithering the way that the images are sampled has the effect of reducing this texture. This has been done for the SPM2 realignment, which means that less spatial smoothing is necessary for the algorithm to work. Re-sampling the images uses B-spline interpolation, which is more efficient than the windowed sinc interpolation of SPM99 and earlier.

The optional adjustment step has been removed, mostly because it is more correct to include the estimated motion parameters as confounds in the statistical model than it is to remove them at the stage of image realignment. This also means that each image can be re-sliced one at a time, which allows more efficient image I/O to be used. This extra efficiency should be seen throughout SPM2.

The old default three-step coregistration procedure of SPM99 is now obsolete. The approach in SPM2 involves optimising an information-theoretic objective function. The mutual information coregistration in the original SPM99 release exhibited occasional problems due to interpolation artefacts, but these have now been largely resolved using spatial dithering (see above). Information-theoretic measures allow much more flexible image registration because they make fewer assumptions about the images. A whole range of different types of images can now be more easily coregistered in SPM.
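Mutual information can be computed from the joint histogram of the two images, which is why it makes so few assumptions about the intensity relationship between them. A minimal sketch (the `mutual_information` helper and the synthetic "T1"/"T2" images are illustrative, not SPM code):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images, from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
t1 = rng.standard_normal((64, 64))
# A "T2-like" image: a nonlinear function of the same underlying
# anatomy. MI does not assume any particular form for this mapping.
t2 = np.exp(-t1)
noise = rng.standard_normal((64, 64))
```

The objective is high when one image predicts the other (whatever the intensity mapping) and low for unrelated images, which is what makes it suitable for between-modality registration.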

Estimating the spatial transformation that best warps an image to match one of the SPM template images is a two-step procedure, beginning with a local optimisation of the 12 parameters of an affine transformation. This step has been made more robust by making the procedure more internally consistent: affine-registering image A to match image B should now produce a result that is much closer to the inverse of the affine transformation that matches image B to image A. The regularisation (a procedure for increasing stability) of the affine registration has also changed. The penalty for unlikely warps is now based on the assumption that the matrix log of the affine transformation matrix (after removing rotation and translation components) is multivariate normal.
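A hypothetical version of such a penalty can be sketched as follows. The `affine_penalty` function, the polar decomposition used to strip the rotation, and the isotropic (identity-covariance) Gaussian assumption are all illustrative simplifications, not SPM2's actual regulariser:

```python
import numpy as np
from scipy.linalg import logm

def affine_penalty(A, precision=1.0):
    """Illustrative regulariser: penalise unlikely affine warps by
    assuming the matrix log of the zoom/shear part (rotation removed
    via a polar decomposition) is multivariate normal."""
    M = A[:3, :3]
    # Polar decomposition M = R @ P: discard the rotation R, keep the
    # symmetric positive-definite part P (zooms and shears).
    U, s, Vt = np.linalg.svd(M)
    P = Vt.T @ np.diag(s) @ Vt
    L = logm(P)                      # matrix logarithm of the warp
    return precision * float(np.sum(np.real(L) ** 2))

identity = np.eye(4)                         # no warp: zero penalty
zoomed = np.diag([2.0, 1.0, 1.0, 1.0])       # anisotropic zoom: penalised
```

Working with the matrix log means that a zoom by 2 and a zoom by 1/2 are penalised equally, which is the kind of symmetry that makes the registration more internally consistent.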

The non-linear component has also changed slightly, in that the bending energy of the warps is used to regularise the procedure, rather than the membrane energy. The bending-energy model seems to produce more realistic looking distortions.

The segmentation model has been updated in order to improve the bias correction component. The version in SPM99 tended to produce a dip in the middle of the bias corrected images. This is because the bias correction had a tendency towards scaling the image uniformly by zero (lowest entropy solution), but was prevented from doing so because the bias field was constrained to have an average value of one. Because the bias was only determined from grey and white matter, it tended to push down the intensity in these regions, and compensated by increasing the intensity of other regions. This is now fixed with a new objective function.

The segmentation procedure also includes a "cleanup" procedure whereby small regions of non-brain tissue that are misclassified as brain are removed. Also, the prior probability images are scaled such that they have less influence on the segmentation. Previously, abnormal brains could be extremely problematic if there was (e.g.) CSF where the prior probability images suggested that there was 0% probability of obtaining it. The modified code may be able to cope slightly better with these abnormalities.

`(spm_fmri_spm_ui.m)`

fMRI design specification is now simpler. The distinction between event- and epoch-related designs has effectively been removed by allowing the user to specify stimulus functions that comprise a series of variable-length boxcars. If the length reduces to zero, one is implicitly modelling an event. This contrasts with SPM99, where the distinction between event- and epoch-related responses was made at the level of the basis functions (a hemodynamic basis set or a boxcar epoch set). The main reason for putting all the design-specific effects in the input functions, as opposed to the basis functions, is to finesse the specification of inputs to DCM and hemodynamic models. From the point of view of dynamical modelling, the basis functions simply serve to approximate the impulse response function of the input-state-output model that each voxel represents. Consequently, the basis functions you specify pertain to, and only to, the hemodynamic response function, and these basis functions are assumed to be the same for all sessions. Onsets and durations of the various trials or conditions can now be specified in scans or seconds. The option to perform a second-order or generalised convolution analysis using Volterra kernels is, likewise, specified once for all sessions. You can now also specify negative onset times of up to 32 time bins (each defaulting to 1/16 of the inter-scan interval).
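The construction above, where variable-length boxcars (with zero-duration boxcars as events) are convolved with a hemodynamic basis, can be sketched as follows. The single gamma-function HRF is a stand-in (an assumption; SPM's canonical HRF is a difference of two gamma functions), and the numbers are arbitrary:

```python
import numpy as np

TR, n_scans = 2.0, 120
dt = TR / 16.0                       # 16 time bins per scan
t = np.arange(int(n_scans * TR / dt)) * dt

# Stimulus function: variable-length boxcars. A duration of zero
# collapses the boxcar to a single time bin, implicitly an event.
onsets = [10.0, 70.0, 150.0]         # seconds
durations = [20.0, 0.0, 10.0]        # a 20 s epoch, an event, a 10 s epoch
u = np.zeros_like(t)
for on, dur in zip(onsets, durations):
    u[(t >= on) & (t <= on + max(dur, dt))] = 1.0

# Illustrative gamma-shaped HRF, normalised to unit sum.
h = (t ** 5) * np.exp(-t)
h /= h.sum()

# Convolve the input with the basis and down-sample to scan times.
regressor = np.convolve(u, h)[: t.size][:: 16]
```

The same stimulus function `u` would also serve directly as the input to a DCM or hemodynamic model, which is the point of moving the design-specific effects out of the basis functions.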

`(spm_spm.m)`

`spm_spm.m` has been considerably simplified. Firstly, the `Y.mad` file, which used to store the time-series of voxels surviving an F test for all the effects of interest, has been removed. This means that you have to keep the original data in order to plot the responses. Although this decreases robustness, it very much simplifies the software and allows you to interrogate any brain region using `spm_graph.m` at the point of inference, when reviewing your results. The second simplification is that smoothness estimation is now performed on the residual fields after estimation. This means that `spm_spm.m` calls separate subroutines after the estimation has completed. This smoothness estimation is slower but much more accurate, because it computes the partial derivatives of the residual fields using a more finessed interpolation scheme.

As mentioned above, `spm_spm.m` now performs weighted least-squares (WLS)
estimation, which defaults to ML estimation using the estimated or
specified non-sphericity in SPM.xVi.V. The weighting matrix is stored in
SPM.xX.W. If SPM.xVi.V is not specified, ReML hyperparameters are
estimated in a first pass of the data according to the covariance
components in SPM.xVi.Vi.
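The sense in which WLS defaults to ML estimation is that the weighting matrix whitens the data: W = V^(-1/2), so the weighted residuals are (proportionally) i.i.d. A sketch, using an AR(1)-style serial-correlation matrix as an assumed stand-in for the ReML-estimated non-sphericity:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])

# Assumed non-sphericity V: AR(1)-like serial correlations (in SPM2,
# V would come from SPM.xVi.V, estimated by ReML).
rho = 0.4
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Whitening matrix W = V^(-1/2), via the eigendecomposition of V.
vals, vecs = np.linalg.eigh(V)
W = vecs @ np.diag(vals ** -0.5) @ vecs.T

# Simulate data with serially correlated noise, then estimate.
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + np.linalg.cholesky(V) @ rng.standard_normal(n)

# Weighted least-squares parameter estimates: OLS on whitened data.
beta = np.linalg.lstsq(W @ X, W @ y, rcond=None)[0]
```

With this choice of W, the weighted model has spherical errors, so ordinary least squares on the whitened data gives the ML estimates under the assumed V.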

`(spm_results_ui.m)`

The main revision to the results section is in the plotting, where the standard error bars have been replaced by 90% confidence intervals. The plotting of event-related responses has been upgraded to provide true finite impulse response (FIR) estimates, for both event- and epoch-related designs. This is equivalent to a peri-stimulus time histogram of hemodynamic responses and is estimated by refitting an FIR model at the selected voxel.
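Refitting an FIR model amounts to building a design matrix with one indicator column per peri-stimulus time bin and solving by least squares; the estimates are then a peri-stimulus time histogram of the response. A sketch with made-up onsets and a hypothetical response shape:

```python
import numpy as np

rng = np.random.default_rng(4)
n_scans, fir_bins = 200, 8
onsets = np.arange(10, 190, 25)          # stimulus onsets, in scans

# FIR design matrix: column j indicates "j scans after an onset".
X = np.zeros((n_scans, fir_bins))
for j in range(fir_bins):
    for on in onsets:
        if on + j < n_scans:
            X[on + j, j] = 1.0

# Simulated voxel time-series from a hypothetical response shape.
true_response = np.array([0, 1, 3, 4, 3, 1.5, 0.5, 0], dtype=float)
y = X @ true_response + 0.1 * rng.standard_normal(n_scans)

# Refit the FIR model at the selected voxel: ordinary least squares.
fir_estimate = np.linalg.lstsq(X, y, rcond=None)[0]
```

Because the FIR basis makes no assumptions about the shape of the response, the fitted coefficients trace out the hemodynamic response bin by bin, peaking where the underlying response peaks.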

Users can now simultaneously display a table of results and the functional activations. After displaying results, click on the Results-Fig button to the right of the menu on the graphic figure. This will open a new window. Clicking on the volume or cluster buttons will now display the results table in this figure. Slices and sections will still display in the main graphics figure. The Results-Fig is fully active and clicking on individual lines in the table will move the cursor to the appropriate points on the displayed sections.