Modelling and analysis
Acquisition: How we obtain and represent data
We use neuroimaging technologies to obtain data in different forms for different purposes:
- Neuroimaging produces three-dimensional ‘maps’ of the brain, composed of many small volume elements (‘voxels’), analogous to the pixels that make up a photograph, with coordinates indicating where in the map each voxel lies.
- Maps of tissue properties are generated using in-vivo histology models.
- For functional imaging, data at each voxel are acquired at multiple time points.
Analysis: How we process the data
The analytical methods we invent, develop, distribute and use are incorporated within our statistical parametric mapping (SPM) software. The software provides the range of functions needed to test hypotheses about functional anatomy, including:
Registration
Registration is a technique in which imaging data (e.g. MRI scans) are placed in the same space (i.e. aligned) so that images can be compared reliably. It involves estimating a set of parameters that describe the relative positions and orientations of several images from the same person, enabling those images to be overlaid for comparison.
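The core idea of registration – estimate the parameters relating two images, then use them to bring the images into the same space – can be illustrated with a toy numpy sketch (not SPM code). Here the only parameter is a translation, found by brute-force search for the shift that minimises the sum of squared differences; SPM's registration estimates full rigid-body (and more general) transformations far more efficiently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reference" image and a copy shifted by a known translation.
reference = rng.random((64, 64))
true_shift = (3, -5)
moving = np.roll(reference, true_shift, axis=(0, 1))

# Estimate the translation by brute-force search: try candidate shifts
# and keep the one minimising the sum of squared differences (SSD).
best_shift, best_ssd = None, np.inf
for dy in range(-8, 9):
    for dx in range(-8, 9):
        candidate = np.roll(moving, (-dy, -dx), axis=(0, 1))
        ssd = np.sum((candidate - reference) ** 2)
        if ssd < best_ssd:
            best_shift, best_ssd = (dy, dx), ssd

print(best_shift)  # recovers the translation applied above
```

Once the shift is known, the moving image can be resampled into the reference's space and the two overlaid for comparison.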
Spatial normalisation
The sizes and shapes of brains differ among individuals. To enable brain scans from different people to be compared, they are usually warped to fit a common template. Spatial normalisation estimates how an image’s shape differs from that of the template, then warps the image to match the standard template brain.
Spatial normalisation allows:
- Signal averaging across groups of people
- Identification of commonalities and differences between groups (e.g. patients vs. healthy individuals)
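A hypothetical sketch of the simplest ingredient of spatial normalisation – resampling a subject's image onto the template's grid – using scipy rather than SPM. Real normalisation also estimates affine and nonlinear deformations; this shows only the global rescaling step.

```python
import numpy as np
from scipy.ndimage import zoom

# A toy "subject" image whose grid differs from the template's.
subject = np.random.default_rng(1).random((50, 60))
template_shape = (64, 64)

# The simplest warp is a global rescaling that maps the subject's grid
# onto the template grid; SPM additionally estimates nonlinear deformations.
factors = [t / s for t, s in zip(template_shape, subject.shape)]
normalised = zoom(subject, factors, order=1)  # linear interpolation

print(normalised.shape)  # now matches the template grid
```

With every subject resampled (and, in practice, nonlinearly warped) to the same grid, signals can be averaged and compared voxel by voxel across people.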
Segmentation
Segmentation is the process of separating images (e.g. brain scans) into different structures or types of tissue – for instance grey matter, white matter, and cerebrospinal fluid (CSF). Unified segmentation is a technique that combines image registration, tissue classification, and correction for certain image anomalies (bias correction) within the same generative model.
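The tissue-classification component can be illustrated with a small numpy sketch: voxel intensities are modelled as a mixture of Gaussians, one per tissue class, fitted with expectation–maximisation. This is only one piece of unified segmentation, which additionally uses spatial tissue priors, registration, and bias correction; the three classes and their intensities below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic voxel intensities drawn from three "tissue" classes
# (stand-ins for CSF, grey matter and white matter).
intensities = np.concatenate([
    rng.normal(1.0, 0.2, 1000),   # CSF-like
    rng.normal(3.0, 0.3, 2000),   # grey-matter-like
    rng.normal(5.0, 0.25, 1500),  # white-matter-like
])

# Fit a three-class Gaussian mixture with EM: alternate between
# soft-assigning voxels to classes (E-step) and re-estimating each
# class's mean, spread and weight (M-step).
means = np.array([0.5, 2.5, 6.0])   # rough initial guesses
sigmas = np.full(3, 1.0)
weights = np.full(3, 1 / 3)
for _ in range(50):
    # E-step: responsibility of each class for each voxel.
    dens = weights * np.exp(-0.5 * ((intensities[:, None] - means) / sigmas) ** 2) / sigmas
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update class parameters from the soft assignments.
    n_k = resp.sum(axis=0)
    means = (resp * intensities[:, None]).sum(axis=0) / n_k
    sigmas = np.sqrt((resp * (intensities[:, None] - means) ** 2).sum(axis=0) / n_k)
    weights = n_k / len(intensities)

labels = resp.argmax(axis=1)  # hard tissue label per voxel
print(np.round(np.sort(means), 1))
```

The recovered class means sit close to the three intensities the data were generated from, and each voxel receives a tissue label from its most probable class.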
Source localisation in MEG
Source localisation is an image processing technique for locating the sources of electrical activity in the brain from a map of the activity recorded outside the head. It does this by adjusting model parameters (such as dipole location, orientation, magnitude, and time course) until the difference between the data and the model’s prediction is minimised.
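As a toy illustration of this fitting idea (not an MEG forward model), the sketch below uses an invented one-dimensional "head": sensor signals fall off with distance from a source, and the source is localised by finding the candidate location whose best-fitting amplitude leaves the smallest residual between model and data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sensor positions on a line, and candidate source locations on a grid
# (a 1-D toy stand-in for a head model's forward solution).
sensors = np.linspace(-1.0, 1.0, 20)
candidates = np.linspace(-0.9, 0.9, 181)

def lead_field(src):
    """Toy forward model: sensor signal falls off with distance to source."""
    return 1.0 / (0.1 + np.abs(sensors - src))

# Simulate data from a "true" source, plus a little sensor noise.
true_loc, true_amp = 0.3, 2.0
data = true_amp * lead_field(true_loc) + rng.normal(0, 0.01, sensors.size)

# Fit each candidate location by least squares and keep the one whose
# prediction leaves the smallest residual -- the essence of dipole
# fitting: adjust parameters until model and data agree.
best_loc, best_res = None, np.inf
for loc in candidates:
    g = lead_field(loc)
    amp = g @ data / (g @ g)          # best amplitude for this location
    res = np.sum((data - amp * g) ** 2)
    if res < best_res:
        best_loc, best_res = loc, res

print(round(best_loc, 2))  # close to the true source location
```

Real MEG source localisation works the same way in spirit, but with a physically realistic forward model of the head and dipoles with orientation and time courses as well as location and magnitude.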
General linear models (GLM)
The GLM is a statistical method for assessing the contribution of different possible causes to a measured signal. It is commonly used in neuroimaging research to distinguish interesting changes in signal related to our experiments from uninteresting changes (e.g. noise). The GLM generalises multiple linear regression to the case of several explanatory variables. By fitting a GLM at each point in the brain, we create SPM images that show where in the brain there is evidence for experimental effects.
Dynamic Causal Modelling (DCM)
Dynamic Causal Modelling (DCM) is a method for inferring the causes underlying functional neuroimaging data. It involves building realistic models of how distributed brain responses generate the neuroimaging data – and then using Bayesian statistics to estimate the underlying architecture in terms of connections among distributed brain regions.
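To give a flavour of the idea – and only a flavour – the sketch below simulates an invented two-region network in which region 1 drives region 2, then recovers the coupling from the simulated dynamics. DCM proper inverts a realistic neuronal and haemodynamic generative model with Bayesian methods; plain least squares is used here purely to show connectivity being estimated from data.

```python
import numpy as np

# A toy two-region network: region 1 drives region 2 (coupling matrix A),
# and an external input u perturbs region 1 (input weights C).
A = np.array([[-1.0, 0.0],
              [ 0.8, -1.0]])
C = np.array([1.0, 0.0])

dt, steps = 0.01, 5000
u = (np.sin(np.arange(steps) * dt) > 0).astype(float)  # on/off input
x = np.zeros((steps, 2))
for t in range(steps - 1):                # Euler integration of dx/dt = Ax + Cu
    x[t + 1] = x[t] + dt * (A @ x[t] + C * u[t])

# Estimate the coupling from the data: regress dx/dt on states and input.
dxdt = (x[1:] - x[:-1]) / dt
Z = np.column_stack([x[:-1], u[:-1]])
params, *_ = np.linalg.lstsq(Z, dxdt, rcond=None)
A_hat = params[:2].T

print(np.round(A_hat, 2))  # close to the coupling matrix A above
```

The recovered matrix reproduces the asymmetry built into the network – region 1 influences region 2 but not the reverse – which is the kind of directed architecture DCM is designed to infer.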
Voxel Based Morphometry (VBM) and Voxel Based Quantification (VBQ)
Voxel Based Morphometry (VBM) allows focal differences in brain anatomy and tissue types to be investigated, using the same type of analyses as with functional imaging data.
Voxel Based Quantification (VBQ) allows differences in brain microstructure between individuals (e.g. of different ages) or groups (e.g. patients and healthy individuals) to be identified using SPM’s statistical framework. VBQ provides information complementary to VBM, but with greater sensitivity and specificity to tissue microstructure.
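The statistical core of these approaches – a mass-univariate test at every voxel of spatially normalised maps – can be sketched with invented data: two groups of toy grey-matter maps with a genuine difference confined to one small region, tested voxel by voxel.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Toy grey-matter maps for two groups (20 "subjects" each, 16x16 voxels),
# with a genuine group difference confined to one small region.
patients = rng.normal(0.5, 0.05, (20, 16, 16))
controls = rng.normal(0.5, 0.05, (20, 16, 16))
controls[:, 4:8, 4:8] += 0.1              # more grey matter here in controls

# VBM in miniature: a two-sample t-test at every voxel, yielding a
# statistical map of group differences in local brain anatomy.
t_map, p_map = stats.ttest_ind(controls, patients, axis=0)

# Voxels surviving an (uncorrected) threshold highlight the true region;
# SPM additionally corrects for the many tests across voxels.
significant = p_map < 0.001
print(significant[4:8, 4:8].mean())  # the true effect region is detected
```

VBQ applies the same framework to quantitative maps of tissue properties rather than tissue volumes, which is where its extra sensitivity to microstructure comes from.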
Statistical Parametric Mapping
SPM, our leading software package for analysing neuroimaging data, was developed at the FIL.