Multi-modal Example Dataset
===========================

R. Henson
15/2/05
rik.henson@mrc-cbu.cam.ac.uk

Last modified 11/5/05

This dataset contains EEG, MEG and fMRI data on the same subject within the
same paradigm. It can be used to examine how various measures of face
perception, such as the "N170" ERP (EEG), the "M170" ERF (MEG) and fusiform
activation (fMRI), are related. For example, the localisation of the
generator(s) of the N170 and/or M170 can be constrained by the fMRI
activations. It also includes a high-resolution anatomical MRI image (aMRI)
for construction of a head model for the EEG and MEG data, together with
data from a Polhemus digitizer that can be used to coregister the EEG and
MEG data with the aMRI.

Paradigm
========

The basic paradigm involves randomised presentation of 86 faces and 86
scrambled faces. Half of the faces belong to famous people and half are
novel, creating 3 event-types (conditions) in total, though only the basic
contrast of faces vs scrambled faces is described here. In all analyses
below:

   event-type 1 = unfamiliar (novel) faces (U)
   event-type 2 = familiar (famous) faces  (F)
   event-type 3 = scrambled faces          (S)

The scrambled faces were created by 2D Fourier transformation, random phase
permutation, inverse transformation and outline-masking of each face (a
code sketch of this procedure follows the EEG section below). Faces and
scrambled faces are thus closely matched for low-level visual properties
such as spatial frequency power density.

The subject judged the left-right symmetry of each stimulus (face and
scrambled) around an imaginary vertical line through the centre of the
image (mean RTs over a second; judgments roughly orthogonal to conditions).
Faces were presented for 600ms, every 3600ms.

More details about the paradigm are described in:

   Henson et al (2003). Electrophysiological and haemodynamic correlates
   of face perception, recognition and priming. Cerebral Cortex, 13,
   793-805.

(a PDF, "henson-cc-2003.pdf", is included with this dataset). Note though
that this example dataset is not from one of the subjects in that paper,
and corresponds to data from Phase 1 in the paper.

The above paper describes an fMRI and ERP study. Subsequently, a group of
10 subjects were tested on a very similar paradigm using MEG (of which the
present subject was one). Two abstracts for HBM05 describing some
preliminary analyses of the MEG data (including localisation of the M170
ERF) are also included in this dataset (hbm05-source.pdf and
hbm05-timefreq.pdf).

EEG
===

The EEG data were acquired on a 128-channel ActiveTwo system, sampled at
2048 Hz, plus electrodes on the left earlobe and right earlobe, and two
electrodes each to measure HEOG and VEOG (horizontal and vertical EOG).
The 128 scalp channels are named in four banks of 32: A (Back), B (Right),
C (Front) and D (Left).

The original continuous data were converted from BDF format to SPMEEG
format (using SPM5's "bdf_setup.mat" channel template), referenced to the
average of the left and right earlobe electrodes (for consistency with
Henson et al, 2003), and epoched from -200ms to +600ms to produce:

   e_eeg.mat
   e_eeg.dat

These epochs were then examined for artifacts, defined as timepoints that
exceeded an absolute threshold of 120 microvolts (mainly in the VEOG). A
total of 29 of the 172 trials were rejected. (Note that the subject was
instructed not to blink during a 1600ms period covering the stimulus
presentation, so blinks were rare, but some other eye movements
nonetheless occurred.)
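As promised above, here is a minimal Python/numpy sketch of the stimulus
scrambling described in the Paradigm section. The image array "face", the
boolean outline array "mask" and the uniform background value are
assumptions made for illustration; the original stimuli were not
necessarily produced with this code.

   import numpy as np

   def phase_scramble(face, mask, rng=np.random.default_rng(0)):
       # 2D Fourier transformation: separate amplitude and phase.
       F = np.fft.fft2(face)
       amplitude, phase = np.abs(F), np.angle(F)
       # Random phase permutation: shuffle the existing phases while
       # keeping the amplitude spectrum - and hence the spatial
       # frequency power density - intact.
       permuted = rng.permutation(phase.ravel()).reshape(phase.shape)
       # Inverse transformation. The permutation breaks Hermitian
       # symmetry, so the result is not exactly real; taking the real
       # part is the usual shortcut.
       scrambled = np.real(np.fft.ifft2(amplitude * np.exp(1j * permuted)))
       # Outline-masking: keep the scrambled texture only within the
       # face outline, with a uniform background elsewhere.
       return np.where(mask, scrambled, face.mean())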
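The threshold-based rejection described above is also simple to sketch,
assuming the epochs have been loaded into a numpy array of shape
(trials, channels, timepoints) in microvolts; the array layout is an
assumption of the sketch, since SPM works on the .mat/.dat files directly.

   import numpy as np

   def reject_threshold(epochs, threshold_uv=120.0):
       # A trial is rejected if any channel exceeds the absolute
       # threshold (here 120 microvolts, as above) at any timepoint.
       bad = np.any(np.abs(epochs) > threshold_uv, axis=(1, 2))
       return epochs[~bad], np.flatnonzero(bad)

The trials retained by such a screen can then be averaged within each
event-type, which is the next step.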
The epochs were then averaged according to the three trial types to
produce:

   mae_eeg.mat
   mae_eeg.dat

The first clear difference between faces (U+F) and scrambled faces (S) is
maximal around 170ms, appearing as an enhancement of a negative component
(peak) at occipito-temporal channels - the "N170" (eg channel "B8") - or
as an enhancement of a positive peak at Cz (eg channel "A1").

Polhemus
--------

The Polhemus directory contains the sensor positions and a headshape for
coregistration with the aMRI:

   sensor.pol
   headshape.pol

The former starts with the position of each fiducial, recorded twice (note
that these positions are quite arbitrary and might be slightly different
from those used for the headshape), then the locations of the left and
right earlobe electrodes, then the positions of the CMS and DRL electrodes
of the Biosemi system, and finally the 128 scalp electrodes: 32 A (Back),
32 B (Right), 32 C (Front) and 32 D (Left). The latter starts with the
location of each fiducial point - left ear, right ear and nasion -
followed by the coordinates of all the pin-point locations acquired
continuously over the scalp.

The simplest way to coregister the sensor positions with the aMRI is to
use a rigid transformation based on the locations of the three fiducials
(a code sketch of this alignment follows the MEG section below). Since the
exact location of the fiducials within the aMRI was not measured, one
needs to indicate their approximate location and check the quality of the
coregistration a posteriori. A better way to coregister would be to use
all the points in the headshape and perform a surface matching to the
aMRI.

MEG
===

The MEG data were acquired on a 151-channel CTF Omega system, sampled at
625 Hz. The original CTF format data were converted into SPMEEG format and
epoched from -200ms to +600ms to produce:

   e_meg.mat
   e_meg.dat

(the channel template "rik_meg_ctf.mat" is also provided). These data were
then averaged according to the three trial types to produce:

   me_meg.mat
   me_meg.dat

The first clear difference between faces (U+F) and scrambled faces (S) is
maximal around 170ms, appearing as an enhancement of a negative peak at
RIGHT occipito-temporal channels - the "M170" (eg channel "MRT15") - or as
an enhancement of a positive peak at LEFT occipito-temporal channels (eg
channel "MLT15"). Note that, owing to the presence of high-frequency
noise, these effects are more easily seen if the data are lowpass filtered
to, eg, 20Hz (a filtering sketch also follows this section).

Polhemus
--------

The Polhemus directory contains the headshape and fiducial positions in
channel space for coregistration with the aMRI:

   headshape_ctf.pol

As for the EEG, the simplest way to coregister the sensor positions with
the aMRI is to use a rigid transformation based on the locations of the
three fiducials. Since the exact location of the fiducials within the aMRI
was not measured, one needs to indicate their approximate location and
check the quality of the coregistration a posteriori. A better way to
coregister would be to use all the points in the headshape and perform a
surface matching to the aMRI.
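As promised above, here is a minimal Python/numpy sketch of the
fiducial-based rigid alignment relevant to both Polhemus sections, using
the standard Kabsch/Procrustes solution. The two 3x3 coordinate arrays are
assumed inputs (one row per fiducial, in digitizer and aMRI space
respectively); this illustrates the geometry only, not SPM's own
coregistration code.

   import numpy as np

   def rigid_from_fiducials(src, dst):
       # src, dst: (3, 3) arrays of fiducial coordinates (eg nasion,
       # left ear, right ear) in the two coordinate systems.
       src_c = src - src.mean(axis=0)
       dst_c = dst - dst.mean(axis=0)
       U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
       # Guard against a reflection (determinant -1) in the SVD solution.
       D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
       R = Vt.T @ D @ U.T
       t = dst.mean(axis=0) - R @ src.mean(axis=0)
       return R, t    # a sensor at point p maps to R @ p + t in aMRI space

The headshape-based surface matching mentioned above typically starts from
such a fiducial fit and then refines it against the scalp surface using
all the digitised points rather than just three.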
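Finally for the MEG, a minimal sketch of the 20Hz lowpass filtering
suggested above, assuming the averaged data as a (channels, timepoints)
numpy array sampled at 625Hz; the Butterworth design and zero-phase
filtfilt below are one reasonable choice, not necessarily the filter SPM
applies.

   import numpy as np
   from scipy.signal import butter, filtfilt

   def lowpass(data, cutoff_hz=20.0, fs=625.0, order=5):
       # Normalise the cutoff by the Nyquist frequency (fs/2).
       b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
       # filtfilt runs the filter forwards and backwards, so filtered
       # peaks such as the M170 are not shifted in time.
       return filtfilt(b, a, data, axis=-1)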
fMRI
====

In the Scans directory: 215 EPI images, 32 slices, TR=2.88s (1.5T):

   fM*.img
   fM*.hdr

These were coregistered to the first image (scan 6) using rigid-body
realignment in SPM2, which updated:

   fM*.mat

(the resulting movement parameters are in the rp*.txt file, and the
graphical output in the spm2.ps file). A mean image was also created:

   meanfM*.*

This mean EPI image was manually translated/rotated to approximate
Talairach space (ie origin at the AC). The mean EPI was then coregistered
with the anatomical MRI (see below) using mutual information
(between-modality Coregistration in SPM2), and the fM*.mat files of all
EPI images were updated accordingly. The EPI images were then resliced and
smoothed with a 10mm FWHM Gaussian kernel (a sketch of such a kernel
appears at the end of this document) to create the files used in the
analysis below:

   srfM*.*

In the Stats directory, the SPM.mat file contains the analysis parameters.
Basically, the three event-types (U,F,S) were modelled with three basis
functions: the canonical HRF and two of its partial derivatives (with
respect to time and dispersion). Event onsets are coded in the
SPM.Sess.U.ons field (a sketch of how such regressors are built also
appears at the end of this document). The results can be displayed on the
coregistered structural image (in the aMRI directory; see below). Notice
that right midfusiform and medial orbitofrontal regions are more active
for faces (U,F) than scrambled faces (S) at p<.05 corrected (with little
lateral temporal activity (STS, MTG) in this subject, unlike the analysis
of a larger group in Henson et al, 2003).

aMRI
====

A 1mm x 1mm x 1mm T1 image (1.5T):

   aMRI.img
   aMRI.hdr

The image was manually translated/rotated to approximate Talairach space
(ie origin at the AC), creating the additional file:

   aMRI.mat

(the fMRI data were coregistered with this aMRI image; see above.)
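Two sketches follow for the fMRI analysis above, as promised. First, the
10mm FWHM Gaussian smoothing: the only real content is the conversion from
FWHM to the standard deviation that scipy expects, sigma = FWHM /
sqrt(8 ln 2). The volume array and the voxel size are assumed inputs; this
illustrates the kernel, not SPM2's spm_smooth routine.

   import numpy as np
   from scipy.ndimage import gaussian_filter

   def smooth_fwhm(volume, fwhm_mm=10.0, voxel_size_mm=(3.0, 3.0, 3.0)):
       # Convert FWHM (mm) to standard deviation (mm), then to voxels.
       # The 3mm voxel size is an assumed example, not the EPI's actual
       # resolution.
       sigma_mm = fwhm_mm / np.sqrt(8.0 * np.log(2.0))
       sigma_vox = [sigma_mm / v for v in voxel_size_mm]
       return gaussian_filter(volume, sigma=sigma_vox)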
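Second, a sketch of the event-related design described for the Stats
directory: each event-type's onset train is convolved with a canonical HRF
and its two partial derivatives. The double-gamma form below is a common
approximation to SPM's canonical HRF, and the fine time grid, 32s support
and finite-difference derivatives are all assumptions of the sketch;
onsets in seconds (as in SPM.Sess.U.ons), the 215 scans and TR=2.88s come
from the text.

   import numpy as np
   from scipy.stats import gamma

   TR, N_SCANS, DT = 2.88, 215, 0.1     # DT: fine time grid in seconds

   def canonical_hrf(t, disp=1.0):
       # Double-gamma approximation: a response peaking near 6s minus
       # an undershoot near 16s, scaled by 1/6.
       return (gamma.pdf(t, 6.0 / disp, scale=disp)
               - gamma.pdf(t, 16.0 / disp, scale=disp) / 6.0)

   def regressors(onsets_s):
       t = np.arange(0.0, 32.0, DT)     # 32s of HRF support
       hrf = canonical_hrf(t)
       d_time = np.gradient(hrf, DT)                    # temporal derivative
       d_disp = (canonical_hrf(t, 1.01) - hrf) / 0.01   # dispersion derivative
       # Stick function of event onsets on the fine time grid.
       sticks = np.zeros(int(round(N_SCANS * TR / DT)))
       sticks[np.round(np.asarray(onsets_s) / DT).astype(int)] = 1.0
       cols = [np.convolve(sticks, b)[:sticks.size]
               for b in (hrf, d_time, d_disp)]
       # Down-sample the fine-grid regressors at the scan acquisition times.
       idx = np.round(np.arange(N_SCANS) * TR / DT).astype(int)
       return np.column_stack([c[idx] for c in cols])

Calling this once per event-type (U, F, S) and horizontally stacking the
results gives the nine task regressors of the design matrix, to which a
constant term would be appended.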