What is the problem with diffusion MR data - and what do we do about it?
21 May 2020, NDCN Seminar
Report by Michiel Cottaar
In this talk, Jesper gave an overview of the FSL tools he developed to register and clean up diffusion MRI data, and illustrated how crucial this preprocessing is before any downstream analysis is run. Diffusion MRI enables the study of the structural connectivity of the brain in vivo. This is possible because the technique is sensitive to the random motion (i.e. diffusion) of water within the brain. This diffusion is reduced in brain tissue because cell membranes and other obstacles hinder the random movement of water. This is of particular interest in the white matter, where water diffuses much more readily along axons than perpendicular to them. This diffusion anisotropy allows one to estimate the fibre orientations within each white matter voxel and, by ‘connecting the dots’, to reconstruct the major white matter tracts.
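The directional dependence described above can be illustrated with the standard Stejskal–Tanner tensor model, in which the signal for a unit gradient direction g is S(g) = S0·exp(−b·gᵀDg). The sketch below is purely illustrative (the diffusivities and b-value are typical textbook numbers, not from the talk): for a fibre aligned with the x-axis, a gradient applied along the fibre sees more diffusion, and hence more signal attenuation, than one applied across it.

```python
import numpy as np

b = 1000.0   # b-value in s/mm^2 (illustrative)
S0 = 1.0     # non-diffusion-weighted signal

# Diffusion tensor for a fibre along the x-axis: diffusion is much
# faster along the axon than perpendicular to it (values illustrative).
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])

def signal(g):
    """Stejskal-Tanner signal for unit gradient direction g."""
    g = np.asarray(g, dtype=float)
    return S0 * np.exp(-b * g @ D @ g)

along = signal([1.0, 0.0, 0.0])    # gradient parallel to the fibre
across = signal([0.0, 1.0, 0.0])   # gradient perpendicular to the fibre

# More diffusion along the fibre means more attenuation there, so the
# parallel measurement is darker than the perpendicular one.
print(along, across)
```

It is exactly this contrast between directions, repeated over many gradient orientations, that reveals the fibre orientation in each voxel.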
Each diffusion-weighted image is only sensitive to the diffusion of water in a single direction, determined by the orientation of the diffusion-weighting gradients. So, to observe the diffusion anisotropy required to estimate the fibre orientations, many diffusion-weighted images must be acquired with different gradient orientations. Acquiring so many images is only feasible with a fast sequence, which is why echo-planar imaging (EPI) is used. Unfortunately, EPI images are sensitive to distortions caused by an off-resonance field (i.e. any variation in the magnetic field strength), which is induced both by the susceptibility of the human head and by eddy currents caused by the diffusion-weighting gradients themselves. Jesper’s talk focused on the FSL tools he developed to correct these distortions, to align the diffusion-weighted images, and to correct for any other movement-induced artefacts in the data.
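To see why many gradient orientations are needed, consider the standard log-linear least-squares fit of the diffusion tensor (a textbook method, not anything specific to the tools discussed in the talk): each measurement constrains one projection gᵀDg, and the six unique tensor elements plus S0 can only be recovered once enough directions have been sampled. The sketch below simulates noiseless signals for a fibre along the x-axis and recovers its orientation as the principal eigenvector of the fitted tensor; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
b, S0 = 1000.0, 1.0
# Ground-truth tensor: fibre along the x-axis (illustrative values).
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])

# 64 gradient directions drawn uniformly on the unit sphere.
g = rng.normal(size=(64, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)

# Simulated diffusion-weighted signals: S = S0 * exp(-b * g^T D g).
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# Log-linear model: ln S = ln S0 - b * (sum over unique tensor elements).
A = np.column_stack([
    -b * g[:, 0]**2, -b * g[:, 1]**2, -b * g[:, 2]**2,
    -2 * b * g[:, 0] * g[:, 1],
    -2 * b * g[:, 0] * g[:, 2],
    -2 * b * g[:, 1] * g[:, 2],
    np.ones(len(g)),                      # intercept = ln S0
])
coef, *_ = np.linalg.lstsq(A, np.log(S), rcond=None)
dxx, dyy, dzz, dxy, dxz, dyz, ln_s0 = coef
D_fit = np.array([[dxx, dxy, dxz],
                  [dxy, dyy, dyz],
                  [dxz, dyz, dzz]])

# The principal eigenvector of the fitted tensor estimates the fibre
# orientation (here it should point along the x-axis, up to sign).
evals, evecs = np.linalg.eigh(D_fit)
fibre = evecs[:, -1]
```

With noise the estimate degrades gracefully, which is one reason acquisitions use many more directions than the six-plus-one minimum the linear algebra strictly requires.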
The main difficulty in aligning the diffusion-weighted images is that each image encodes different information and hence looks different. This means that a simple cost function, such as the mean squared error, cannot be used to determine whether two images are well aligned. Furthermore, because the eddy currents, and hence the eddy-induced distortions, depend on the diffusion-weighting gradients, each diffusion image is distorted in a different way. This is why Jesper developed Zoltar, the prediction maker, which combines information from all the diffusion-weighted images to predict what the undistorted image for each gradient orientation should look like. By registering each observed image to its predicted image, the motion parameters and eddy-current field for each image can be estimated; the resulting less distorted images can then be fed back into Zoltar, leading to improved predictions. After several iterations of improving the predictions and registering to them, Jesper showed that this procedure results in well-aligned diffusion images, even at high b-values where each diffusion-weighted image looks very different.
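The predict-and-register iteration can be caricatured in one dimension. The sketch below is a drastic simplification and not the actual algorithm: real volumes are 3-D, the distortions are far richer than a shift, and Zoltar's predictions combine information across gradient orientations rather than being a plain mean. Here each ‘volume’ is a 1-D profile shifted by an unknown amount, the predictor is the mean of the currently corrected profiles, and registration is an exhaustive search over candidate shifts; the loop still shows how better predictions and better alignments reinforce each other.

```python
import numpy as np

n = 64
# Unknown underlying profile (stand-in for the undistorted image).
truth = np.exp(-0.5 * ((np.arange(n) - 30) / 5.0) ** 2)

# Each 'acquired volume' is the truth shifted by an unknown amount,
# a crude stand-in for motion and eddy-current distortion.
true_shifts = [3, -4, 1, 0, -2, 6]
observed = [np.roll(truth, s) for s in true_shifts]

shifts = np.arange(-8, 9)                  # registration search grid
est = np.zeros(len(observed), dtype=int)   # current shift estimates
for _ in range(5):
    # Prediction step (toy stand-in for Zoltar): average the profiles
    # after undoing the current shift estimates.
    pred = np.mean([np.roll(v, -e) for v, e in zip(observed, est)],
                   axis=0)
    # Registration step: align each observed profile to the prediction
    # by exhaustive search over candidate shifts.
    for i, v in enumerate(observed):
        scores = [np.dot(np.roll(v, -s), pred) for s in shifts]
        est[i] = shifts[int(np.argmax(scores))]

# The estimates recover the true shifts up to a common offset: the
# alignment has no absolute reference, only mutual consistency.
```

The same global-offset ambiguity exists in the real problem, which is why the corrected diffusion data are ultimately registered to an undistorted reference.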
In the final part of his talk, Jesper looked at how to correct for other artefacts in diffusion MRI data, which may arise if the subject moves a lot. In particular, he discussed slice dropout and intra-volume movement. Slice dropouts occur due to movement during the diffusion weighting and can be identified as a significant reduction in intensity across a slice compared with the prediction made by Zoltar. The data within such a slice are lost; however, replacing them with Zoltar’s prediction at least prevents them from biasing any downstream analysis. Intra-volume motion artefacts arise when a subject moves between the acquisition of the multiple slices forming a single volume, leaving the individual slices misaligned. This can be corrected by estimating the motion parameters not for the whole volume, but for each individual slice within it. Finally, Jesper showed how, by combining all of these corrections, even severely distorted data from a subject who moved a lot can be rescued to produce a clean dataset suitable for later analysis.
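The detect-and-replace idea for slice dropout can be sketched very simply. This is a toy version under invented numbers, not the actual statistical test used in the FSL tools: the prediction stands in for Zoltar's output, a dropout is simulated as a strongly attenuated slice, and slices whose mean residual is far more negative than noise alone could explain are flagged and replaced by the prediction.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 2.0                                   # assumed noise level

# Toy 'volume' of 20 slices x 100 voxels; the prediction (stand-in
# for Zoltar's output) is taken as a flat intensity of 100.
pred = np.full((20, 100), 100.0)
vol = pred + rng.normal(0.0, sigma, pred.shape)
vol[7] *= 0.4                                 # simulate dropout in slice 7

# Per-slice mean residual, standardised by the noise in a slice mean.
resid = (vol - pred).mean(axis=1)
z = resid / (sigma / np.sqrt(vol.shape[1]))

# Dropouts show a large *negative* deviation: the signal is attenuated,
# never enhanced, so a one-sided threshold is used.
dropout = z < -4

# Replace the lost slices with the predicted data so they cannot bias
# any downstream analysis.
vol[dropout] = pred[dropout]
```

The one-sided threshold reflects the physics: movement during the diffusion weighting destroys signal, so only unusually dark slices are suspect.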