Background and Objective As deep learning faces a reproducibility crisis and studies on deep learning applied to neuroimaging are contaminated by methodological flaws, there is an urgent need to provide a safe environment for deep learning users to help them avoid common pitfalls that would bias and discredit their results. Several tools have been proposed to help deep learning users design their framework for neuroimaging data sets. Methods We present here ClinicaDL, one of these software tools. ClinicaDL interacts with BIDS, a standard format in the neuroimaging field, and its derivatives, so it can be used with a large variety of data sets. Moreover, it checks for the absence of data leakage when applying trained networks to new data, and saves all the information necessary to guarantee the reproducibility of results. Results The combination of ClinicaDL and its companion project Clinica allows performing an end-to-end neuroimaging analysis, from the download of raw data sets to the interpretation of trained networks, including neuroimaging preprocessing, quality check, label definition, architecture search, and network training and evaluation. Conclusions We implemented ClinicaDL to address three common issues encountered by deep learning users who are not always familiar with neuroimaging data: (1) the format and preprocessing of neuroimaging data sets, (2) the contamination of the evaluation procedure by data leakage, and (3) a lack of reproducibility. We hope that its use by researchers will lead to more reliable, and thus more valuable, scientific studies in our field.
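To make the data-leakage check concrete, here is a minimal Python sketch of the kind of subject-level verification described above. It is not ClinicaDL's actual implementation; the `participant_id` and `session_id` column names follow the BIDS convention the tool relies on.

```python
# Minimal sketch (not ClinicaDL's code): reject a train/test split in which
# the same participant contributes images to both sides -- the subject-level
# data leakage that biases evaluation of neuroimaging classifiers.
import pandas as pd

def check_no_subject_leakage(train_df: pd.DataFrame, test_df: pd.DataFrame) -> None:
    """Raise if any participant_id appears in both splits."""
    shared = set(train_df["participant_id"]) & set(test_df["participant_id"])
    if shared:
        raise ValueError(f"Data leakage: {len(shared)} participants in both splits, "
                         f"e.g. {sorted(shared)[:5]}")

train = pd.DataFrame({"participant_id": ["sub-01", "sub-02"], "session_id": ["ses-M00", "ses-M00"]})
test = pd.DataFrame({"participant_id": ["sub-03"], "session_id": ["ses-M00"]})
check_no_subject_leakage(train, test)  # passes silently when the splits are disjoint
```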
This repository is the official implementation of Data Augmentation in High Dimensional Low Sample Size Setting Using a Geometry-Based Variational Autoencoder.
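As a rough illustration of VAE-based augmentation in general, the following PyTorch sketch shows only the mechanics: encode to a latent distribution, then decode latent samples into synthetic examples. The geometry-based sampling that the repository actually implements (guided by the learned latent geometry) is not reproduced here, and all layer sizes are arbitrary.

```python
# Toy VAE, for illustration only; not the repository's geometry-based model.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim: int = 64, z_dim: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU())
        self.mu = nn.Linear(32, z_dim)
        self.logvar = nn.Linear(32, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

vae = TinyVAE()
# After training on the small dataset, augment it by decoding latent samples:
with torch.no_grad():
    synthetic = vae.dec(torch.randn(16, 8))  # 16 new synthetic samples
```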
We introduce a pipeline for the individual analysis of positron emission tomography (PET) data on large cohorts of patients. This pipeline consists, for each individual, of generating a subject-specific model of healthy PET appearance and comparing the individual's PET image to the model via a novel regularised Z-score. The resulting voxel-wise Z-score map can be interpreted as a subject-specific abnormality map that summarises the pathology's topographical distribution in the brain. We then propose a strategy to validate the abnormality maps on several PET tracers and automatically detect the underlying pathology by using the abnormality maps as features to feed a linear support vector machine (SVM)-based classifier. We applied the pipeline to a large dataset comprising 298 subjects selected from the ADNI2 database (103 cognitively normal, 105 late MCI and 90 Alzheimer's disease subjects). The high classification accuracy obtained when using the abnormality maps as features demonstrates that the proposed pipeline is able to extract, for each individual, the signal characteristic of dementia from both FDG and Florbetapir PET data.
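A hedged sketch of the two stages described above: a voxel-wise Z-score map computed against a model of healthy appearance, and a linear SVM trained on the maps as features. The variance floor `eps` stands in for the paper's regularisation, whose exact form is not reproduced here, and the arrays are random stand-ins for real PET data.

```python
# Sketch under stated assumptions, not the authors' exact formulation.
import numpy as np
from sklearn.svm import LinearSVC

def abnormality_map(pet_img, model_mean, model_std, eps=1e-3):
    """Voxel-wise Z-score of an individual's PET image against a
    subject-specific model of healthy appearance (mean/std volumes)."""
    return (pet_img - model_mean) / np.maximum(model_std, eps)  # eps regularises low-variance voxels

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000
maps = rng.normal(size=(n_subjects, n_voxels))   # stand-in flattened abnormality maps
labels = rng.integers(0, 2, size=n_subjects)     # e.g. cognitively normal vs AD
clf = LinearSVC().fit(maps, labels)              # maps used directly as SVM features
```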
Positron Emission Tomography (PET) with pharmacokinetic (PK) modelling is a quantitative molecular imaging technique; however, its long data acquisition time is prohibitive in clinical practice. An approach has been proposed to incorporate blood flow information from Arterial Spin Labelling (ASL) Magnetic Resonance Imaging (MRI) into PET PK modelling to reduce the acquisition time. This requires the conversion of cerebral blood flow (CBF) maps, measured by ASL, into the relative tracer delivery parameter (R1) used in the PET PK model. This was previously performed regionally using linear regression between population R1 and ASL values. In this paper we propose a novel technique to synthesise R1 maps from ASL data using a database with both R1 and CBF maps. The local similarity between the candidate ASL image and those in the database is used to weight the propagation of R1 values to obtain the optimal patient-specific R1 map. Structural MRI data is also included to provide information within common regions of artefact in ASL data. This methodology is compared to the linear regression technique using leave-one-out analysis on 32 subjects. The proposed method significantly improves regional R1 estimation (p < 0.001), reducing the error in the pharmacokinetic modelling. Furthermore, it allows this technique to be extended to a voxel level, increasing the clinical utility of the images.
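The propagation step might be sketched as follows, assuming a simple Gaussian similarity kernel computed voxel-wise; the paper's actual similarity measure, patch size, and use of structural MRI within artefact regions are not reproduced here.

```python
# Hedged sketch of local-similarity-weighted propagation: each database
# subject's R1 value is weighted, voxel by voxel, by how well its ASL image
# matches the candidate ASL image.
import numpy as np

def synthesise_r1(candidate_asl, db_asl, db_r1, sigma=5.0):
    """candidate_asl: (V,) voxels; db_asl, db_r1: (N, V) database volumes."""
    # per-voxel similarity of each database subject to the candidate
    w = np.exp(-((db_asl - candidate_asl[None, :]) ** 2) / (2 * sigma**2))
    w /= w.sum(axis=0, keepdims=True)    # normalise weights over database subjects
    return (w * db_r1).sum(axis=0)       # weighted propagation of R1 values

rng = np.random.default_rng(1)
cand = rng.normal(50, 10, size=200)                       # toy CBF values (ml/100g/min)
db_cbf = rng.normal(50, 10, size=(8, 200))                # toy database CBF maps
db_r1 = rng.normal(1.0, 0.1, size=(8, 200))               # paired database R1 maps
r1_map = synthesise_r1(cand, db_cbf, db_r1)               # patient-specific R1 estimate
```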
Amyloid PET is a robust biomarker of cortical β‐amyloid accumulation and a candidate endpoint for Alzheimer’s prevention trials. Quantification typically uses Standard Uptake Value Ratio (SUVR) measures acquired over 10–20 minutes at steady state. SUVR may be more susceptible to altered blood flow than modelling of dynamic uptake data from injection to steady state. This may influence quantification, particularly of change over time. Here we compared cross‐sectional measures and longitudinal rates of β‐amyloid change from static and dynamic analyses in individuals free of dementia.
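For reference, the static SUVR measure mentioned above reduces to a ratio of regional means, as in this minimal sketch (the choice of reference region, e.g. cerebellum, is study-dependent, and the volumes here are toy data):

```python
# SUVR: mean tracer uptake in a target region divided by mean uptake in a
# reference region, computed from a steady-state PET volume and ROI masks.
import numpy as np

def suvr(pet: np.ndarray, target_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    return float(pet[target_mask].mean() / pet[reference_mask].mean())

pet = np.random.default_rng(2).normal(1.0, 0.2, size=(4, 4, 4))  # toy uptake volume
target = np.zeros_like(pet, dtype=bool)
target[:2] = True                                                # toy cortical ROI
reference = ~target                                              # toy reference region
print(suvr(pet, target, reference))
```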
This work proposes a new way to publicly distribute image analysis methods and software. This approach is particularly useful when the software code and the datasets cannot be made open source. We leverage the Internet and emerging web technologies to develop a system where anyone can upload their image datasets and run any of the proposed algorithms without any specific installation or configuration. This service has been named NiftyWeb (http://cmictig.cs.ucl.ac.uk/niftyweb).
Alzheimer’s disease (AD) is characterized by progressive alterations seen in brain images, which give rise to the onset of various sets of symptoms. The variability in the dynamics of changes in both brain images and cognitive impairments remains poorly understood. This paper introduces AD Course Map, a spatiotemporal atlas of Alzheimer’s disease progression. It summarizes the variability in the progression of a series of neuropsychological assessments, the propagation of hypometabolism and cortical thinning across brain regions, and the deformation of the shape of the hippocampus. The analysis of these variations highlights strong genetic determinants of progression, as well as possible compensatory mechanisms at play during disease progression. AD Course Map also predicts the patient’s cognitive decline with better accuracy than the 56 methods benchmarked in the open challenge TADPOLE. Finally, AD Course Map is used to simulate cohorts of virtual patients developing Alzheimer’s disease.
To move towards precision medicine and improve patients’ quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities, such as demographic, clinical, imaging, genetic and environmental data, have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
Background and purpose: Computed tomography (CT) imaging is the current gold standard for radiotherapy treatment planning (RTP). The establishment of a magnetic resonance imaging (MRI)-only RTP workflow requires the generation of a synthetic CT (sCT) for dose calculation. This study evaluates the feasibility of using a multi-atlas sCT synthesis approach (sCTa) for head and neck and prostate patients. Material and methods: The multi-atlas method was based on pairs of non-rigidly aligned MR and CT images. The sCTa was obtained by registering the MRI atlases to the patient's MRI and by fusing the mapped atlases according to their morphological similarity to the patient. For comparison, a bulk density assignment approach (sCTbda) was also evaluated. The sCTbda was obtained by assigning density values to MRI tissue classes (air, bone and soft tissue). After evaluating the synthesis accuracy of the sCTs (mean absolute error), sCT-based delineations were geometrically compared to the CT-based delineations. Clinical plans were recalculated on both sCTs, and a dose-volume histogram and a gamma analysis were performed using the CT dose as ground truth. Results: Both sCTs were suitable for clinical dose calculations, with mean dose differences of less than 1% for both the planning target volume and the organs at risk. However, only the sCTa provided an accurate and automatic delineation of bone. Conclusions: Combining MR delineations with our multi-atlas CT synthesis method could enable MRI-only treatment planning and thus improve the dosimetric and geometric accuracy of the treatment, and reduce the number of imaging procedures.
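A schematic of the fusion step, under the assumption that "morphological similarity" can be approximated by a voxel-wise Gaussian kernel on MR intensities; the authors' actual similarity measure and fusion rule are more involved, and the arrays below are stand-ins for registered atlas volumes.

```python
# Sketch (assumed form, not the authors' exact fusion rule): after registering
# each atlas MRI to the patient MRI and mapping the paired CT through the same
# transform, fuse the mapped CTs with weights derived from local MR similarity.
import numpy as np

def fuse_atlases(patient_mr, mapped_mrs, mapped_cts, sigma=20.0):
    """patient_mr: (V,) voxels; mapped_mrs, mapped_cts: (N, V) registered atlases."""
    w = np.exp(-((mapped_mrs - patient_mr[None, :]) ** 2) / (2 * sigma**2))
    w /= w.sum(axis=0, keepdims=True)    # normalise weights over atlases
    return (w * mapped_cts).sum(axis=0)  # synthetic CT in Hounsfield units

def mean_absolute_error(sct, ct):
    """Synthesis accuracy metric used above: mean |sCT - CT| over the volume."""
    return float(np.abs(sct - ct).mean())
```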
We present Clinica (www.clinica.run), an open-source software platform designed to make clinical neuroscience studies easier and more reproducible. Clinica aims to help researchers (i) spend less time on data management and processing, (ii) perform reproducible evaluations of their methods, and (iii) easily share data and results within their institution and with external collaborators. The core of Clinica is a set of automatic pipelines for the processing and analysis of multimodal neuroimaging data (currently T1-weighted MRI, diffusion MRI, and PET data), as well as tools for statistics, machine learning, and deep learning. It relies on the Brain Imaging Data Structure (BIDS) for the organization of raw neuroimaging datasets and on established tools written by the community to build its pipelines. It also provides converters of public neuroimaging datasets to BIDS (currently ADNI, AIBL, OASIS, and NIFD). Processed data include image-valued scalar fields (e.g., tissue probability maps)...
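As a small illustration of the BIDS organization Clinica relies on (this is not Clinica's own code, and the dataset path is hypothetical), raw files follow a predictable sub-*/ses-*/ naming pattern that can be globbed directly:

```python
# Minimal sketch of the BIDS layout: files are named
# sub-<label>/ses-<label>/<datatype>/sub-<label>_ses-<label>_<suffix>.nii.gz
from pathlib import Path

def list_t1w_images(bids_root: str):
    """Collect raw T1-weighted MRIs from a BIDS dataset."""
    return sorted(Path(bids_root).glob("sub-*/ses-*/anat/*_T1w.nii.gz"))

for img in list_t1w_images("/data/ADNI_BIDS"):   # hypothetical dataset path
    print(img.name)                              # e.g. sub-ADNI001_ses-M00_T1w.nii.gz
```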