Fall 2023 - CS619 - 8873 - 1


Brain Tumor Segmentation

Version 1.0

Group ID: MC210203528

Supervisor Name
Muhammad Hashir Khan
Email ID: [email protected]
Skype ID: hashir.khan9996
Revision History
Date (dd/mm/yyyy)   Version   Description                Author
Current date        1.0       Brain Tumor Segmentation   MC210203528
Table of Contents

1. Scope (of the project)

2. Functional and Non-Functional Requirements

3. Use Case Diagram

4. Dataset

5. Adopted Methodology

6. Work Plan (Use MS Project to create Schedule/Work Plan)


SRS Document

Scope of Project:

Image processing is a powerful field of study that has numerous real-world
applications. It involves the manipulation of digital images to extract useful
information or enhance specific features. One of the critical applications of
image processing is medical image analysis, where it plays a pivotal role in
diagnosis, treatment, and research. In this project, we aim to explore the
fascinating realm of image processing and its applications in the healthcare
domain, specifically in the context of brain tumor segmentation.
This project centers on the realm of medical imaging, specifically the
challenging task of brain tumor segmentation using deep learning
techniques. Brain tumors are abnormal growths of tissue within the brain
that can be cancerous or non-cancerous. Accurate and early diagnosis is
essential for timely treatment and improved patient outcomes.

Functional and non-Functional Requirements:

In this project, we will leverage image processing and deep learning
techniques to address the following objectives:

1. Data Collection and Preprocessing: Gather a diverse dataset of brain MRI
scans, ensuring data quality and integrity. Preprocess the images to enhance
their suitability for further analysis. Import the image dataset of brain MRI
scans from the link given under item 5 (a loading and splitting sketch follows
this list).
2. Dataset Splitting: To facilitate model training and evaluation, we will split
the dataset into distinct sets for training, validation, and testing. This step
ensures that the deep learning model's performance is rigorously assessed
and prevents overfitting.
3. Deep Learning Model Development: We will design, implement, and
fine-tune deep learning models, such as U-Net, to accurately segment brain
tumors from MRI images.
4. Model Evaluation: We will establish robust evaluation metrics to assess
the performance of the deep learning models, ensuring that the
segmentation results are precise and reliable.
5. Dataset: For this project, you are required to use the BraTS 2019 dataset:
https://www.med.upenn.edu/cbica/brats2019/data.html
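
As a concrete illustration of items 1 and 2, the sketch below loads NIfTI volumes, normalizes the brain voxels, and splits the cases into training, validation, and test sets. The folder layout and the use of nibabel and scikit-learn are assumptions made for illustration, not requirements of this document.

# Sketch of items 1-2: loading, normalizing, and splitting BraTS cases.
# The directory layout below is an assumption about the downloaded dataset.
import glob
import numpy as np
import nibabel as nib
from sklearn.model_selection import train_test_split

def load_volume(path):
    """Load a NIfTI volume and z-score normalize its non-zero (brain) voxels."""
    vol = nib.load(path).get_fdata().astype(np.float32)
    brain = vol > 0
    if brain.any():
        vol[brain] = (vol[brain] - vol[brain].mean()) / (vol[brain].std() + 1e-8)
    return vol

# Hypothetical layout: one folder per case, each holding *_flair.nii.gz,
# *_t1.nii.gz, *_t1ce.nii.gz, *_t2.nii.gz and *_seg.nii.gz files.
cases = sorted(glob.glob("MICCAI_BraTS_2019_Data_Training/*/*"))

# 70 / 15 / 15 split into training, validation, and test cases.
train_cases, rest = train_test_split(cases, test_size=0.30, random_state=42)
val_cases, test_cases = train_test_split(rest, test_size=0.50, random_state=42)
print(len(train_cases), len(val_cases), len(test_cases))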

Important links and Tutorials:

 Python
 https://www.w3schools.com/python/
 https://www.tutorialspoint.com/python/index.htm
 Image processing
 https://regenerativetoday.com/some-basic-image-preprocessing-operations-for-beginners-in-python/
 https://www.section.io/engineering-education/image-preprocessing-in-python/
 https://www.tensorflow.org/tutorials/load_data/images
 Deep Learning
 https://www.simplilearn.com/tutorials/deep-learning-tutorial/guide-to-building-powerful-keras-image-classification-models
 https://www.analyticsvidhya.com/blog/2020/02/learn-image-classification-cnn-convolutional-neural-networks-3-datasets/

Tools:

Language: Python (only Python)
Framework: Anaconda
IDE: Jupyter Notebook, PyCharm, Spyder, Visual Studio Code, etc.
You can also use the Google Colab environment or Google Cloud.
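
Whichever environment is chosen, the short snippet below can be used to confirm that the Python packages assumed in the sketches throughout this document are installed; the package list itself is an assumption, not a requirement of the SRS.

# Quick environment check (package names are assumptions used in this document).
import sys
print("Python:", sys.version.split()[0])
for pkg in ("numpy", "nibabel", "tensorflow", "sklearn", "pandas"):
    try:
        mod = __import__(pkg)
        print(pkg, getattr(mod, "__version__", "version unknown"))
    except ImportError:
        print(pkg, "not installed")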

Use Case Diagram(s):

<Provide here the use case diagram of your system>

Dataset:

Ample multi-institutional routine clinically acquired pre-operative
multimodal MRI scans of glioblastoma (GBM/HGG) and lower grade
glioma (LGG), with pathologically confirmed diagnosis and available OS,
are provided as the training, validation and testing data for this year’s BraTS
challenge. Specifically, the datasets used in this year's challenge have been
updated, since BraTS'18, with more routine clinically acquired 3T
multimodal MRI scans, with accompanying ground truth labels by expert
board-certified neuroradiologists.
Validation data will be released on July 15, through an email pointing to the
accompanying leaderboard. This will allow participants to obtain
preliminary results in unseen data and also report it in their submitted
papers, in addition to their cross-validated results on the training data. The
ground truth of the validation data will not be provided to the participants,
but multiple submissions to the online evaluation platform (CBICA's IPP)
will be allowed.
Finally, all participants will be presented with the same test data, which will
be made available through email during 26 August-7 September and for a
limited controlled time-window (48h), before the participants are required to
upload their final results in CBICA's IPP. The top-ranked participating
teams will be invited before the end of September to prepare slides for a
short oral presentation of their method during the BraTS challenge.
Imaging Data Description
All BraTS multimodal scans are available as NIfTI files (.nii.gz) and
describe a) native (T1) and b) post-contrast T1-weighted (T1Gd), c) T2-
weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR)
volumes, and were acquired with different clinical protocols and various
scanners from multiple (n=19) institutions, mentioned as data contributors
here.
One to four raters, following the same annotation protocol, have segmented
all the imaging datasets manually, and their annotations were approved by
experienced neuro-radiologists. Annotations comprise the GD-enhancing
tumor (ET — label 4), the peritumoral edema (ED — label 2), and the
necrotic and non-enhancing tumor core (NCR/NET — label 1), as described
both in the BraTS 2012-2013 TMI paper and in the latest BraTS
summarizing paper (also see Fig.1). The provided data are distributed after
their pre-processing, i.e. co-registered to the same anatomical template,
interpolated to the same resolution (1 mm^3) and skull-stripped.
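
To make the label convention above concrete (ET = 4, ED = 2, NCR/NET = 1), the following sketch loads a single case with nibabel and derives the standard whole tumor, tumor core, and enhancing tumor regions used in the BraTS evaluation; the case identifier and file naming are hypothetical placeholders.

# Sketch: reading one case and deriving tumor sub-regions from the labels above.
import numpy as np
import nibabel as nib

case = "BraTS19_sample_case"  # hypothetical case folder name
flair = nib.load(f"{case}/{case}_flair.nii.gz").get_fdata()
seg = nib.load(f"{case}/{case}_seg.nii.gz").get_fdata().astype(np.uint8)

whole_tumor = np.isin(seg, (1, 2, 4))      # WT = NCR/NET + ED + ET
tumor_core = np.isin(seg, (1, 4))          # TC = NCR/NET + ET
enhancing_tumor = (seg == 4)               # ET only

print(flair.shape, whole_tumor.sum(), tumor_core.sum(), enhancing_tumor.sum())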
Comparison with Previous BraTS datasets
The BraTS data provided since BraTS'17 differs significantly from the data
provided during the previous BraTS challenges (i.e., 2016 and backwards).
The only data that have been previously used and are utilized again (during
BraTS'17-'19) are the images and annotations of BraTS'12-'13, which have
been manually annotated by clinical experts in the past. The data used during
BraTS'14-'16 (from TCIA) have been discarded, as they described a mixture
of pre- and post-operative scans and their ground truth labels have been
annotated by the fusion of segmentation results from algorithms that ranked
highly during BraTS'12 and '13. For BraTS'17, expert neuroradiologists have
radiologically assessed the complete original TCIA glioma collections
(TCGA-GBM, n=262 and TCGA-LGG, n=199) and categorized each scan
as pre- or post-operative. Subsequently, all the pre-operative TCIA scans
(135 GBM and 108 LGG) were annotated by experts for the various glioma
sub-regions and included in this year's BraTS datasets.
This year we provide the naming convention and direct filename mapping
between the data of BraTS'19, BraTS'18, BraTS'17, and the TCGA-GBM
and TCGA-LGG collections, available through The Cancer Imaging Archive
(TCIA).
Survival Data Description
The overall survival (OS) data, defined in days, are included in a comma-
separated value (.csv) file with correspondences to the pseudo-identifiers of
the imaging data. The .csv file also includes the age of patients, as well as
the resection status. Note that only subjects with resection status of GTR
(i.e., Gross Total Resection) will be evaluated and you are only expected to
send your predicted survival data for those subjects.
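
A minimal sketch of reading the survival file and keeping only the GTR subjects is shown below; the file name and column names are assumptions about the BraTS'19 release and should be checked against the actual .csv header.

# Sketch: select only GTR subjects from the survival data for evaluation.
import pandas as pd

surv = pd.read_csv("survival_data.csv")            # assumed file name
gtr = surv[surv["ResectionStatus"] == "GTR"]       # assumed column name
print(len(surv), "subjects in total,", len(gtr), "with GTR status to evaluate")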

Adopted Methodology
Our methodology will be evaluated on publicly available datasets, with the
aim of achieving state-of-the-art performance in terms of accuracy and
robustness. It has the potential to improve the accuracy and efficiency of
brain tumor segmentation, which can lead to better diagnosis, treatment,
and research outcomes.
[1] provides a comprehensive survey of brain tumor detection and
classification using machine learning, including deep learning techniques.
The survey covers the anatomy of brain tumors, publicly available datasets,
enhancement techniques, segmentation, feature extraction, classification,
deep learning, transfer learning, and quantum machine learning for brain
tumor analysis. [2] presents an automatic brain tumor segmentation method
based on deep learning techniques, which uses the public and well-accepted
BraTS dataset. [3] proposes a novel coarse-to-fine method for brain tumor
segmentation that consists of preprocessing, deep learning network-based
classification, and post-processing. [4] presents an intelligent brain tumor
segmentation method using improved deep learning techniques, which deploys
a manual methodology of segmentation when diagnosing brain tumors. [5]
proposes a deep multi-task learning framework for brain tumor segmentation,
which is based on deep learning and can automate medical image segmentation.

A related research paper proposes an efficient brain tumor segmentation
method based on an adaptive moving self-organizing map and fuzzy k-means
clustering (AMSOM-FKM) [1]. The authors utilized the BraTS 2018 MRI tumor
image database for this study.
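
To illustrate the clustering half of that approach, the following is a minimal plain fuzzy k-means (fuzzy c-means) sketch over voxel intensities in NumPy. It is not the authors' AMSOM-FKM implementation and omits the self-organizing-map component entirely.

# Minimal fuzzy c-means on a feature matrix x of shape (N, d).
import numpy as np

def fuzzy_c_means(x, n_clusters=4, m=2.0, n_iter=100, eps=1e-8):
    """Return cluster centres and the (N, n_clusters) membership matrix."""
    rng = np.random.default_rng(0)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                  # memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / (um.sum(axis=0)[:, None] + eps)
        dist = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=2) + eps
        u = 1.0 / dist ** (2.0 / (m - 1.0))            # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

# Usage on the non-zero voxels of a (hypothetical) FLAIR volume `flair`:
# feats = flair[flair > 0].reshape(-1, 1)
# centres, memberships = fuzzy_c_means(feats)
# labels = memberships.argmax(axis=1)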

Here are some steps to consider for the adopted methodology:
1. Data Collection and Preprocessing: Gather a diverse dataset of brain
MRI scans, ensuring data quality and integrity. Preprocess the images
to enhance their suitability for further analysis. Import the image
dataset of brain MRI scans from the BraTS 2019 dataset [2].
2. Dataset Splitting: To facilitate model training and evaluation, split
the dataset into distinct sets for training, validation, and testing. This
step ensures that the deep learning model’s performance is rigorously
assessed and prevents overfitting.
3. Deep Learning Model Development: Design, implement, and fine-tune
segmentation models, such as U-Net or the AMSOM-FKM approach described
above, to accurately segment brain tumors from MRI images (a minimal
U-Net sketch follows this list).
4. Model Evaluation: Establish robust evaluation metrics to assess the
performance of the deep learning models, ensuring that the
segmentation results are precise and reliable.
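
As a starting point for steps 3 and 4, the sketch below defines a small 2-D U-Net-style network in Keras together with a Dice coefficient metric (Dice is the standard BraTS evaluation measure). The depth, filter counts, number of classes, and input shape are illustrative assumptions, not prescriptions of this document.

# Small 2-D U-Net-style model (step 3) with a soft Dice metric (step 4).
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def small_unet(input_shape=(240, 240, 4), n_classes=4):
    inputs = layers.Input(input_shape)
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                                  # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Soft Dice over the flattened one-hot masks."""
    y_true = tf.cast(tf.reshape(y_true, [-1]), y_pred.dtype)
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

model = small_unet()
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=[dice_coefficient])
model.summary()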
Work Plan (Use MS Project to create Schedule/Work Plan)
<Provide Gantt chart of your final project>

References:
[1] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al.
"The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE
Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI:
10.1109/TMI.2014.2377694
[2] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al.,
"Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation
labels and radiomic features", Nature Scientific Data, 4:170117 (2017) DOI:
10.1038/sdata.2017.117
[3] S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying
the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression
Assessment, and Overall Survival Prediction in the BRATS Challenge", arXiv preprint
arXiv:1811.02629 (2018)
In addition, if there are no restrictions imposed by the journal/conference to
which you submit your paper regarding data citations, please also cite the
following:
[4] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al.,
"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-
GBM collection", The Cancer Imaging Archive, 2017. DOI:
10.7937/K9/TCIA.2017.KLXWJJ1Q
[5] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al.,
"Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-
LGG collection", The Cancer Imaging Archive, 2017. DOI:
10.7937/K9/TCIA.2017.GJQ7R0EF.
[6] BRATS21 Dataset | Papers With Code. https://paperswithcode.com/dataset/brats21.
[7] MICCAI BraTS 2017: Data - Perelman School of Medicine.
https://www.med.upenn.edu/sbia/brats2017/data.html.
[8] Overview - NVIDIA Docs. https://docs.nvidia.com/launchpad/ai/base-command-brats/latest/bc-brats-overview.html.
[9] Multimodal Brain Tumor Segmentation Challenge 2019: Registration.
https://www.med.upenn.edu/cbica/brats2019/registration.html.
[10] http://braintumorsegmentation.org/.
[11] https://synapse.org/.
