Artificial Intelligence Body Part Measure System (AIBMS)
SPECIALTY SECTION
This article was submitted to Computational Physiology and Medicine, a section of the journal Frontiers in Physiology

RECEIVED 10 November 2022
ACCEPTED 09 January 2023
PUBLISHED 26 January 2023

CITATION
Gu S, Wang L, Han R, Liu X, Wang Y, Chen T and Zheng Z (2023), Detection of sarcopenia using deep learning-based artificial intelligence body part measure system (AIBMS). Front. Physiol. 14:1092352. doi: 10.3389/fphys.2023.1092352

COPYRIGHT
© 2023 Gu, Wang, Han, Liu, Wang, Chen and Zheng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Background: Sarcopenia is an aging syndrome that increases the risks of various adverse outcomes, including falls, fractures, physical disability, and death. Sarcopenia can be diagnosed through medical image-based body part analysis, which requires laborious and time-consuming outlining of irregular contours of abdominal body parts. Therefore, it is critical to develop an efficient computational method for automatically segmenting body parts and predicting diseases.

Methods: In this study, we designed an Artificial Intelligence Body Part Measure System (AIBMS) based on deep learning to automate body part segmentation from abdominal CT scans and quantification of body part areas and volumes. The system was developed using three network models, including SEG-NET, U-NET, and Attention U-NET, and trained on abdominal CT plain scan data.

Results: The segmentation model was evaluated using multi-device developmental and independent test datasets and demonstrated a high level of accuracy, with DSC scores above 0.9 in segmenting body parts. Based on the characteristics of the three network models, we gave recommendations for appropriate model selection in various clinical scenarios. We constructed a sarcopenia classification model based on cutoff values (the Auto SMI model), which demonstrated high accuracy in predicting sarcopenia, with an AUC of 0.874. We used the Youden index to optimize the Auto SMI model and found a better threshold of 40.69.

Conclusion: We developed an AI system to segment body parts in abdominal CT images and constructed a model based on a cutoff value to predict sarcopenia with high accuracy.
1 Introduction
Sarcopenia is an aging-associated disorder that is characterized by a decline in muscle mass, strength, and function. The onset of sarcopenia increases the risk of a variety of adverse outcomes, including falls, fractures, physical disability, and death (Sayer, 2010). In Oceania and Europe, the prevalence of sarcopenia ranged between 10% and 27% in the most recent meta-analysis study (Petermann-Rocha et al., 2022). Various biomarkers for sarcopenia and related diseases have been explored on the molecular, protein, and imaging levels. Interleukin 6 (IL-6) and tumor necrosis factor alpha (TNF-α) levels may be important factors associated with frailty and sarcopenia, according to a study (Picca et al., 2022). Muscle quality was also found to be associated with subclinical coronary atherosclerosis (Lee et al., 2021). Patients diagnosed with sarcopenia are more likely to develop cardiovascular disease (CVD) than those without sarcopenia (Gao et al., 2022). These studies have provided new perspectives for a thorough comprehension of sarcopenia.

Sarcopenia can be assessed through physical examinations or self-reported SARC-F scores, but the diagnosis of sarcopenia requires multiple tests, including muscle strength tests and more accurate imaging tests, in which muscle content is evaluated using either bioelectrical impedance analysis (BIA) or dual-energy X-ray absorptiometry (DXA). However, bioelectrical impedance is affected by the humidity of the body surface environment, which makes accurate diagnosis challenging (Horber et al., 1986). Although DXA testing is commonly used to evaluate sarcopenia (Shu et al., 2022), it is not yet widely available or used; even in medical institutions with DXA testing capabilities, there is a high rate of missed diagnoses of sarcopenia due to inconsistency between instrument brands (Masanés et al., 2017; Buckinx et al., 2018).

Computed tomography (CT) is considered the gold standard for non-invasive assessment of muscle quantity and quality (Beaudart et al., 2016; Cruz-Jentoft et al., 2019). Cross-sectional skeletal muscle area (SMA, cm²) at the level of the third lumbar vertebra (L3) is highly correlated with total body skeletal muscle mass. Adjusting SMA for height provides a measure of relative muscle mass called the skeletal muscle index (SMI, cm²/m²), which is commonly used clinically as an evaluation index to determine sarcopenia. The SMI differs by gender; one study defined sarcopenia as an SMI <52.4 cm²/m² for men and <38.5 cm²/m² for women (Prado et al., 2008). The calculation of SMI requires trained personnel, but the shortage of experienced health professionals hinders the practical deployment of this technology. Meanwhile, for the diagnosis of musculoskeletal disorders, a radiologist must mark a detailed outline of the body part, which, according to some studies, may take 5–6 min for a single CT slice, even when using a professional tool called Slice-O-Matic (Takahashi et al., 2017). To automate this laborious process, we developed a computational method that can accurately and quickly perform body part outlining and obtain quantitative measurements for clinical practice.

The field of CT-based body part analysis is expanding rapidly and shows great potential for clinical applications. CT images have been used to assess muscle tissue, visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT) compartments. In particular, CT measurements of reduced skeletal muscle mass are a hallmark of decreased survival in many patient populations (Tolonen et al., 2021). Typically, segmentation and measurement of skeletal muscle tissue, VAT, and SAT are performed manually by radiologists or semi-automatically at the third lumbar vertebra (L3). However, for a wider range of the abdomen, such as the whole abdomen, segmentation and measurement of skeletal muscle tissue, VAT, and SAT are lacking, limiting the clinical applications of CT-based body part analysis. In this study, we focus on developing a segmentation model for a wider range of the abdomen.

Deep learning applications in healthcare are undergoing rapid development (Gulshan et al., 2016; Esteva et al., 2017; Smets et al., 2021). In particular, deep learning-based technologies demonstrated great potential in medical imaging diagnosis during COVID-19 (Santosh, 2020; Mukherjee et al., 2021; Santosh and Ghosh, 2021; Santosh et al., 2022a; Santosh et al., 2022b; Mahbub et al., 2022). Recently, a number of deep learning techniques for body part analysis using CT images have been developed (Pickhardt et al., 2020; Gao
FIGURE 1
AIBMS for sarcopenia diagnosis. (A) Overview of the AI system. The system consists of three modules: a module for body part segmentation, a module for
body part quantification, and a module for sarcopenia analysis. Sm is the muscle tissue area, H is the height, and T is the threshold of the SMI. (B) U-shape
encoder-decoder structure (left) and experimental datasets (right).
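As the caption notes, the sarcopenia-analysis module compares the SMI, computed from the muscle area Sm and height H, against a threshold T. A minimal sketch of that final step, using the gender-specific cutoffs cited in the introduction (52.4 and 38.5 cm²/m², per Prado et al., 2008) as illustrative thresholds; the patient values below are hypothetical:

```python
# Gender-specific SMI thresholds T (cm^2/m^2), per Prado et al. (2008).
SMI_CUTOFFS = {"male": 52.4, "female": 38.5}

def skeletal_muscle_index(sma_cm2: float, height_m: float) -> float:
    """SMI (cm^2/m^2): L3 skeletal muscle area Sm adjusted for height H."""
    return sma_cm2 / height_m ** 2

def is_sarcopenic(smi: float, gender: str) -> bool:
    """Predict sarcopenia when the SMI falls below the gender-specific threshold."""
    return smi < SMI_CUTOFFS[gender]

smi = skeletal_muscle_index(sma_cm2=95.0, height_m=1.60)  # hypothetical patient
# smi ≈ 37.11 cm^2/m^2, below the 38.5 female cutoff
```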
et al., 2021). Weston et al. (2018) conducted a retrospective study of automated abdominal segmentation for body part analysis using deep learning in liver cancer patients. Zhou et al. (2020) developed a deep learning pipeline for segmenting vertebral bodies using quantitative water-fat MRI. Pickhardt et al. (2020) developed an automated CT-based algorithm with pre-defined metrics for quantifying aortic calcification, muscle density, and visceral/subcutaneous fat for cancer screening. Grainger et al. (2021) developed a deep learning-based algorithm using the U-NET architecture to measure abdominal fat on CT images.

Previous studies were often limited to two-dimensional (2D) analysis of body composition at the L3 level and did not extend to the three-dimensional (3D) abdominal volume level. Compared with L3-level 2D information, 3D abdominal information is more informative and may be better associated with certain diseases. Therefore, there is a clinical need for a segmentation tool capable of performing both L3 single-level and whole-volume 3D abdominal CT segmentation. In comparison to previous studies, this paper focuses exclusively on automatic body part segmentation using deep learning and explores the feasibility of predicting sarcopenia.

2 Materials and methods

2.1 Study populations

2.1.1 Developmental dataset
We retrospectively analyzed patients who underwent abdominal CT plain scan examinations at the Department of Diagnostic Radiology, Beijing Tsinghua Changgung Hospital, Beijing, China, between January 2020 and December 2020. Inclusion criteria included the following patient information: a) complete demographic information, including age and gender; b) abdominal CT plain scan examination with a scan range from the top of the diaphragm to the inferior border of the pubic symphysis; and c) absence of major abdominal diseases. Exclusion criteria included poor-quality abdominal CT scan images and noticeable artifacts that interfered with the identification of body parts. As a result, we obtained a "segmentation developmental dataset" consisting of 5,583 slides from 60 cases, 45 males and 15 females, with a mean age of 32.0 ± 6.6 years (20–57 years).

2.1.2 Independent test dataset
We retrospectively selected and analyzed female patients who underwent abdominal CT plain scan examinations at the Department of Diagnostic Radiology, Beijing Tsinghua Changgung Hospital, Beijing, China, between November 2014 and May 2021. In addition to the aforementioned inclusion and exclusion criteria, female patients in post-menopause with information on age at menopause, height (m), and weight (kg) were extracted. Finally, 7 patients with a mean age of 60.1 ± 8.7 years (48–73 years) were included in the study. This dataset consists of 745 CT slides and is referred to as the "independent test dataset," which will be used to evaluate the body part segmentation model based on abdominal CT plain scans.

2.1.3 Sarcopenia prediction dataset
We retrospectively selected and analyzed female patients who underwent DXA examinations at the Department of Diagnostic Radiology, Beijing Tsinghua Changgung Hospital, Beijing, China, between November 2014 and May 2021. Inclusion criteria were: a) patient had complete demographic information, including age, age at menopause, height (m), and weight (kg); b) patient was postmenopausal; c) patient's DXA examination included the L1-L4 vertebrae and left femoral neck; d) patient received an abdominal CT plain scan from the top of the diaphragm to the inferior border of the pubic symphysis; e) patient had no significant abdominal disease; and f) patient's abdominal CT scan and DXA examination were taken within a 12-month interval. Exclusion criteria included the presence of metallic implants in the scan area of the DXA examination, poor image quality of the abdominal CT scan, or the presence of visible artifacts that interfered with muscle identification. As a result, 330 female patients with a mean age of 68.5 ± 9.7 years (50–96 years) were included in the study and referred to as the "Sarcopenia prediction dataset," which will be used to evaluate the performance of the sarcopenia prediction model.

2.1.4 Abdominal CT image acquisition
All CT scans in the retrospective study were obtained using either a GE Discovery 750 HD CT scanner (GE Healthcare, Waukesha, Wisconsin, United States of America) or a uCT 760 CT scanner (United-Imaging Healthcare, Shanghai, China). All scans were acquired in the supine position. The parameters of the CT scan were as follows: 120 kVp, auto-mAs, slice thickness: 5 mm, pitch 1.375 mm (GE Discovery 750 HD)/0.9875 mm (uCT 760), and 512 × 512 matrix size.
TABLE 2 Statistics of agreement between manual and automatic segmentations in the developmental test dataset.
TABLE 3 Statistics of agreement between manual and automatic segmentations in the independent test dataset.
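Tables 2, 3 report DSC scores and IoU values measuring agreement between manual and automatic segmentations. For reference, a sketch of how these overlap metrics are computed from a pair of binary masks (the toy masks are illustrative):

```python
import numpy as np

def dice_and_iou(gt: np.ndarray, pred: np.ndarray) -> tuple:
    """DSC = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B| for binary masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    inter = np.logical_and(gt, pred).sum()
    dsc = 2.0 * inter / (gt.sum() + pred.sum())
    iou = inter / np.logical_or(gt, pred).sum()
    return float(dsc), float(iou)

gt = np.array([[1, 1, 0], [0, 1, 0]])     # illustrative ground-truth mask
pred = np.array([[1, 0, 0], [0, 1, 1]])   # illustrative predicted mask
dsc, iou = dice_and_iou(gt, pred)         # DSC = 2*2/(3+3), IoU = 2/4
```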
FIGURE 2
Three examples of CT images showing similar results of manual and automated segmentations. GT: Ground truth manual segmentation; Predicted:
Model predicted segmentation; Overlay: Overlay of GT and Predicted.
FIGURE 3
The Bland-Altman plot.
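The Bland-Altman analysis in Figure 3 (detailed in Section 3.2) plots, for each image, the mean of the manual and automatic pixel counts against their difference, with the bias line and limits at mean ± 1.96 × standard deviation. A sketch of the underlying computation on hypothetical pixel counts:

```python
import numpy as np

def bland_altman(manual: np.ndarray, auto: np.ndarray):
    """Return per-image means, differences, the bias, and the limits of agreement."""
    mean = (manual + auto) / 2.0
    diff = manual - auto
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return mean, diff, bias, (bias - 1.96 * sd, bias + 1.96 * sd)

manual = np.array([1000.0, 1100.0, 950.0, 1020.0])  # hypothetical pixel counts
auto = np.array([990.0, 1120.0, 940.0, 1010.0])
_, _, bias, (lo, hi) = bland_altman(manual, auto)
```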
value is greater than the threshold (T). The details of each module are described in the following.

2.3 The body part segmentation module

2.3.1 Datasets
For automatic abdominal body part segmentation, CT images from 60 patients in the developmental dataset were used to develop a deep learning segmentation model. For each patient, we extracted the abdomen area by truncating the top 10% and bottom 30% of the CT scans. We then took the L4 layer images, which are commonly used in abdominal disease identification, and randomly selected 10% of the remaining 60% of images, resulting in a set of 429 CT images. A physician with 8 years of experience in diagnostic abdominal imaging manually segmented subcutaneous fat, visceral fat, and muscle tissues. These 429 CT images were randomly divided into training, validation, and test sets with an 8:1:1 ratio at the patient level. In addition, the body part segmentation performance of this model was evaluated using the independent test dataset comprising abdominal CT images from 7 patients.
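The 8:1:1 division above is done at the patient level, so that slices from one patient never land in both training and test sets. A minimal sketch of such a split; the seed and the use of 60 consecutive IDs are illustrative, not the authors' actual procedure:

```python
import random

def patient_level_split(patient_ids, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle patient IDs, then cut the ID list (not the slices) 8:1:1."""
    ids = sorted(patient_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * ratios[0])
    n_val = int(len(ids) * ratios[1])
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = patient_level_split(range(60))  # 60 patients -> 48 / 6 / 6
assert not set(train) & set(val) and not set(val) & set(test)
```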
TABLE 4 Accuracy of the subcutaneous fat volume calculation on the independent test set.
TABLE 5 Accuracy of the visceral fat volume calculation on the independent test set.
TABLE 6 Accuracy of the muscle volume calculation on the independent test set.
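Tables 4–6 report volumes obtained by stacking the per-slice segmentation areas. With the 5-mm slice thickness reported in Section 2.1.4, a volume estimate can be sketched as below; the in-plane pixel spacing and the all-ones mask stack are illustrative placeholders:

```python
import numpy as np

def tissue_volume_cm3(masks: np.ndarray,
                      pixel_spacing_mm=(0.7, 0.7),
                      slice_thickness_mm=5.0) -> float:
    """Sum voxel volumes over a stack of binary masks shaped (slices, H, W)."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    return float(masks.sum() * voxel_mm3 / 1000.0)  # mm^3 -> cm^3

masks = np.ones((41, 4, 4), dtype=np.uint8)  # hypothetical 41-slice tiny stack
vol = tissue_volume_cm3(masks)
```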
TABLE 7 Time cost (s) for processing a CT image on a GPU, a single-core CPU, and a quad-core CPU.
GPU(s) Single-core(s) Quad-core(s)
SEG-NET 0.071 4.296 1.220
FIGURE 4
Comparison of productivity and accuracy of three deep learning models.
2.3.2 Network architecture
The U-shaped encoder-decoder structure (Figure 1B, left) was used to construct the segmentation model. The encoder network takes a CT scan as input and extracts features ranging from low-level, such as individual pixels, to high-level, such as body parts. The decoder network then expands the high-level features back to low-level features to produce the pixel-level contour and area for each body part, known as a "segmentation map". There are feature concatenations between the corresponding layers of the encoder and the decoder. To train the network, binary cross-entropy loss was used as the objective function for the pixel-level binary classification task. In this study, we adopted the following two classic U-shaped encoder-decoder deep learning models: U-NET (Ronneberger et al., 2015) and Attention U-NET (Oktay et al., 2018). We also used SEG-NET for comparison (Badrinarayanan et al., 2015).

For U-NET, the encoder's contracting path contains four identical blocks using the standard convolutional network architecture. Each block comprises two 3 × 3 convolutions (unpadded), each followed by a rectified linear unit (ReLU), and a 2 × 2 max pooling operation with stride 2 that halves the size of the feature map (downsampling). In the decoder's expansive path, there are also four blocks, each parallel to one block in the encoder. Each decoder block doubles the size of the feature map (upsampling) using 2 × 2 up-convolutions and concatenates it with the corresponding feature map passed from the encoder.
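A sketch of one contracting-path block as described (two unpadded 3 × 3 convolutions, each followed by ReLU, then 2 × 2 max pooling with stride 2), written in PyTorch as an assumed implementation rather than the authors' actual code; with the classic 572 × 572 U-NET input, the pre-pool feature map is 568 × 568 and the pooled output 284 × 284:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One U-NET contracting block: (conv3x3 + ReLU) x 2, then 2x2 max pool."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3),  # unpadded: each side shrinks by 2
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        skip = self.convs(x)           # kept for concatenation in the expansive path
        return self.pool(skip), skip

block = EncoderBlock(1, 64)
down, skip = block(torch.randn(1, 1, 572, 572))
```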
FIGURE 5
Confusion matrix for the prediction of sarcopenia with SMI = 38.5 as the cutoff value. “Sp” refers to sarcopenia, and “Non-Sp” refers to non-sarcopenia.
FIGURE 7
Confusion Matrix for sarcopenia prediction with SMI = 40.6 as the cutoff value. “Sp” stands for sarcopenia, and “Non-Sp” stands for non-sarcopenia.
FIGURE 8
Correlation analysis between predicted 3D volumes and 2D areas at the L3 layer for (A) total muscle, (B) subcutaneous fat, and (C) visceral fat. R is the
Pearson correlation coefficient, while R2 is the coefficient of determination.
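Section 3.5 below optimizes the Auto SMI cutoff via the Youden index (the paper's Eq. 5; in its standard form, J = sensitivity + specificity − 1, with a case predicted positive when SMI < cutoff). A sketch of that cutoff search; the SMI values, labels, and candidate thresholds are illustrative toy data, not study data:

```python
def youden_best_cutoff(smi_values, has_sarcopenia, candidates):
    """Return (cutoff, J) maximizing J = sensitivity + specificity - 1."""
    best = None
    for t in candidates:
        tp = sum(s < t and y for s, y in zip(smi_values, has_sarcopenia))
        fn = sum(s >= t and y for s, y in zip(smi_values, has_sarcopenia))
        tn = sum(s >= t and not y for s, y in zip(smi_values, has_sarcopenia))
        fp = sum(s < t and not y for s, y in zip(smi_values, has_sarcopenia))
        j = tp / (tp + fn) + tn / (tn + fp) - 1  # sensitivity + specificity - 1
        if best is None or j > best[1]:
            best = (t, j)
    return best

smi = [35.0, 37.5, 39.0, 41.0, 43.0, 45.0]       # illustrative SMI values
label = [True, True, True, False, False, False]  # illustrative sarcopenia labels
cutoff, j = youden_best_cutoff(smi, label, [38.5, 40.0, 42.0])
```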
datasets are not statistically different in terms of age, weight, height, BMI, and prevalence of sarcopenia (p-value > 0.05).

3.2 Performance of abdominal body part segmentation

The segmentation models were evaluated on the developmental test dataset and the independent test dataset, obtaining DSC scores, mean IoU values, precisions, and recalls, shown in Tables 2, 3, respectively. The numbers in bold represent the best performance among the three models.

These results demonstrate that the deep learning segmentation models perform well across both test datasets, where slides are randomly extracted either from the developmental dataset or from the 3D abdominal CT scans of the independent test dataset. Figure 2 shows three examples of the segmentation results, with the original CT images in the first column, the manual segmentations by radiologists in the second column, the automatic segmentations by the Attention U-NET model in the third column, and the overlays between the second and third columns in the fourth column. In the segmented images, subcutaneous fat is represented by dark red areas, visceral fat by bright red areas, and muscle by white areas. In the overlay pictures, the outline represents the manual segmentations.

Figure 3 shows the Bland-Altman plot of segmented muscle regions on the test set (n = 333 images) using the Attention U-NET model. Each dot represents an image. The horizontal axis represents the average of the manual and automatic segmentations, as measured by the number of pixels, whereas the vertical axis represents their difference. The solid black line represents the mean difference between the two segmented muscle regions, while the two dashed lines represent the 95% limits of agreement (mean ± 1.96 × standard deviation).

3.3 Performance of abdominal body part volume calculation on the independent test set

Based on the segmentation results from the Attention U-NET, we calculated the volumes of the abdominal body parts for each patient in the independent test set. The volumes of subcutaneous fat, visceral fat, and muscle are shown in Tables 4–6, respectively.

3.4 Running time of the deep learning models

Table 7 summarizes the time cost for processing a CT image using SEG-NET, U-NET, and Attention U-NET on a single GPU card, a single-core CPU, or a quad-core CPU, respectively.

The results in Table 7 show that using a single-core CPU, processing a CT image with the Attention U-NET, U-NET, and SEG-NET took an average of 6.039, 5.861, and 4.296 s, respectively. For a patient with 41 CT images, calculating the volumes of the abdominal body components takes an average of 248, 240, and 176 s for the Attention U-NET, U-NET, and SEG-NET, respectively. Figure 4 plots the number of images per minute on a single-core CPU, to better represent each model's time efficiency, against the IoUs on the independent test set as a surrogate for accuracy. As can be seen, the automatic segmentation model based on the Attention U-NET is highly accurate and comparable in speed to the other models. Therefore, Attention U-NET was chosen as the default model to carry out the subsequent experiments.

3.5 Performance of the quick classification model for sarcopenia (Auto SMI)

We applied the quick classification model to the Sarcopenia prediction dataset. The model received as input the SMI value calculated from the L3 muscle area obtained by Attention U-NET-based auto-segmentation, and SMI = 38.5 was used as the cutoff value to make sarcopenia predictions. The accuracy was 0.815, the sensitivity was 0.718, and the specificity was 0.876. Figure 5 shows the confusion matrix for the sarcopenia prediction.

The ROC curve for the Auto SMI model is shown in Figure 6, with AUC = 0.874. In the figure, the coordinate point at the cutoff = 38.5 is denoted by a blue dot.

To determine whether there is a better cutoff value for the Auto SMI, we calculated the optimal cutoff value using the Youden index, which is defined by Eq. 5. The Youden index is commonly used in laboratory medicine to represent the overall ability of a screening method to distinguish affected from non-affected individuals, and a larger Youden index indicates better screening efficacy. We therefore calculated the best cutoff value (40.69) that maximized the Youden index, indicating that the efficacy of sarcopenia screening is maximal at this cutoff value. As shown in Figure 7, the red dot represents the point with the cutoff value of 40.69.

Using the new cutoff value SMI = 40.6, the accuracy for sarcopenia prediction was 0.815, the sensitivity was 0.875, and the specificity was 0.778. Figure 7 plots the confusion matrix for the results.

3.6 Analysis of correlation between 2D and 3D results

We analyzed the correlation between the predicted 3D volume features and the 2D area features at the L3 layer using the Sarcopenia prediction dataset. Figure 8 shows a high degree of correlation between 3D and 2D features in total muscle (R = 0.948), subcutaneous fat (R = 0.942), and visceral fat (R = 0.976). These results indicate the significance of the features calculated from the L3 layer. Meanwhile, the high correlation demonstrates the high accuracy of our model's predictions across various slices.

4 Discussion

4.1 Analysis of the performance and efficiency of the models

All three models utilize an encoder-decoder structure to generate high-quality segmentation masks. SEG-NET is the simplest network with the fastest segmentation, but at the cost of accuracy. U-NET uses a U-shaped structure that has been proven to perform well in medical
image segmentation. It achieves better accuracy by passing the corresponding feature maps from the encoder to the decoder and fusing shallow and deep features. Attention U-NET, in comparison, adds additional attention blocks during up-sampling, which can effectively filter out noise caused by edge polygons in labeling, thus improving performance.

In this study, we randomly selected 10% of the images as the developmental dataset, and we demonstrated that the deep learning segmentation model trained on it achieved good segmentation results on both the developmental test set and the independent test set, proving that our sampling strategy is effective. The sampling strategy has the following benefits: first, it greatly reduces the workload and time necessary for manual labeling; second, it ensures generalizability to whole-abdomen prediction; third, it reduces the workload of model training.

This design allows us to train a 2D segmentation model capable of segmenting 3D slices and generating 3D features, such as the volumes of various body parts. Compared to heavy 3D models, this 2D model is lightweight and suitable for deployment in hospitals.

With the time cost analysis of the three deep learning models under different computing processors, it is evident that processing 41 images to calculate the abdominal body component volumes using a CPU takes less than 1 min per patient, which is very fast. If the computing processors are GPUs, the calculation time can be shortened to a few seconds per patient. This is based on a CT scan with a slice thickness of 5 mm; for a CT scan with a slice thickness of 0.625 mm, the volume calculation of the abdominal body components would require more computational resources, but the time cost would still be acceptable when using GPUs.

used in a variety of clinical scenarios. We also developed two models to predict sarcopenia with high accuracy.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the ethical committee of Beijing Tsinghua Changgung Hospital. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Author contributions

TC and ZZ designed the study, developed the theoretical framework, and oversaw the project. SG, RH, and XL completed the construction of the machine learning models and wrote the programming code. LW and YW were responsible for processing CT images and completing the data collection, labeling, and revision. SG, RH, and LW performed data analysis and manuscript writing. All authors contributed to the article and approved the final version of the article.
References

Badrinarayanan, V., Kendall, A., and Cipolla, R. (2015). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint arXiv:1511.00561.

Beaudart, C., McCloskey, E., Bruyere, O., Cesari, M., Rolland, Y., Rizzoli, R., et al. (2016). Sarcopenia in daily practice: Assessment and management. BMC Geriatr. 16 (1), 170. doi:10.1186/s12877-016-0349-4

Buckinx, F., Landi, F., Cesari, M., Fielding, R. A., Visser, M., Engelke, K., et al. (2018). Pitfalls in the measurement of muscle mass: A need for a reference standard. J. Cachexia Sarcopenia Muscle 9, 269. doi:10.1002/jcsm.12268

Cruz-Jentoft, A. J., Bahat, G., Bauer, J., Boirie, Y., Bruyere, O., Cederholm, T., et al. (2019). Sarcopenia: Revised European consensus on definition and diagnosis. Age Ageing 48 (4), 601. doi:10.1093/ageing/afz046

Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature 542 (7639), 115–118. doi:10.1038/nature21056

Gao, K., Cao, L. F., Ma, W. Z., Gao, Y. J., Luo, M. S., Zhu, J., et al. (2022). Association between sarcopenia and cardiovascular disease among middle-aged and older adults: Findings from the China health and retirement longitudinal study. EClinicalMedicine 44, 101264. doi:10.1016/j.eclinm.2021.101264

Gao, L., Jiao, T., Feng, Q., and Wang, W. (2021). Application of artificial intelligence in diagnosis of osteoporosis using medical images: A systematic review and meta-analysis. Osteoporos. Int. 32 (7), 1279–1286. doi:10.1007/s00198-021-05887-6

Grainger, A. T., Krishnaraj, A., Quinones, M. H., Tustison, N. J., Epstein, S., Fuller, D., et al. (2021). Deep learning-based quantification of abdominal subcutaneous and visceral fat volume on CT images. Acad. Radiol. 28 (11), 1481–1487. doi:10.1016/j.acra.2020.07.010

Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. J. Am. Med. Assoc. 316 (22), 2402–2410. doi:10.1001/jama.2016.17216

Horber, F. F., Zurcher, R. M., Herren, H., Crivelli, M. A., Robotti, G., and Frey, F. J. (1986). Altered body fat distribution in patients with glucocorticoid treatment and in patients on long-term dialysis. Am. J. Clin. Nutr. 43 (5), 758–769. doi:10.1093/ajcn/43.5.758

Kalinkovich, A., and Livshits, G. (2017). Sarcopenic obesity or obese sarcopenia: A cross talk between age-associated adipose tissue and skeletal muscle inflammation as a main mechanism of the pathogenesis. Ageing Res. Rev. 35, 200–221. doi:10.1016/j.arr.2016.09.008

Kingma, D. P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Lee, M. J., Kim, H. K., Kim, E. H., Bae, S. J., Kim, K. W., Kim, M. J., et al. (2021). Association between muscle quality measured by abdominal computed tomography and subclinical coronary atherosclerosis. Arterioscler. Thromb. Vasc. Biol. 41 (2), e128–e140. doi:10.1161/ATVBAHA.120.315054

Mahbub, M. K., Biswas, M., Gaur, L., Alenezi, F., and Santosh, K. C. (2022). Deep features to detect pulmonary abnormalities in chest X-rays due to infectious diseaseX: Covid-19, pneumonia, and tuberculosis. Inf. Sci. 592, 389–401. doi:10.1016/j.ins.2022.01.062

Masanés, F., Formiga, F., Cuesta, F., López Soto, A., Ruiz, D., Cruz-Jentoft, A. J., et al. (2017). Cutoff points for muscle mass — not grip strength or gait speed — determine variations in sarcopenia prevalence. J. Nutr. Health Aging 21 (7), 825–829. doi:10.1007/s12603-016-0844-5

Mukherjee, H., Ghosh, S., Dhar, A., Obaidullah, S. M., Santosh, K. C., and Roy, K. (2021). Deep neural network to detect COVID-19: One architecture for both CT scans and chest X-rays. Appl. Intell. 51 (5), 2777–2789. doi:10.1007/s10489-020-01943-6

Oktay, O., Schlemper, J., and Folgoc, L. L. (2018). Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999.

Petermann-Rocha, F., Balntzi, V., Gray, S. R., Lara, J., Ho, F. K., Pell, J. P., et al. (2022). Global prevalence of sarcopenia and severe sarcopenia: A systematic review and meta-analysis. J. Cachexia Sarcopenia Muscle 13 (1), 86–99. doi:10.1002/jcsm.12783

Picca, A., Coelho-Junior, H. J., Calvani, R., Marzetti, E., and Vetrano, D. L. (2022). Biomarkers shared by frailty and sarcopenia in older adults: A systematic review and meta-analysis. Ageing Res. Rev. 73, 101530. doi:10.1016/j.arr.2021.101530

Pickhardt, P. J., Graffy, P. M., Zea, R., Lee, S. J., Liu, J., Sandfort, V., et al. (2020). Automated CT biomarkers for opportunistic prediction of future cardiovascular events and mortality in an asymptomatic screening population: A retrospective cohort study. Lancet Digit. Health 2 (4), e192–e200. doi:10.1016/S2589-7500(20)30025-X

Prado, C. M. M., Lieffers, J. R., McCargar, L. J., Reiman, T., Sawyer, M. B., Martin, L., et al. (2008). Prevalence and clinical implications of sarcopenic obesity in patients with solid tumours of the respiratory and gastrointestinal tracts: A population-based study. Lancet Oncol. 9 (7), 629–635. doi:10.1016/S1470-2045(08)70153-0

Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer.

Santosh, K. C. (2020). AI-driven tools for coronavirus outbreak: Need of active learning and cross-population train/test models on multitudinal/multimodal data. J. Med. Syst. 44 (5), 170–175. doi:10.1007/s10916-020-01645-z

Santosh, K. C., Allu, S., Rajaraman, S., and Antani, S. (2022a). Advances in deep learning for tuberculosis screening using chest X-rays: The last 5 years review. J. Med. Syst. 46 (11), 82. doi:10.1007/s10916-022-01870-8

Santosh, K. C., and Ghosh, S. (2021). Covid-19 imaging tools: How big data is big? J. Med. Syst. 45 (7), 71–78. doi:10.1007/s10916-021-01747-2

Santosh, K. C., Ghosh, S., and GhoshRoy, D. (2022b). Deep learning for Covid-19 screening using chest X-rays in 2020: A systematic review. Int. J. Pattern Recognit. Artif. Intell. 36 (05), 2252010. doi:10.1142/s0218001422520103

Sayer, A. A. (2010). Sarcopenia. BMJ 341, c4097. doi:10.1136/bmj.c4097

Shu, X., Lin, T., Wang, H., Zhao, Y., Jiang, T., Peng, X., et al. (2022). Diagnosis, prevalence, and mortality of sarcopenia in dialysis patients: A systematic review and meta-analysis. J. Cachexia Sarcopenia Muscle 13 (1), 145–158. doi:10.1002/jcsm.12890

Smets, J., Shevroja, E., Hugle, T., Leslie, W. D., and Hans, D. (2021). Machine learning solutions for osteoporosis – A review. J. Bone Mineral Res. 36 (5), 833–851. doi:10.1002/jbmr.4292

Takahashi, N., Sugimoto, M., Psutka, S. P., Chen, B., Moynagh, M. R., and Carter, R. E. (2017). Validation study of a new semi-automated software program for CT body composition analysis. Abdom. Radiol. 42 (9), 2369–2375. doi:10.1007/s00261-017-1123-6

Tolonen, A., Pakarinen, T., Sassi, A., Kytta, J., Cancino, W., Rinta-Kiikka, I., et al. (2021). Methodology, clinical applications, and future directions of body composition analysis using computed tomography (CT) images: A review. Eur. J. Radiology 145, 109943. doi:10.1016/j.ejrad.2021.109943

Weston, A. D., Korfiatis, P., Kline, T. L., Philbrick, K. A., Kostandy, P., Sakinis, T., et al. (2018). Automated abdominal segmentation of CT scans for body composition analysis using deep learning. Radiology 290 (3), 669–679. doi:10.1148/radiol.2018181432

Zhou, J., Damasceno, P. F., Chachad, R., Cheung, J. R., Ballatori, A., Lotz, J. C., et al. (2020). Automatic vertebral body segmentation based on deep learning of Dixon images for bone marrow fat fraction quantification. Front. Endocrinol. 11, 612. doi:10.3389/fendo.2020.00612