
Project Proposal

Level 4

Dental X ray mapping using AR and VR dental simulator

Team Hex Clan

Faculty of Information Technology


University of Moratuwa
2022
Project Proposal
Level 4

Dental X ray mapping using AR and VR dental simulator

Team Hex Clan

Index Number Name

184151T Sandakelum P.A.K.M.

184131H Prasad Y.B.

184120A Perakotuwa H.P.N.S

184064E Jayathilaka K.G.D.H.

Supervisor’s Name: Dr. Saminda Premaratne


Co-Supervisor’s Name: Mr. S.M.U. Premasiri

Faculty of Information Technology


University of Moratuwa
2022
Abstract

When it comes to available technological resources for dentists, there is a gap in
mapping dental X-ray scans onto actual human faces. This research study will identify
potential ways of filling that gap using immersive technologies such as Augmented
Reality. Identifying abnormalities in the X-ray before mapping it onto the human face
will also be useful. Furthermore, due to the current pandemic and crisis situation,
many medical students who are studying to become dentists have difficulty with clinical
practices. This research study will also identify the feasibility of simulating practicals
related to the dentist's domain using Virtual Reality.

The proposed solution mainly consists of four modules. Dental radiographs, commonly
called dental X-rays, are the most widely used method of dental imaging. Dental X-rays
are used to detect cavities, bone loss, malignant or benign tumors, hidden dental
structures, etc. There are two main users of the proposed system: dentists and dental
students. First, dental X-rays are gathered and the abnormalities in those X-rays are
identified. Further, the system identifies from which side of the teeth set the X-ray
was taken. After processing the dental X-ray image, the processed X-ray is mapped onto
the human face and the details are extracted there. This helps dentists during
procedures such as oral and maxillofacial surgery, orthodontics, implantology, and
restorative dentistry. The processed dental X-ray image is mapped onto a dental mold as
well; its purpose is to make learning easier for dental students. After analyzing the
data from the previous modules, the results can be accessed by dentists and dental
students. Dentists can use those data directly for their surgeries and treatments, and
the analyzed data are used to implement a VR simulation to train dental students.
Table of Contents

CHAPTER 1

1. INTRODUCTION

1.1. Introduction

1.2. Background and Motivation

1.3. Problem in Brief

1.4. Aim and Objectives

1.4.1. Aim

1.4.2. Objectives

1.5. Proposed Solution

1.6. Summary

CHAPTER 2

2. LITERATURE REVIEW

2.1. Introduction

2.2. Problem Justification

2.3. Existing Systems

2.4. Summary

CHAPTER 3

3. TECHNOLOGY ADAPTED

3.1. Introduction

3.2. Technologies Adapted for Implementation

3.3. Summary

CHAPTER 4

4. OUR APPROACH

4.1. Introduction
4.2. Dental X ray mapping using AR and VR dental simulator

4.3. Users

4.4. Input

4.5. Process of the system

4.6. Output

4.7. Summary

CHAPTER 5

5. ANALYSIS & DESIGN

5.1. Introduction

5.2. Analysis and design

5.2.1. Module 1 - Dental X-ray image processing

5.2.2. Module 2 - Face recognition and dental X-ray mapping

5.2.3. Module 3 - Dental Mold recognition and dental X-ray mapping

5.2.4. Module 4 - VR simulation

5.3. Summary

CHAPTER 6

6. IMPLEMENTATION

6.1. Introduction

6.2. Implementation

6.2.1. Dental X-ray image processing

6.2.2. Face recognition and dental X-ray mapping

6.2.3. Dental mold recognition and dental X-ray mapping

6.2.4. VR Simulation

6.3. Summary

CHAPTER 7
7. DISCUSSION

REFERENCES

APPENDIX A
Tables and Figures
Figure 2.1 – Comparison Between Existing Systems Related To Module 1
Figure 2.2 – Comparison Between Existing Systems Related To Module 2, Module
3 And Module 4

Figure 4.1 – High Level Architecture Of Module 1, Module 2 And Module 3


Figure 4.2 – Block Diagram Of The System Of Module 1, Module 2 And Module
3
Figure 4.3 - High Level Architecture Of Module 4

Figure 5.1 – Approach For Module 1


Figure 5.2 – The Original Radiograph Of Dental X-Ray
Figure 5.3 – Split Mask, In Which Each Tooth Is Separated From The Other By A
Narrow Gap
Figure 5.4 – On An Image Representation Sample, The Effect Of Applying Image
Transformation
Figure 5.5 – The Original Rows Represent The Original Sample Counts, While The
Enhanced Rows Demonstrate How Data Augmentation Increased The Number Of
Samples
Figure 5.6 - Result Of Enhancement In Preprocessing Stage
Figure 5.7 - The U-Net Architecture Used In The Study. Each Blue Box
Corresponds To A Multi-Channel Feature Map. The Number Of Channels Is
Denoted On Top Of The Box. The Shape Is Provided On The Edge Of The Box.
White Boxes Represent Copied Feature Maps. The Arrows Denote The Different
Operations Indicated By Colors
Figure 5.8 - Approach For Module 2
Figure 5.9 – Axes For The Face
Figure 5.10 - Approach For Module 3
Figure 5.11 - Approach For Module 4

Figure 6.1 - Source Code For Rotating And Flipping The Input Image

Figure 6.2 - Result Of Data Augmentation
Figure 6.3 - Source Code Of Adaptive Histogram Equalization Method
Figure 6.4 - Result Of Adaptive Histogram Equalization Method
Figure 6.5 - Source Code Of Median Filter
Figure 6.6 - Result Of Noise Removal Using Median Filter
Figure 6.7 - Source Code Of Global Thresholding Method, F-Score Calculation And Image
Resizing
Figure 6.8 - Source Code For Calculating Total F-Score Value
Figure 6.9 - Result Of Global Thresholding Segmentation
Figure 6.10 - Source Code For Calculating Mean Value Of F-Score
Figure 6.11 - Source Code For K-Means Clustering Segmentation
Figure 6.12 - Source Code For Calculating Sum Of F-Score Values And Displaying Output Image
Figure 6.13 - Source Code For Watershed Segmentation
Figure 6.14 - Result Of Watershed Segmentation
Figure 6.15 – Adding AR Foundation And ARCore
Figure 6.16 – Adding XR Plugin Management
Figure 6.17 – Face Identification
Figure 6.18 – Mapping Teeth To Prefab
Figure 6.19 - Mapping Teeth Image To Face
Figure 6.20 – Code For Module 1 (1)
Figure 6.21 – Code For Module 1 (2)
Figure 6.22 – Code For Module 1 (3)
Figure 6.23 – Code For Module 1 (4)
Figure 6.24 – 3D Model Generation Of Dental Mold
Figure 6.25 – Vuforia Database Generation
Figure 6.26 – Vuforia Model Generation
Figure 6.27 – Unity Environment Of Model Detection
Figure 6.28 – Mold Detection
Figure 6.29 - Environment Creation For Module 4
Figure 6.30 - Animator Panel In Unity VR
Figure 6.31 - Grab Animation Creation
Figure 6.32 - Pinch Animation Creation
Figure 6.33 - Code For Module 4
CHAPTER 1
1. INTRODUCTION
1.1. Introduction
The perfect execution of a treatment plan for the patient involves coordinated
physical skills along with learned knowledge in the field of dentistry [1]. The
procedure of identifying and separating the dental features from an X-ray is difficult
in this profession. The recognition process is impacted by a number of factors,
including gender, position, lighting, and facial gestures [2]. Pre-clinical learning
experiences are essential in dentistry. In clinical environments, there are risks to
patient safety and difficulties in recreating the situations that are encountered.
Hence, this research focuses on dental X-ray image processing, face recognition
and dental X-ray mapping, dental mold recognition and dental X-ray mapping, and
VR simulation of practicals related to the dentist's domain.

1.2. Background and Motivation

The idea of oral hygiene steadily became more important with the growth of the
elderly population, and with it the importance of dental care issues increased. Between
2015 and 2020, the growth rate of the global market for oral medical equipment was
4.7% [3]. As per World Health Organization data, nearly all adults and more than
60% of teenagers worldwide have some form of dental caries [4]. Even though
dental diseases are gradually increasing, the rate of available treatment is
comparatively low [3]. Therefore, enhancing current dental practice with
technology-based methods and equipment would increase the growth rate of treatments.

One of the most important tasks in dental surgeries and implants is identifying and
separating the dental features from an X-ray. The recognition process is impacted
by a number of factors, including gender, position, lighting, and facial gestures.
Doing this manually is therefore time consuming and may reduce accuracy,
which risks the patient’s safety. These problems could be addressed by using
immersive technologies like augmented reality.
The clinical training in dental institutes is crucial due to the rising demand for dental
implants. Ensuring the patient’s safety is important in clinical environments.
Therefore, realistic and reversible pre-clinical training is essential. VR dental
simulators can be used for these pre-clinical training. Simulators offer more
practical training within a short time period compared to the traditional training
methods.

After discussions with professional dentists in Sri Lanka, we identified a lack of
technological resources to address the above problems, which inspired and motivated
the proposed solution.

1.3. Problem in Brief

After discussions with professional dentists in Sri Lanka, we identified a few
problems that can be addressed in a research study.

When it comes to available technological resources for dentists, there is a gap in
mapping dental X-ray scans onto actual human faces. This research study will identify
potential ways of filling that gap using immersive technologies such as Augmented
Reality. Identifying abnormalities in the X-ray before mapping it onto the human face
will also be useful.

On the other hand, due to the current pandemic and crisis situation, many medical
students who are studying to become dentists have difficulty with clinical
practices. This research study will also identify the feasibility of simulating
practicals related to the dentist's domain using Virtual Reality.

1.4. Aim and Objectives


1.4.1. Aim
The aim of this project is to implement a better dental X-ray
mapping system and to implement a hand motion tracking
system in VR simulations, as opposed to VR hand gloves.
1.4.2. Objectives
1. To develop a model for dental image processing and
abnormality detection
2. To develop a model for face recognition and dental X-
ray mapping using augmented reality
3. To test and evaluate the better solution among the two
face recognition and dental X-ray mapping methods
4. To develop a model for dental mold recognition and
dental X-ray mapping using augmented reality
5. To test and evaluate the better solution among the two
dental mold recognition and dental X-ray mapping
methods
6. To implement a virtual reality simulation for dental
clinical practices
7. To test and evaluate the VR clinical education procedure
along with hand tracking

1.5. Proposed Solution


The proposed solution mainly consists of four modules. The first module covers
the processing of the dental X-ray. Dental X-rays are the most common means
dentists use to evaluate oral health, and because of this common usage they are
used as the input of this system. First, X-rays of the teeth set are gathered. Then
the collected dental X-rays are compared with the data set, the abnormalities of
the teeth set are identified, and an image marking the abnormalities of the dental
X-ray is produced.
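As a rough illustration of the kind of preprocessing and segmentation steps this module relies on (histogram equalization, median filtering, and global thresholding, which Chapter 6 later shows in Figures 6.3 to 6.9), the following sketch operates on 8-bit grayscale NumPy arrays. The function names and the plain (non-adaptive) equalization are illustrative assumptions, not the project's actual code:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    # Stretch the cumulative distribution to the full 0..255 range.
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def median_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 median filter; border pixels are handled by edge padding."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    windows = np.stack([padded[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)])
    return np.median(windows, axis=0).astype(img.dtype)

def global_threshold(img: np.ndarray, t: int = 128) -> np.ndarray:
    """Binary segmentation: 1 where intensity exceeds the threshold t."""
    return (img > t).astype(np.uint8)
```

In practice one would chain these (equalize, then denoise, then threshold) and compare the resulting mask against a ground-truth mask with an F-score, as the figure list for Chapter 6 suggests.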
The second module maps dental X-ray images onto the human face. After the X-ray
is processed, it is mapped onto the particular human face using a mobile application,
revealing the abnormalities of that person's teeth. This mapping process is aided
by the data gathered in the first module. Dentists can use the results directly
for their treatments and surgeries.
The next module maps dental X-ray images onto a dental mold. With the help of the
data collected by the dental X-ray image processing module, the processed X-ray is
mapped onto a dental mold using the mobile application. This module helps dental
students practice before they perform actual operations, and dentists can also use
the mapped X-ray to explain surgeries to patients.
The last module is a virtual reality dental simulation. Due to the pandemic and
crisis situations that have happened, many dental students have difficulty with
clinical practices, so we implement a virtual simulation to make learning easier
for dental students. Here, hand motion tracking is used instead of VR gloves. The
only equipment used is a virtual reality HMD, a head-mounted display that provides
virtual reality for the wearer.

1.6. Summary
This chapter gave an overview of the project's research area, the research
background and motivation, the problem to be solved, and the research aim and
objectives. The rest of the report is organized as follows.
The second chapter discusses comparable work and provides a review of previous
research. Chapter 3 describes the technology used in the research implementation
process. In Chapter 4, we detail our approach to resolving the problem. The
proposed system and its submodules are analyzed and designed in Chapter 5.
Chapter 6 describes the implementation that has taken place. Chapter 7 brings the
discussion to a close by summarizing and describing the report.
CHAPTER 2
2. LITERATURE REVIEW
2.1. Introduction
The effectiveness of the suggested system will be covered in this chapter while
discussing, comparing and evaluating other approaches, solutions, and techniques
used in the addressed problem area.

2.2. Problem Justification


The conventional methods of identifying and separating dental features from X-
rays are time consuming, which affects the accuracy of the procedure. Even though
many researchers have investigated automated classification and identification
of dental X-ray images as a solution, the existing solutions have issues
such as using a small dataset, using traditional image processing techniques for
segmentation, and identifying only one type of object in the panoramic tooth image.
Research on augmented reality in dentistry has been ongoing for some time. The
existing solutions require expensive glasses or headsets, half-silvered mirrors,
markers, and external monitors, and implementing such devices takes considerable
time. Ensuring the patient’s safety is important in clinical environments, so
realistic and reversible pre-clinical training is essential. VR dental simulators
can be used for this pre-clinical training; they offer more practical training
within a short time period than traditional training methods, without using any
gloves. Therefore, this research focuses on overcoming these limitations of the
existing solutions.

2.3. Existing Systems


2.3.1. Dental Image Analysis Learning on Small Dataset

Jie Yang, Yuchen Xie, and Lin Liu provided periapical dental X-ray images
acquired before and after operations, together with automated dental image
analysis learning on a small dataset using deep learning, covering the datasets,
processes, and outcomes. They assisted clinicians in classifying diseases as
improving, deteriorating, or showing no clear change. A convolutional neural
network with a transfer learning methodology, NASNet, has been
implemented in related work (AL-Ghamdi, 2022). Cavity, filling, and implant
are the three classes used in the classification process, and the small training
dataset contained 116 patient images. Neural networks with many
max-pooling layers, dropout layers, and activation functions are used in the
image processing stages.

2.3.2. Only one type of object in the panoramic tooth image was detected

The location of the restoration item in the data has been successfully found
using an object detection model constructed from panoramic
X-ray pictures of teeth (D. Suryani, M. N. Shoumi, & R. Wakhidah, 2021). In
the panoramic tooth imaging, just one type of object, a tooth restoration,
was detected. This is because it is challenging to find a dataset of panoramic
dental images that includes a variety of objects, such as dental implants and
endodontic procedures. The object detection on the panoramic tooth images
yielded confidence values between 0.91 and 0.96.
To extract the restoration component from dental X-ray pictures, H. S.
Bhadauria, Anuj Kumar, and Nitin Kumar proposed integrating fuzzy
clustering with iterative level set active contour segmentation for
detection of dental restorations. In this approach, a median filter
preprocesses the image and fuzzy clustering segments it.

2.3.3. Used traditional image processing techniques for Segmentation

Jiafa Mao, Kaihui Wang, Yahong Hu, Weiguo Sheng, and Qixin Feng
proposed a GrabCut algorithm for dental X-ray images based on full
threshold segmentation. They obtained the contour sets of whole teeth and
crowns. The synthetic image of the contour and crown is subjected to a
morphological open operation and median filtering before being used as a
mask for GrabCut to create the target tooth image. Said E. H., Dias E. M.,
and Nasar G. F. used mathematical morphology to suggest a method for
segmenting teeth in digitized dental X-ray films. To enhance the
effectiveness of teeth segmentation, a gray-scale contrast stretching
transformation is used. They concluded that their approach has the lowest
failure rate and can handle bitewing and periapical dental radiograph views.
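The morphological open operation mentioned above (erosion followed by dilation on a binary mask, which removes small speckles before the mask is reused) can be sketched as follows. This is a generic illustration with a fixed 3x3 structuring element and illustrative function names, not the cited authors' code:

```python
import numpy as np

def _neighborhoods(mask: np.ndarray) -> np.ndarray:
    """Stack of the nine 3x3-shifted copies of the mask, zero-padded."""
    padded = np.pad(mask, 1, mode="constant")
    h, w = mask.shape
    return np.stack([padded[r:r + h, c:c + w]
                     for r in range(3) for c in range(3)])

def erode(mask: np.ndarray) -> np.ndarray:
    """A pixel survives only if its entire 3x3 neighborhood is set."""
    return _neighborhoods(mask).min(axis=0)

def dilate(mask: np.ndarray) -> np.ndarray:
    """A pixel is set if any pixel in its 3x3 neighborhood is set."""
    return _neighborhoods(mask).max(axis=0)

def morphological_open(mask: np.ndarray) -> np.ndarray:
    """Opening = erosion then dilation; removes isolated speckles."""
    return dilate(erode(mask))
```

An isolated foreground pixel is erased by the opening, while a solid 3x3 (or larger) region survives unchanged, which is exactly the cleanup behavior wanted before passing the mask to GrabCut.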

2.3.4. AR in Maxillofacial Surgery


Zhu et al. [3] sought to create a novel registration and tracking technique in
order to produce an Augmented Reality navigation system. They developed
a unique registration approach that combines an occlusal splint with a
fiducial marker used to connect the virtual image to the real objects. The
virtual image is superimposed onto the real surroundings after recognizing
the fiducial marker, resulting in the "integrated image" on the semi-
transparent glass.
● Making the splint requires specialized abilities.
● The fabrication of the splint takes time, which delays surgery.
● Loosening or inappropriate positioning of the occlusal splint during
imaging or registration can result in an unanticipated mistake.
● When oral stability is difficult to achieve, such as in edentulous patients,
occlusal splints cannot produce good results. Nonetheless, dental
splints are useful for exact registration. Overall, the AR system
improved surgical precision and resulted in better outcomes.
● In our system, we do not use a fiducial marker for linking.

2.3.5. AR in Dental Implantology


AR was used in implant placement planning in a review of two clinical
instances described by Pellegrino [4]. The implant position was virtually
planned before the procedure, which contributed to a dynamic
navigation system presented on AR glasses.
● When a 3D virtual layer is projected and layered over the real environment,
there is frequently a disparity between the real and virtual images due to an
overlay or positional error.
● In our system, we do not use AR glasses.
2.3.6. AR in Learning Dental Morphology
Dental morphology is a discipline of dentistry that studies the anatomical
characteristics of teeth in order to understand their function, exterior shape,
position, size, structure, and development. Juan created an augmented
reality system for learning dental morphology [5]. Students who used the
AR system were very pleased with how simple it was to use and would
gladly utilize it as an additional tool in their daily practice.
• It has not been determined whether or not the students' grades are
influenced by this combination.

2.3.7. AR in Aesthetic Dentistry


AR is also being used in cosmetic dentistry. IvoSmile (IvoSmile® / Kapanu,
Ivoclar-Vivadent, Liechtenstein) is an iOS (App Store) application. It
employs augmented reality to visualize the options for attractive dental
makeovers right on the patient. The program converts your smartphone into
a virtual mirror. Patients can see how they would look with the potential
esthetic restoration in place, making it easier for them to decide whether or
not to dedicate their time and money to the extensive cosmetic treatment
planning procedure [6].
● The impossibility of matching the smile design with the digital cast
of the patient was identified as a limitation of the IvoSmile®
application, and the incorporation of extra face and tooth recognition
markers was requested.

2.3.8. AR in Restorative Dentistry


In a study published by Llena [7] augmented reality technology was utilized
to teach cavity preparation. The experimental group that was further
educated using the AR approach showed a considerable improvement in
Class I and Class II cavity preparation skills. It should be noted that cavity
preparation and knowledge of different regions of cavities are greatly
influenced by spatial vision, and the experimental group has performed
better in both areas.
• The current study's findings revealed that just some of the evaluated
skills were improved in the experimental group. Further research
with a larger sample size is required to assess the efficacy of
augmented reality in the acquisition of information and skills among
dentistry students.
2.3.9. AR in Dental Teaching and Learning
Espejo-Trung [8] provided an AR for operative dentistry instruction that
was applied to tooth-preparation techniques for indirect restorations. It has
various key applications since it illustrates 3D information interactively and
offers a view of the prepared teeth in relation to soft tissues. When
combined with other teaching resources, it may reduce the quantity of
natural teeth used for operative dentistry instruction.
• The investigation was conducted on a small sample size. As a result,
additional research in other populations is required to broaden the
findings.
2.3.10. iDental VR Simulator
iDental, a periodontal skill teaching simulator developed by Peking University
School and Hospital of Stomatology and Beihang University, can simulate
periodontal examinations and treatment processes such as periodontal probing
and calculus identification and removal. Unlike PerioSim, the device
primarily uses a 2D monitor. However, it comes with an odontoscope handle
and may be used to practice two-handed cooperative operation, making it
more realistic. Furthermore, iDental offers a basic periodontal knowledge
teaching module that allows students to review fundamental knowledge prior
to operation training. Combining the two training components boosts teaching
effectiveness. [9]
2.3.11. Simodont Dental Trainer
Simodont Dental Trainer is a well-known dental skill training teaching
simulator that is now available at many dental schools. It is made up of
dental modules for hand flexibility, cariology, crown and bridge
preparations, clinical situations, and full mouth simulation [10] [11]. One
of the system's features is that each individual case includes an X-ray of the
working tooth, allowing students to establish a diagnosis based on both the
appearance of the teeth and the X-ray images. There are various downsides
to the system. 3D glasses are required for watching in 3D. Excessive
practice on a single tooth does not result in a convincing impression of
manipulation. The single-jawed crown preparation training technique does
not sufficiently imitate the mouth's restricted operational space.
Furthermore, because training postures are fixed and visual angle
conversion requires manual movement of the rotary button, it cannot be
used to teach students about positioning needs. [10].

2.3.12. PerioSim Simulation


PerioSim, a stereoscopic display, computer, and haptic device combination,
allows students to see and detect caries and periodontal problems in a haptic
environment [12]. Students can use the system online, and teachers can
publish multiple training programs that students can download and replay
at any time, making this system convenient and efficient [13]. Steinberg and
colleagues demonstrated that picture display and force feedback for teeth
and dental equipment were extremely realistic, but gingival tissue was not.
[14].

2.3.13. Individual Dental Education Assistant


Individual Dental Education Assistant (IDEA) is a virtual reality (VR) hand
flexibility training simulator that includes a handheld stylus that looks like
a dental handpiece and provides force feedback, as well as a computer
loaded with simulation software. IDEA, unlike other dental simulators, is
meant to assist students become flexible and proficient in the use of dental
handpieces by allowing them to practice removing pre-made virtual
materials of varied shapes (eg, straight line or circle).As a result, IDEA
strives to train dental students in hand flexibility rather than a specific
teaching component such as crown preparation or scaling. The system's key
advantage is its evaluation method. The score gained during the training
procedure is determined by two parameters: drilling speed and drilling
accuracy. Deviation from the trajectory or to an incorrect depth might result
in a drop in accuracy, which is displayed on the screen as an accuracy bar.
When the bar is completely depleted, the student fails the test [15]. Some
researchers claim that IDEA can increase students' performance on the
dental skill test; additionally, it can be used to detect students with hand
flexibility issues early on, allowing for early intervention to avert failure
[16].

● Prajapati, Nagaraj & Mitra [19]: Dental caries, periapical infection, and
periodontitis. Method: CNN with transfer learning. Result: accuracy 0.8846.
They tested how well a CNN performs in diagnosing from a tiny labeled dental
dataset; transfer learning was also utilized to increase accuracy.

● Oktay [20]: Teeth detection and classification. Method: AlexNet. Result:
tooth detection accuracy 0.90; classification accuracy: molar 0.9432,
premolar 0.9174, canine & incisor 0.9247. They provided a CNN-based
technique for identifying teeth in dental panoramic X-ray pictures, with
teeth detection using a modified version of the AlexNet architecture.

● Tuzoff [21]: Teeth detection and numbering. Method: R-CNN. Result: tooth
detection precision 0.9945, sensitivity 0.9941; tooth numbering specificity
0.9994, sensitivity 0.9800. A novel solution based on convolutional neural
networks (CNNs) is proposed that performs this task automatically for
panoramic radiographs.

● Wirtz, Mirashi & Wesarg [22]: Teeth detection. Method: coupled shape model
plus neural network. Result: precision 0.790, recall 0.827, Dice coefficient
0.744. The coupled shape model is initialized in terms of position and scale
using the network's preliminary segmentation of the tooth region.

● Jader [23]: Teeth detection. Method: Mask R-CNN model. Result: accuracy
0.88, F1-score 0.88, precision 0.90, recall 0.84. They proposed exploring a
deep learning method for instance segmentation of the teeth.

● Lee [24]: Teeth segmentation for diagnosis and forensic identification.
Method: Mask R-CNN model. Result: F1-score 0.875, precision 0.858, recall
0.893, mean IoU 0.877. They proposed a fully deep learning approach
employing the Mask R-CNN model for identifying and localizing dental
structures.

● Banar [25]: Teeth detection. Method: conventional CNN. Result: Dice score
0.93, accuracy 0.54, MAE 0.69, and a linear weighted Cohen's kappa
coefficient of 0.79. Their proposed fully automated approach shows
promising results compared with manual staging.

Figure 2.1 – Comparison Between Existing Systems Related To Module 1
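The segmentation metrics reported above (precision, recall, F1, Dice, IoU) can all be computed from binary masks. The following is a generic sketch of those pixel-wise definitions, not code from any of the cited studies; note that for binary masks the F1 score and the Dice coefficient coincide:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise precision, recall, F1 (= Dice for binary masks), and IoU."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # predicted and present
    fp = np.logical_and(pred, ~truth).sum()   # predicted but absent
    fn = np.logical_and(~pred, truth).sum()   # present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}
```

For example, a prediction that covers the whole ground-truth region plus one extra pixel has perfect recall but reduced precision and IoU, which is why the studies above report several of these metrics together.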


● Papadopoulos et al., 2013, Greece [17]: VR simulation in pediatric VP.
Participants: (103) 4th year dental students. Assessment: MCQs, students'
questionnaire, and VP feedback. Tested outcome: pediatric dentistry knowledge
and grasp of behavior and communication. Results: the VP group earned much
higher scores, and the majority of participants thought the simulation was
very good.

● Zafar et al., 2021, Australia [18]: Oculus Quest (VR headset plus digital
3D holograms and 360-degree spatial sound). Participants: (71) second year
dental students. Assessment: a self-administered questionnaire given before
and after the use of a dental LAVR simulator. Tested outcome: dental
students' perceptions of dental LAVR simulation on a pediatric patient.
Results: compared to standard LA teaching approaches, the majority of
participants reported enhanced LA abilities, more engagement in the learning
activity, improved understanding of anatomical landmarks, and added value.

● Liebermann and Erdelt, 2020, Germany [19]: VR in learning dental
morphologies. Participants: (48) second year dental students. Assessment:
questionnaire. Tested outcome: students' approval. Results: compared to
traditional textbooks, most students understood dental morphology far better.

● Mladenovic et al., 2020, Serbia [20]: AR simulator. Participants: (21)
fourth and fifth year dental students. Assessment: the amount of time it
takes to give anesthesia, and salivary cortisol levels before and after
anesthesia administration. Tested outcome: learning perception and acute
stress level. Results: the AR group reported much less time; there was no
statistical difference in cortisol levels across the groups.

● Zafar and Zachar, 2020, Australia [21]: HoloHuman AR to teach head and neck
anatomy. Participants: (88) second year dental students. Assessment: a
questionnaire self-administered before and after the usage of AR. Tested
outcome: perspectives on AR. Results: AR increased their learning and
knowledge of anatomical structures and they felt more confident, but it
should not be used in place of traditional cadaver training.

● Mladenovic et al., 2020, Serbia [22]: Mobile AR simulator. Participants:
(11) 4th year dental students. Assessment: simulation of local anesthetics
(infiltrations and nerve blocks), followed by an electronic satisfaction
survey. Tested outcome: student fulfillment. Results: all responders (100%)
felt (agree and strongly agree) that the application helps them better
understand local anesthetic techniques.

● Llena et al., 2018, Spain [7]: AR cavity models on computers and mobile
devices. Participants: (43) 3rd year dental students. Assessment: theoretical
knowledge prior to, shortly after, and six months after training; clinical
skills; satisfaction questionnaire. Tested outcome: ten theoretical concepts,
preparation of Class I and Class II cavities, and students' satisfaction.
Results: there were no significant differences in knowledge between groups,
but there were substantial disparities in cavity depth and extent for Class I
and Class II cavities. Students favored computers over mobile devices.

● Mladenovic et al., 2019, Serbia [23]: AR simulator on mobiles. Participants:
(41) 4th and 5th year dental students. Assessment: post-clinical knowledge
quiz on the use of local anesthetic, and heartbeat monitoring during
anesthetic administration. Tested outcome: knowledge and skills. Results: the
experimental group had a higher average score, required less time to
administer, and achieved a greater success rate. Heart rate increased
statistically significantly in both groups.

● Llena et al. [7]: The learning of cavity preparation design. Participants:
41 students (20 in the AR (experimental) group and 21 in the control group).
Assessment: questionnaires. Tested outcome: AR effectiveness. Results:
learning outcomes were equal in both groups, but the AR group performed
better in the majority of abilities related to cavity preparations.

● Mladenovic et al. [23]: The learning of blockade of the lower alveolar
nerve during the anesthesiological procedure, using AR marker recognition
technology with the Dental Simulator app for iOS and Android for local
anesthesia implementation in augmented reality mode. Participants: 21
students. Assessment: questionnaires. Tested outcome: AR effectiveness.
Results: after using the Dental Simulator, the students demonstrated a faster
average time of manipulation, greater success in performing anesthesiological
procedures, and a higher average knowledge level score.

Figure 2.2 – Comparison Between Existing Systems Related To Module 2, Module 3 And Module
4

2.4. Summary
This chapter discussed the existing systems and the limitations of others' work.
CHAPTER 3
3. TECHNOLOGY ADAPTED
3.1. Introduction
This chapter focuses on the project's technologies. Machine learning, deep
learning techniques, and computer vision are briefly described. In addition,
the Unity platform and the Vuforia and AR Foundation plugins used in the
Augmented Reality implementation, as well as the Unity and Blender platforms
used in Virtual Reality, are described.

3.2. Technologies Adapted for Implementation


3.2.1. Unity
The development platform for both Augmented Reality and Virtual Reality is
Unity. It is a cross-platform game engine that provides developers with an
easy-to-use workspace in which to create AR applications. Unity makes it
simple to deploy applications across a variety of mobile devices, which is
useful for the implementation of augmented reality and virtual reality in this
research.

3.2.2. Vuforia
A library called Vuforia is used to develop augmented reality on portable
electronics. In order to project virtual information such as text, photos,
videos, or 3D animations on the target images and objects, Vuforia analyzes
images and 3D objects to find and record features. Because the library that
comes with Vuforia contains the necessary basic code to implement AR, the
developer can totally concentrate on the final product they are creating
without having to worry about how the system will function at its most
fundamental level. This library works with the Unity Game Engine,
Android, and iOS devices.
3.2.3. AR Foundation
The AR Foundation package contains all of the GameObjects and classes
required to create interactive AR experiences in Unity rapidly. To create
comprehensive apps, AR Foundation combines cutting-edge capabilities
from ARKit, ARCore, Magic Leap, and HoloLens with special Unity
features. This framework makes it possible to utilize each of these
characteristics in a single workflow.
● ARCore - ARCore is Android's augmented reality framework.
AR Foundation collaborates with ARCore to bring AR
capabilities to Android devices.
● ARKit - ARKit is Apple's augmented reality framework. AR
Foundation collaborates with ARKit to bring AR capabilities to
Apple devices.
● XR Plugin Management - The XR Plugin Management package
provides a straightforward management tool for platform-
specific XR plug-ins like ARKit and ARCore.

3.2.4. Blender
Blender is used to create the objects used in the Virtual Reality simulation.
Blender is a free and open-source 3D computer graphics software tool set
that can be used to create animated films, visual effects, art, 3D-printed
models, motion graphics, interactive 3D apps, virtual reality, and,
previously, video games. 3D modeling, UV mapping, texturing, digital
sketching, raster graphics editing, rigging and skinning, fluid and smoke
simulation, particle simulation, soft body simulation, sculpting, animation,
match movement, rendering, motion graphics, video editing, and
compositing are among the features of Blender.
3.2.5. Android
Android is a mobile operating system designed specifically for touchscreen
devices such as smartphones and tablets. It is based on a modified version of
the Linux kernel and other open-source software. It was created by the Open
Handset Alliance, a consortium of developers, and is officially supported by
Google.
The source code has been used to build a wide range of Android variants
for various devices, including game consoles, portable media players,
digital cameras, and computers. To create the mobile application, we use
Android.
3.2.6. Computer Vision
Computer vision is the process by which computers and systems extract
meaningful information from digital videos, pictures, and other visual
inputs. AI enables computers to think, and computer vision enables them to
see, observe, and understand. Some examples of well-established computer
vision tasks are image categorization, object detection, tracking of moving
objects, and content-based image retrieval. OpenCV is used in our project as
it is based on image processing. OpenCV (imported in Python as cv2) is an
open-source library that can be used on different platforms.
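As a small illustration of the kind of pixel-level processing OpenCV performs, the sketch below converts a tiny RGB image to grayscale using the standard luminosity weights. The image data is invented for illustration; in the project, OpenCV's own cv2.cvtColor would be applied to real radiographs.

```python
# Toy illustration of a basic computer-vision primitive: converting an RGB
# image (nested lists of (R, G, B) tuples) to grayscale with the standard
# luminosity weights. OpenCV's cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# performs the equivalent operation on real image arrays.

def to_grayscale(rgb_image):
    """Return a 2-D list of grayscale intensities in [0, 255]."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in rgb_image
    ]

if __name__ == "__main__":
    image = [[(255, 0, 0), (0, 255, 0)],
             [(0, 0, 255), (255, 255, 255)]]
    print(to_grayscale(image))  # [[76, 150], [29, 255]]
```

Grayscale conversion is usually the first step before the enhancement and segmentation stages described later, since dental X-rays are processed as single-channel intensity images.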

3.2.7. Machine Learning


Machine learning is the study of teaching computers to improve themselves
through the provision of data and relevant patterns that can be used to
forecast future data or make decisions. Representation, evaluation, and
optimization are the three main components of machine learning, and these
elements are present in all machine learning algorithms. There are four
learning types: Supervised Learning, Unsupervised Learning, Semi-supervised
Learning, and Reinforcement Learning.

3.2.8. Deep Learning


Deep learning is a type of machine learning. It is a branch of artificial
intelligence that replicates the human brain's capacity for pattern
recognition and data processing. Convolutional neural networks (CNNs) are a
type of artificial neural network used in image recognition and pixel data
processing. CNN-based models are also used to divide a visual input into
segments to make image analysis easier.
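The core operation of a CNN can be sketched in a few lines: a small kernel slides over the image and a weighted sum is computed at each position. The plain-Python example below ("valid" mode, one invented difference kernel) only illustrates the arithmetic; frameworks such as Keras apply many such kernels across many channels at once.

```python
# Minimal sketch of the 2-D convolution at the heart of a CNN. This is the
# "valid"-mode cross-correlation that deep learning frameworks call
# convolution: no padding, kernel applied wherever it fully fits.

def conv2d(image, kernel):
    """Convolve a 2-D list `image` with a 2-D list `kernel` (valid mode)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

if __name__ == "__main__":
    img = [[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]]
    edge = [[1, -1]]          # horizontal difference kernel (illustrative)
    print(conv2d(img, edge))  # [[-1, -1], [-1, -1], [-1, -1]]
```

In a trained CNN the kernel weights are learned from data rather than hand-chosen, and the outputs are passed through non-linearities and pooling layers.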

3.2.9. Programming Languages


3.2.9.1. Python
Python is a widely used high-level language among programmers
since its code is easy to read and maintain. It allows both
object-oriented and structured programming and encourages clear,
simple code, and it is a free and open-source programming language.
Using Python to implement computer vision enables developers to
automate processes that require visualization; libraries such as
Mahotas, Keras, and SciPy make computer vision work in Python easier.

3.2.9.2. C#
The programming language used in Unity is C#. Unity's languages
are all object-oriented scripting languages. Like any other language,
scripting languages comprise parts of speech, the most important of
which are variables, functions, and classes.
3.2.9.3. Java
Java is the programming language utilized in the development
of the Android application. It is a class-based, object-oriented
programming language with syntax influenced by C++. Java's
main goals are to be simple, object-oriented, robust, secure, and high
level.
The JVM (Java Virtual Machine) is used to run Java applications;
however, Android has its own virtual machine, the Dalvik Virtual
Machine (DVM), which is tailored for mobile devices.

3.3. Summary
This chapter discusses the technologies used and their suitability for the
implementations.
CHAPTER 4
4. OUR APPROACH
4.1. Introduction
This chapter provides a description of the proposed solution with reference to
the users, inputs, outputs, process, and the technology that implements the solution.

4.2. Dental X ray mapping using AR and VR dental simulator

Figure 4.1 – High Level Architecture Of Module 1, Module 2 And Module 3

Dentists can enter soft copies of dental X-rays using a mobile application, as shown
in the above diagram. The predicted image is then requested through an API call,
and the output results are sent back to the mobile application via the API, where
the two Augmented Reality modules can use the output image.

Figure 4.2 – Block Diagram Of The System Of Module 1, Module 2 And Module 3
After receiving the dental X-ray images in which abnormalities have been
identified, the processed dental X-ray is mapped onto the human face and the
relevant details are extracted using Augmented Reality. The dental X-ray
images are mapped onto the human face using the camera of the mobile device.
This helps dentists during surgeries such as oral and maxillofacial surgery,
orthodontics, implantology, and restorative dentistry.
Moreover, the X-ray with marked abnormalities is also mapped onto a dental mold
using Augmented Reality; here too, the dental X-ray images are mapped onto the
mold using the camera of the mobile device. This module helps dental students
to practice before they face actual operations, and the dentist can also use
the mapped X-ray to explain surgeries to patients.

Figure 4.3 - High Level Architecture Of Module 4


Due to the pandemic and crisis situation, many dental students are having issues
with clinical practices. We implement a virtual simulation to make learning
easier for dental students. Here, hand motion tracking is used instead of VR
gloves; the only equipment required is a virtual reality HMD, a head-mounted
device that provides virtual reality for the wearer. Finally, the system is
evaluated using dental students and experts.

4.2.1. Users
The main users of this system are dental doctors who perform dental
surgeries and dental students engaged in pre-clinical studies.
The output of the dental X-ray image processing module is used by both
surgeons and students. The dental mold recognition and X-ray mapping
module and the VR simulation module are used by the dental students, while
the face recognition and X-ray mapping module is used by the dental
surgeons.

4.2.2. Input
Soft copies of dental X-ray images are utilized as the input for the system.

4.2.3. Process of the system


Dentists enter soft copies of the X-rays using the mobile application. First,
the collected dental X-ray image datasets are preprocessed. The dataset is
split into training and testing sets. Then data augmentation is performed on
the training set. After that, image segmentation is carried out using a
CNN-based method. Then teeth are classified into missing teeth and restored
teeth, and the resulting image is output for the next modules and sent back
to the mobile application.
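The dataset split step above can be sketched as follows. The file names are invented placeholders, and in practice a utility such as scikit-learn's train_test_split would typically be used.

```python
# Hedged sketch of the dataset split step: shuffle the X-ray file list
# reproducibly and hold out a fraction for testing. The file names below
# are placeholders, not files from the actual dataset.

import random

def split_dataset(samples, test_fraction=0.2, seed=42):
    """Shuffle `samples` reproducibly and split into (train, test) lists."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_fraction)
    return items[n_test:], items[:n_test]

if __name__ == "__main__":
    xrays = [f"xray_{i:03d}.png" for i in range(10)]  # placeholder names
    train_set, test_set = split_dataset(xrays, test_fraction=0.2)
    print(len(train_set), len(test_set))  # 8 2
```

Fixing the shuffle seed keeps the split reproducible across training runs, so augmentation is applied only to a consistent training portion.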
The face detection and dental X-ray mapping module uses these predicted X-ray
images as its inputs. It detects the patient's face and maps the X-ray image
onto the patient's face using augmented reality through the camera of the
mobile device. Here we use two methods to map the dental X-ray onto the human
face, AR Foundation and Vuforia, both used within the Unity platform, and
finally we compare and evaluate the accuracy of the two methods.
The dental mold recognition module detects the dental mold and maps the
predicted X-ray image onto the dental mold using augmented reality through
the camera of the mobile device. Here too, the output X-ray images from the
dental X-ray processing module are used as input. Finally, the accuracy of
Vuforia and AR Foundation is evaluated by comparing the results of these two
methods in the Unity platform.
The virtual reality simulation system contains a simulation implemented using
Unity VR. Here, we create a virtual environment for selected dental
operations. The only equipment used in this system is a virtual reality HMD,
and hand motion tracking is used instead of VR gloves. After implementing the
entire VR simulation, the system is evaluated with dental students and
experts.

4.2.4. Output
The X-ray mapped onto the patient's face, the X-ray mapped onto the dental
mold, and the VR simulation.
4.3. Summary
This chapter discusses the approach of this research including descriptions
about the entire system, users who are targeted, Inputs to the system, how the
inputs are processed and the final output of the system.
CHAPTER 5
5. ANALYSIS & DESIGN
5.1. Introduction
This chapter contains details of the design of the system including descriptions of
each module, what each module does and the interaction between the modules.

5.2. Analysis and Design


5.2.1. Module 1 - Dental X-ray image processing

Dental X-ray images are utilized as the input for radiography image
processing algorithms, which use mathematical operations to process the
images. Numerous digital image processing methods are available to process
the input through the following steps.

Figure 5.1 – Approach For Module 1


These steps are useful for spotting cavities, tooth fractures, cysts or
tumors, measuring the length of root canals, and monitoring the development
of a child's teeth. Human eyes quickly identify objects of interest and
separate them from background tissues, but building algorithms for this is
quite difficult. In the feature extraction step, shape-based and
texture-based features will be extracted. Finally, the extracted features
are used to classify each tooth as a Damaged Tooth, a Missing Tooth, or a
Healthy Tooth using a Convolutional Neural Network.
The next stages describe how the suggested methodology functions and the
methods we will employ.

5.2.1.1. Image Acquisition


The dataset includes de-identified and anonymous panoramic dental
x-ray images of 516 patients, taken from Doctor Manjula Herath
and from Doctor Bandu Goonetilleke. Additionally, there are
manually segmented images of mandibles in the dataset, however
only the original images are used because those segmented images
are unrelated to the topic of our study. All photos range in width
from 2600 to 3138 pixels, while their heights are from 1050 to 1380
pixels. To learn how to detect key items from the entire image, this
approach requires a vast amount of training data.

(a) (b)

Figure 5.2 – The Original Radiograph Of Dental X-Ray


Figure 5.3 – Split Mask, In Which Each Tooth Is Separated From The Other By A Narrow Gap
However, finding a large training dataset during this period is quite
challenging, so we used an image augmentation technique to increase the
number of samples.

5.2.1.2. Data Augmentation

Data augmentation is a method for creating new training data out of existing
training data. Deep learning pipelines frequently use data augmentation
techniques to increase dataset size and minimize memorization. The original
images were transformed using a variety of methods, including shifting,
rotation, and flipping, to create multiple replicas. As a result, we used
the following image transformations to increase the number of samples.

1. Rotation: This is achieved by rotating the image in a random
   direction (left or right). The pixel values of the picture are
   moved left, right, up, and down according to a degree value
   between 0 and 180 during this process.
2. Zoom: This makes the objects in the image appear closer
   together. It was accomplished by replacing the existing
   image's pixel values with new ones; the original pixel values
   were evaluated and the nearest value was calculated when
   adding the new values.
3. Height Shift: This shifts the image pixels up and down at
   random. Machine learning algorithms that use this modification
   avoid memorizing images that are always centered in the
   dataset.
4. Width Shift: The pixel values are moved to the right or left
   when this operation is executed.
5. Horizontal Flip: Image pixels are transferred horizontally
   from one half of the image to the other in this procedure.
Figure 5.4 – On An Image Representation Sample, The Effect Of Applying Image Transformation
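As a rough sketch of how two of the transformations above move pixels, consider the plain-Python example below. A real pipeline would use a library such as Keras' ImageDataGenerator; the tiny intensity grid here is illustrative.

```python
# Minimal sketches of two augmentation transforms on a 2-D list of pixel
# intensities: a horizontal flip and a width shift padded with zeros.
# These only show the pixel movement; production code would use an
# augmentation library on full image arrays.

def horizontal_flip(image):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in image]

def width_shift(image, shift, fill=0):
    """Shift pixels right by `shift` columns, padding the left with `fill`."""
    return [[fill] * shift + row[:len(row) - shift] for row in image]

if __name__ == "__main__":
    img = [[1, 2, 3],
           [4, 5, 6]]
    print(horizontal_flip(img))  # [[3, 2, 1], [6, 5, 4]]
    print(width_shift(img, 1))   # [[0, 1, 2], [0, 4, 5]]
```

Each transform yields a new plausible training sample from an existing one, which is how the augmented counts in the table below are obtained from the originals.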

                       Original    Augmented

Missing Tooth             115         1120

Restoration Tooth         110         1100

Total                     225         2220

Figure 5.5 – The Original Column Shows The Original Sample Counts, While The Augmented Column
Shows How Data Augmentation Increased The Number Of Samples

5.2.1.3. Image Preprocessing

Pre-processing techniques use pixel brightness transformations,
geometric transformations, and the local neighborhood of the
processed pixel. Image preprocessing techniques make use of the
high degree of redundancy in images and operate on brightness
without taking position into account. Contrast stretching, grayscale
stretching, log transformation, gamma correction, image negatives,
and histogram equalization are standard enhancement methods used to
improve the quality of medical images. Low-resolution images are
improved by pre-processing methods, which correct spatial resolution
and apply local modifications to raise the overall quality of the
input image. Additionally, enhancement and filtering techniques
applied before further processing boost the overall image quality,
allowing segmentation and feature extraction to be performed more
precisely and conveniently.

A contrast stretching approach has been widely used to enhance the
quality of digital X-rays. Utilizing local homogeneity, adaptive
local contrast stretching addresses the issue of over- and
under-enhancement. Histogram equalization (HE) is a popular
technique for enhancing image contrast: it increases the dynamic
range of an image's histogram. Although it can produce artificial
effects in photographs, it works well for scientific images such as
X-rays.

Figure 5.6 - Result Of Enhancement In Preprocessing Stage
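The histogram equalization technique described above can be sketched in plain Python. This is a minimal illustration on a tiny 4-level image; in the project, cv2.equalizeHist would be applied to full 8-bit radiographs.

```python
# Classic histogram equalization on a 2-D list of intensities: build the
# histogram, accumulate it into a CDF, and remap every intensity through
# the normalized CDF to stretch the histogram's dynamic range.

def equalize_histogram(image, levels=256):
    """Return the histogram-equalized version of a 2-D intensity list."""
    pixels = [v for row in image for v in row]
    n = len(pixels)
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)
    if n == cdf_min:                      # constant image: nothing to equalize
        return [row[:] for row in image]
    lut = [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
           for v in range(levels)]
    return [[lut[v] for v in row] for row in image]

if __name__ == "__main__":
    # A dark, low-contrast 4-level image gets its intensities spread out.
    print(equalize_histogram([[0, 1], [1, 3]], levels=4))  # [[0, 2], [2, 3]]
```

Stretching the cumulative distribution in this way is what makes faint structures in a low-contrast radiograph easier to segment in the later stages.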

On the other hand, filtering techniques applied to medical images help
eliminate noise to a certain extent. Different noise effects, such as
salt-and-pepper noise, Gaussian noise, and speckle noise, are added to the
input images to test the filters.
Various filters have been used to achieve the best possible outcome
for the irregularities present in dental images, such as the Average
filter, Bilateral filter, Laplacian filter, Homomorphic filter,
Butterworth filter, Median filter, Gaussian filter, and Wiener filter.

Average Filter - The average filter is a simple and effective linear filter.
Each pixel is replaced by the mean value of its neighbors, including itself.
It is effective at reducing Gaussian noise, is practical for reducing
impulsive noise, and is simple to design. However, it does not preserve the
image's edges.

Wiener Filter - The Wiener filter removes the noise that degrades signal
quality. The filter performs a statistical analysis and is used to minimize
the mean square error; it is appropriate for reducing Gaussian and speckle
noise. Its equation is given in terms of the spectral properties of the
original signal and the noise.

Median Filter - A median filter is a traditional non-linear filter used to
lessen the amount of intensity variation between adjacent pixels. The values
of each pixel and its neighbors are arranged in ascending order, and the
median value is substituted for the pixel's value. This method works best
for reducing salt-and-pepper noise.
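A minimal sketch of the median filter just described, applied to a tiny intensity grid (border pixels are simply left unchanged here for brevity; cv2.medianBlur would be used on real images):

```python
# 3x3 median filter on a 2-D list of intensities: each interior pixel is
# replaced by the median of its 3x3 neighborhood, which suppresses
# salt-and-pepper outliers without the blurring of an average filter.

def median_filter3x3(image):
    """Median-filter interior pixels; borders are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(image[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]  # median of the 9 sorted values
    return out

if __name__ == "__main__":
    noisy = [[10, 10, 10],
             [10, 255, 10],   # a single "salt" outlier
             [10, 10, 10]]
    print(median_filter3x3(noisy))  # outlier replaced by 10
```

Because the median ignores extreme values entirely, the isolated 255 "salt" pixel is removed while the surrounding intensities are untouched.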

5.2.1.4. Dental X-ray Image Segmentation

Dental X-ray image (DXRI) segmentation is an important stage in the process
of obtaining useful data from a variety of imaging modalities. Dentistry
presents more segmentation challenges than other medical imaging modalities,
making the process more difficult and complex.
The segmentation procedure includes border tracing, localizing artifacts,
analyzing structure, etc. Although it is easy for human eyes to recognize
objects of interest and separate them from the background tissues, creating
algorithms for this task is very difficult.

Segmentation is frequently used to extract or omit particular parts of an
image. General dental image segmentation methods are categorized as
thresholding-based, watershed-based, level set methods, clustering, and
region growing.

● Thresholding-based - Thresholding segments an image by selecting a
threshold value: pixels with values higher than the threshold form one
region, while pixels with values lower than the threshold form an adjacent
region. In global thresholding, the image is segmented based on its
histogram. If f(x, y) represents the image, an initial threshold T is
selected to distinguish objects of interest from the background; any pixel
(x, y) whose value is greater than T is designated as part of an object of
interest, otherwise it is designated as background.

● Clustering method - Clustering is a method for automatically categorizing
data based on a particular level of similarity; the similarity criterion
depends on the problem being solved. In general, the algorithm's parameters
must include the number of groups to be found.

● Watershed-based - The watershed transformation, applied to grayscale
images, divides an image into adjacent regions using mathematical
morphology.

● Region-based - The region-based method's objective is to segment an image
into areas based on differences in pixel intensity levels. Region growing is
the most popular segmentation method: pixels are grouped according to a
predetermined criterion to produce larger regions. A conventional strategy
is to start from seed pixels and add neighboring pixels whose values are
close to the center (centroid) of the values being sought. Region growing
requires two parameters to accomplish the segmentation procedure: the seed
points and the similarity criterion.
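The global thresholding idea described above can be sketched with the classic iterative threshold-selection procedure: start from the mean intensity, then move T to the midpoint of the two class means until it stabilizes. The sample intensities below are illustrative, not real radiograph data.

```python
# Iterative global thresholding: T starts at the mean intensity and is
# repeatedly set to the midpoint of the below-T and above-T class means
# until it stops changing. Returns the final T and a binary mask in which
# 1 marks pixels designated as objects of interest (value > T).

def global_threshold(image, eps=0.5):
    """Compute (T, mask) for a 2-D list of pixel intensities."""
    pixels = [v for row in image for v in row]
    t = sum(pixels) / len(pixels)
    while True:
        low = [v for v in pixels if v <= t]
        high = [v for v in pixels if v > t]
        if not low or not high:          # all pixels fell on one side
            break
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        converged = abs(new_t - t) < eps
        t = new_t
        if converged:
            break
    mask = [[1 if v > t else 0 for v in row] for row in image]
    return t, mask

if __name__ == "__main__":
    t, mask = global_threshold([[10, 10, 200],
                                [10, 200, 200]])
    print(t, mask)  # T near the midpoint; bright pixels marked 1
```

On bimodal images like this toy example the procedure converges quickly; as the text notes, panoramic X-rays are harder, which motivates the learning-based approach discussed next.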

The segmentation process using the above methods is challenging for
panoramic X-ray images due to the characteristics of these images. Using
learning-based techniques may be one option for segmenting the proposed
data collection.

I selected the U-Net model to perform semantic segmentation of teeth in
dental X-ray images. U-Net is a convolutional network architecture for fast
and precise segmentation of images. Its architecture resembles the letter U
when drawn, hence the name U-Net. The architecture is divided into two
sections: the contracting path on the left and the expansive path on the
right. It is a fully convolutional neural network that is frequently used
in the segmentation of biomedical images.
Figure 5.7 - The U-Net Architecture Used In The Study. Each Blue Box Corresponds To A Multi-
Channel Feature Map. The Number Of Channels Is Denoted On Top Of The Box. The Shape Is Provided
On The Edge Of The Box. White Boxes Represent Copied Feature Maps. The Arrows Denote The
Different Operations Indicated By Colors
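Although the real network is built in a deep learning framework, the U-Net's shape flow can be traced with a few lines of plain Python. The sketch below assumes a 256x256 single-channel input and 64 base filters; these are illustrative choices, not necessarily the project's final configuration.

```python
# Toy trace of the tensor shapes along a U-Net: each contracting step
# halves height/width and doubles channels, and each expansive step
# doubles height/width and concatenates the matching contracting feature
# map (the skip connection) before a conv block reduces channels again.

def unet_shape_trace(size=256, channels=1, base=64, depth=4):
    """Return (contracting shapes, bottleneck shape, expansive shapes)."""
    down = []
    h = w = size
    c = base
    for _ in range(depth):
        down.append((h, w, c))       # shape after each contracting conv block
        h, w, c = h // 2, w // 2, c * 2
    bottleneck = (h, w, c)
    up = []
    for skip_h, skip_w, skip_c in reversed(down):
        h, w = h * 2, w * 2          # up-convolution doubles the resolution
        c = c // 2 + skip_c          # concatenate the skip feature map
        up.append((h, w, c))
        c = skip_c                   # conv block brings channels back down
    return down, bottleneck, up

if __name__ == "__main__":
    down, bottleneck, up = unet_shape_trace()
    print(down)        # [(256, 256, 64), (128, 128, 128), ...]
    print(bottleneck)  # (16, 16, 1024)
    print(up[-1])      # (256, 256, 128) before the final 1x1 output conv
```

The symmetry of the two lists is exactly the "U" shape: the expansive path restores the input resolution while the concatenated skip maps reinject the fine spatial detail lost during downsampling.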

5.2.2. Module 2 - Face recognition and dental X-ray mapping

In this module, face recognition and dental X-ray mapping onto the human face
are done using Augmented Reality. The output data from the Dental X-ray
image processing module is used as the input data of this module. Since the
Dental X-ray image processing module is not completely finished yet, I have
used some sample data.

Figure 5.8 - Approach For Module 2


This module mainly contains three stages.

1. AR Face Recognition

2. Dental X-ray Mapping using AR

3. Evaluation

First, I identify the area of the face and then map the dental X-ray image
output from the Dental X-ray image processing module onto the face using
Augmented Reality. For mapping the dental image onto the face, we have
developed an Android mobile application: using the camera of the Android
device, we can map the input X-ray image onto the particular human face.
Unity is the development platform I have chosen for this module; it is a
cross-platform game engine that provides developers with a simple workspace
in which to create AR applications. Two main plugins are used in Unity to
develop Augmented Reality applications: AR Foundation and Vuforia. In this
module, I perform face recognition and dental X-ray mapping onto the human
face using both plugins and finally select the better solution by comparing
the accuracy of each.

● AR Face Recognition

First, we need to identify the human face before the dental X-ray mapping
stage. AR Foundation provides ARFace to ease the face recognition process.
ARFace objects are used to represent faces and are generated, updated, and
removed by the ARFaceManager. The ARFaceManager fires a
facesChanged event once each frame, which comprises three lists: faces that
have been added, faces that have been changed, and faces that have been
removed since the last frame. When the ARFaceManager identifies a face
in the scene, it creates a Prefab with an ARFace component to track the face.
The ARFace component, which is attached to the Face Prefab, provides
access to detected faces. Vertices, indices, vertex normals, and texture
coordinates are all provided by ARFace. A center pose, three region poses,
and a 3D face mesh are all provided via the Augmented Faces API.

The origin point of the Prefab constructed by the ARFaceManager is the


center pose, which represents the center of a user's head. It is placed behind
the nose inside the skull.

The following are the axes of the center pose:

Figure 5.9 – Axes For The Face

The positive X-axis (X+) points toward the left ear.

The positive Y-axis (Y+) points upward out of the face.

The positive Z-axis (Z+) points into the center of the head.

Region poses are significant portions of a user's face, located at the left
of the nose, the right of the nose, and the tip of the nose. The region
poses are oriented along the same axes as the center pose. The face mesh is
made up of 468 points that represent the human face and is also defined
relative to the center pose.

● Dental X-ray Mapping using AR

In the first stage, the area of the face was identified along with the 468
points that represent the human face, which allowed the mouth area of the
face to be located. As the next step, the output dental X-ray from the
Dental X-ray image processing module must be mapped onto the human mouth
area. To map this dental X-ray, the doctors requested that we develop an
Android application in which the particular patient's X-ray can be selected.
The mapping is done using the camera of the Android device: when the camera
is held toward the human face, the selected X-ray is mapped onto the face.
Since the output image of the Dental X-ray image processing module marks the
abnormal areas of the teeth, we can see which areas of the teeth have those
abnormalities.

● Evaluation

In the evaluation stage, the results of AR Foundation and Vuforia, the two
Unity plugins, will be compared. The accuracy of the two implementations
will be measured with the help of external users, and by comparing the
users' responses the better solution of the two will be selected.
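The planned comparison could be tallied as in the sketch below. Every number and judgment here is an invented placeholder for illustration, not a measured result.

```python
# Hypothetical sketch of the plugin comparison: each external user records
# whether the X-ray overlay appeared correctly aligned (True/False), and
# the method with the higher success rate is preferred. All data below is
# a placeholder, not an actual evaluation outcome.

def success_rate(trials):
    """Fraction of trials judged successful (True = correctly mapped)."""
    return sum(trials) / len(trials)

def better_method(results):
    """Return the method name with the highest success rate."""
    return max(results, key=lambda name: success_rate(results[name]))

if __name__ == "__main__":
    results = {  # placeholder user judgments
        "AR Foundation": [True, True, False, True, True],
        "Vuforia": [True, False, False, True, False],
    }
    for name, trials in results.items():
        print(name, success_rate(trials))
    print("preferred:", better_method(results))
```

A simple success rate is only one possible criterion; per-user Likert ratings or alignment-error measurements could be aggregated the same way.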

5.2.3. Module 3 - Dental Mold recognition and dental X-ray mapping

This module focuses on identifying the dental mold and mapping the X-ray
onto the mold using augmented reality. The output data from the Dental X-ray
image processing module is used as the input data of this module. The
intention of this module is to improve the pre-clinical practice of dental
students.

Figure 5.10 - Approach For Module 3

This module mainly contains three stages.

1. AR Dental Mold Recognition

2. Dental X-ray Mapping using AR

3. Evaluation

The output of the dental X-ray image processing module is used as the input
for this module. The output X-ray image is mapped onto the dental mold using
augmented reality through an Android application: with the camera of the
Android device, we can map the input X-ray image onto the dental mold. The
Unity platform is used to develop this module. Unity is a cross-platform
game engine that provides developers with a simple workspace in which to
create AR applications. AR Foundation and Vuforia are two main plugins that
can be used in Unity to develop Augmented Reality applications. My intention
is to develop this module using both of these plugins and evaluate the
accuracy of each procedure with the help of external users.

● AR Dental Mold Recognition


AR dental mold recognition is the first step of this module. Since the mold
is an object, a Vuforia model target can be used to recognize it. Vuforia
allows two methods for object tracking: using a CAD file or a 3D scan.
Since no CAD file exists for the dental mold used in this module, the
3D-scan method was used for model target generation. The Object Target for
the dental mold is created by scanning the mold with the Vuforia Object
Scanner. With the help of this program, objects can be scanned from all
angles to record their vantage points, and scan results can be tested and
extended with further scanning. The process culminates in the creation of an
Object Data (*.OD) file that contains the source information needed to
define the Object Target in the Target Manager on the Vuforia Developer
Portal. After scanning, the Object Data file is uploaded to the Vuforia
Target Manager, where an Object Target is created and can be bundled into a
Device Database. This database is downloaded and used in Unity to recognize
the dental mold.

● Dental X-ray Mapping using AR

After identifying the dental mold, the output X-ray image of the first
module must be mapped onto it. The X-ray is uploaded to the Android device,
and with the camera of the Android device it can be mapped onto the dental
mold. As the X-ray image marks the abnormal areas of the teeth, students can
identify the differences between normal and abnormal teeth.

● Evaluation

I plan to implement this module using the AR Foundation and Vuforia plugins
in the Unity platform. After the implementation, my idea is to give the
implemented module to external users and evaluate the accuracy of both
implementations. After the evaluation, we can come to a conclusion about the
better implementation.
5.2.4. Module 4 - VR simulation

Figure 5.11 - Approach For Module 4

Virtual reality represents the first chance to merge 3-D visual imaging with
interaction at a level that allows realistic simulation of intricate anatomic
dissection or surgical operations. It can help students understand surgical
anatomy by allowing them to investigate the interrelationships of various
organ systems from viewpoints not available through other typical teaching
methodologies. Before attempting new surgical operations on an animal model,
a researcher can first test them many times in simulation. Surgical skills
become easier to teach, there are fewer dangers, and fewer animals are
required for study. In its present typical form, the simulator is a
"stand-alone" training system based on a head-mounted display (HMD).
This module focuses on a virtual reality simulation. Many dentistry students
are struggling with clinical procedures as a result of the present epidemic
and crisis circumstances. After studying the results of the prior modules, a
virtual simulation is implemented to make learning simpler for dentistry
students. Instead of VR gloves, hand motion tracking is employed; the only
equipment utilized is a virtual reality HMD, a head-mounted device that
provides virtual reality to the wearer.

The next steps detail how the proposed technique works and the methods
we will use.

● Create the 3D environment

The environment in the VR simulation was created using Unity. Free 3D models
are obtained from the Internet, and the rest is done with Blender. Blender
has an outstanding collection of 3D modeling and sculpting capabilities and
is regarded as a credible alternative to commercial modeling applications;
it has become increasingly common in big studio pipelines in recent years.

● Animations, Physics and Scripting

Physics allows things to be manipulated by real-world forces such as


gravity, velocity, and acceleration. Unity Physics is a hybrid of rigid
body dynamics systems and spatial query systems. It was created from
the ground up utilizing the Unity data-oriented tech stack. Animations
are played back using the animation component. It is possible to attach
animation clips to the animation component and control playback from
the script. Unity's animation system is weight-based, and it allows
Animation Blending, Additive animations, Animation Mixing, Layers,
and complete control over all elements of playback. An animation
system contains information on how specific objects' position, rotation,
or other features should change over time. Every segment can be
considered a single linear recording.

● Hand Tracking

One of the classic qualities that distinguishes humans from non-human primates is the dexterity provided by our opposable thumbs. Almost every minute of our waking life, we use our hands to gesture, feel, and interact with our surroundings. When we are unable or constrained in using our hands, we are tremendously handicapped, and a variety of once-comfortable chores become unbearably unpleasant. Hand tracking is the process by which the headset cameras, LiDAR array, or external sensor stations track the location, depth, speed, and orientation of the hands. This tracking data is then evaluated and converted into a virtual, real-time depiction of hands and motions inside the virtual environment.

Challenges

- Object interaction

According to subjective Likert evaluations on a variety of descriptive criteria, controller-free interaction seemed much less comfortable and precise than controller-based interaction.

- Tracking Location

Physical obstruction and tracking limitations are likely to cause unnatural head motions during manual activities, as the user tries to keep the hands visible. This may impede the transfer of training from virtual to real-world environments, or drastically harm immersion if the hands vanish from peripheral vision at an unexpected or inconsistent point.
● Evaluation of the VR system

A questionnaire-based user assessment will be implemented in the future. The system will be evaluated by both specialists and users: the questionnaire is designed around the simulation, with separate questions for experts and for students. Questionnaires are primarily used to explore the subjective experiences of people using the hardware, with the goal of identifying ways to improve the next generation of the simulator. The questionnaire's content focuses on the user's relationship with the simulator. Users of the simulator fall into two groups: trainers and trainees. For trainers the device is a teaching tool, so it must be evaluated for its supporting role in teaching, such as whether it reduces teaching costs or enhances teaching efficiency. For trainees the device is a learning opportunity, so it must be examined for learner performance and the creation of an educational environment.

5.3. Summary
Analysis and design of each module are discussed in this chapter.
CHAPTER 6
6. IMPLEMENTATION
6.1. Introduction
In this chapter we discuss the overall implementation of the project so far.
6.2. Implementation
6.2.1. Dental X-ray image processing

In module 1, to detect missing and restored teeth, I used the Python programming language with the open-source OpenCV image processing library. I collected data, including some annotated X-ray images, from Dr. Manjula Herath, Dr. Bandu Goonetilleke, and the Kaggle website. However, finding a large training dataset is very difficult, and online data sources contain very limited dental X-ray data. An overfitting problem might occur if the training dataset contains an insufficient number of image samples, so we used an image augmentation technique to increase the number of samples and prevent this difficulty.

Data Augmentation

To generate various copies of the original images, several image transformation techniques were used, including shifting, rotation, and flipping. Because the available dental X-ray datasets are very limited, we used the following image transformations to increase the number of samples.

Rotation: the image is rotated in a random direction (left or right) by an angle between 0 and 180 degrees.

Horizontal Flip: image pixels are mirrored horizontally, from one half of the image to the other.
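As a rough sketch of these transformations, the following NumPy code produces shifted, rotated, and flipped copies of an image. This is illustrative only: the project's actual implementation uses OpenCV, and arbitrary-angle rotation would additionally need interpolation.

```python
import numpy as np

def shift(img, dx, dy):
    # shift pixels by (dx, dy); np.roll wraps around the borders,
    # a simplification of the border handling a real pipeline uses
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def rotate90(img, k):
    # rotate by k * 90 degrees counter-clockwise; arbitrary angles
    # between 0 and 180 degrees would need warpAffine-style resampling
    return np.rot90(img, k=k)

def horizontal_flip(img):
    # mirror the image left-to-right
    return np.fliplr(img)

def augment(img):
    # produce several variants of one X-ray to enlarge the dataset
    return [shift(img, 5, 5), rotate90(img, 1), horizontal_flip(img)]
```

Each original X-ray thus yields several training samples, which mitigates the overfitting risk described above.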
Figure 6.1 - Source Code For Rotating And Flipping The Input Image

Figure 6.2 - Result Of Data Augmentation

Preprocessing

In this phase, preprocessing techniques are used to improve the quality of the dental X-ray images. I used Adaptive Histogram Equalization (AHE) to enhance the images.

Adaptive Histogram Equalization (AHE)

AHE is an image-processing method that boosts image contrast. Unlike normal histogram equalization, which uses one histogram for the entire image, the adaptive approach computes several histograms, each corresponding to a distinct region of the image.
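A naive tile-based AHE can be sketched as follows. This is illustrative NumPy only; the project's implementation uses OpenCV, whose CLAHE variant additionally clips each histogram and blends neighbouring tiles to avoid visible seams.

```python
import numpy as np

def equalize(tile):
    # classic histogram equalization: remap intensities so the
    # cumulative distribution becomes approximately uniform
    hist = np.bincount(tile.ravel(), minlength=256)
    cdf = hist.cumsum() / tile.size
    return (cdf[tile] * 255).astype(np.uint8)

def adaptive_hist_eq(img, tiles=4):
    # adaptive variant: equalize each tile with its own histogram,
    # boosting local contrast instead of global contrast
    out = np.empty_like(img)
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            ys = slice(i * th, (i + 1) * th if i < tiles - 1 else h)
            xs = slice(j * tw, (j + 1) * tw if j < tiles - 1 else w)
            out[ys, xs] = equalize(img[ys, xs])
    return out
```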
Figure 6.3 - Source Code Of The Adaptive Histogram Equalization Method

Figure 6.4 - Result Of Adaptive Histogram Equalization Method

I used median filtering to remove noise from the images. Noise removal algorithms reduce or remove the visibility of noise by smoothing the image while preserving areas near contrast boundaries.
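The effect of a 3×3 median filter can be sketched in plain NumPy (illustrative only; the project's code uses OpenCV's medianBlur):

```python
import numpy as np

def median_filter3(img):
    # 3x3 median filter with edge padding: each pixel is replaced by
    # the median of its neighbourhood, which removes salt-and-pepper
    # specks while blurring edges less than a mean filter would
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    windows = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0).astype(img.dtype)
```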

Figure 6.5 - Source Code Of Median Filter


Figure 6.6 - Result Of Noise Removal Using The Median Filter

Image Segmentation

The characteristics of panoramic X-ray images pose difficulties for segmentation, so traditional segmentation methods are not suitable for dental X-ray images and reduce the accuracy of subsequent results. Below are implementations of some traditional segmentation methods that demonstrate this issue.

The global thresholding segmentation method


Figure 6.7 - Source Code Of The Global Thresholding Method, F-Score Calculation, And Image Resizing

I used ten dental X-ray images and calculated the mean of the resulting F-scores to demonstrate the problem mentioned above.
Figure 6.8 - Source Code For Calculating The Total F-Score Value

Figure 6.9 - Result Of Global Thresholding Segmentation


Figure 6.10 - Source Code For Calculating The Mean F-Score Value

The mean F-score under global thresholding segmentation was very low, so the global thresholding method is not suitable for the dental X-ray segmentation process.
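The experiment can be sketched as follows: binarize the X-ray with one global threshold and score the mask against a ground-truth annotation. This is an illustrative NumPy version (the actual code uses OpenCV thresholding; Otsu's method is shown here as one common way to pick a single global threshold).

```python
import numpy as np

def otsu_threshold(img):
    # pick the single global threshold that maximizes the
    # between-class variance of foreground vs. background
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w_b, sum_b = 0, 0.0, 0.0, 0.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        w_f = total - w_b
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def f_score(pred, truth):
    # F1 score between two binary masks: harmonic mean of
    # precision and recall over foreground pixels
    tp = np.logical_and(pred, truth).sum()
    if tp == 0:
        return 0.0
    precision, recall = tp / pred.sum(), tp / truth.sum()
    return 2 * precision * recall / (precision + recall)
```

Averaging the F-score over the ten test images gives the mean value reported above; a low mean indicates that one global threshold cannot separate teeth from the uneven background of a panoramic X-ray.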

K means Clustering segmentation method

Figure 6.11 - Source Code For K Means Clustering Segmentation

Here I used ten X-ray images to calculate the mean Intersection over Union (IoU), again demonstrating the problem mentioned above.
Figure 6.12 - Source Code For Calculating The Sum Of F-Score Values And Displaying The Output Image

K means clustering segmentation result
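An intensity-based k-means segmentation can be sketched in NumPy as follows (illustrative only; the project uses OpenCV's kmeans, and the deterministic centre initialisation here is a simplification of the random initialisation typically used):

```python
import numpy as np

def kmeans_segment(img, k=3, iters=10):
    # cluster pixel intensities into k groups; the label image is
    # the segmentation, with one label per intensity cluster
    data = img.reshape(-1, 1).astype(float)
    # spread initial centres evenly over the intensity range
    centres = np.linspace(data.min(), data.max(), k).reshape(k, 1)
    for _ in range(iters):
        dist = np.abs(data - centres.T)       # (n_pixels, k)
        labels = dist.argmin(axis=1)          # assign nearest centre
        for c in range(k):                    # update step
            if np.any(labels == c):
                centres[c] = data[labels == c].mean()
    return labels.reshape(img.shape)
```

Because k-means looks only at intensity, structures with similar grey levels (teeth, jawbone, bright restorations) end up in the same cluster, which is consistent with the low scores observed.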


Watershed segmentation

Figure 6.13 - Source Code For Watershed Segmentation


Figure 6.14 - Result Of Watershed Segmentation

Dental X-ray image segmentation using the above methods shows low accuracy. Learning-based techniques may be one option for segmenting the collected dataset, so I will build a model to segment dental X-ray images using the U-Net network, a type of convolutional neural network (CNN).
6.2.2. Face recognition and dental X-ray mapping

Add AR Foundation, which allows you to create augmented reality experiences once and then build them for either Android or iOS devices without modification, and the ARCore plugin, which is used to create augmented reality apps for Android devices.

Figure 6.15 – Adding AR Foundation And ARCore

Add XR Plugin Management, which helps streamline XR plug-in lifecycle management and provides build-time UI via the Unity Unified Settings system.

Figure 6.16 – Adding XR Plugin Management


Face identification is done using Unity AR Foundation. The face mesh is made up of 468 points that represent the human face, and the face is also characterized in terms of its center pose.

Figure 6.17 – Face Identification

Map a sample teeth set into face prefab

Figure 6.18 – Map Teeth To Prefab


The teeth set is mapped onto the human face using Unity AR Foundation: from the 468 identified points that represent the face, the mouth area is located, and the given teeth-set image is then mapped onto the face.

Figure 6.19 - Map Teeth Image To Face

Here is some code I have written to implement the dental image-to-face mapping.

Figure 6.20 – Code For Module 2(1)


Figure 6.21 – Code For Module 2(2)

Figure 6.22 – Code For Module 2(3)


Figure 6.23 – Code For Module 2(4)
6.2.3. Dental mold recognition and dental X-ray mapping
Import the Vuforia plugin into Unity's AR project and create an object target. Then scan the dental mold and create an .obj file of it. Upload the file to the Vuforia Target Manager and create the object database. Finally, map a 3D object onto the dental mold to show that the mold has been identified.

Figure 6.24 – 3D Model Generation Of Dental Mold


Figure 6.25 – Vuforia Database Generation

Figure 6.26 – Vuforia Model Generation


Figure 6.27 – Unity Environment Of Model Detection

Figure 6.28 – Mold Detection


6.2.4. VR simulation

Environment

Figure 6.29 - Environment Creation For Module 4

Hand Physics and animations


Animator panel

Figure 6.30 - Animator Panel In Unity VR


Grab animation

Figure 6.31 - Grab Animation Creation

Pinch animation
Figure 6.32 - Pinch Animation Creation
Script for hand physics

Figure 6.33 - Code For Module 4

6.3. Summary
The implementation of each module was discussed in this chapter.
CHAPTER 7
7. DISCUSSION
Dental diagnosis is still difficult, and its accuracy is limited. This issue emphasizes the need for reliable expert opinions so that dentists can identify diseases at an early stage. Our research team is implementing an application to automatically identify missing and restored teeth. In this study we classified and identified missing and restored teeth after mapping the processed dental X-ray onto the human face and a dental mold. We also develop a VR simulation to facilitate the clinical practice process for dental students.
APPENDIX A
Individual contribution to the project

184131H - Prasad Y.B.


My contribution to this project is dental X-ray image processing and classification. At the beginning I annotated the dental X-ray images, focusing on detecting teeth and classifying them as missing or restored.

I learned how to work with the OpenCV library and Python. Before the preprocessing stage I used data augmentation to generate various copies of the original images and reduce the risk of overfitting. In the preprocessing stage, Adaptive Histogram Equalization was used to enhance the images, and a median filter was used to remove noise such as salt-and-pepper noise.

Then, in the image segmentation process, I measured the accuracy of some traditional segmentation methods: watershed, K-means clustering, and global thresholding. The accuracy of each method was very low, so I had to adopt a learning-based method for segmentation, the U-Net network, which is based on CNNs. The U-Net network and CNNs were new technologies to me, so I started learning how to build a model using the U-Net network.

184064E - Jayathilaka K.G.D.H.

My module is face recognition and dental X-ray mapping. In this module I need to identify the human face and map the output of the dental X-ray image processing module onto that face. Because the dental X-ray image processing module is not finished yet, I had to find sample data for the mapping mechanism. We were asked to implement this module using augmented reality, so I selected Unity as the platform, since the AI used by Unity supports high-quality real-time occlusion and object tracking. First, I read research papers to learn about existing augmented reality technologies and applications related to my research area, and their limitations. From this I identified the two main methods used in Unity AR for tracking real-time objects: AR Foundation and Vuforia. I decided to start with AR Foundation. Since Unity and augmented reality were entirely new to me, before starting the implementation I worked through several tutorials and read documentation to become familiar with these technologies. I then broke my module down into three main stages: AR face recognition, dental X-ray mapping using AR, and evaluation. First I implemented human face recognition using AR Foundation, from whose results I could extract the mouth area of the face. After identifying the mouth area, I mapped a sample teeth data set onto the human face using AR Foundation, through an Android application that maps the teeth set onto the face using the mobile camera. I hope to implement the face recognition and dental X-ray mapping steps using Vuforia as well, and then evaluate the two approaches and find the better one.

184120A - Perakotuwa H.P.N.S.


My module is to implement dental mold recognition and map the output of the dental X-ray image processing module onto the dental mold. The requirement was to implement this module using augmented reality. Before starting the implementation I went through research papers related to augmented reality and its use in dentistry. After analysing existing systems and their limitations and drawbacks, I decided to implement an augmented reality mobile application for this module. I recognized that the ideal platform for this purpose is Unity AR, since it provides real-time object tracking functionality. As I was not familiar with the Unity platform, I went through Unity's documentation. With the gathered knowledge, I identified that the Vuforia and AR Foundation plugins can be used for object tracking. I then divided my module into three sub-components: mold identification, mapping the X-ray to the mold, and evaluation. I started the implementation with Vuforia and implemented dental mold identification. Vuforia supports two methods for object identification: the first uses an existing CAD file of the object, while the second needs a scanned file of the 3D object. Since there were no existing CAD files for the dental mold we are using, I had to scan the mold and create a 3D file using a scanning application. I used an Android mobile application to identify the dental mold. I hope to map the X-ray onto the dental mold using the mobile application's camera, implement the same functionality using the AR Foundation plugin, and finally evaluate both plugins.

184151T - Sandakelum P.A.K.M.


My module is to implement a virtual reality simulation to make learning easier for dental students. The doctors asked us to use hand motion tracking instead of VR gloves; the only equipment they asked us to use is a virtual reality HMD. I selected Unity as the platform because the artificial intelligence used by Unity supports high-quality real-time occlusion and object tracking. Since Unity and virtual reality were new to me, I followed several YouTube tutorials and read several articles to become familiar with those technologies. After studying the technologies, I divided my module into stages: creating the 3D environment; animations, physics, and scripting; hand tracking; and evaluation of the VR system. To create the 3D environment, I had to create the needed 3D models myself, so I studied the Blender software and modeled the 3D objects with it. After that, I learned about adding animations and hand physics to the virtual simulation: I followed some basic animation and physics tutorials, customized those techniques for my scenario, and added hand physics and animations to my simulation. In this way, I implemented the virtual environment needed for the simulation. Next, I hope to implement hand tracking and develop the clinical environment scenario. Finally, I hope to evaluate the VR clinical educational procedure, along with hand tracking, with dental students and experts.
