Project Proposal: Level 4
When it comes to the technological resources available to dentists, there is a gap in
mapping dental X-ray scans onto actual human faces. This research study will identify
potential ways of filling that gap using immersive technologies such as Augmented
Reality. Identifying abnormalities in the X-ray before mapping it onto the human face
will also come in handy. On the other hand, due to the current pandemic and crisis situation,
many medical students who are studying to become dentists are having issues with clinical
practice. This research study will also identify the feasibility of simulating practicals in
the dental domain using Virtual Reality.
The proposed solution mainly consists of four modules. Dental radiographs, commonly
called dental X-rays, are the most widely used method of dental imaging. Dental X-rays
are used to detect cavities, bone loss, malignant or benign tumors, hidden dental structures,
etc. There are two main users of the proposed system: dentists and dental students. First,
dental X-rays are gathered and abnormalities in them are identified. The system also
identifies from which side of the teeth set the X-ray was taken. After processing the dental
X-ray image, it is mapped onto the human face and the relevant details are extracted. This
helps dentists during procedures such as oral and maxillofacial surgery, orthodontics,
implantology, and restorative dentistry. The processed dental X-ray image is mapped onto
a dental mold as well; its purpose is to make learning easier for dental students. After
analyzing the data from the previous modules, the results can be accessed by dentists and
dental students. Dentists can use those data directly for their surgeries and treatments, and
the analyzed data is used to implement a VR simulation to train dental students.
Table of Contents
CHAPTER 1
1. INTRODUCTION
1.1. Introduction
1.4.1. Aim
1.4.2. Objectives
1.6. Summary
CHAPTER 2
2. LITERATURE REVIEW
2.1. Introduction
2.4. Summary
CHAPTER 3
3. TECHNOLOGY ADAPTED
3.1. Introduction
3.3. Summary
CHAPTER 4
4. OUR APPROACH
4.1. Introduction
4.2. Dental X-ray mapping using AR and VR dental simulator
4.3. Users
4.4. Input
4.6. Output
4.7. Summary
CHAPTER 5
5. ANALYSIS & DESIGN
5.1. Introduction
5.3. Summary
CHAPTER 6
6. IMPLEMENTATION
6.1. Introduction
6.2. Implementation
6.2.4. VR Simulation
6.3. Summary
CHAPTER 7
7. DISCUSSION
REFERENCES
APPENDIX A
Tables and Figures
Figure 2.1 – Comparison Between Existing Systems Related To Module 1
Figure 2.2 – Comparison Between Existing Systems Related To Module 2, Module 3 And Module 4
The idea of oral hygiene steadily became more essential with the growth of the
elderly population; hence, the importance of dental care issues increased. Between
2015 and 2020, the growth rate of the global market for oral medical equipment was
4.7% [3]. As per World Health Organization data, nearly all adults and more than
60% of teenagers worldwide have some form of dental caries [4]. Even though
dental diseases are gradually increasing, the rate of available treatment is
comparatively low [3]. Therefore, enhancing current dental practice with
technology-based methods and equipment can increase the growth rate of treatment.
After having discussions with professional dentists in Sri Lanka, we identified a
lack of technological resources to address the above problems. This inspired and
motivated the proposed solution.
After having discussions with professional dentists in Sri Lanka, we identified
a few problems that can be taken up in a research study.
On the other hand, due to the current pandemic and crisis situation, many medical
students who are studying to become dentists are having issues with clinical
practice. This research study will also identify the feasibility of simulating
practicals in the dental domain using Virtual Reality.
1.6. Summary
This chapter includes an overview of the project research area, together with the
research's background and motivation, the problem to be solved, and the research's
aim and objectives. The rest of the report is organized as follows.
The second chapter discusses comparable work and provides a review of previous
research. Chapter 3 describes the technology used in the research implementation
process. In Chapter 4, we detail our approach to resolving the problem. The
proposed system and its submodules are analyzed and designed in Chapter 5.
Chapter 6 describes the implementation carried out so far. Chapter 7 concludes the
report with a discussion and summary.
CHAPTER 2
2. LITERATURE REVIEW
2.1. Introduction
This chapter covers the effectiveness of the proposed system while discussing,
comparing, and evaluating other approaches, solutions, and techniques applied in
the addressed problem area.
Jie Yang, Yuchen Xie, and Lin Lui, in their work on automated dental image
analysis through deep learning on a small dataset, provided periapical dental
X-ray images acquired before and after the operations, together with the
datasets, processes, and outcomes. Their approach assisted clinicians in
classifying diseases as improving, deteriorating, or showing no clear change.
A convolutional neural network with a transfer learning methodology, NASNet,
has been implemented in the work of (AL-Ghamdi, 2022). Cavity, filling, and
implant are the three classes used in the classification process. The small
training dataset contained 116 patient images. Neural networks with multiple
max-pooling layers, dropout layers, and activation functions are used in the
image processing stages.
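The cited work's code is not reproduced here; as a generic, hedged sketch of the kind of NASNet transfer learning setup described (the input size, classifier head, and training settings below are all assumptions, not the study's actual values):

```python
import tensorflow as tf

# Generic transfer learning sketch with NASNetMobile for three classes
# (cavity, filling, implant). Input size, dropout rate, optimizer, and
# epoch count are assumed values, not those of the cited study.
base = tf.keras.applications.NASNetMobile(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # reuse pretrained features on the small dataset

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```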
2.3.2. Only one type of object in the panoramic tooth image was detected
The location of the restoration objects in the data has been successfully found
using an object detection model constructed from panoramic dental X-ray images
(D Suryani, M N Shoumi, & R Wakhidah, 2021). In the panoramic tooth images,
only one type of object, a tooth restoration, was detected. This is because it
is challenging to find a dataset of panoramic dental images that includes a
variety of objects, such as dental implants and endodontic treatments. The
object detection results on the panoramic tooth images yield confidence values
between 0.91 and 0.96.
To extract the restoration component from dental X-ray images, Nitin Kumar,
H. S. Bhadauria, and Anuj Kumar proposed integrating fuzzy clustering with
iterative level set active contour segmentation for the detection of dental
restorations. In this approach, the image is preprocessed with a median filter
and segmented using fuzzy clustering.
Jiafa Mao, Kaihui Wang, Yahong Hu, Weiguo Sheng, and Qixin Feng proposed a
GrabCut algorithm for dental X-ray images based on full threshold segmentation.
They obtained the contour set of whole teeth and crowns. The synthetic image of
the contour and crown is subjected to a morphological open operation and median
filtering before being used as a mask for GrabCut to create the target tooth
image.
Said E.H., Dias E.M., and Nasar G.F. used mathematical morphology to propose a
method for segmenting teeth in digitized dental X-ray films. To enhance the
effectiveness of the teeth segmentation, a gray-scale contrast stretching
transformation is used. They concluded that their approach has the lowest
failure rate and can handle both bitewing and periapical dental radiograph views.
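None of the cited implementations are reproduced here; purely as a hedged illustration of the morphology-based segmentation idea just described (gray-scale contrast stretching followed by thresholding and morphological operations, with every parameter and file name assumed), an OpenCV sketch could look like this:

```python
import cv2
import numpy as np

# Placeholder input: a digitized bitewing or periapical radiograph.
gray = cv2.imread("bitewing_sample.png", cv2.IMREAD_GRAYSCALE)

# Gray-scale contrast stretching: rescale intensities to the full 0-255 range.
stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)

# Threshold, then apply morphological opening and closing to clean up
# the binary teeth regions. Kernel size is an assumed value.
_, binary = cv2.threshold(stretched, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = np.ones((5, 5), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
segmented = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("teeth_segmented.png", segmented)
```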
Study: Llena et al., 2018, Spain [7]
Intervention: AR cavity models on computers and mobile devices
Participants: 43 third-year dental students (DS)
Outcome measures: Theoretical knowledge prior to, shortly following, and six months after training; clinical skills; satisfaction questionnaire
Assessed content: Ten speculative ideas; preparation of Class I and Class II cavities; students' satisfaction
Results: There were no significant variations in knowledge between groups; however, there were substantial disparities in cavity depth and extent for Class I and Class II cavities. Computers were favored by students over mobile devices.
Figure 2.2 – Comparison Between Existing Systems Related To Module 2, Module 3 And Module 4
2.4. Summary
This chapter discussed the existing systems and the limitations of others' work.
CHAPTER 3
3. TECHNOLOGY ADAPTED
3.1. Introduction
This chapter focuses on the project's technologies. In this chapter, machine
learning, deep learning techniques, and computer vision are briefly described.
In addition, the Unity platform with the Vuforia and AR Foundation plugins used
in the Augmented Reality implementation, and the Unity and Blender platforms
used in Virtual Reality, are described.
3.2.2. Vuforia
A library called Vuforia is used to develop augmented reality on portable
electronics. In order to project virtual information such as text, photos,
videos, or 3D animations on the target images and objects, Vuforia analyzes
images and 3D objects to find and record features. Because the library that
comes with Vuforia contains the necessary basic code to implement AR, the
developer can totally concentrate on the final product they are creating
without having to worry about how the system will function at its most
fundamental level. This library works with the Unity Game Engine,
Android, and iOS devices.
3.2.3. AR Foundation
The AR Foundation package contains all of the GameObjects and classes
required to create interactive AR experiences in Unity rapidly. To create
comprehensive apps, AR Foundation combines cutting-edge capabilities
from ARKit, ARCore, Magic Leap, and HoloLens with special Unity
features. This framework makes it possible to utilize each of these
characteristics in a single workflow.
● ARCore - ARCore is Android's augmented reality framework.
AR Foundation collaborates with ARCore to bring AR
capabilities to Android devices.
● ARKit - ARKit is Apple's augmented reality framework. AR
Foundation collaborates with ARKit to bring AR capabilities to
Apple devices.
● XR Plugin Management - The XR Plugin Management package
provides a straightforward management tool for platform-
specific XR plug-ins like ARKit and ARCore.
3.2.4. Blender
Blender is used to create the objects used in the Virtual Reality simulation.
Blender is a free and open-source 3D computer graphics software tool set
that can be used to create animated films, visual effects, art, 3D-printed
models, motion graphics, interactive 3D apps, virtual reality, and,
previously, video games. 3D modeling, UV mapping, texturing, digital
sketching, raster graphics editing, rigging and skinning, fluid and smoke
simulation, particle simulation, soft body simulation, sculpting, animation,
match movement, rendering, motion graphics, video editing, and
compositing are among the features of Blender.
3.2.5. Android
It is a mobile operating system intended specifically for touchscreen devices
such as smartphones and tablets. It is based on a modified version of the
Linux kernel as well as other open source applications. The Open Handset
Alliance, an Android developer alliance, created it, and Google officially
supports it.
The source code has been used to build a wide range of Android variants
for various devices, including game consoles, portable media players,
digital cameras, and computers. To create the mobile application, we use
Android.
3.2.6. Computer Vision
Computer vision is the process by which computers and systems extract
meaningful information from digital videos, pictures, and other visual
inputs. AI enables computers to think, and computer vision enables them to
see, observe, and understand. Some examples of well-established computer
vision tasks are image classification, object detection, tracking of moving
objects, and content-based image retrieval. OpenCV (imported as cv2) is used
in our project since the project is based on image processing. OpenCV is an
open-source library that can be used on different platforms.
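As a small illustration of the kind of OpenCV-based processing this project relies on (the file name and the particular operations below are generic examples, not the project's exact pipeline):

```python
import cv2

# Load a dental X-ray as a grayscale image (path is a placeholder).
image = cv2.imread("xray_sample.png", cv2.IMREAD_GRAYSCALE)

# Two generic computer vision operations: noise reduction followed by
# edge detection to highlight tooth boundaries.
blurred = cv2.GaussianBlur(image, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("xray_edges.png", edges)
```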
3.2.9.2. C#
The programming language used in Unity is C#. Unity's scripting languages
are object-oriented. Like any other language, scripting languages comprise
constructs, the most important of which are variables, functions, and
classes.
3.2.9.3. Java
Java is a programming language that is used in the development of Android
apps. It is a class-based, object-oriented programming language with syntax
influenced by C++. Java's main goals are to be simple, object-oriented,
robust, secure, and high level.
The JVM (Java Virtual Machine) is used to run Java applications; however,
Android has its own virtual machine called the Dalvik Virtual Machine (DVM),
which is tailored for mobile devices.
3.3. Summary
This chapter discusses the technologies used and their suitability for the
implementations.
CHAPTER 4
4. OUR APPROACH
4.1. Introduction
This chapter provides a description of the proposed solution with reference to
the users, inputs, outputs, process, and technology that implement the solution.
Dentists can enter soft copies of dental X-rays using a mobile application, as shown
in the diagram above. The predicted image is then requested through an API call, and
the output results are sent back to the mobile application via the API, where the two
Augmented Reality modules can use the output image.
Figure 4.2 – Block Diagram Of The System Of Module 1, Module 2 And Module 3
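The report does not specify how this API is implemented; purely as an illustrative sketch under that caveat, a minimal Flask-style prediction endpoint (the /predict route, the "xray" field name, and the placeholder enhancement step are all assumptions) might look like this:

```python
from flask import Flask, request, send_file
import cv2
import numpy as np

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # Decode the uploaded X-ray sent by the mobile application.
    data = np.frombuffer(request.files["xray"].read(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)

    # Placeholder for the actual abnormality detection pipeline; here we
    # simply enhance the image so the sketch stays runnable.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    processed = clahe.apply(image)

    # Return the processed image to the mobile application.
    cv2.imwrite("predicted.png", processed)
    return send_file("predicted.png", mimetype="image/png")

if __name__ == "__main__":
    app.run()
```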
After receiving the dental X-ray images with the identified abnormalities, the
processed dental X-ray is mapped onto the human face and the relevant details are
extracted with the use of Augmented Reality. The dental X-ray images are mapped
onto the human face using the camera of the mobile device. This helps dentists
during procedures such as oral and maxillofacial surgery, orthodontics,
implantology, and restorative dentistry.
Moreover, the X-ray with the marked abnormalities is also mapped onto a dental mold
with the use of Augmented Reality. Here too, the dental X-ray images are mapped onto
the dental mold using the camera of the mobile device. This module helps dental
students to practice before they perform actual operations, and the dentist can also
use the mapped X-ray to explain surgeries to patients.
4.2.1. Users
The main users of this system are dentists who perform dental surgeries and
dental students engaged in pre-clinical studies. The output of the dental
X-ray image processing module is used by both surgeons and students. The
dental mold recognition and X-ray mapping module and the VR simulation module
are used by dental students, while the face recognition and X-ray mapping
module is used by dental surgeons.
4.2.2. Input
Soft copies of dental X-ray images are utilized as the input for the system.
4.2.4. Output
The X-ray mapped onto the patient's face, the X-ray mapped onto the dental mold,
and the VR simulation.
4.3. Summary
This chapter discusses the approach of this research, including descriptions
of the entire system, the targeted users, the inputs to the system, how the
inputs are processed, and the final output of the system.
CHAPTER 5
5. ANALYSIS & DESIGN
5.1. Introduction
This chapter contains details of the design of the system including descriptions of
each module, what each module does and the interaction between the modules.
Dental X-ray images are used as the input to the radiographic image
processing algorithms, which apply mathematical operations to the
radiographs. Numerous digital image processing methods are available that
can be applied in the input processing steps.
Figure 5.5 – (a) Original and (b) augmented samples. The original rows represent the original sample counts, while the enhanced rows demonstrate how data augmentation increased the number of samples.
Average Filter - The average filter is an effective linear filter. Each
pixel in the image is replaced by the mean value of its neighbors,
including itself. It is well suited to reducing Gaussian noise, is
practical for reducing impulsive noise, and is simple to design. However,
it does not preserve the image's edges.
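A brief sketch of the averaging operation described above (the kernel size and file name are assumed values):

```python
import cv2

# Placeholder path to an input X-ray.
image = cv2.imread("xray_sample.png", cv2.IMREAD_GRAYSCALE)

# Average (box) filter: each pixel becomes the mean of its 3x3 neighborhood,
# including itself. The 3x3 kernel size is an assumed value.
smoothed = cv2.blur(image, (3, 3))

cv2.imwrite("xray_smoothed.png", smoothed)
```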
In this module, face recognition and dental X-ray mapping onto the human face
are done using Augmented Reality. The output data from the dental X-ray
image processing module is used as the input data of this module. Since the
dental X-ray image processing module is not completely finished yet, I have
used some sample data.
1. AR Face Recognition
2. Dental X-ray mapping using AR
3. Evaluation
First, I identify the area of the face and then map the dental X-ray image
output from the dental X-ray image processing module onto the face using
Augmented Reality. For the dental image mapping to the face, we have
developed an Android mobile application. By using the camera of the
Android device, we can map the input X-ray image onto the particular human
face. Unity is the development platform I have chosen to develop this
module. Unity is a cross-platform game engine that provides developers
with a simple workspace in which to work on and create AR applications.
There are two main plugins used in Unity to develop Augmented Reality
applications: AR Foundation and Vuforia. In this module, I perform face
recognition and dental X-ray mapping onto the human face using both of
these plugins and finally evaluate the better solution by comparing the
accuracy of each.
● AR Face Recognition
First, we need to identify the human face before the dental X-ray mapping
stage. AR Foundation provides ARFace to ease the face recognition process.
ARFace objects are used to represent faces and are generated, updated, and
removed by the ARFaceManager. The ARFaceManager fires a
facesChanged event once each frame, which comprises three lists, faces that
have been added, faces that have been changed, and faces that have been
removed since the last frame. When the ARFaceManager identifies a face
in the scene, it creates a Prefab with an ARFace component to track the face.
The ARFace component, which is connected to the Face Prefab, provides
access to detected faces. Vertices, indices, vertex normals, and texture
coordinates are all provided by ARFace. A central pose, three region poses,
and a 3D face mesh are all provided via the Augmented Faces API.
● The positive X-axis (X+) points toward the left ear.
● The positive Y-axis (Y+) points upward out of the face.
● The positive Z-axis (Z+) points into the center of the head.
Region poses mark significant portions of a user's face, located on the
left, on the right, and at the tip of the nose. The region poses are
oriented along the same axes as the center pose. The face mesh is made up
of 468 points that represent the human face and is also defined relative
to the center pose.
In the first stage, I identified the area of the face and the 468 points
that represent the human face. From that, I could identify the mouth area
of the face. As the next step, the output dental X-ray from the dental
X-ray image processing module needs to be mapped onto the human mouth
area. To map this dental X-ray, doctors requested us to develop an Android
application. In this Android application, we can select the particular
patient's X-ray. The mapping is done using the camera of the Android
device; when we point the device's camera at the human face, the
particular X-ray is mapped onto the face. Since the output image of the
dental X-ray image processing module contains the abnormal areas of the
teeth set, we can see which areas of the teeth set have those
abnormalities.
● Evaluation
This module focuses on identifying the dental mold and mapping the X-ray onto
the mold using Augmented Reality. The output data from the dental X-ray
image processing module is used as the input data of this module. The
intention of this module is to improve the pre-clinical practice of dental
students.
3. Evaluation
The output of the dental X-ray image processing module is used as the input
for this module. The output X-ray image is mapped onto the dental mold using
Augmented Reality through an Android application. With the camera of the
Android device, we can map the input X-ray image onto the dental mold. The
Unity platform is used to develop this module. Unity is a cross-platform
game engine that provides developers with a simple workspace in which to
work on and create AR applications. AR Foundation and Vuforia are the two
main plugins that can be used in Unity to develop Augmented Reality
applications. My intention is to develop this module using both of these
plugins and evaluate the accuracy of the procedure with external users.
After identifying the dental mold, the output X-ray image of the first
module must be mapped onto the mold. The X-ray is uploaded to the Android
device, and with the device's camera the X-ray can be mapped onto the dental
mold. As the X-ray image contains the abnormal areas of the teeth set,
students can identify the differences between normal teeth and abnormal
teeth.
● Evaluation
Virtual reality represents the first chance to merge 3D visual imaging with
interaction at a level that allows realistic simulation of intricate anatomic
dissection or surgical operations. It can help students understand surgical
anatomy by allowing them to investigate the interrelationships of various
organ systems from viewpoints not available through other typical teaching
methodologies. A researcher can rehearse new surgical operations many times
before attempting them on an animal model. Teaching surgical skills becomes
easier, there are fewer dangers, and fewer animals are required for study.
In its present typical form, the simulator is a stand-alone training system
built around a head-mounted display (HMD).
This module focuses on a virtual reality simulation. Many dentistry students
are struggling with clinical procedures as a result of the present epidemic
and crisis circumstances. A virtual simulation is implemented, using the
results of the prior modules, to make learning simpler for dentistry
students. Instead of VR gloves, hand motion tracking is employed here; only
a virtual reality HMD, a head-mounted device that delivers virtual reality
to the wearer, will be used.
The next steps detail how the proposed technique works and the methods
we will use.
● Hand Tracking
Challenges
- Object interaction
- Tracking Location
5.3. Summary
Analysis and design of each module are discussed in this chapter.
CHAPTER 6
6. IMPLEMENTATION
6.1. Introduction
In this chapter, we discuss the overall implementation of the project so far.
6.2. Implementation
6.2.1. Dental X-ray image processing
Data Augmentation
Horizontal Flip: Image pixels are mirrored horizontally from one half of
the image to the other in this procedure.
Figure 6.1 - Source Code Of Rotate And Flipped Input Image
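The actual source code is shown in Figure 6.1; a minimal OpenCV sketch of this kind of flip-and-rotate augmentation (the file names are placeholders) would be:

```python
import cv2

# Load a dental X-ray in grayscale (path is a placeholder).
image = cv2.imread("xray_sample.png", cv2.IMREAD_GRAYSCALE)

# Horizontal flip: mirror the pixels across the vertical axis.
flipped = cv2.flip(image, 1)

# Rotate the image by 90 degrees clockwise as an additional augmented copy.
rotated = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)

cv2.imwrite("xray_flipped.png", flipped)
cv2.imwrite("xray_rotated.png", rotated)
```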
Preprocessing
In this phase, I use preprocessing techniques to improve the quality of the
dental X-ray images. Here, I used Adaptive Histogram Equalization (AHE) to
enhance the images.
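OpenCV exposes adaptive histogram equalization through its contrast-limited variant (CLAHE); as a hedged sketch, with the clip limit and tile grid size as assumed values rather than the project's actual settings:

```python
import cv2

# Placeholder path to an input X-ray.
image = cv2.imread("xray_sample.png", cv2.IMREAD_GRAYSCALE)

# Contrast-limited adaptive histogram equalization; clipLimit and
# tileGridSize are assumed values, not the project's actual settings.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)

cv2.imwrite("xray_enhanced.png", enhanced)
```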
Image Segmentation
I used ten dental X-ray images and calculated the mean F-score of the
results to demonstrate the problem mentioned above.
Figure 6.8 - Source Code For Calculate Total F-Score Value
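The actual source is shown in Figure 6.8; under the assumption that binary ground-truth masks are available (the file names and the fixed threshold below are placeholders), the mean F-score computation could be sketched as:

```python
import cv2
import numpy as np

def f_score(pred_mask, true_mask):
    """F-score (F1) between two binary masks with values 0/255."""
    pred = pred_mask > 0
    true = true_mask > 0
    tp = np.logical_and(pred, true).sum()
    fp = np.logical_and(pred, ~true).sum()
    fn = np.logical_and(~pred, true).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)

scores = []
for i in range(1, 11):  # ten test X-rays; file names are placeholders
    image = cv2.imread(f"xray_{i}.png", cv2.IMREAD_GRAYSCALE)
    truth = cv2.imread(f"mask_{i}.png", cv2.IMREAD_GRAYSCALE)
    # Global thresholding segmentation with an assumed fixed threshold.
    _, segmented = cv2.threshold(image, 127, 255, cv2.THRESH_BINARY)
    scores.append(f_score(segmented, truth))

print("Mean F-score:", np.mean(scores))
```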
The mean F-score was very low under global thresholding segmentation, so the
global thresholding segmentation method is not suitable for the dental X-ray
segmentation process.
Here, I used ten X-ray images to calculate the mean Intersection over Union
(IoU) to demonstrate the problem mentioned above.
Figure 6.12 - Source Code For Calculate Sum F-Score Value And Display Output Image
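Similarly, the actual source is shown in Figure 6.12; a minimal sketch of the IoU computation over the same ten images (file names are placeholders) could be:

```python
import cv2
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over Union between two binary masks with values 0/255."""
    pred = pred_mask > 0
    true = true_mask > 0
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return intersection / (union + 1e-9)

ious = []
for i in range(1, 11):  # the same ten test X-rays; file names are placeholders
    segmented = cv2.imread(f"segmented_{i}.png", cv2.IMREAD_GRAYSCALE)
    truth = cv2.imread(f"mask_{i}.png", cv2.IMREAD_GRAYSCALE)
    ious.append(iou(segmented, truth))

print("Mean IoU:", np.mean(ious))
```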
Here is some of the code I have written to implement the face and dental image
mapping.
Environment
Pinch animation
Figure 6.32 - Pinch Animation Creation
Script for hand physics
6.3. Summary
Implementations of each module are discussed in this chapter.
CHAPTER 7
7. DISCUSSION
The dental diagnostic technique is still difficult, and its accuracy is limited. This
issue emphasizes the need for the best possible opinions so that dentists can identify
diseases at an early stage. Our research team is going to implement an application to
automatically identify missing teeth and restored teeth. In this study, we classified
and identified missing teeth and restored teeth after mapping the processed dental
X-ray onto the human face and the dental mold. We also develop a VR simulation to
facilitate the clinical practice process for dental students.
References
I learned how to work with the OpenCV library and Python. Before the image
preprocessing stage, I used data augmentation to generate multiple copies of the
original images and reduce the overfitting problem. In the preprocessing stage,
Adaptive Histogram Equalization was used to enhance the images, and a median
filter was then used to remove noise such as salt-and-pepper noise.
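A minimal sketch of that median filtering step (the kernel size and file names are assumptions):

```python
import cv2

# Placeholder path to an enhanced X-ray from the previous step.
enhanced = cv2.imread("xray_enhanced.png", cv2.IMREAD_GRAYSCALE)

# Median filtering removes salt-and-pepper noise while keeping edges
# sharper than an average filter; the 5x5 kernel size is an assumption.
denoised = cv2.medianBlur(enhanced, 5)

cv2.imwrite("xray_denoised.png", denoised)
```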
My module is face recognition and dental X-ray mapping. In this module, I need to
identify the human face and map the output of the dental X-ray image processing
module onto the particular human face. I had to find some sample data for the
mapping mechanism because the dental X-ray image processing module is not
finished yet. We were asked to implement this module using Augmented Reality, so
I selected Unity as the platform, since the AI behind Unity's functions aids in
high-quality real-time occlusion and object tracking. First, I read some research
papers to gain knowledge about existing Augmented Reality technologies and
applications related to my research area and their limitations. From this existing
knowledge, I identified the two main methods used in Unity to track real-time
objects for AR: AR Foundation and Vuforia. I decided to start with AR Foundation.
Since Unity and Augmented Reality were entirely new to me, before starting the
implementation I followed several tutorials and read documentation to become
familiar with these technologies. Then I broke my module down into three main
stages: AR face recognition, dental X-ray mapping using AR, and evaluation. First,
I implemented human face recognition using AR Foundation. Using the face
recognition results, I could extract the mouth area of the face. After identifying
the mouth area, I mapped a sample teeth data set onto the human face using AR
Foundation. Here I use an Android application to map the teeth set onto the human
face; we can map the given dental X-ray image onto the human face with the use of
the mobile device's camera. I hope to implement the face recognition and dental
X-ray mapping steps using Vuforia as well, and then to evaluate the two methods
and find the better one.