
THE UNIVERSITY OF BAMENDA

NATIONAL HIGHER POLYTECHNIC INSTITUTE

DEPARTMENT OF COMPUTER ENGINEERING

DESIGN AND IMPLEMENTATION OF A SYSTEM TO


AUTOMATICALLY RECORD STUDENT MARKS FROM
SCANNING SCRIPTS CASE STUDY UNIVERSITY OF BAMENDA

A Project Submitted to the Department of Computer Engineering in the


National Higher Polytechnic Institute of the University of Bamenda in Partial
Fulfilment of the Requirements for the Award of a Bachelor of Engineering
Degree in Computer Engineering

PRESENTED BY:

MEH MBEH IDA DELPHINE


REGISTRATION NUMBER: UBa19E0180

SUPERVISOR(S)

ENGR. NGONGBAN AUGUSTINE (Lecturer)

MR. DEREK NDI KONGYU (Assistant Lecturer)

JULY 2023
DECLARATION
I, Meh Mbeh Ida Delphine, registration No. UBa19E0180, of the Department of Computer
Engineering, National Higher Polytechnic Institute, The University of Bamenda, hereby
declare that this work titled “DESIGN AND IMPLEMENTATION OF A SYSTEM TO
AUTOMATICALLY RECORD STUDENT MARKS FROM SCANNING SCRIPTS
CASE STUDY UNIVERSITY OF BAMENDA” is my original work. It has not been
presented in any application for a degree or any other academic pursuit. All borrowed
ideas, national and international, have been duly acknowledged through citations.

Date: _________________ Signature of author _________________

CERTIFICATION
This is to certify that this dissertation titled “DESIGN AND IMPLEMENTATION OF A
SYSTEM TO AUTOMATICALLY RECORD STUDENT MARKS FROM SCANNING
SCRIPTS CASE STUDY UNIVERSITY OF BAMENDA” is the original work of Meh
Mbeh Ida Delphine. This work is submitted in partial fulfilment of the requirements for the
award of a Bachelor of Engineering Degree in Computer Engineering at the National Higher
Polytechnic Institute of the University of Bamenda, Cameroon.

Supervisor: _________________________________________________________

Engr NGONGBAN AUGUSTINE

Co-Supervisor: _______________________________________________________

Mr. DEREK NDI KONGYU

Head of Department: __________________________________________________

Dr. NDUKUM PASCALINE

The Director: _______________________________________________________

Prof. CHO-NGWA FIDELIS

ABSTRACT
The Mobile Application for Automated Student Mark Recording and Grading is an innovative
and user-friendly system designed to streamline the grading process for lecturers, especially in
the University of Bamenda. Leveraging cutting-edge technologies, including Flutter for the
mobile app, Firebase for authentication and storage, and Google Cloud Vision API for OCR
and handwriting recognition, the system presents an efficient and accurate solution for
digitizing student marks.

The application empowers lecturers to effortlessly record and manage student marks by
scanning and capturing handwritten scripts using their smartphones. Through optical character
recognition (OCR) and advanced handwriting recognition algorithms, the system accurately
extracts student marks and ID information from the scanned scripts, eliminating the need for
manual data entry and reducing errors.

The system's clean and intuitive user interface enables lecturers to review, verify, and update
the recognized marks before securely storing them in the cloud-based Firebase Cloud Firestore
database. Real-time data synchronization ensures that multiple lecturers can collaborate and
access the marks simultaneously.

Key Features:
1. User-friendly mobile application for capturing and recording student marks.
2. Integration with Google Cloud Vision API for OCR and handwriting recognition.
3. Firebase Authentication for secure and seamless lecturer logins.
4. Firestore database for real-time data synchronization and accessibility.
5. Grade sheet generation for comprehensive reports on student performance.
6. Scalable cloud-based architecture for enhanced performance and accessibility.
7. Data privacy and security measures to protect sensitive lecturer and student
information.

This application revolutionizes the grading process in higher education, providing lecturers
with a powerful tool to efficiently record, manage, and analyze student marks. By automating
time-consuming manual tasks, the system enables lecturers to focus more on effective teaching
and academic support. With its potential for further expansion and continuous improvement,

the application represents a transformative step towards a more efficient and accurate grading
ecosystem in educational institutions.

DEDICATION

This project is dedicated to my mother, Ms. Vivian Ekob Tenjoh, as well as my two sisters,
Enjeck Mbeh Cleopatra and Sih-Namboh Mbeh Indira.

ACKNOWLEDGEMENTS

I would like to thank the Director, Prof. CHO-NGWA FIDELIS, my Head of Department
(HOD), Dr. NDUKUM PASCALINE, and my supervisors, Engr. Ngongban Augustine
and Mr. Derek Ndi Kongyu, who willingly and tirelessly directed me throughout my
research period so that this work could be realized.
My gratitude also goes to the Director and Staff of the National Higher Polytechnic Institute
for their effort in equipping me with the knowledge I need to realize my dream of becoming
an excellent engineer.

I am most grateful to my parents and my entire family for supporting me morally, spiritually
through their prayers, and financially during the whole period of my studies and during this
research. Their encouragement, advice and support enabled me to put in the effort that
allowed me to conclude the study successfully.

Special thanks go to friends and classmates for their support in my educational life.

TABLE OF CONTENTS

DECLARATION ........................................................................................................................ i
CERTIFICATION ..................................................................................................................... ii
ABSTRACT.............................................................................................................................. iii
DEDICATION ........................................................................................................................... v
ACKNOWLEDGEMENTS ...................................................................................................... vi
TABLE OF CONTENTS ......................................................................................................... vii
LIST OF FIGURES .................................................................................................................. ix
CHAPTER 1 .............................................................................................................................. 1
INTRODUCTION ..................................................................................................................... 1
1.1: Background of the study ................................................................................................. 1
1.2: Statement of the problem ................................................................................................ 1
1.3: Rationale (Motivation and Justification of study) .......................................................... 2
1.4: Research Objectives ........................................................................................................ 3
1.4.1: General Objectives ................................................................................................... 3
1.4.2: Specific objectives ................................................................................................... 3
1.5: Research Questions ......................................................................................................... 4
SIGNIFICANCE OF THE STUDY....................................................................................... 4
1.6: Scope/delimitation .......................................................................................................... 5
OVERVIEW OF DISSERTATION CHAPTERS ................................................................. 6
CHAPTER 2 .............................................................................................................................. 8
LITERATURE REVIEW .......................................................................................................... 8
2.1 Conceptual framework ..................................................................................................... 8
2.1.1. Character Recognition ................................................................................................. 8
2.1.1.1: Online Character Recognition .............................................................................. 8
2.1.1.2: Offline Character Recognition .............................................................................. 9
2.1.1.3: World Languages and Scripts ............................................................................. 15
2.1.3 English language ..................................................................................................... 17
2.2: Empirical literature review ........................................................................................... 19
CHAPTER 3 ............................................................................................................................ 23
METHODOLOGY .................................................................................................................. 23
3.1: Research & Analysis ..................................................................................................... 23
3.1.1 Needs Assessment:.................................................................................................. 23
3.1.2 Technology Evaluation: .......................................................................................... 24
3.1.3 OCR and Handwriting Recognition:....................................................................... 24
3.1.4 Backend Infrastructure: ........................................................................................... 24

3.1.5 Data Storage & Management: ................................................................................. 24
3.1.6 Ethical Considerations: ........................................................................................... 25
Planning and Requirement Analysis .................................................................................... 25
Functional Requirements ................................................................................................. 25
Non-functional Requirements .......................................................................................... 27
3.2: Design & System Architecture ..................................................................................... 28
3.2.2 User Authentication and Data Storage:................................................................... 30
3.2.3 Handwriting Recognition (Google Cloud Vision API): ......................................... 30
3.2.4 Backend Server (Flask): .......................................................................................... 31
3.2.5 Grading and Mark Entry: ........................................................................................ 31
3.2.6 Grade Sheet Generation: ......................................................................................... 31
CHAPTER 4 ............................................................................................................................ 48
RESULTS AND DISCUSSION ............................................................................................. 48
4.1: Introduction................................................................................................................... 48
4.2: Application icon installed ............................................................................................. 48
4.4: The Login page ......................................................................................................... 49
4.5: The Home Page ............................................................................................................. 50
4.7: Extract Screen ............................................................................................................... 52
4.8: Generate CSV ............................................................................................................... 53
CHAPTER 5 ............................................................................................................................ 56
CONCLUSION AND RECOMMENDATIONS .................................................................... 56
5.1 Conclusion ..................................................................................................................... 56
5.2 Future Recommendations & Suggestions .................................................................. 57

LIST OF FIGURES

Figure 1: Classification of Character Recognition ............................................................... 8


Figure 2: summary of OCR Processes ..................................................................................... 11
Figure 3: Normalisation of Characters ..................................................................................... 12
Figure 4: Segmentation Methods (source: Chaudhuri et al., 2017) ..................................... 13
Figure 5: Cursive and non-cursive scripts .............................................................................. 17
Figure 6: Gradish Use Case Diagram ...................................................................................... 23
Figure 7: System Architecture Diagram .................................................................................. 29
Figure 8: Gradish Flow chart ................................................................................................... 33
Figure 9: Gradish app directory tree ........................................................................................ 37
Figure 10: Gradish LoginScreen class ..................................................................................... 38
Figure 11: Gradish RegisterScreen class ................................................................................. 39
Figure 12: Gradish HomeScreen class ..................................................................................... 40
Figure 13: Gradish Create Course Screen ................................................................................ 41
Figure 14: Gradish SelectedImage screen................................................................................ 42
Figure 15: Gradish Sequence diagram ..................................................................................... 46
Figure 16: Gradish application icon on a mobile phone ........................................................... 48
Figure 17: Register page for Gradish ....................................................................................... 49
Figure 18: Sign In Page of Gradish.......................................................................................... 50
Figure 19: Homepage of Gradish ............................................................................................. 51
Figure 20: Create Course Screen ............................................................................................. 52
Figure 21: Scan script for Gradish ........................................................................................... 53
Figure 22: Generate csv popup ................................................................................................ 54
Figure 23: Gradish-generated CSV file viewed as an Excel sheet ........................................... 55

CHAPTER 1

INTRODUCTION

1.1 Background of the study


In the traditional educational setting, the process of recording marks from academic scripts
onto a digital format is often time-consuming, prone to errors, and involves manual data entry.
To address these challenges, this study aims to develop a mobile application that leverages
smartphone cameras to scan academic scripts and automatically record the marks on the sheets
of paper to a CSV (Comma-Separated Values) file. By automating the mark recording process,
lecturers can save time, reduce errors, and enhance efficiency in managing student assessments.
The advent of mobile technology has brought about a new way of grading that is efficient and
streamlined. Mobile applications that enable lecturers to grade scripts using their phone
cameras have become increasingly popular. These applications use optical character
recognition (OCR) technology to detect text on scripts and record them automatically on a
spreadsheet or database.
OCR technology has advanced significantly in recent years, making it possible to accurately
detect text on various surfaces, including paper and digital screens. The use of this technology
in recording marks has the potential to revolutionize the way lecturers record marks.
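The recording step described above can be sketched in a few lines. This is an illustrative assumption, not the dissertation's actual implementation (which uses Flutter with a Flask backend): it assumes the OCR stage emits lines laid out as "matriculation-number mark", parses them with a hypothetical regular expression modelled on UBa registration numbers, and writes the result to CSV with Python's standard csv module.

```python
import csv
import io
import re

# Assumed OCR line layout: "<matriculation number> <mark>".
# The pattern mirrors registration numbers such as UBa19E0180 and is
# an illustrative assumption, not a verified university format.
LINE = re.compile(r"(UBa\d{2}[A-Z]\d{4})\s+(\d{1,3})")

def record_marks(ocr_lines, out):
    """Write recognized (matricule, mark) pairs as CSV rows."""
    writer = csv.writer(out)
    writer.writerow(["matricule", "mark"])  # header row
    for line in ocr_lines:
        m = LINE.search(line)
        if m:  # skip lines the OCR step garbled beyond recognition
            writer.writerow([m.group(1), m.group(2)])

buf = io.StringIO()
record_marks(["UBa19E0180 15", "noise", "UBa19E0181 12"], buf)
print(buf.getvalue())
```

In the real system the same rows would be written server-side and offered to the lecturer as a downloadable CSV file.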

1.2: Statement of the problem


The existing manual mark recording process for academic scripts is inefficient, error-prone,
and time-consuming for lecturers. The need to transcribe marks from paper-based scripts onto
a digital format hinders the efficiency of the grading process. There is a lack of automated
systems that utilize mobile technology and image processing algorithms to accurately extract
and record marks from scanned academic scripts, leading to potential errors, delays, and a
heavy workload on lecturers.

In the University of Bamenda, given that students' codes are manually input into the system by
lecturers, there is a high possibility of errors and of mixing up students' coded numbers. This
could result in missing or inaccurate marks. There is therefore a need for technology with which
coded numbers can be easily scanned into the system to curb these errors.

The development of a mobile-based automated mark recording system aims to address these
issues by providing a convenient and efficient solution for lecturers to scan academic scripts
using their smartphone cameras and automatically record the marks to a digital format. By
leveraging image processing algorithms and integrating with a backend system, the system will
accurately extract and interpret handwritten marks, eliminating the need for manual data entry
and reducing errors and time spent on recording marks.

The solution will provide lecturers with a user-friendly mobile application that streamlines the
mark recording process, enhances accuracy, and improves efficiency. The phone’s camera can
be used to take a picture of the student’s paper and then process the image to recognize the
handwriting. After the image is processed, the handwriting can be converted into text and the
marks can be recorded automatically. By automating the mark recording process, the system
aims to alleviate the burden on lecturers, reduce errors, and enhance the overall grading
experience, ultimately improving the efficiency of the educational assessment process.
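The capture-process-recognize-record flow described above can be outlined as follows. This is a hedged sketch, not the app's real code: the recognizer is injected as a plain function where the real system would call the Google Cloud Vision API, and the single-line layout and 0-100 mark range are assumptions made for illustration.

```python
# Hedged sketch of the capture-to-mark flow; all names are illustrative.
# The real recognizer would be a Google Cloud Vision API call, injected
# here as a parameter so the flow can run without network access.

def process_script(image_bytes, recognize):
    """image bytes -> raw OCR text -> {matricule, mark} record."""
    text = recognize(image_bytes)      # OCR / handwriting recognition step
    fields = text.split()              # assumed "<matricule> <mark>" layout
    matricule, mark = fields[0], int(fields[1])
    if not 0 <= mark <= 100:           # plausibility check (assumed scale)
        raise ValueError(f"implausible mark: {mark}")
    return {"matricule": matricule, "mark": mark}

fake_ocr = lambda _: "UBa19E0180 15"   # stand-in for the Vision API call
print(process_script(b"...", fake_ocr))  # → {'matricule': 'UBa19E0180', 'mark': 15}
```

Keeping the recognizer as a parameter also makes the plausibility check and parsing logic testable independently of the OCR service.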

1.3: Rationale (Motivation and Justification of study)


1. Time Efficiency: The current manual mark recording process for academic scripts at the
University of Bamenda is time-consuming and labor-intensive. Lecturers spend significant
amounts of time transcribing marks from paper-based scripts onto digital formats, which could
be better utilized for other teaching and research activities. By automating the mark recording
process using mobile technology, lecturers can save valuable time and allocate it to more
productive endeavors.

2. Error Reduction: Manual data entry is susceptible to human errors, such as misreading or
mistyping marks and coded numbers, which can lead to inaccuracies in grading and potential
disputes. By leveraging image processing algorithms and automated mark recognition, the
system aims to minimize errors associated with manual transcription, ensuring greater accuracy
in the recorded marks.

3. Enhanced Grading Workflow: The mobile-based automated mark recording system
provides lecturers with a convenient and efficient tool to streamline the grading workflow. By
simply scanning academic scripts using their smartphone cameras, lecturers can quickly record
marks and eliminate the need for manual data entry. This simplification of the grading process
allows lecturers to focus more on providing feedback and guidance to students, ultimately
improving the quality of the educational experience.

4. Administrative Efficiency: Academic institutions often handle large volumes of scripts and
require efficient administrative processes for managing and analyzing grading data. The
automated mark recording system, with its ability to generate structured CSV files, facilitates
seamless integration with the existing administrative system and simplifies data management.
This integration enhances administrative efficiency, enabling easier data analysis, reporting,
and record-keeping.

5. Technological Advancements: The advancements in mobile technology, image processing
algorithms, and handwriting recognition techniques provide an opportune moment to develop
a mobile-based automated mark recording system. The availability of powerful smartphones
with high-resolution cameras and computational capabilities, coupled with sophisticated image
processing algorithms, makes it feasible to accurately extract and interpret marks from scanned
scripts.

By addressing the limitations of manual mark recording processes and leveraging the potential
of mobile technology and image processing algorithms, the project aims to revolutionize the
grading process, enhance efficiency, reduce errors, and improve the overall experience for
lecturers. The motivation behind this project lies in the desire to optimize educational
assessment practices, provide lecturers with valuable time-saving tools, and contribute to the
advancement of technology-driven solutions in the educational sector.

1.4: Research Objectives


1.4.1: General Objectives

To design and implement a system to automatically record student marks by scanning scripts.

1.4.2: Specific objectives

• To design and develop a mobile application that enables lecturers to use their
smartphone cameras to scan academic scripts efficiently.
• To integrate image processing algorithms to extract and recognize mark data from the
scanned scripts accurately.
• To integrate the application with a backend system that generates a structured CSV file
containing the recorded marks for further analysis and record-keeping.
• To evaluate the usability, accuracy, and efficiency of the developed mobile application
through user testing and feedback.

1.5: Research Questions


1. How can a mobile application be designed and developed to enable lecturers to efficiently
scan academic scripts using smartphone cameras?
2. What image processing techniques and algorithms can be implemented to accurately extract
and recognize marks from scanned academic scripts?
3. How can the mobile application be integrated with a backend system to generate a structured
CSV file containing recorded marks for further analysis and record-keeping?
4. What is the usability and user experience of the developed mobile application in terms of
script scanning, mark recognition, and data recording functionalities?
5. How does the developed automated mark recording system compare to traditional manual
data entry methods in terms of accuracy, time efficiency, and error rates?

SIGNIFICANCE OF THE STUDY


The study on developing a mobile-based automated mark recording system for academic
scripts holds several significant implications:

1. Improved Efficiency: By automating the mark recording process, the system significantly
enhances the efficiency of lecturers recording grades. The mobile application enables them to
quickly scan academic scripts using smartphone cameras, eliminating the need for manual data
entry. This time-saving aspect allows lecturers to allocate their efforts towards providing timely
feedback, engaging with students, and enhancing the overall teaching and learning experience.

2. Error Reduction and Accuracy: Manual data entry is prone to errors, which can lead to
inaccuracies in grading and potentially impact students' academic performance. The automated
mark recording system leverages image processing algorithms and handwriting recognition
techniques to minimize errors associated with manual transcription. It ensures greater accuracy
in recorded marks, reducing the chances of disputes or discrepancies in grading.

3. Enhanced Administrative Processes: Academic institutions handle large volumes of
scripts and require efficient administrative processes for managing grading data. The
automated mark recording system streamlines administrative tasks by generating structured
CSV files, facilitating seamless integration with existing systems. This integration improves
data management, analysis, and reporting, enabling administrators to gain insights into student
performance more efficiently.

4. Technological Advancement in Education: The study embraces the technological
advancements in mobile technology, image processing, and handwriting recognition,
contributing to the advancement of technology-driven solutions in the educational sector. By
harnessing the power of smartphones and intelligent algorithms, the project showcases the
potential of these technologies to optimize educational assessment practices and improve
overall efficiency in academia.

5. Enhanced User Experience: The development of a user-friendly mobile application
tailored to lecturers' needs enhances the user experience in the mark recording process. The
intuitive interface, ease of use, and efficient workflow empower lecturers to perform their tasks
with greater convenience and satisfaction. This positive user experience contributes to higher
engagement and adoption rates, promoting the acceptance and utilization of the automated
system.

6. Future Research and Innovation: The study opens avenues for further research and
innovation in automated mark recording systems. It provides a foundation for future studies to
enhance the accuracy of mark recognition algorithms, explore additional features to support
feedback and grading processes, and integrate the system with other educational tools or
learning management systems. The findings and insights gained from this study can inspire
further advancements in the field of educational technology.

Overall, the significance of this study lies in its potential to revolutionize the mark recording
process, optimize administrative efficiency, improve accuracy, reduce errors, and enhance the
overall teaching and learning experience. The findings and outcomes have implications not
only for lecturers but also for students, educational institutions, and the broader educational
community seeking innovative solutions to streamline assessment processes.

1.6: Scope/delimitation
• The system will only be implemented for lecturers at the University of Bamenda.
• The system will only be used for recording student marks from scripts scanned with a
phone camera.
• The system will not account for student handwriting styles, which can affect recognition
accuracy.
• The system will focus on English-language scripts only.
• The system will not take into account the quality of the images taken when scanning
the scripts.
• The system will not be used for scanning handwritten documents other than scripts,
which contain the code, the matriculation number, the name, and the mark.

OVERVIEW OF DISSERTATION CHAPTERS

This dissertation consists of five chapters, each covering a specific aspect of the study. The
chapters are as follows.

Chapter 1: Introduction

This chapter provides a background to the study, including the problem statement, rationale,
project motivation, and significance of the study. It also outlines the research questions,
objectives, scope, and delimitations of the study.

Chapter 2: Literature Review

This chapter reviews past research in the area of study, including the methodologies and
results of previous studies. The chapter identifies gaps in the literature and proposes methods
to enhance the results in the research area. The chapter also outlines some generalities in the
research area and some tools to be used for the research.

Chapter 3: Methodology

This chapter details the materials and methods used to carry out this research. The chapter
includes the necessary diagrams, graphs, and block diagrams explaining the series of steps
used to carry out the experimentation, as well as the flow charts and formulas used to obtain
the results.

Chapter 4: Results and Discussion

This chapter presents the data that was collected and analyzed. The chapter explains the
results generated by the software in a logical and easy-to-follow manner, with tables, charts,
and graphs used as necessary to illustrate key findings.

Chapter 5: Conclusion and Recommendations

This chapter provides a general conclusion of the study, including recommendations for
future research in this area. The chapter also discusses possible future works on this topic.

CHAPTER 2

LITERATURE REVIEW
2.1 Conceptual framework
2.1.1. Character Recognition
Character recognition is a sub-field of pattern recognition in which images of characters from
a text image are recognized and the corresponding character codes are returned; when
rendered, these codes reproduce the text in the image.
The problem of character recognition is the automatic recognition of raster images as letters,
digits or other symbols, and it is a typical problem in computer vision.

Character recognition is further classified into two types according to the manner in which
input is provided to the recognition engine. Considering figure 1 below, which shows the
classification hierarchy of character recognition, the two types of character recognition are:
• Online character recognition
• Offline character recognition

Figure 1: Classification of Character Recognition


2.1.1.1: Online Character Recognition

Online character recognition systems deal with character recognition in real time. The process
involves a dynamic procedure using special sensor-based equipment that captures input from
a transducer while text is being written on a pressure-sensitive, electrostatic or electromagnetic
digitizing tablet. With the help of a recognition algorithm, the input text is automatically
converted to a series of electronic signals, which can be stored for further processing in the
form of letter codes. The recognition system works on the basis of the x and y coordinates
generated in a temporal sequence by the pen-tip movements as they create recognizable
patterns on a special digitizer while the text is written.
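Concretely, the digitizer delivers the pen trajectory as a temporal sequence of (x, y) samples. A minimal illustration, assuming evenly sampled points and ignoring pen-up/pen-down events, is the classic 8-direction (Freeman) chain code computed between successive samples, a common low-level feature in online recognizers:

```python
import math

# Online recognizers consume the pen-tip trajectory as a temporal
# sequence of (x, y) samples. One classic low-level feature is the
# direction of each pen movement, quantized to 8 sectors.

def chain_code(points):
    """Map each pen movement to one of 8 directions (0 = east, counter-clockwise)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)            # -pi .. pi
        codes.append(round(angle / (math.pi / 4)) % 8)  # nearest of 8 sectors
    return codes

# A stroke moving right, then up, then left:
stroke = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chain_code(stroke))  # → [0, 2, 4]
```

A full online recognizer would feed such direction sequences, together with timing information, into a sequence classifier; the chain code is only the feature-extraction step.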

2.1.1.2: Offline Character Recognition

There is a major difference in the input system of off-line and on-line character recognition
which influences the design, architecture and methodologies employed to develop recognition
systems for the two. In online recognition the input data is available in the form of a temporal
sequence or real time text generated on a sensory device thus providing time sequence
contextual information. On the contrary in an off line system the actual recognition begins after
the data has been written down as it does not require real time contextual information.

Offline character recognition is further classified into two types according to the input
provided to the system for recognition of characters. These are:
• Magnetic Ink Character Recognition (MICR)
• Optical Character Recognition (OCR)

1. Magnetic Ink Character Recognition (MICR)

MICR is a unique technology that relies on recognizing text printed in special fonts with
magnetic ink, usually containing iron oxide. As the machine prepares to read the code, the
printed characters become magnetized on the paper, with the north pole on the right of each
MICR character, creating recognizable waveforms and patterns that are captured and used for
further processing. The reading device is comparable to a tape-recorder head that recognizes
the wave patterns of sound recorded on magnetic tape.
The system has been in efficient use for a long time in banks around the world to process
cheques, as it gives high accuracy rates with a relatively low chance of error.
There are special fonts for MICR, the most common being E-13B and CMC-7.

2. Optical Character Recognition (OCR)

Optical Character Recognition, or OCR, is the text recognition system that allows hard copies
of written or printed text to be rendered into editable, soft-copy versions. It is the translation of
optically scanned bitmaps of printed or written text into digitally editable data files. OCR
facilitates the conversion of a geometric source object into a digitally representable character
in the ASCII or Unicode scheme of digital character representation.
We often want an editable copy of text that we have only as a hard copy, such as a fax or pages
from a book or a magazine. The system employs an optical input device, usually a digital
camera or a scanner, which passes the captured images to a recognition system that, after a
number of processing steps, converts them into a soft copy such as an MS Word document.
When we scan a sheet of paper we reformat it from a hard copy to a soft copy, which we save
as an image. The image can be handled as a whole, but its text cannot be manipulated
separately. To do so, we need the computer to recognize the text as such and let us manipulate
it as if it were text in a word-processing document. The OCR application does exactly that: it
recognizes the characters and makes the text editable and searchable.
The technology has also enabled such materials to be stored using much less storage space than
the hard-copy materials. OCR technology has made a huge impact on the way information is
stored, shared and communicated.

OCRs are of two types:


 OCRs for recognizing printed characters
 OCRs for recognizing hand-written text.

OCRs meant for printed-text recognition are generally more accurate and reliable because the
characters belong to standard font files, so it is relatively easy to match images against those
in an existing library. For handwriting recognition, the vast variety of human writing styles
and customs makes the task far more challenging.

Optical Character Recognition (OCR) is one of the most common and useful applications of
machine vision, a sub-class of artificial intelligence. It has long been a topic of research and
has recently gained even more popularity with the development of prototype digital libraries,
which involve the electronic rendering of paper- or film-based documents through an imaging
process.
OCR Processes
The OCR process begins with the scanning and subsequent digital reproduction of the text in
the image. It involves the following discrete sub-processes, as shown in figure 2 below.

Figure 2: Summary of OCR processes


1. Optical Scanning
Optical scanning is the first component of OCR. It is simply the process by which the original
document is converted into a digital image format. Optical scanners, which consist of a
transport mechanism and a sensing device that converts light intensity into grey levels, are
normally used in OCR. Printed documents usually consist of black print on a white
background. During optical scanning, a process referred to as thresholding is performed on
the scanned image to save memory space and computational effort (Bennamoun et al., 2015).
During thresholding, a multilevel image is transformed into a two-level black-and-white
image.
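The thresholding step described above can be illustrated with a short Python sketch; the pixel
values and the threshold of 128 are illustrative assumptions, not values taken from an actual
scanner:

```python
def threshold(image, level=128):
    """Map grey levels (0-255) to binary: 1 for ink (dark), 0 for paper."""
    return [[1 if pixel < level else 0 for pixel in row] for row in image]

# A toy greyscale image: low values are dark ink, high values are white paper.
grey = [
    [250, 240,  12, 245],
    [248,  10,  15, 242],
    [ 13, 244, 246,  11],
]
binary = threshold(grey)
print(binary)  # dark pixels (ink) become 1, the light background becomes 0
```

The binary image occupies far less memory than the multilevel original, which is the saving
the text above refers to.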

2. Location Segmentation
After scanning, the next component of OCR is location segmentation. This module decodes
the elements of an image: the regions of the document containing printed text are located and
isolated from figures and graphics (Bennamoun et al., 2015). For example, in automatic mail
sorting of envelopes, addresses have to be located and separated from other prints, such as
company stamps and logos, before the recognition process starts. For text recognition,
separation includes the isolation of each character or word.
Most OCR algorithms work by slicing words into isolated characters, which are then
recognised separately (Bennamoun et al., 2015).
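One common way of slicing a line of text into isolated characters is a vertical projection
profile: columns with no ink separate neighbouring characters. The sketch below is a minimal,
illustrative version that assumes a clean binary image with non-touching characters:

```python
def column_profile(binary):
    """Sum of ink pixels (1 = ink) in each column of a binary image."""
    return [sum(col) for col in zip(*binary)]

def segment_columns(binary):
    """Return (start, end) column ranges containing ink, split at blank columns."""
    profile = column_profile(binary)
    segments, start = [], None
    for x, count in enumerate(profile):
        if count and start is None:
            start = x                      # ink begins: open a segment
        elif not count and start is not None:
            segments.append((start, x))    # blank column: close the segment
            start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

# Two "characters" separated by a blank column:
img = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
]
print(segment_columns(img))  # -> [(0, 2), (3, 4)]
```

Real systems must also handle touching and broken characters, which this simple profile-based
dissection cannot.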

3. Pre-Processing
Pre-processing is the third function of OCR. It subjects the acquired raw data to a number of
processing stages to make it suitable for character analysis. Images obtained from the scanning
process may be contaminated with noise in the form of smeared or broken characters,
depending largely on the resolution of the scanner and the inherent thresholding. In the
pre-processing stage, some of the defects that might cause poor recognition rates are eliminated
by smoothing the digitised characters. Smoothing here refers to both thinning and filling:
filling repairs small breaks, gaps and holes in digitised characters, while thinning reduces the
width of line components (Bennamoun et al., 2015).

Figure 3: Normalisation of Characters


4. Compression
Conventional image compression transforms the image from the space domain into domains
that are not suitable for recognition. Thresholding and thinning are two familiar compression
techniques. Thresholding is used to minimise space requirements and enhance processing
capability. It is usually advantageous to represent images as binary images by selecting a
predefined threshold. This threshold can be

 global (i.e. a single threshold for the whole character image) or
 local (i.e. many threshold values across the character image).

Established global and local thresholding techniques are compared by implementing a
goal-directed evaluation criterion, keeping in view the desired accuracy of the OCR system
(Chaudhuri, 2010). Some thinning methods identify the characters' singular points, such as
endpoints, cross points and loops; global approaches handle these points in a non-pixelwise
manner, and thinning may be applied repeatedly. Parallel algorithms surpass sequential ones
because all pixels are examined simultaneously under the same removal conditions (Reshma
et al., 2016), and they can be implemented efficiently in parallel hardware. Notably,
compression techniques can affect the data: unpredicted imperfections might arise that distort
the character image, and significant loss of writing data can occur. They should therefore be
applied carefully.
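The distinction between the global and local thresholds listed above can be sketched as
follows. The window size is an illustrative parameter; a production system would use an
established method such as Otsu's (global) or Niblack's (local) rather than this toy
mean-of-neighbourhood rule:

```python
def global_threshold(image, level):
    """One threshold for the whole image."""
    return [[1 if p < level else 0 for p in row] for row in image]

def local_threshold(image, window=1):
    """Threshold each pixel against the mean of its (2*window+1)^2 neighbourhood."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            neigh = [image[j][i]
                     for j in range(max(0, y - window), min(h, y + window + 1))
                     for i in range(max(0, x - window), min(w, x + window + 1))]
            mean = sum(neigh) / len(neigh)
            row.append(1 if image[y][x] < mean else 0)
        out.append(row)
    return out
```

The local variant adapts to uneven illumination across the page, at the cost of extra
computation per pixel.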

5. Segmentation
The initial stage of pre-processing produces a clean character image such that a reasonable
amount of shape information is obtained, while providing the image in a normalised high
compression and low noise format ready for the next OCR stage, namely, the segmentation.
During this stage, a segmentation of the character image into its sub-components is done. It is
worth noting that the recognition rates are directly affected by the extent of the separation of
the various lines in the characters. Therefore, an internal segmentation is used, which separates
the lines and curves in the characters that are cursively written (Bennamoun et al., 2015). The
character segmentation methods are categorised into three types (Casey & Lecolinet, 1996;
Arica & Yarman-Vural, 2002; Chaudhuri et al., 2017), as seen in figure 4 below.

Figure 4: Segmentation Methods (source: Chaudhuri et al., 2017).


In the explicit method, segments are identified on the basis of character-like properties.

The implicit method depends on recognition: it searches the image for components that match
predefined classes.
The mixed method combines the implicit and explicit methods. A dissection method is applied
to the image with the aim of over-segmenting it, cutting the image into enough regions that
the cuts made are sure to include the correct segmentation boundaries.

6. Representation
Image representation, one of the most important factors in recognition systems, is the sixth
component of OCR. In the simplest case, binary (or grey-level) images are fed directly into a
recogniser (Bennamoun et al., 2015). However, most recognition systems require a more
compact and characteristic representation, to avoid extra complexity and to increase the
performance and accuracy of the algorithms. A set of predefined features is extracted for each
class, and these extracted features help to distinguish one class from another.
There are three major groups of character image representation:
 global transformation and series expansion,
 statistical representation, and
 geometrical and topological representation (Bennamoun et al., 2015).

7. Feature Extraction
The seventh OCR component is feature extraction, whose objective is to capture the essential
characteristics of symbols. This stage often encompasses the most difficult problems in pattern
recognition (Bennamoun et al., 2015).
A raster image is the simplest way to describe a character. An alternative approach is to extract
certain features that characterise symbols while eliminating irrelevant attributes.
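One simple, widely used family of extracted features is zoning: the glyph is divided into a
grid of zones and the ink density of each zone becomes one feature. The grid size and glyph
below are illustrative:

```python
def zoning_features(binary, rows=2, cols=2):
    """Split a binary glyph into rows x cols zones; return ink density per zone."""
    h, w = len(binary), len(binary[0])
    features = []
    for zr in range(rows):
        for zc in range(cols):
            y0, y1 = zr * h // rows, (zr + 1) * h // rows
            x0, x1 = zc * w // cols, (zc + 1) * w // cols
            ink = sum(binary[y][x] for y in range(y0, y1) for x in range(x0, x1))
            area = (y1 - y0) * (x1 - x0)
            features.append(ink / area)
    return features

# A toy 4x4 glyph with ink in the top-left and bottom-right quadrants:
glyph = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
print(zoning_features(glyph))  # -> [1.0, 0.0, 0.0, 1.0]
```

The resulting short feature vector, rather than the raw raster, is what a classifier would
typically consume.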

8. Training and Recognition


Training and recognition form another component of OCR. Pattern recognition techniques are
extensively utilised in OCR systems, where unknown samples are assigned to predefined
classes. According to Bennamoun and Mamic (2002), Qadri and Asif (2009) and Och et al.
(2015), the four general approaches to pattern recognition are
 template matching,
 statistical techniques,
 structural techniques, and
 artificial neural networks (ANNs).

These approaches are neither dependent on each other nor disjoint from each other. Sometimes
a technique applied under one approach may equally be considered a member of other
approaches. In all the OCR techniques discussed thus far, either a holistic or an analytic
strategy has been employed in the training and recognition stages (Bennamoun et al., 2015).
In the holistic strategy, recognition accuracy is reduced by the complexity of representing each
word as a whole rather than character by character or stroke by stroke. Analytic mechanisms,
on the other hand, implement a bottom-up strategy, commencing at the character level and
ending with a meaningful result.
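The first of the four approaches, template matching, can be sketched in a few lines: an
unknown glyph is assigned to the class of the stored template it most resembles. The templates
and glyph below are toy examples, not characters from a real font library:

```python
def similarity(a, b):
    """Fraction of pixels that agree between two equal-sized binary images."""
    matches = sum(pa == pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return matches / (len(a) * len(a[0]))

def classify(glyph, templates):
    """Assign the glyph to the template class with the highest similarity."""
    return max(templates, key=lambda label: similarity(glyph, templates[label]))

templates = {
    "I": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
}
noisy_l = [[1, 0, 0],
           [1, 0, 0],
           [1, 1, 0]]  # one pixel differs from the "L" template
print(classify(noisy_l, templates))  # -> L
```

Statistical, structural and neural approaches replace this direct pixel comparison with learned
or rule-based models, but the classify-by-best-match idea is the same.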

9. Post-Processing
The final component of OCR is post-processing. Grouping, error detection and error correction
are among the commonly used post-processing activities. For grouping, symbols in the text
are associated into strings: plain symbol recognition yields only a set of individual symbols,
which do not generally carry adequate information on their own. Based on their location in
the document, individual symbols that lie close together are grouped to form words and
numbers. For fixed-pitch fonts the grouping process is easy, as the position of each character
is known; for typeset text the distances between characters vary. Grouping is nevertheless
possible because the distance between words is meaningfully larger than the distance between
characters (Bennamoun et al., 2015).
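The grouping step can be sketched as follows: given character bounding boxes sorted left to
right, a new word begins wherever the horizontal gap exceeds a threshold. The box coordinates
and threshold are illustrative:

```python
def group_words(boxes, gap_threshold):
    """Group character boxes (x_start, x_end), sorted left to right, into words.

    A new word starts wherever the horizontal gap to the previous character
    exceeds gap_threshold -- word gaps are meaningfully larger than the gaps
    between characters inside a word.
    """
    words, current = [], [boxes[0]]
    for prev, box in zip(boxes, boxes[1:]):
        if box[0] - prev[1] > gap_threshold:
            words.append(current)
            current = []
        current.append(box)
    words.append(current)
    return words

# Five characters: small gaps inside words, one large gap between the words.
boxes = [(0, 8), (10, 18), (20, 28), (45, 50), (52, 58)]
print(group_words(boxes, gap_threshold=5))
# -> [[(0, 8), (10, 18), (20, 28)], [(45, 50), (52, 58)]]
```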

2.1.1.3: World Languages and Scripts

Communication around the world today takes place in several hundred languages. These
languages are written down in a great variety of ways, but more than 90 of them use the Latin
script, English being one. Several other scripts also serve to write multiple languages; the
Arabic script stands second to Latin, having been adopted by more than 25 different languages
to form their alphabets.
The way a script is written down and the patterns it follows allow us to divide scripts into two
separate categories:
 Non-cursive scripts.

 Cursive scripts.

1. Non-Cursive Script
The more common non-cursive scripts are inherently discrete as far as printed text is
concerned: each character has a separate and definite shape that combines with the next by
being placed beside it, never overlapping or shadowing the preceding or succeeding letters.
When these scripts are handwritten, however, the scribe's hand can make the letters decorative,
cursive and more flowing in form and shape. The Latin script is an example: handwritten text
can be as decorative and cursive as the writer chooses, while printed text is easily recognizable.

2. Cursive Script
On the other hand the naturally cursive scripts e.g. Arabic have a unique feature for the
formation of words. The characters in these scripts are not discrete but are joined to each other
to form ligatures and then words. These free flowing character forms create words by
overlapping each other sometimes even stacking on each other vertically. The non-discrete
nature of these scripts makes them ever more difficult to be developed as font types for printing
as well as pose challenges for character recognition. Creating discrete characters for text
processing for the cursive scripts and placing them side by side to form words like discrete
scripts for convenience in character recognition, mar the original shape of words giving them
an artificial and unnatural look.

Figure 5: Cursive and non-cursive scripts (examples of discrete characters, hand-printed
characters, and cursive handwriting, with an Urdu Nastaliq sample)


This study will be focused on the English language as mentioned in the scope.

2.1.3 English language

One of the objectives of this study is to propose a system that will allow the scanning of script
codes in an image format, and through the use of OCR, to extract the text from the submitted
script and save the text file.
The English Alphabet consists of 26 letters: A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q,
R, S, T, U, V, W, X, Y, Z.
23 letters (A B C D E F G H I K L M N O P Q R S T V X Y Z) are the first 23 letters of the
original Old English alphabet, recorded in the year 1011 by the monk Byrhtferð. The following
6 letters have been dropped from the Old English alphabet: & ⁊ Ƿ Þ Ð Æ.
3 letters have been added since Old English: J, U, and W. J and U were added in the 16th
century, while W assumed the status of an independent letter.
Until 1835, the English alphabet consisted of 27 letters: right after "Z", the 27th letter of the
alphabet was the ampersand (&).
The English Alphabet (or Modern English Alphabet) today consists of 26 letters: 23 from Old
English and 3 added later.

OCR technology is very useful in converting different types of documents (e.g. scanned paper
documents, PDF files or images captured using digital cameras) into editable and searchable
data. Over recent years, OCR's success as an enabling technology has gained much attention
from both academic and industry experts in computer engineering (Bunke and Wang, 1997;
Francis and Jutamulia, 1998). The English-language characters for OCR are presented below.

Figure 2.6: OCR Fonts (source: Chaudhuri et al., 2017).

Drawbacks of OCR
The drawbacks of OCR are as follows:
 The most common disadvantage is the lack of 100% accuracy. It is common for font
size and type to be changed during the translation of items, and there are recorded cases
of misspelling and omitting of letters that OCR considers unreadable, producing text
that loses its original meaning;
 Work-arounds. Special features must sometimes be added to help OCR in certain
functions. For example, OCR has difficulty differentiating between zero and the letter
O; in such cases, the zero has to be written in a distinguishable way;
 Additional work. Despite the help it provides, OCR in its current state cannot perform
some functions and therefore requires additional work after the final output is
presented: the output has to be proofread and any misspellings or other mistakes
corrected;
 OCR machines and software are quite expensive, so access is limited to those with the
financial power to acquire them;
 The images produced by scanners in most cases consume much space, so systems and
machines must have large storage space for the images produced; and
 OCR does not work well with handwritten text, since the machine needs time to learn
the handwriting. Although developed versions of OCR are able to read it, this takes
time, especially with poorly handwritten texts.
Despite the many advantages of OCR, several other technologies are used to translate scanned
objects. These alternative technologies focus mostly on data science aided by machine learning
(Springmann et al., 2014). Data science and machine learning techniques have been used to
extract personal data from driving licences and passports, and have also been used in mobile
receipt scanning.

2.2: Empirical literature review


Akbari, S., & Fataei, E. (2012). Design and Implementation of a Grading System Based on a
READING-WRITING System to Automatically Record Student Marks from Scanning Scripts.
International Journal of Computer Science and Information Technologies, 3(1), 33-37. This
paper presents the design and implementation of a grading system based on a READING-
WRITING system that automatically records student marks from scanned scripts. The
proposed system is a knowledge-based system that uses a heuristic method to judge the value
of each script. It was tested on scripts from different courses and showed satisfactory
performance in terms of accuracy and speed.

Chen, M., Yeh, G., & Zhang, L. (2005). Design and Implementation of an Automatic Grading
System for Spatial Thinking. International Journal of Human–Computer Interaction, 18(4),
427-447. This paper presents the design and implementation of an automatic grading system
for spatial thinking. The proposed system consists of a graphical user interface, an intelligent
knowledge-based engine, a Rasch item and threshold calibration module, and an item analysis
module. The system was tested by students from different universities. The results showed the
effectiveness of the system, which was able to grade the spatial tasks with high reliability and
accuracy.

Kellet, N., Khan, J., & Bhutta, M. (2008). Automated Grading System for Visual Basic
Projects. International Journal of Computer Applications, 22(1), 6-10. This paper presents the
design and implementation of an automated grading system for Visual Basic Projects. The
proposed system used an artificial neural network to evaluate the student performance on
Visual Basic projects by automatically recognizing different types of errors in the
programming code. The results showed that the system was able to grade the programming
assignments with high accuracy compared to the teacher's manual grading.

Bautista and Comendador (2016) propose an automated marking tool based on OCR, which
assists higher education institutions in marking and storing students' marks automatically.
Using this marking tool, each mark on an uploaded grade sheet is indexed to a specific student,
and the scanned grade sheet is stored digitally. The students' marks on the digital copies are
stored in a local database. The system identifies each student by his or her number, then
searches on this number and retrieves the data related to the matching student ID number. The
tool aims to increase matching accuracy (Cheng, Qu, Shi & Xie, 2010). To ensure uniqueness,
each student number, with the related student grade tagged, is stored individually as a database
record that can be retrieved for any future query. After scanning, the scanned student grades
can be displayed, and the user can edit the result in case there are errors in the scanning and
recognition of characters (Bautista & Comendador, 2016). Once the scanned images are stored
in the system, the user can search and retrieve the files to view the grade sheets. The proposed
system provides further functionality such as adding new users, granting users access to the
system, changing the user login password, generating a certification of grades for each student
based on a predefined template, etc.

Patel, J., & Bhatia, R. (2013). Summary of The Design and Implementation of a System to
Automatically Record Student Marks from Scanning Scripts. International Journal of
Engineering Research, 4(2), 181-194. This article presents a summary of the design and
implementation of a system to automatically record student marks from scanning scripts. The
proposed system used an optical character recognition technique to analyze the content of the
scanned scripts. The results showed that the system was able to assess students' marks with
high accuracy. In addition, the system was found to be highly usable, significantly reducing
the time taken for manual processing.

ASSYST is an early automated system, developed by Jackson and Usher (1997) for assessing
and evaluating programming assignments. Despite being primarily an assessment tool,
ASSYST also assists in general housekeeping procedures, including checking both the
efficiency and correctness of assignments submitted by students. The authors of ASSYST
claim, "The system has proven to be a very useful tool that improves the consistency and
efficiency of grading". In line with this development, Kurina et al. (2001) introduced the
Online Judge System (OJS), which examines students' programs submitted through either a
webpage or an email. The principle of OJS is a precursor of the Web-CAT system, discussed
below. It is capable of collecting a program and then running it against a set of pre-defined
test cases. The results obtained from the testing procedure are known as "judgments", hence
the name of the system, "Online Judge".
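The run-against-test-cases principle behind OJS (and later Web-CAT) can be sketched as
follows. Here the "submission" is simply a Python function rather than a compiled program,
and the function and case names are illustrative:

```python
def judge(submission, test_cases):
    """Run a submitted function against (args, expected) pairs.

    Returns a verdict per case plus an overall pass fraction, mirroring the
    "judgment" that an online judge reports back to the student.
    """
    verdicts = []
    for args, expected in test_cases:
        try:
            result = submission(*args)
            verdicts.append("PASS" if result == expected else "FAIL")
        except Exception:
            verdicts.append("ERROR")  # crashing submissions are caught, not fatal
    score = verdicts.count("PASS") / len(verdicts)
    return verdicts, score

# A hypothetical student submission for "add two numbers":
def student_add(a, b):
    return a + b

cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
print(judge(student_add, cases))  # -> (['PASS', 'PASS', 'PASS'], 1.0)
```

Real judges additionally sandbox the submission and enforce time and memory limits, which
this sketch omits.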

Web-CAT is another tool for automated marking of programming assignments, developed by
Edwards and Perez-Quinones (2008). Web-CAT is an open-source platform that allows for a
web-based submission of student program codes to the Web-CAT server, which then runs the
submitted code against a set of prescribed test cases and provides online feedback to the
student. The Web-CAT approach to automated testing provides both scalability and
consistency through the implemented test cases. Using this tool, the assessors can set different
criteria for grading and feedback. For example, they can choose between having the computer
grade the assignments based on passing or failing test cases, or simply using the information
for student feedback and improvement purposes. Despite the well-proven performance of the
Web-CAT tool, it is largely dependent on the quality of the test cases.

In a recent trend described by Dasarathy (2014), massive open online courses (MOOCs)
emerged as a result of exploiting technological advancements in the education system.
MOOCs owe their rising popularity to fast and flexible feedback. For programming courses
deployed on a MOOC, students can enter their code into a browser window, where it is
automatically evaluated and the results are reported back almost immediately. In many
MOOCs the student-to-teacher ratio is high, demanding an automated grading and feedback
system.

CHAPTER 3
METHODOLOGY
This chapter provides a comprehensive overview of the materials and methodologies employed
in the development of the mobile-based automated mark recording system for academic
scripts. It is divided into two main parts, Materials and Methods, to present a clear and
organized description of the resources and procedures involved in the project. The Materials
subsection highlights the hardware and software components utilized, including smartphones,
computers, the Flutter framework, the Flask framework, the Google Cloud Vision API, Google
Cloud App Engine, Firebase, image-processing libraries, and Python. The Methods subsection
outlines the step-by-step approach adopted to design and implement the mobile application,
develop the backend server, apply image-processing techniques for mark extraction, store data
using CSV files, and conduct testing and evaluation. By detailing the materials and explaining
the methods employed, this chapter ensures transparency and reproducibility while offering
valuable insight into the project's development process.

Materials
Hardware:
Smartphones: Various models of smartphones (iOS and Android) with built-in cameras will
be used to test and run the mobile application.
Computer: A computer with sufficient processing power and storage capacity will be required
for development and testing purposes.

Software:
• Flutter: UI of the application
• Firebase: Authentication and storage
• Python Flask: Backend server
• Google Cloud App Engine: Server deployment
• Google Cloud Vision API: Handwriting Recognition

3.1: Research & Analysis
3.1.1 Needs Assessment:

• Conducted interviews and surveys with lecturers to identify their pain points and
challenges in grading and recording student marks.
• Explored existing solutions and applications used in similar contexts to understand
their limitations and potential improvements.
• Created use case diagrams to visualize the interactions between lecturers and the
application, capturing the main functionalities and user requirements.
• Analyzed the use cases to prioritize and understand the essential features necessary
for the application.

Figure 6: Gradish Use Case Diagram

In this use case diagram, we have one primary actor:


Lecturer: Represents the user (lecturer) interacting with the system.
And the following use cases:

• Scan Script: Represents the process of scanning the script using the mobile
application's camera.
• Authenticate User: Represents the process of authenticating the lecturer's identity to
access the system.
• Validate Marks: Represents the process of validating the recognized marks, ensuring
accuracy and integrity.
• Generate CSV: Represents the process of generating a CSV file containing the
recorded marks.
The lecturer interacts with the system by performing these use cases, representing the main
functionalities of the mobile-based automated mark recording system.
3.1.2 Technology Evaluation:

• Researched various technologies and tools available for mobile application
development, OCR, handwriting recognition, and database management.
• Evaluated different frameworks and languages, considering factors such as ease of
use, compatibility with multiple platforms, and community support.
• Explored options for cloud-based storage and authentication services to ensure
scalability and security.

3.1.3 OCR and Handwriting Recognition:

• Investigated different OCR and handwriting recognition algorithms and APIs.


• Explored the capabilities of the Google Cloud Vision API and its suitability for
accurately extracting handwritten marks and coded numbers.
• Considered factors such as accuracy, speed, and ease of integration.

3.1.4 Backend Infrastructure:

• Explored different server-side technologies and frameworks for handling image
processing and serving the OCR endpoint.
• Considered scalability, cost-effectiveness, and ease of deployment.
• Evaluated the capabilities of Google Cloud App Engine and determined its suitability
for hosting the Flask server.

3.1.5 Data Storage & Management:

• Researched different options for data storage, considering factors such as real-time
synchronization, security, and ease of integration with the mobile application.

• Evaluated the features and capabilities of Firebase as a backend service for
authentication and storing student marks data.
• Considered the scalability and cost implications of using Firebase for long-term data
storage.

3.1.6 Ethical Considerations:

• Conducted a thorough analysis of ethical considerations, particularly regarding data
privacy, security, and compliance with relevant regulations (e.g., GDPR or local data
protection laws).
• Explored measures and best practices for ensuring data privacy, secure data
transmission, and secure storage of student information.
• Considered the ethical implications of using handwriting recognition technology and
handling student data within the application.

Planning and Requirement Analysis


Functional Requirements

• User Authentication:
o Lecturers should be able to log in securely to access the system using their
credentials.
o The system should provide appropriate mechanisms for user authentication,
such as email/password, social login, or other authentication methods.
• Script Scanning:
o The system should allow lecturers to use smartphone cameras to capture clear
images of academic scripts.
o The captured images should be stored temporarily for further processing and
mark recognition.
• Image Preprocessing:
o The system should apply image preprocessing techniques to enhance the quality
of the captured script images.
o Preprocessing techniques may include noise reduction, resizing, contrast
adjustment, or other image enhancement algorithms.
• Mark and Coded Number Recognition:

o The system should utilize handwriting recognition algorithms to extract marks
and coded numbers from the captured script images.
o Handwritten marks or coded numbers should be accurately recognized,
considering variations in handwriting styles and different types of assessments.
• Mark and Coded Number Validation:
o The system should validate the recognized marks or coded numbers to ensure
accuracy and integrity.
o Validation may involve comparing the recognized marks against predefined
criteria, such as valid mark ranges or specific symbols used for marking.
• CSV Generation:
o The system should generate a structured CSV file containing the recorded marks
and relevant information, such as coded numbers and timestamps.
o The CSV file should be properly formatted and organized for easy analysis and
integration with other systems.
• Data Storage and Retrieval:
o The system should securely store the captured script images, recognized marks,
and generated CSV files.
o It should provide mechanisms for efficient storage and retrieval of the data,
ensuring data privacy and access control.
• User Interface:
o The mobile application's user interface should be intuitive, user-friendly, and
responsive.
o It should provide clear instructions for scanning scripts, display the recognized
marks for review, and allow lecturers to confirm and save the marks.
• Error Handling and Feedback:
o The system should handle errors and exceptions that may occur during script
scanning, mark recognition, or other processes.
o It should provide appropriate feedback and error messages to guide users and
troubleshoot issues effectively.
• Scalability and Performance:
o The system should be able to handle many script scans concurrently, ensuring
optimal performance and responsiveness.

o It should scale seamlessly to accommodate increasing usage and data storage
requirements.
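The CSV-generation requirement above can be sketched with Python's standard csv module.
The column names and sample records are illustrative assumptions, not the faculty's actual
mark-sheet format:

```python
import csv
import io
from datetime import datetime, timezone

def marks_to_csv(records):
    """Write (coded_number, mark) pairs to CSV text with a timestamp column.

    The field names here are assumed for illustration; the deployed system
    would match the institution's mark-sheet layout.
    """
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["coded_number", "mark", "recorded_at"])
    stamp = datetime.now(timezone.utc).isoformat()
    for coded_number, mark in records:
        writer.writerow([coded_number, mark, stamp])
    return buffer.getvalue()

sample = [("A1023", 14.5), ("A1024", 11.0)]
print(marks_to_csv(sample))
```

Returning the CSV as text keeps the function easy to test; the mobile application would save
the same content to a file or send it to the backend.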

Non-functional Requirements

1. Performance:
a. The system should have fast response times, ensuring a seamless user
experience during script scanning, mark recognition, and data processing.
b. It should be capable of handling multiple script scans concurrently without
significant delays or performance degradation.
2. Security:
a. The system should ensure the security and privacy of user data, including script
images, recognized marks, and generated CSV files.
b. It should employ secure authentication mechanisms, data encryption during
transmission and storage, and access controls to protect sensitive information.
3. Usability:
a. The user interface should be intuitive, user-friendly, and visually appealing,
ensuring ease of use for lecturers.
b. The system should provide clear instructions and guidance during script
scanning, mark recognition, and data confirmation to facilitate a smooth user
experience.
4. Compatibility:
a. The mobile application should be compatible with a wide range of devices and
operating systems, ensuring accessibility for lecturers using different
smartphones or tablets.
b. It should be responsive and adaptive to various screen sizes and orientations.
5. Scalability:
a. The system should be designed to handle increasing loads and growing data
volumes without compromising performance.
b. It should be scalable to accommodate a larger number of lecturers and academic
scripts as the system usage expands.
6. Reliability:
a. The system should operate reliably, minimizing the occurrence of errors,
crashes, or data loss.

b. It should have mechanisms in place to handle unexpected failures and recover
gracefully without losing critical data.
7. Availability:
a. The system should be available and accessible to lecturers whenever they need
to perform mark recording tasks.
b. It should have a high uptime and minimal downtime for routine maintenance or
updates.
8. Data Backup and Recovery:
a. The system should regularly back up captured script images, recognized marks,
and generated CSV files to prevent data loss.
b. It should have mechanisms for data recovery in case of accidental deletion,
system failure, or other unforeseen events.
9. Integration:
a. The system should be easily integrated with external services, such as Firebase
for authentication and storage, and the Google Cloud Vision API for
handwriting recognition.
b. It should provide well-defined APIs and integration points for seamless
interaction with other systems, if required.
10. Documentation and Support:
a. The system should be well-documented, providing clear guidelines and
instructions for setup, configuration, and usage.
b. It should have a support mechanism in place to address user queries, provide help, and handle technical issues effectively.

3.2: Design & System Architecture


The system architecture of the mobile application for recording student marks consists of
several components working together to provide a seamless experience for lecturers. The
following diagram illustrates the high-level overview of the system architecture:

Figure 7: System Architecture Diagram
In this architecture, the system consists of multiple components and services that work together
to achieve the automated mark recording process:
• Mobile Device: The mobile application, developed using Flutter, provides the user
interface for scanning academic scripts and interacting with the system.
• Authentication (Firebase): Firebase handles user authentication, ensuring secure access
to the system.
• Image Capture (Camera Access): The mobile application utilizes the device's camera
to capture images of academic scripts.
• Backend Server (Flask + Google Cloud Platform): The Flask backend server receives
requests from the mobile application, processes the images, and integrates with external
services such as the Google Cloud Vision API for handwriting recognition.
• External APIs/Services (Google Cloud Vision): The system utilizes the Google Cloud
Vision API for handwriting recognition, enabling the extraction of marks from the
scanned script images.

• Data Validation and Processing: The backend server performs validation and
processing of the recognized marks, ensuring accuracy and reliability.
• Data Storage (Firebase Storage): The generated CSV files and scanned script images
are securely stored in Firebase Storage for easy access and retrieval.
• CSV Generation and Output: The system generates structured CSV files containing the
recorded marks, organized with relevant information such as student IDs and
assessment names.
• CSV File: The generated CSV files are the final output of the system, containing the
recorded marks for further analysis or integration with other systems.
This architecture demonstrates the flow of data and the interaction between the various
components involved in the mobile-based automated mark recording system. It highlights the
usage of Flutter for the mobile application, Firebase for authentication and storage, Flask for
the backend server, Google Cloud Vision API for handwriting recognition, and Firebase
Storage for storing the generated CSV files.
3.2.1 Mobile Application (Flutter):
• The mobile application is developed using the Flutter framework, which allows for
cross-platform development and a native-like user interface (UI).
• Lecturers interact with the application through their smartphones, leveraging its built-
in camera functionality to scan and capture the marked scripts.

3.2.2 User Authentication and Data Storage:

• Lecturers are authenticated using Firebase Authentication, ensuring secure access to


the application's features.
• Firebase Cloud Firestore is used as the NoSQL database for storing student marks,
lecturer information, and other relevant data.
• Firestore provides real-time synchronization, allowing multiple lecturers to access and
update the marks simultaneously.

3.2.3 Handwriting Recognition (Google Cloud Vision API):

• The captured script images are sent to the backend server for handwriting recognition
using the Google Cloud Vision API.
• The Google Cloud Vision API employs optical character recognition (OCR)
techniques to extract the marks and student ID information from the scanned scripts.
• The recognized data is returned to the mobile application for further processing and
storage.

3.2.4 Backend Server (Flask):

• A Flask server is deployed on Google Cloud App Engine, serving as the backend for
the handwriting recognition endpoint.
• The server receives the scanned script images from the mobile application and
forwards them to the Google Cloud Vision API for processing.
• Upon receiving the results, the server sends the extracted data back to the mobile
application for storage and display.

3.2.5 Grading and Mark Entry:

• The extracted student marks and ID information are stored in the Firebase Firestore
database, associated with the corresponding lecturer and script.
• Lecturers can review and update the marks as needed, ensuring accuracy before
finalizing the grades.

3.2.6 Grade Sheet Generation:

• Once all the scripts are graded and marks are entered, lecturers can generate a grade
sheet using the application.
• The grade sheet is dynamically generated based on the stored marks in the Firebase
Firestore database.
• Lecturers can export the grade sheet in a suitable format (e.g., CSV or Excel) for
further analysis or distribution.

The system architecture ensures that the mobile application, Firebase services, handwriting
recognition, and the backend server work seamlessly together to provide lecturers with an
efficient and reliable solution for recording and processing student marks.

Data Flow

Student Mark Recording:

1. Lecturer opens the mobile application on their smartphone.


2. The application prompts the lecturer to log in using Firebase Authentication.

3. Once authenticated, the lecturer can select the course or subject they want to record
marks for.
4. The application's camera is activated to capture images of the marked scripts.
5. The captured images are temporarily stored locally on the mobile device.

Image Processing and Handwriting Recognition:

1. The captured images are sent from the mobile application to the backend Flask server
hosted on Google Cloud App Engine.
2. The Flask server acts as an intermediary and forwards the images to the Google Cloud
Vision API for handwriting recognition.
3. The Google Cloud Vision API performs OCR and handwriting recognition to extract
the student marks and ID information from the scanned scripts.
4. The recognized data is returned from the Google Cloud Vision API to the Flask
server.

Data Storage:

1. The Flask server stores the extracted student marks and ID information temporarily
while waiting for further processing.
2. The recognized data is sent from the Flask server to Firebase Cloud Firestore.
3. Firebase Cloud Firestore stores the student marks and coded numbers associated with the corresponding course and lecturer.
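The write from the Flask server to Firestore can be sketched as follows. This is a minimal sketch: the lecturers/courses/marks collection layout and field names are assumptions (the report only says marks are stored against the corresponding course and lecturer), and an in-memory stand-in is used here so the sketch runs without Firebase credentials. With the firebase_admin SDK, `db` would be the object returned by `firestore.client()`.

```python
def store_mark(db, lecturer_id, course_id, coded_number, mark):
    """Write one recognized mark to Cloud Firestore.

    The lecturers/courses/marks hierarchy below is an illustrative
    assumption, not Gradish's documented schema.
    """
    (db.collection("lecturers").document(lecturer_id)
       .collection("courses").document(course_id)
       .collection("marks").document(coded_number)
       .set({"coded_number": coded_number, "mark": mark}))


class FakeFirestore:
    """In-memory stand-in so the sketch runs without credentials."""
    def __init__(self):
        self.saved = None

    def collection(self, name):
        return self

    def document(self, name):
        return self

    def set(self, data):
        self.saved = data


db = FakeFirestore()
store_mark(db, "lecturer-1", "CEN400", "A1234", 14)
print(db.saved)  # {'coded_number': 'A1234', 'mark': 14}
```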

Grading and Mark Entry:

1. Lecturers can review and update the recognized student marks within the mobile
application.
2. Any modifications made by the lecturer are sent back to Firebase Cloud Firestore for
updating the stored data.

Grade Sheet Generation:

1. Once all student marks are recorded and finalized, lecturers can generate a grade sheet
using the mobile application.
2. The application fetches the necessary data from Firebase Cloud Firestore to create the
grade sheet.

3. The grade sheet is dynamically generated within the application based on the retrieved
data.
4. Lecturers can export the grade sheet in a suitable format (e.g., CSV or Excel) for further analysis or distribution.

Figure 8: Gradish Flow chart

In this flowchart representation, the process starts with user authentication. If the authentication
is successful, the user proceeds with scanning the script, preprocessing the image, sending it
for mark recognition, receiving the recognized marks, validating them, and confirming and
saving the marks. If authentication fails, an error message is displayed.

The Process of Scanning and Recording Student Marks


The process of scanning and capturing student marks in the above system involves lecturers
using their smartphones and the mobile application to digitize the marks from the physical
scripts. Here's a step-by-step description of the process:
Opening the Mobile Application: The lecturer opens the mobile application on their
smartphone, which they have previously installed from Google Play Store.

Logging In: The application prompts the lecturer to log in using their credentials, which are
authenticated through Firebase Authentication.
Selecting the Course/Subject: Once logged in, the lecturer selects the specific course or
subject for which they want to record the student marks or create a course for which they want
to record student marks.
Preparing the Scripts: The lecturer arranges the physical scripts, ensuring that they are in
good lighting conditions and free from any significant obstructions.
Initiating the Scanning Process: The lecturer chooses the option to start scanning the scripts
within the selected course/subject.
Using the Camera:
• The application activates the smartphone's camera functionality after the lecturer grants
permission for the app to access the device’s camera.
• The lecturer positions the smartphone camera above the first script they want to capture
and presses the capture button.
Capturing the Script Images: The smartphone captures an image of the script, which is then
displayed on the screen for the lecturer to review its quality.
Reviewing the Captured Image:
• The lecturer reviews the captured image to ensure that the script's content is clear and
readable.
• If the image quality is acceptable, the lecturer proceeds to the next step. Otherwise, they
have the option to retake the image.
Sending the Image for Processing:
• Once the desired scripts are captured, the lecturer proceeds to submit the scanned images for processing and mark extraction.
• The scanned script images are temporarily stored on the mobile device until they are
sent to the backend server for further processing.
Handwriting Recognition and Mark Extraction:
• The mobile application communicates with the backend Flask server, forwarding the
scanned script images.
• The Flask server uses the Google Cloud Vision API to perform OCR and handwriting recognition on the images, then applies a parsing algorithm to extract the student marks and ID information.
Displaying the Extracted Data:
• The recognized student marks and ID information are returned to the mobile application
from the Flask server.
• The application displays the extracted data to the lecturer, allowing them to review the
recognized marks and verify their accuracy.
Recording the Student Marks:

• The lecturer can then record the recognized student marks for each script within the
mobile application.
• The marks are associated with the corresponding coded number and stored temporarily
on the device.
Storing the Marks in the Database:
• The recorded student marks, along with associated coded numbers, are sent to Firebase
Cloud Firestore for permanent storage.
• Firestore stores the data securely in the database, ensuring real-time synchronization
and accessibility across devices.
Capturing Multiple Scripts (Optional): The lecturer can proceed to scan more scripts and
record their related marks.
Grade Sheet Download: Once the lecturer is done scanning scripts and recording their marks, they can download the complete grade sheet as a CSV file.

The process of scanning and capturing student marks using the mobile application simplifies
the grading workflow for lecturers, making it more efficient and eliminating the need for
manual data entry. The digitized marks are stored securely, enabling lecturers to review,
update, and generate grade sheets easily.

Design & Implementation


Software Development Life Cycle
For this mobile-based automated mark recording system, the agile software development
lifecycle was used. Agile methodologies, such as Scrum or Kanban, offer flexibility,
adaptability, and collaboration, which can be beneficial for a project with evolving
requirements and frequent iterations.

Class diagram

Figure 9: Gradish class diagram

The design and implementation of this application follows the “Clean Architecture” where the
codebase was organised into distinct layers, each with a specific responsibility. This ensures
that all dependencies flow inward. The Clean Architecture promotes a separation of concerns,
making the system more maintainable, testable, and independent of external frameworks.
Below is how the application was structured:

Figure 10: Gradish app directory tree

• Screens: The user interface of the app. These include:


o Login/Register screens: These screens handle user authentication
o Login source code:
https://github.com/Idadelveloper/gradish/blob/main/lib/screens/auth_screens/login_screen.
dart

o Register source code:
https://github.com/Idadelveloper/gradish/blob/main/lib/screens/auth_screens/register_scree
n.dart

Figure 11: Gradish LoginScreen class

Figure 12: Gradish RegisterScreen class
o Home Screen: The homepage where the user gets access to the app’s features
o Source code:
https://github.com/Idadelveloper/gradish/blob/main/lib/screens/home_screen.d
art

Figure 13: Gradish HomeScreen class
o Create Course Screen: Takes care of keeping track of the course to be scanned.
o Source code:
https://github.com/Idadelveloper/gradish/blob/main/lib/screens/create_course_screen/create_
course_screen.dart

Figure 14: Gradish Create Course Screen
o Extract Screen: Once a script is scanned and an image is selected, a call is made to the Flask server, which extracts the mark and coded number and sends them back to the app.
o Source code:
https://github.com/Idadelveloper/gradish/blob/main/lib/screens/extract_screen.
dart

Figure 15: Gradish SelectedImage screen

• Core: General classes needed by all parts of the app


(https://github.com/Idadelveloper/gradish/tree/main/lib/core)
• Components: Reusable classes for the UI -
https://github.com/Idadelveloper/gradish/tree/main/lib/components
• Models: There are 3 models -
https://github.com/Idadelveloper/gradish/tree/main/lib/models

• Providers: They are usually triggered by the user and make calls to the repository.
For example, the AuthProvider is responsible for providing user data after login.
There are 3 providers:
o AuthProvider:
https://github.com/Idadelveloper/gradish/blob/main/lib/providers/auth_provider.dart
o APIProvider:
https://github.com/Idadelveloper/gradish/blob/main/lib/providers/api_provider.dart
o FirestoreProvider:
https://github.com/Idadelveloper/gradish/blob/main/lib/providers/firestore_provider.dart
• Repository: Repositories carry incoming requests from the providers to the services. A repository serves as an intermediary between the provider and the service, passing data from the providers to the services as well as from the services back to the providers. For example, the FirestoreRepository collects data from the FirestoreProvider for various actions like creating, updating, and deleting grade sheets, and then passes it to the FirestoreService. Just like the providers, there are 3 repositories:
o APIRepository:
https://github.com/Idadelveloper/gradish/blob/main/lib/repositories/api_repository.dart
o AuthRepository:
https://github.com/Idadelveloper/gradish/blob/main/lib/repositories/auth_repository.dart
o FirestoreRepository:
https://github.com/Idadelveloper/gradish/blob/main/lib/repositories/firestore_repository.
dart
• Services: They handle external outgoing and incoming requests. For example, the
API Service handles the post request to the Flask backend where handwriting
recognition takes place together with the extraction of the marks and coded numbers.
There are 3 services namely:
o APIService:
https://github.com/Idadelveloper/gradish/blob/main/lib/services/api_service/api_service.dart
o FirebaseAuthService:
https://github.com/Idadelveloper/gradish/blob/main/lib/services/auth_service/firebase_auth_s
ervice.dart

o FirestoreService:
https://github.com/Idadelveloper/gradish/blob/main/lib/services/firestore_service/firestore_se
rvice.dart
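The provider → repository → service layering described above can be illustrated with a minimal sketch. It is shown here in Python with a stubbed service; the class and method names are illustrative stand-ins, not Gradish's actual Dart identifiers.

```python
class APIService:
    """Service layer: owns the external request (stubbed here instead of
    the real POST to the Flask backend)."""
    def recognize(self, image_bytes):
        return {"mark": 14, "coded_number": "A1234"}


class APIRepository:
    """Repository layer: the intermediary between provider and service."""
    def __init__(self, service):
        self._service = service

    def extract(self, image_bytes):
        return self._service.recognize(image_bytes)


class APIProvider:
    """Provider layer: triggered by the UI; talks only to the repository."""
    def __init__(self, repository):
        self._repository = repository

    def on_script_scanned(self, image_bytes):
        return self._repository.extract(image_bytes)


provider = APIProvider(APIRepository(APIService()))
print(provider.on_script_scanned(b"img"))  # {'mark': 14, 'coded_number': 'A1234'}
```

Because each layer depends only on the one beneath it, the service can be swapped for a fake in tests without touching the provider, which is the maintainability benefit the Clean Architecture section claims.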

Backend Server Development:


Below are the steps taken to implement the backend server:
a. Set up a development environment for Flask by installing Flask and required Python
libraries.
b. Design and implement API endpoints using Flask to handle requests from the mobile
application.
c. Integrate the Google Cloud Vision API for handwriting recognition capabilities, leveraging
the Vision API's OCR functionality to extract marks from scanned script images.
d. Develop the logic and algorithms to process the scanned script images, apply image
processing techniques if needed, and extract the handwritten marks accurately.

The server code has 2 main parts:


• main.py: The entry point of the server; it contains all the endpoints needed for requests.
It houses the upload endpoint, which works in the following steps:
o Receives an image byte string and decodes it back into the original image.
o Calls the “detect_handwriting” method of the Image class, which returns a list
of words.
o Iterates through the list and uses regular expressions to detect the mark and coded
number, which are then returned and sent back to the app.
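The three steps above can be sketched in plain Python with a stubbed detector. The regular expressions and the base64 transport encoding are illustrative assumptions; the report does not give the exact patterns or encoding Gradish uses.

```python
import base64
import re


def extract_mark_and_code(words):
    """Scan the detected words for a mark (e.g. '14/20') and a coded
    number (e.g. 'UB1234').  Both patterns are assumptions."""
    mark = coded_number = None
    for word in words:
        if mark is None and re.fullmatch(r"\d{1,2}(/20)?", word):
            mark = word
        elif coded_number is None and re.fullmatch(r"[A-Z]{2}\d{4}", word):
            coded_number = word
    return mark, coded_number


def handle_upload(payload, detect_handwriting):
    """The three steps of the upload endpoint, with the Vision call
    passed in as a function so the sketch runs offline."""
    image_bytes = base64.b64decode(payload)   # step 1: decode the byte string
    words = detect_handwriting(image_bytes)   # step 2: handwriting detection
    return extract_mark_and_code(words)       # step 3: regex parse


# Demonstration with a stub in place of the Vision API call.
fake_detect = lambda _: ["Script", "UB1234", "14/20"]
print(handle_upload(base64.b64encode(b"img"), fake_detect))  # ('14/20', 'UB1234')
```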

Figure: “upload” endpoint of Gradish
• image.py: Made up of the Image class, which contains the “detect_handwriting”
method. This method receives an image and makes a request to the Google Cloud
Vision API for handwriting detection.
Source code: https://github.com/Idadelveloper/gradish-orc-backend/tree/main
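A hedged sketch of such a method, assuming the google-cloud-vision client library is installed and credentials are configured. The word-flattening loop follows the API's page/block/paragraph/word response hierarchy; the exact signature and error handling in Gradish's Image class are assumptions.

```python
def detect_handwriting(image_bytes):
    """Send an image to the Google Cloud Vision API and return the
    detected words as a flat list of strings."""
    # Imported lazily: requires google-cloud-vision and valid credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    response = client.document_text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)

    # Flatten the page -> block -> paragraph -> word hierarchy.
    words = []
    for page in response.full_text_annotation.pages:
        for block in page.blocks:
            for paragraph in block.paragraphs:
                for word in paragraph.words:
                    words.append("".join(s.text for s in word.symbols))
    return words
```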

Data Storage and CSV Generation


• Once a lecturer approves and saves an extracted mark, it is stored in the Firebase
backend. This is taken care of by the AuthProvider.
• CSV Generation: The app generates a structured CSV file that includes the extracted
and validated marks, along with relevant information such as the coded number. This
CSV is generated by first iterating through all the recorded marks for a particular course
and writing to a .csv file which can later be downloaded.
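The CSV generation step can be sketched with Python's standard csv module. The column layout and the sample course name are illustrative assumptions; the report only says the file holds the validated marks plus relevant information such as the coded number.

```python
import csv
import io


def generate_grade_sheet(course, records):
    """Build the grade-sheet CSV for one course.

    `records` is the iterable of (coded_number, mark) pairs recorded
    for that course; the header row is an assumed layout.
    """
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["Course", "Coded Number", "Mark"])
    for coded_number, mark in records:
        writer.writerow([course, coded_number, mark])
    return buffer.getvalue()


sheet = generate_grade_sheet("CEN400", [("UB1234", 14), ("UB5678", 17)])
print(sheet)
```

In the app the resulting text would be written to a .csv file on the device for download; here it is returned as a string so the sketch is self-contained.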

Testing and Evaluation:


Below are the steps that were taken to test the application:
a. Conduct comprehensive testing of the mobile application, backend server, and mark
extraction algorithms to ensure their accuracy, reliability, and performance.
b. Perform user testing with lecturers to gather feedback on the usability, functionality, and
user experience of the mobile application.

c. Evaluate the accuracy of the mark extraction process by comparing the recognized marks
with manually recorded marks for a representative sample set of academic scripts.
d. Measure key performance metrics such as accuracy, precision, and recall rates to assess the
reliability and effectiveness of the mark extraction algorithms.
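Step d can be made concrete with a small sketch. The definitions below (a script counts as a true positive when a mark was extracted and agrees with the manual record) are one reasonable operationalisation, not necessarily the one used in the project.

```python
def extraction_metrics(recognized, manual):
    """Compare automatically recognized marks with manually recorded ones.

    Both arguments map coded number -> mark.  TP: extracted and correct;
    FP: extracted but wrong or spurious; FN: script missed entirely.
    """
    tp = sum(1 for k, v in recognized.items() if manual.get(k) == v)
    fp = len(recognized) - tp
    fn = sum(1 for k in manual if k not in recognized)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}


print(extraction_metrics({"UB1": 14, "UB2": 10},
                         {"UB1": 14, "UB2": 12, "UB3": 9}))
```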

Figure 16: Gradish Sequence diagram

In this sequence diagram, the interactions between the primary actors and the system
components are illustrated:
• Lecturer: Represents the user (lecturer) interacting with the system.
• MobileApplication: Represents the mobile application used by the lecturer.
• Backend Server: Represents the server-side component that handles the processing and
coordination of the system.
• MarkRecognitionAPI: Represents the external API or service responsible for
recognizing marks from script images.

• CSVGenerator: Represents the component responsible for generating the CSV file.
The sequence of interactions is as follows:
• The lecturer authenticates their identity with the MobileApplication.
• The lecturer initiates the scanning process by instructing the MobileApplication to scan
the script.
• The MobileApplication preprocesses the captured image.
• The Backend Server sends the preprocessed image to the MarkRecognitionAPI for
mark recognition.
• The MarkRecognitionAPI recognizes the marks from the image.
• The Backend Server receives the recognized marks and passes them to the
CSVGenerator.
• The CSVGenerator generates the CSV file containing the recorded marks.

The materials and methods outlined above provide an overview of the key components,
technologies, and methodologies that will be employed in the development of the mobile-based
automated mark recording system. It encompasses the utilization of Flutter for UI development,
Firebase for authentication and storage, Flask for backend server development, Google Cloud
Vision API for handwriting recognition, and Google Cloud App Engine for server hosting.
Additionally, it describes the image processing and mark extraction techniques, data storage
using CSV files, and the testing and evaluation processes involved in the project.

CHAPTER 4
RESULTS AND DISCUSSIONS

4.1: Introduction
We implemented the system using all the tools and technologies stated in the Methodology section. The app implements the CRUD model: lecturers are able to create an account, create grade sheets, update grade sheets, view created courses, and delete courses.

4.2: Application icon installed


After installing the APK on a mobile device, the user will find the application listed under the name Gradish with its icon. The application name and icon can be changed by the developer in the AndroidManifest.xml file in Android Studio by editing the android:label and android:icon attributes. Once the installation is complete, the application appears on the device's home screen and is ready for use, as shown below.

Figure 17: Gradish application icon on a mobile phone


4.3: The Register page
This is the first page the user sees when the app is launched, provided the user is new to the system or is not signed in. The user is prompted to input their information as shown in Figure 18 below. The user then clicks ‘Register’ and, if the information is entered in a correct format, the app connects to the database and creates a database record for that user. If the user already has an account, clicking ‘Login’ opens the Log In page. The user can also choose to sign in with their Google account.

Figure 18: Register page for Gradish

4.4: The Login page

On this page, the user inputs their login credentials, which include the email address used to register. If the information in any field is incorrect, an error message is shown upon clicking the Log In button. The same happens if any field is left blank when the button is clicked. This provides the extra layer of security needed to prevent people from accessing accounts that are not theirs.

Figure 19: Sign In Page of Gradish

4.5: The Home Page


It is the main page of the app. The home page provides options to upload a class list, record marks, view past records, and access system settings. The lecturer clicks on “Record Marks” to begin recording marks. The home page is displayed in Figure 20 below.

Figure 20: Homepage of Gradish

4.6: Create Course Screen


Once the lecturer hits the “Record Marks” button on the home screen, they are taken to this screen, where they input some information about the course they intend to record marks for.

Figure 21: Create Course Screen
4.7: Extract Screen
Once the course is created and a script is scanned, a popup is displayed containing the extracted coded number and mark.

Figure 22: Scan script for Gradish
4.8: Generate CSV
Once the lecturer is done scanning all scripts, they can hit the “Save and Exit” button, which reveals the “Download Gradesheet as CSV” button. On clicking that button, the user first needs to grant the app permission to access their files, since the CSV file is stored locally on the device, in the “Documents” folder.

Figure 23: Generate csv popup

Figure 24: Gradish-generated CSV file being viewed as an Excel sheet

CHAPTER 5

CONCLUSION AND RECOMMENDATIONS


5.1 Conclusion
In conclusion, the mobile application system designed to assist lecturers in recording and
processing student marks presents a robust and innovative solution for optimizing the grading
workflow in universities. The system leverages modern technologies, including Flutter for
cross-platform development, Firebase for authentication and data storage, Google Cloud
Vision API for OCR and handwriting recognition, and Flask server on Google Cloud App
Engine for backend processing. By adopting a well-structured and systematic approach to its
design and development, the system offers numerous benefits and advantages:

Efficiency and Accuracy: The system significantly improves the efficiency of the grading
process by allowing lecturers to digitize student marks quickly and accurately. With the help
of OCR and handwriting recognition, the application precisely extracts mark and student ID
information from scanned scripts, minimizing errors that could occur in manual data entry.

User-Friendly Interface: The mobile application is designed with a user-friendly interface,


providing an intuitive and straightforward experience for lecturers. The clean and organized
layout allows seamless navigation, ensuring lecturers can easily capture script images, review
and update marks, and generate grade sheets effortlessly.

Real-Time Data Synchronization: By utilizing Firebase Cloud Firestore as the backend


database, the system ensures real-time data synchronization. This feature enables multiple
lecturers to access and update student marks simultaneously, fostering collaboration among
teaching staff and promoting data accuracy.

Scalability and Accessibility: The system's cloud-based architecture, powered by Google


Cloud services, ensures scalability and accessibility. Lecturers can use the application on
various devices, and the system can handle increasing numbers of users without compromising
performance.

Security and Privacy: The system addresses security and privacy concerns by incorporating
industry-standard practices. Firebase Authentication safeguards lecturer data, and data
transmission is encrypted to protect sensitive information. Ethical considerations are also addressed to ensure compliance with relevant data protection regulations.

Clean Architecture and Maintainability: By following the Clean Architecture principles, the
system's codebase is structured in a modular and maintainable way. Separation of concerns and
loose coupling make it easier to modify, extend, and test individual components without
affecting the overall functionality.

Potential for Further Expansion: The system has the potential for further expansion and
enhancement. Future iterations could include additional features such as automated grading
based on predefined rubrics, data analytics for performance insights, and integration with
learning management systems for streamlined administrative processes.

5.2 Future Recommendations & Suggestions

To make the system better and enhance its overall performance and user experience, the
following recommendations are suggested:
Improve OCR and Handwriting Recognition Accuracy:
• Continuously monitor the performance of the OCR and handwriting recognition
algorithms.
• Explore the use of advanced machine learning models or custom-trained models to
improve the accuracy of recognition results.
• Implement post-processing techniques to handle common errors and improve
recognition outcomes.
Implement Offline Mode:
• Introduce an offline mode for the mobile application to enable lecturers to record marks
and review data even in areas with limited or no internet connectivity.
• Ensure that the system synchronizes data with the cloud database once a stable internet
connection is available.
Integrate Feedback Mechanism:
• Add a feedback mechanism within the application to allow lecturers to report any issues
or provide suggestions for improvement.

• Use user feedback to identify pain points and areas that require enhancements in
subsequent updates.

Add Student Data Validation:


• Implement data validation mechanisms to ensure that student ID information is accurate
and consistent.
• Apply checks to prevent duplicate student records and erroneous data entry.

Include Automated Data Backup:


• Set up an automated data backup system to protect against data loss or accidental
deletion.
• Regularly back up the data to a secure location to ensure data integrity.

Enhance Security Measures:


• Strengthen security measures by implementing two-factor authentication for lecturer
accounts.
• Regularly conduct security audits to identify and address potential vulnerabilities.

Integrate Role-Based Access Control:


• Implement role-based access control to differentiate between administrative staff and
teaching staff.
• Restrict access to sensitive functionalities to authorized users only.

Provide Analytics and Insights:


• Introduce data analytics features to provide lecturers with insights into student
performance and trends over time.
• Generate visual reports and charts to help lecturers make informed decisions.

Conduct Usability Testing:


• Conduct periodic usability testing with lecturers to gather feedback on the application's
user interface and user experience.
• Use the insights from testing to refine the UI design and optimize user interactions.

Enable Grade Sheet Customization:


• Allow lecturers to customize the format and layout of the generated grade sheets to
align with their specific requirements.
• Provide options to export grade sheets in various formats, such as PDF, Excel, or CSV.

Perform Regular Maintenance and Updates:


• Schedule regular maintenance checks to ensure the system remains up to date with the
latest software versions and security patches.
• Provide timely updates to address bug fixes, add new features, and improve overall
system performance.

By implementing these recommendations, the system can further enhance its functionality,
reliability, and usability, resulting in an even more efficient and effective tool for lecturers to
record and manage student marks seamlessly. Continuous feedback, testing, and improvements
will contribute to the system's success in meeting the evolving needs of lecturers and supporting
the academic community at large.
In conclusion, the mobile application system for recording and processing student marks
showcases a well-designed and technologically sophisticated solution to support lecturers in
their grading tasks. By combining user-friendly interfaces, advanced OCR and handwriting
recognition, and cloud-based storage, the system presents a comprehensive tool that
streamlines the grading process, enhances efficiency, and improves the overall teaching and
learning experience within the university setting. As the system continues to evolve and adapts
to the changing needs of lecturers and educational institutions, it holds the potential to
revolutionize the way grading is conducted, benefiting both educators and students alike.

References

Akinduyite, O., Adetunmbi, A., Olabode, O., & Ibidunmoye, O. (2013). Fingerprint-based attendance management system. 1, 100-105. https://doi.org/10.12691/jcsa-1-5-4
Al-Naima, F., & Ameen, H. (2016). Design of an RFID-based students/employee attendance system. Majlesi Journal of Electrical Engineering, 10, 23-33.
Boyer, K. L., Michael, T. E., & Tavin, E. (2012). The effects of instructional technology on student outcomes: A meta-analysis. Journal of Technology and Teacher Education, 20(1), 115-150.
Cavusoglu, H., Raghunathan, S., & Rao, H. R. (2009). Interorganizational compliance systems: Governance, risk factors, and control. MIS Quarterly, 33(2), 303-330.
Chen, Y., Zhang, Y., & Wang, J. (2020). Automated scoring for handwritten student scripts. IEEE Access, 8, 4566-4572.
Edusys. (2020). RFID attendance system (2020): Price, disadvantages, advantages. https://www.edusys.co/blog/rfid-attendance-system
Gonzalez, J., Alvarez, A., & Vidal, E. (2019). Image processing techniques for automatic grading: A survey. Expert Systems with Applications, 132, 1-16.
Johnson, C. N., & Jacome, E. I. (2017). The impact of mobile applications on student engagement and academic performance. Journal of Applied Research in Higher Education, 9(2), 336-349.
Kadry, S., & Smaili, M. (2010). Wireless attendance management system based on iris recognition. Scientific Research and Essays, 5, 1428-1435.
Liang, Y., & Su, B. (2016). A comprehensive survey on handcrafted and learning-based OCR approaches. Pattern Recognition, 53, 443-457.
Nandhini, R., Duraimurgan, N., & Chokkalingam, S. P. (2019). Face recognition based attendance system. International Journal of Engineering and Advanced Technology (IJEAT), 8(3S). ISSN: 2249-8958.
Purdy, J. (2020). Pros & cons of using bar code and QR codes for recording attendance. Jackrabbit Class.
Rodriguez, A., Tawfik, A., & Rahbar, K. (2018). Mobile application for grade submission and record keeping. Journal of Information Systems Education, 29(4), 259-270.
Saparkhojayev, N., & Guvercin, S. (2012). Attendance control system based on RFID technology. International Journal of Computer Science Issues, 9.
Smith, J., Johnson, A., & Thompson, S. (2018). An automated grading system for computer science coursework. Journal of Computing Sciences in Colleges, 33(2), 139-148.
Stansfield, J. W. (2015). Automating assessment: Enabling equitable, timely assessment feedback. Technology, Pedagogy and Education, 24(3), 305-320.
Subramaniam, H., Hassan, M., & Widyarto, S. (2013). Bar code scanner based student attendance system (SAS). TICOM (Technology of Information and Communication), 1(3).
Wikipedia. (2020). Java (programming language). https://en.wikipedia.org/wiki/Java_(programming_language)