Handwritten Character Recognition Using Multiscale Neural Network Training Technique


World Academy of Science, Engineering and Technology

International Journal of Computer and Information Engineering


Vol:2, No:3, 2008

Velappa Ganapathy, and Kok Leong Liew

Abstract—Advancement in Artificial Intelligence has led to the development of various "smart" devices. A character recognition device is one such smart device, acquiring partial human intelligence with the ability to capture and recognize characters in different languages. Firstly, multiscale neural network training with modifications in the input training vectors is adopted in this paper, to exploit its advantage in training on higher resolution character images. Secondly, selective thresholding using a minimum distance technique is proposed to increase the accuracy of character recognition. A simulator program (a GUI) is designed in such a way that the characters can be located on any spot on the blank paper on which they are written. The results show that such methods, with a moderate number of training epochs, can produce accuracies of at least 85% for handwritten upper case English characters and numerals.

Keywords—Character recognition, multiscale, backpropagation, neural network, minimum distance technique.

I. INTRODUCTION

Character recognition is a form of pattern recognition. In reality, it is very difficult to achieve 100% accuracy; even humans make mistakes in pattern recognition. Pattern distortion, the presence of unwanted objects, and disoriented patterns all affect the percentage accuracy. The most basic way of recognizing patterns is through probabilistic methods, as has been done using Bayesian decision theory by Wu, P.H. [1] and by Liou, C.Y. & Yang, H.C. [2].

Another method for pattern recognition is the k-nearest neighbor algorithm, used in Dynamic Classifier Selection by Didaci, L. & Giacinto, G. [3]. In the k-nearest neighbor (k-NN) algorithm, the class of a pattern (say x) is obtained by examining the k nearest pattern sets, those with the least Euclidean distance from pattern x itself.

A common approach to character recognition is the use of artificial neural networks with feature extraction, as in Brown, E.W. [4]. A neural network such as a feedforward backpropagation network requires a long training time just to "memorize" all possible input vectors that are fed into it, and there is always a possibility that the network will give false results due to poor generalization capability. Generalization problems can be overcome by using the Multiscale Training Technique (MST) [5], which is the concept emphasized in this paper, here applied with modifications in the input training vectors.

The recognition accuracy of handwritten characters depends greatly on the exemplars used for recognition. In general, the overall recognition process can be divided into three main stages, namely segmentation, preprocessing, and classification [6]. Segmentation isolates the characters individually before they are fed to the preprocessing unit, where the important features of the characters are identified (feature extraction). Finally, classification determines the category or group of each character used during the recognition process.

II. OVERVIEW OF THE PROPOSED CHARACTER RECOGNITION SIMULATOR PROGRAM AND METHODOLOGY

The characters were prepared on a piece of blank paper. A black Artline 70 marker pen was used to write all of the characters. Ten samples of each character (each in a different style) were used, hence a total of 360 characters were used for recognition (26 upper case letters + 10 numerical digits). These characters were captured using a digital camera, a Canon IXUS 40. The captured images were then used to generate input vectors for training the backpropagation neural network. In this case, the multiscale technique with selective thresholding is proposed, and the results of this method will be compared with those of other methods. For simulation, separate character sets were also prepared using the same method as mentioned previously. These characters were captured automatically using the GUI created, and the captured images were stored in a temporary folder for quick reference. Input vectors are generated and fed into the trained neural network for simulation.

This simulator program is designed in such a way that the characters can be written on any spot on the blank paper. The program has the ability to capture the characters through careful filtering; 8-connectivity is used so that pixels connected in any direction are taken to belong to the same object.

Manuscript received 31st March 2008. This work was supported by Monash University, Sunway Campus, Malaysia.
Velappa Ganapathy is with Monash University, Sunway Campus, Malaysia (phone: +603-55146250; fax: +603-55146207; e-mail: Velappa.ganapathy@eng.monash.edu.my).
Kok Leong Liew is with Monash University, Sunway Campus, Malaysia ([email protected]).
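As a concrete illustration of the 8-connectivity rule described above, the sketch below labels the connected components of a binary image by flood fill over all eight neighbours. The paper itself uses MATLAB's image processing toolbox; this standalone Python version is only a minimal, hypothetical equivalent.

```python
from collections import deque

# 8-connected neighbour offsets: horizontal, vertical, and diagonal.
NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              ( 0, -1),          ( 0, 1),
              ( 1, -1), ( 1, 0), ( 1, 1)]

def label_components(img):
    """Label 8-connected foreground (1) pixels; 0 is background.

    Returns a label map the same size as img, with object labels 1, 2, ...
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] == 1 and labels[r][c] == 0:
                current += 1                      # start a new object
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in NEIGHBOURS:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels
```

Under 8-connectivity, two strokes that touch only at a corner are merged into one object, which is the behaviour wanted when capturing whole characters.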

International Scholarly and Scientific Research & Innovation 2(3) 2008 638

III. EXEMPLARS PREPARATION

The characters prepared as explained in Section II are scanned using a scanner or captured using a digital camera, and the characters are segregated according to their character group. One example is shown in Fig. 1.

Fig. 1 Sample of character A

Note that the scanned or captured images are in RGB scale. These images have to be converted into grayscale format before further processing can be done. Using appropriate grayscale thresholding, binary images are then created.

Fig. 2 Binary image of sample character A

Fig. 2 shows the binary image generated using the image processing toolbox in MATLAB and 8-connectivity analysis. The next step is to obtain the bounding box of the characters (Fig. 3). The bounding box is the minimum rectangular box that encapsulates the whole character. The size of this bounding box is important, as only a certain width-to-height (WH) ratio of the bounding box will be considered in capturing. Objects whose bounding boxes are not within a specific range will not be captured and hence will be treated as unwanted objects. The next criteria used for selection of objects are the relative-height (RH) ratio and the relative-width (RW) ratio. The RH ratio is defined as the ratio of the bounding box height of one object to the maximum bounding box height among all the objects of that image. Likewise, RW refers to the ratio of the width of one bounding box to the maximum width among all bounding box widths in that image.

Fig. 3 The outer rectangle that encapsulates the letter A is the bounding box

WH = w / h (1)

The width-to-height ratio is defined as the ratio of the bounding box width, w, to the bounding box height, h, of that object.

RW = w / wmax (2)

The relative-width ratio is defined as the object's bounding box width, w, over the maximum bounding box width among all objects, wmax.

RH = h / hmax (3)

The relative-height ratio is defined as the object's bounding box height, h, over the maximum bounding box height among all objects, hmax.

For example, if the RW of one object exceeds RWmin (where RWmin is the threshold value for comparison), that object will be captured. A similar rule applies to RH and the threshold value RHmin, which applies to the bounding box height of objects. Typical values chosen for RWmin and RHmin are 0.1 and 0.3 respectively.

The captured objects should now all consist of valid characters (and not unwanted objects) to be used for neural network training. Each of the captured character images has different pixel dimensions, because each has a different bounding box size. Hence, each of these images needs to be resized to standard image dimensions. However, in order to perform the multiscale training technique [5], different image resolutions are required. For this purpose, the images are resized to dimensions of 20 by 28 pixels, 10 by 14 pixels, and 5 by 7 pixels. Note that these objects are resized using an averaging procedure: 4 pixels are "averaged" and mapped into one pixel (Fig. 4).

Fig. 4 The pixels shaded in gray on the left are "averaged" (computing the mean of 101, 128, 88, and 50) and the pixel shaded in gray on the right is the "averaged" pixel intensity value. This example shows the averaging procedure from a 20 by 28 pixel image to a 10 by 14 pixel image

IV. ARTIFICIAL NEURAL NETWORK TRAINING

A backpropagation neural network is used to perform the character recognition. The network consists of 3 layers: an input layer, a hidden layer, and an output layer. The number of input neurons depends on the image resolution; for example, if the images used for training have a resolution of 5 by 7 pixels, then there should be 35 input neurons, and so on. On the other hand, the number of output neurons is fixed at 36 (26 upper case letters + 10 numerical digits). The first output neuron corresponds to letter A, the second to letter B, and so on; the sequence is A, B, C, …, X, Y, Z, 0, 1, 2, …, 7, 8, 9. The number of neurons in the hidden layer (layer 2) is taken arbitrarily, by trial and error, to be 1500 [7].
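The 2-by-2 averaging used to build the multiresolution exemplars (Fig. 4) can be sketched as follows. This is a hypothetical NumPy rendering of the procedure, not the authors' MATLAB code:

```python
import numpy as np

def downsample_by_averaging(img):
    """Halve each image dimension by averaging non-overlapping 2x2 blocks.

    img: 2-D array of pixel intensities with even height and width,
    e.g. 28x20 -> 14x10 -> 7x5.
    """
    h, w = img.shape
    # Group the pixels into 2x2 blocks, then take the mean of each block.
    blocks = img.reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# The worked example from Fig. 4: the mean of 101, 128, 88, and 50.
block = np.array([[101.0, 128.0],
                  [88.0,  50.0]])
print(downsample_by_averaging(block))  # [[91.75]]
```

Applying the function twice takes a 20 by 28 exemplar down to 10 by 14 and then 5 by 7, matching the three training resolutions.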

V. MULTISCALE TRAINING TECHNIQUE

Fig. 5 Conceptual diagram of multiscale training technique

The training begins with the 5 by 7 pixel exemplars (stage 1). These input vectors are fed into the neural network for training. After being trained for a few epochs, the neural network is "boosted" by manipulating the weights between the first and the second layer. The resulting network is trained for another few epochs on the 10 by 14 pixel exemplars (stage 2). Again, the trained network is boosted for the next training session. In a similar fashion, the boosted network is fed the 20 by 28 pixel exemplars for another few epochs until satisfactory convergence is achieved (stage 3). The conceptual diagram of the multiscale training technique is shown in Fig. 5.

Fig. 6 Original neural network (top) and the "boosted" neural network (bottom)

Referring to Fig. 6, P1, P2, P3, and P4 are pixel intensity values and Pave is the averaged pixel intensity value of these pixels. After the boosting process, the original weight value W is split into 4 weights, each connected to one pixel location.

VI. IMAGE CAPTURING FOR SIMULATION

Exemplars for simulation also have to go through geometry-based filtering that requires parameters such as WH, RW, RWmin, RH, and RHmin, as shown in Fig. 7. Similarly, the captured and cropped images are resized to the standard resolutions: 20 by 28 pixels, 10 by 14 pixels, and 5 by 7 pixels. For network simulation, it is important to ensure that the output printed characters are arranged in the appropriate order; hence, identified characters need to be reassembled after neural network simulation.

Fig. 7 The parameters used for character reassembly. The bounding box is defined by the width, w, the height, h, the horizontal centroid distance, xc, the vertical centroid distance, yc, and the vector P that locates the centroid of a character from the origin (X0, Y0)

Fig. 8 "HELLO THERE" example

In order to determine which character is to be printed first, the vectors that locate the centroids of each character are obtained. The magnitude of each vector is computed simply as P = sqrt(xc^2 + yc^2). Next, the character with the smallest P is considered; this is the first character of the first row. Based on this first character, the range of search, R, is determined, where R = UL − LL. The upper and lower limits, UL and LL, are computed as follows:

UL = yc_first + 0.7 Hmax (4)
LL = yc_first − 0.7 Hmax (5)

Note that yc_first is the vertical centroid distance of the first character from Y0, and Hmax refers to the largest bounding box height among all characters in the sample image. The constant 0.7 is chosen arbitrarily. Within this search range R, the centroid positions of the characters that lie within UL and LL are stored temporarily, as these characters are actually located in the first row. Referring to Fig. 8, "HELLO" is located in the first row but "THERE" is not, because the centroid positions of the letters T, H, E, R, and E are not within UL and LL, and hence they are neglected. Next, the characters within the first row will be arranged by

reconsidering the vectors P of all these characters. The character with the next smallest P will be the second character (first row, second column), and so on until all of the characters in the first row have been considered. Once this is done, the vector magnitudes P of the remaining characters (T, H, E, R, and E) are computed to determine the first character of the second row. Again, the one with the smallest P will be the first character of the second row. A similar procedure is repeated to determine the remaining characters in the second row.

VII. NEURAL NETWORK SIMULATION USING SELECTIVE THRESHOLDING MINIMUM DISTANCE TECHNIQUE (MDT)

Prior to network simulation, the captured character images need to be converted into input vectors, and this step is as described in Section IV, Artificial Neural Network Training. The input vectors are cascaded and fed into the neural network. Unlike in neural network training, the target output matrix is not required; instead, the network produces an output matrix similar to the one in Fig. 9.

[ 0.2   0.8   0.7   0.2   0.02 ]
[ 0.9   0.2   0.11  0.01  0.1  ]
[ 0.15  0.3   0.12  0.88  0.6  ]

Fig. 9 An example of output matrix after neural network simulation

Again, the assumption here is 3 possible outputs with 5 character samples. If the first, second, and third rows correspond to characters A, B, and C respectively, then the first character (first column) gives an output of character B, because the highest output value, 0.9, is located in the second row. The same applies to the rest; the output characters for this example would be BAACC.

However, there is a possibility that the neural network might produce wrong output characters due to character clashing; to reduce this possibility, the selective thresholding minimum distance technique is used. Selective thresholding uses a heuristic to determine whether the minimum distance technique is applicable in a given situation. For instance, if the generated output vector for one particular sample is [0.33 0.81 0.72]T, the output character would be letter B (using the same assumption as before for 3 possible outputs). However, it can be observed that letter C is quite likely to be the correct character output as well, because of the close output values in the second and third rows (0.81 and 0.72). To resolve this problem, the differential ratio, dfr, is computed and compared with a certain threshold value to determine whether the minimum distance technique should be applied to resolve such clashing.

dfr1,2 = (O1 − O2) / O1 (6)

The differential ratio is the difference between the highest output value O1 and the second highest output value O2 of that sample, taken as a ratio to O1.

The calculated dfr1,2 is compared against the threshold th1,2, where the subscripts 1 and 2 refer to the output values used (1 means the highest output, 2 the second highest). Note that th1,2 is not fixed (each pair of characters has a different th1,2) and is determined by a certain algorithm (selective thresholding). If th1,2 ≥ dfr1,2, then minimum distance is applicable. The minimum distance, MD, is simply the sum of the squared differences of the corresponding pixel intensity values between a pair of images (the template image and the input sample image):

MD = Σ(i=1..m) Σ(j=1..n) (aij − bij)^2 (7)

Given a template image and an input image sample with m rows and n columns, the MD is computed as in (7), where aij and bij are the pixel intensity values, at the i-th row and j-th column, of the template image and the input image respectively.

Referring back to the output vector example [0.33 0.81 0.72]T, the character output can be either B or C. Thus, 2 sets of MD are required: the first, MD1, is between the template image of character B and the input image sample, whilst the second, MD2, is between the template image of character C and the input image sample. If MD1 > MD2, then character C is the most probable output, owing to the smaller difference in pixel intensity values between the input sample and template character C.

VIII. RESULTS AND DISCUSSION

Comparisons were made between the neural networks trained using the brute force method and those trained using the multiscale training method. The results are shown below. In addition, comparisons were also made between ordinary network simulation and simulation using the selective thresholding Minimum Distance Technique.

Results

In Fig. 10, curves 1, 2 and 3 indicate MST stage 1, stage 2 and stage 3 respectively, and curve 4 indicates the neural network trained using brute force. Note that MST x-y-z means that the neural network is trained using the multiscale training technique [5] for x time units in stage 1, y time units in stage 2, and z time units in stage 3.
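The decision rule of Section VII can be sketched as follows. The threshold value, template images, and character labels here are illustrative assumptions, not values from the paper, which determines a separate threshold per character pair:

```python
import numpy as np

def classify_with_mdt(outputs, labels, templates, sample, th=0.2):
    """Pick a character from network outputs, using selective-thresholding
    minimum distance to resolve clashes between the top two candidates.

    outputs:   1-D array of output-neuron activations, one per class.
    labels:    class labels in the same order as outputs.
    templates: dict mapping a label to its template image (2-D array).
    sample:    the input character image (2-D array, same size).
    th:        clash threshold for this candidate pair (illustrative).
    """
    order = np.argsort(outputs)[::-1]        # indices sorted high -> low
    o1, o2 = outputs[order[0]], outputs[order[1]]
    dfr = (o1 - o2) / o1                     # equation (6)
    if dfr > th:
        return labels[order[0]]              # clear winner, no MDT needed
    # Clash (th >= dfr): compare pixel-wise minimum distance, equation (7).
    md = {}
    for idx in (order[0], order[1]):
        lab = labels[idx]
        md[lab] = np.sum((templates[lab] - sample) ** 2)
    return min(md, key=md.get)               # the smaller MD wins

# The paper's example: outputs [0.33, 0.81, 0.72] for classes A, B, C.
# dfr = (0.81 - 0.72) / 0.81 ≈ 0.111 <= 0.2, so B and C clash and MD decides.
outputs = np.array([0.33, 0.81, 0.72])
templates = {"A": np.zeros((2, 2)), "B": np.ones((2, 2)),
             "C": np.full((2, 2), 2.0)}      # toy 2x2 templates
sample = np.full((2, 2), 1.9)                # much closer to template C
print(classify_with_mdt(outputs, ["A", "B", "C"], templates, sample))  # C
```

With a clearly separated output vector such as [0.1, 0.9, 0.2], dfr ≈ 0.78 exceeds the threshold and the highest output is returned directly, without any template comparison.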


Training Results on Time Axis

Fig. 10 Graphs of Mean Square Error (MSE) versus time units. Comparison made between brute force and MST 25-25-150 (left), MST 50-50-100 (middle), and MST 75-75-50 (right)

Neural Network Parameters used in the Experiment

Number of hidden neurons = 1500
Number of epochs = 200
Training algorithm = 'trainscg'
Transfer function used in hidden layer = 'tansig'
Transfer function used in output layer = 'logsig'

Simulation Results

Fig. 11 Simulation results showing the percentage of correctly identified characters. For the brute force method (not shown in the table), percentage accuracy was only measured at the end of stage 3; the accuracy measured was 73.61%

Fig. 12 Comparison between ordinary network simulation (top) and the one with selective thresholding minimum distance technique (bottom)

Discussion

It is shown that MST training allows faster convergence. MST enables large resolution images (20 by 28 pixels) to be used for training in much less time than brute force, which uses large resolution images throughout the entire training process. MST makes use of smaller resolution images for speed and larger resolution images for accuracy; the multiple stages of training allow the network to make better use of the training time than the ordinary brute force method.

In terms of percentage accuracy, MST networks generally produce a greater number of correctly identified characters, as a percentage of the total number of characters in the simulation samples, than the brute force network (which achieved only 73.61% in the example considered). This indicates that MST networks have greater generalization ability. Fig. 10 gives the comparison between brute force and MST 25-25-150, MST 50-50-100, and MST 75-75-50.

However, different MST configurations produce different degrees of accuracy, as shown in Fig. 11. Results also show that networks using the selective thresholding minimum distance technique generally produce higher percentage accuracy than networks that do not; this is shown in Fig. 12.

IX. CONCLUSION

When the resolution of the character images grows larger, neural network training tends to be slow, owing to the greater processing required for the larger input matrix. If the character images have a lower resolution, the training process is much faster; however, some important details might be lost. Hence, there is a trade-off between image resolution and training speed in recognizing handwritten characters. To optimize between these two parameters, it has been shown that one can adopt the multiscale training technique with modifications in the input vectors, as it provides faster training across the different image resolutions. It has also been shown that selective thresholding MDT can be used to increase the percentage accuracy of the identified characters, at the cost of simulation time. Results also show that networks that use the selective thresholding minimum distance technique generally produce higher percentage accuracies compared to the networks that


do not use it. An efficient algorithm is still to be explored for determining the appropriate threshold level that would allow MDT to be used effectively.
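The weight "boosting" between training stages (Section V) can be sketched as below. Splitting each weight W across its four higher-resolution pixels with a 1/4 scale is an assumption: it keeps the pre-activation unchanged whenever the four pixels average to the original pixel, but the paper does not state the scale factor it uses.

```python
import numpy as np

def boost_weights(w_lo, lo_shape):
    """Expand input-to-hidden weights trained at one resolution so the
    network accepts images of twice the height and width.

    w_lo:     array of shape (n_hidden, lo_h * lo_w), the trained
              first-layer weights at the lower resolution.
    lo_shape: (lo_h, lo_w) of the low-resolution images, e.g. (7, 5).
    Returns weights of shape (n_hidden, 2*lo_h * 2*lo_w).
    """
    lo_h, lo_w = lo_shape
    n_hidden = w_lo.shape[0]
    # View each hidden neuron's weight vector as a low-resolution "image".
    w_img = w_lo.reshape(n_hidden, lo_h, lo_w)
    # Copy every weight onto its 2x2 block of high-resolution pixels,
    # scaled by 1/4 so that W * mean(P1..P4) = sum((W/4) * Pi).
    w_hi = np.kron(w_img, np.ones((2, 2))) / 4.0
    return w_hi.reshape(n_hidden, 4 * lo_h * lo_w)

# Example: weights trained on 5x7 exemplars boosted for 10x14 exemplars.
w_stage1 = np.random.randn(1500, 35)
w_stage2 = boost_weights(w_stage1, (7, 5))
print(w_stage2.shape)  # (1500, 140)
```

Because the 1/4 scaling mirrors the averaging used to build the exemplars, the boosted network initially computes the same hidden-layer pre-activations on a higher-resolution image as the original network did on its averaged version.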

REFERENCES
[1] Wu, P.H. (2003), Handwritten Character Recognition, B.Eng (Hons) Thesis, School of Information Technology and Electrical Engineering, The University of Queensland.
[2] Liou, C.Y. & Yang, H.C. (1996), "Handprinted Character Recognition Based on Spatial Topology Distance Measurement", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 9, pp. 941-945.
[3] Didaci, L. & Giacinto, G. (2004), Dynamic Classifier Selection by Adaptive k-Nearest-Neighbourhood Rule, Available: http://ce.diee.unica.it/en/publications/papers-prag/MCS-Conference-19.pdf (Accessed: 11th October 2007).
[4] Brown, E.W. (1993), Applying Neural Networks to Character Recognition, Available: http://www.ccs.neu.edu/home/feneric/charrecnn.html (Accessed: 11th October 2007).
[5] Robinson, G. (1995), The Multiscale Technique, Available: http://www.netlib.org/utk/lsi/pcwLSI/text/node123.html (Accessed: 11th October 2007).
[6] Handwritten Character Recognition, Available: http://tcts.fpms.ac.be/rdf/hcrinuk.htm (Accessed: 11th October 2007).
[7] Rivals, I. & Personnaz, L. (2000), "A statistical procedure for determining the optimal number of hidden neurons of a neural model", Second International Symposium on Neural Computation (NC'2000), Berlin, May 23-26, 2000.

Velappa Ganapathy was born on 1st May 1941 at Singalandapuram, Salem, India. He obtained his Bachelor of Engineering in Electrical & Electronics Engineering and his Master of Science in Electrical Engineering, both from the University of Madras, India, and his PhD in Electrical Engineering from the Indian Institute of Technology, Madras, India. He has worked in various capacities as Associate Lecturer, Lecturer, Assistant Professor, Associate Professor and Professor at the Government College of Technology, Coimbatore; Anna University, Chennai; Multimedia University, Malaysia; and Monash University, Malaysia. His research interests are Digital Signal Processing, Robotics, Artificial Intelligence and Image Processing. He is currently with Monash University, Malaysia.

Kok Leong Liew is with Monash University, Sunway Campus, Malaysia. He recently completed his Bachelor of Engineering degree in Mechatronics Engineering and joined a private company in Malaysia as a Technical Trainee.
