Proceedings of the Second International Conference on Inventive Research in Computing Applications (ICIRCA-2020)

IEEE Xplore Part Number: CFP20N67-ART; ISBN: 978-1-7281-5374-2

Deep Learning Network Architecture based Kannada Handwritten Character Recognition
N. Shobha Rani 1, Subramani A C 2, Akshay Kumar P 3, Pushpa B R 4
Department of Computer Science, Amrita School of Arts & Sciences, Mysore
Amrita Vishwa Vidyapeetham, India

Abstract: In this work, a novel model for the recognition of handwritten Kannada characters using transfer learning from a Devanagari handwritten recognition system is presented. The objective is to use the knowledge of the large data corpus of the Devanagari recognition system as training data to perform recognition of handwritten Kannada characters, which have a smaller data corpus. The transfer of knowledge for recognition is carried out using the VGG19 NET deep learning network architecture. VGG19 NET is composed of five blocks of hidden layers, two dense fully connected layers and an output layer. Each block (except blocks 1 and 2, which contain two convolution layers each) consists of four convolution layers along with a max pooling layer. In the proposed classification framework, the Devanagari character set consists of 92000 images in 46 classes, and the Kannada character set is built with 31654 images for training and 9401 for testing, covering 188 classes with each class comprising 200-500 sample images. A total of 1,23,654 data samples is employed for training with VGG19 NET. For experimentation, 9401 samples of 188 classes, with about 40-100 samples in each class, are used, for which an accuracy close to 90% is achieved. After evaluation over 10 epochs with VGG19 NET, a validated accuracy of 73.51% with a loss of 16.18% is recorded.

Keywords— Deep learning, neural network, transfer learning, Kannada character recognition, handwritten character recognition, handwritten characters, convolution neural networks.

I. INTRODUCTION

In recent times, the evolution of deep learning systems can be seen in most successful intelligent systems. The idea of transfer learning has grown out of the problem of the large dataset requirements of deep learning systems. Though deep learning systems are very powerful and robust towards achieving higher accuracies, these systems depend on very large datasets with proper labelling of the training data [1]. This requirement of a large data corpus has restrained many research attempts. Later, the notion of transfer learning led to the use of labelled data from successful intelligent systems for training new systems with similar characteristics that have a smaller data corpus [2]. Several attempts using transfer learning by Lee [3], Navarretta [4], Cao [5] and Zoph [6] proved successful in realizing higher accuracies. Therefore, in this paper, an attempt is made towards using the labelled dataset of a Devanagari character recognition system for a Kannada handwritten recognition system.

Kannada handwritten character recognition is one of the challenging and unresolved research problems. In spite of a couple of attempts based on machine learning architectural models (Chacko [7], Coates [8], Deng [9]) and quite a few deep learning architectures (Acharya [10], Chen [11], Wu [12]), the limitations of these systems remain unsolved. The problem of handwritten character recognition is complex irrespective of script type. The various barriers that come in the way of achieving higher recognition rates include an indefinite number of handwriting styles, the unconstrained environment maintained while writing, varied orientation in writing, etc. Along with these, the complexity of the South Indian script Kannada, with its consonant/vowel modifiers, compound characters, etc., further adds to the difficulty for these systems. In addition, the availability of only a small data corpus is one of the drawbacks for these types of systems. Thus, in the proposed work, the uncertainty issue of datasets is handled using a transfer learning model based on a Devanagari character recognition system. In recent years a significant number of researches have been reported in this area; a detailed review of these works is discussed subsequently.


Siddiqua et al. [13] proposed a method for Kannada character recognition in scene images using transfer learning. A number of neural networks are trained and tested using 1700 Kannada scene characters extracted from the Chars74K dataset. Modifications are made to AlexNet by placing batch normalization, which results in an accuracy of 96%. Experimentation is also carried out with VGG19 for better performance. The authors have focused only on recognition of Kannada aksharamala characters. Kunte [14] explored an idea to handle different font sizes and types to recognize Kannada characters in printed documents. Pre-processing methods are applied to remove noise, correct skew and binarize the images. Horizontal and vertical projection profiles and connected components segment the document image into lines, words and characters. Hu's invariant moments and Zernike moments are extracted and further applied to radial basis function neural classifiers. The experiments are performed with 50 document samples for training and 20 document samples for testing, with about 2500 characters in each class. A method for character recognition from historical handwritten Kannada documents using transfer learning is proposed by Chandrakala et al. [15]. Pre-processing approaches are applied to enhance contrast and remove degradations, and the images are semi-automatically segmented. For training, the stochastic gradient descent with momentum (SGDM) algorithm is used. The experiments are carried out to compare the performances of SVM and DCNN. The dataset consists of 1260 character images belonging to 118 different classes. Text-line segmentation for historical document images is proposed by Ravi et al. [16]. Three methods are proposed by applying horizontal projection and connected components. The third method presents good results with connected components and bounding boxes applied to sample images consisting of 217 lines, of which 178 are correctly extracted. Cireşan et al. [17] explored an idea on character recognition tasks of Latin and Chinese characters by transfer learning with Deep Neural Networks (DNN). Minimal retraining is employed to recognize uppercase letters from a DNN trained on digits. The work focused on transferring the trained weights of all layers except the last layer from the already learnt task, so that for any new task only retraining of the classification layer is required. The characters contain similar structures, such as horizontal lines and dots; to prevent over-fitting of these networks and to improve accuracy even for a varied and challenging dataset, a Dropout layer and data augmentation are used. Zhao et al. [18] came up with a method of transfer learning for deep learning problems with small sample datasets, a version of the convolution neural network combined with global average pooling (TLCNN-GAP) based on transfer learning. The knowledge from a CNN model pre-trained on a large dataset is transferred to a small sample dataset with the aid of adjusting the full-pool layer, the convolution layers and the softmax classifier. This TLCNN-GAP method efficaciously reduces the number of parameters passed and optimizes the generalization capacity and stability of the network by reducing over-fitting. Tushar et al. [19] proposed a model to transfer knowledge from one recognition system to another for handwritten numeral recognition in scripts including Hindi, Arabic, and Bangla. First the system is trained with numerals from the three distinct scripts and the best resulting weights are saved. In addition, because of the over-fitting problem, the CNN layers are paused for weight updation. Meanwhile, the classification layers use feature vectors acquired from another language to detect the distinctive digits. In this work a convolutional neural network is used for transfer learning, which results in decreased time and competitive accuracy, and it also indicates that independence from model-specific training notably reduces the re-training time for the target task.

In addition, Aneja et al. [20] carried out comparisons of pre-trained models for handwritten recognition of Devanagari alphabets using transfer learning for Deep Convolution Neural Networks (DCNN). This work implements AlexNet, VGG, Inception ConvNet and DenseNet as fixed feature extractors. The experiments show that InceptionV3 achieves better accuracy than AlexNet because of the regularization imposed by smaller convolution filter sizes and a higher number of layers. The DenseNet model performed poorest because of the structure of its architecture. Tang et al. [21] proposed a CNN-based transfer learning approach applied to ancient Chinese character recognition. First, a CNN model is trained on printed Chinese character samples, and this network structure and its weights are utilized to initialize another CNN model which recognizes the historical characters. The new model is refined with a few historical or handwritten Chinese character samples and used to test the target characters. Transfer learning with CNN fine-tuning considers the use of an additional adaptation layer, the amount of samples selected for transfer learning from the target domain, the selection of samples in the source domain and the schemes for updating the network parameters. This work concluded that it is viable to combine traditional transfer learning methods with CNN-based transfer learning. Oquab et al. [22] applied a simple transfer learning procedure on challenging benchmark datasets of relatively smaller size. The authors additionally demonstrated the high potential of mid-level features extracted from an ImageNet-trained CNN. The performance of this model increased as augmentation was carried out on the source task data; using only 12% of the ImageNet corpus results in higher outcomes on the Pascal VOC 2012 classification and recognition challenges.


Asha et al. [23] proposed a character recognition system for documents written in the Kannada language. Pre-processing is carried out to enhance the documents by noise removal and contrast enhancement techniques. Boundaries are applied to each line of the document, vertical segmentation is performed to segment each word, and further segmentation is carried out for character extraction from the word. For feature extraction a Convolutional Neural Network (CNN) model is used on the Chars74K dataset, achieving 98% accuracy for documents containing non-overlapping lines of characters.

It is noticed that in most of these works the number of classes to which data is labelled is limited, and the focus is dominantly on Roman scripts rather than other South Indian scripts. A common and crucial parameter that can improve the recognition accuracy of an uncertain system like a handwritten character recognition system is the manifestation of datasets. The composition of labelled datasets is one of the vital stages for this type of system, and this step greatly influences its success. Therefore, handling the uncertainty of data is one of the major challenges for handwritten character recognition systems.

The paper is organized as follows: the proposed methodology is explained in Section II, Section III presents the experimental analysis, and Section IV concludes the paper.

II. PROPOSED METHODOLOGY

In the proposed work, recognition of handwritten Kannada characters is carried out using a transfer learning model. Although the dataset used for Kannada handwritten character recognition is viable for a deep learning framework, capturing the wide variety of handwriting styles is not a trivial issue. Therefore, knowledge transfer from the large data corpus of Devanagari character recognition is considered for the training of the proposed framework. Figure 1 shows the architecture of the knowledge transfer model from a large data corpus to a smaller one with respect to handwritten character recognition, from Devanagari to Kannada.

Figure 1: Architecture of transfer learning – Kannada Handwritten Character Recognition

The idea of adapting the datasets of a successful and efficient system through transfer learning to carry out recognition for a system with inadequate datasets is the inclination of the proposed work. The idea of using the knowledge from one deep learning model to build another system through transfer learning is shown in Figure 2, which represents the block diagram of the proposed system. In this work, feature extraction and classification are carried out using VGG NET 19, a 19-layer deep convolutional neural network. VGG NET 19 has proven efficient for classification of cross-domain images of up to 1000 categories. In the proposed work, VGG NET 19 is used for classification of 188 classes of handwritten Kannada characters. Figure 3 represents the architecture of the VGG NET 19 deep network. The details of the 188 classes proposed for classification are shown in Table 1, and the details of the datasets used are shown in Table 2.

Figure 2: Block diagram of proposed system
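As an illustration of the knowledge-transfer flow in Figures 1 and 2, a minimal Keras sketch is given below. The paper does not publish code, so the dense-layer widths and the choice to fine-tune (rather than freeze) the transferred backbone are assumptions; the class counts (46 Devanagari, 188 Kannada) and the 32x32x3 input follow the text.

    # Minimal sketch of the Devanagari-to-Kannada knowledge transfer
    # (illustrative assumptions, not the authors' released code).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_SOURCE_CLASSES = 46    # Devanagari classes (from the paper)
    NUM_TARGET_CLASSES = 188   # Kannada classes (from the paper)
    INPUT_SHAPE = (32, 32, 3)  # input dimensions stated in Section II

    # VGG19-style convolutional backbone, randomly initialized (no ImageNet
    # weights), to be trained first on the large Devanagari corpus.
    backbone = tf.keras.applications.VGG19(include_top=False, weights=None,
                                           input_shape=INPUT_SHAPE)

    def with_head(num_classes):
        # Attach two dense fully connected layers and a softmax output layer,
        # as described for the network in this paper.
        return models.Sequential([
            backbone,
            layers.Flatten(),
            layers.Dense(4096, activation="relu"),
            layers.Dense(4096, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])

    # Step 1: train on the 92000-sample Devanagari set (source task).
    source_model = with_head(NUM_SOURCE_CLASSES)
    # source_model.compile(...) and source_model.fit(...) on Devanagari data here.

    # Step 2: reuse the trained backbone and attach a new 188-way head for Kannada.
    target_model = with_head(NUM_TARGET_CLASSES)
    # Whether the backbone is frozen or fine-tuned is not stated in the paper;
    # fine-tuning it alongside the Kannada training samples is assumed here.

Because both heads share the same backbone object, the convolutional weights learned on the Devanagari corpus are carried over directly into the Kannada classifier before it is trained.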

Table 1: Kannada characters of 188 classes proposed for classification


Class Label   Character Instance   No. of training samples   No. of testing samples   |   Class Label   Character Instance   No. of training samples   No. of testing samples
Img001 C 218 50 Img095 mË 225 50
Img002 L 242 50 Img096 oÀ 225 50
Img003 eÉÃ 42 20 Img097 oÁ 208 50


Img004 eÉÊ 227 50 Img098 p 207 50


Img005 eË 198 50 Img099 oÀÄ 224 50
Img006 M 156 50 Img100 oÀÆ 215 50
Img007 N 225 50 Img101 oÀÈ 212 50
Img008 O 198 59 Img102 oÉ 207 50
Img009 L 188 60 Img103 oÉÊ 209 50
Img010 mÁ 222 50 Img104 oÉÆ 260 50
Img011 CA 154 50 Img105 oË 172 50
Img012 N 167 50 Img106 qÀ 227 50
Img013 lÄ 206 50 Img107 r 162 50
Img014 lÆ 170 50 Img108 qÀÆ 213 50
Img015 mÉ 50 50 Img109 qÀÆ 213 50
Img016 mÉÊ 212 50 Img110 qsÀÈ 181 50
Img017 mË 225 50 Img111 qsÉ 175 50
Img018 D 201 50 Img112 qsÉÊ 157 50
Img019 O 198 50 Img113 qsÉÆ 154 50
Img020 CA 154 50 Img114 qsË 166 50
Img021 CB 12 50 Img115 t 162 50
Img022 PÀ 165 50 Img116 uÁ 184 50
Img023 Q 173 50 Img117 t 154 50
Img024 QÃ 154 50 Img118 tÄ 192 50
Img025 PÀÄ 157 51 Img119 tÆ 178 50
Img026 PÀÆ 109 50 Img120 ªÀÄ 172 50
Img027 PÀÈ 162 55 Img121 ªÀiÁ 187 50
Img028 PÉ 176 50 Img122 ¨ÉÆ 156 50
Img029 PÉà 154 50 Img123 §Æ 189 50
Img030 PÉÊ 101 50 Img124 © 182 50
Img031 PÉÆ 180 50 Img125 ¥Ë 175 50
Img032 PÉÆà 156 50 Img126 ªÉÆ 179 50
Img033 PË 162 50 Img127 ªÉÄ 170 50
Img034 PÀB 203 50 Img128 ªÀÄÆ 158 50
Img035 SÁ 156 50 Img129 «Ä 168 50
Img036 TÃ 158 50 Img130 B 181 50
Img037 RÆ 185 55 Img131 vÀÆ 153 50
Img038 ZÀ 155 50 Img132 A 122 50
Img039 RÈ 132 50 Img133 vÉ 176 50
Img040 RÈ 132 50 Img134 §È 178 50
Img041 SÉ 193 50 Img135 vÉÊ 168 50
Img042 SÉÊ 156 50 Img136 «Ä 168 50
Img043 SÉÆ 133 50 Img137 £ÉÆ 172 50
Img044 SÉÆÃ 17 50 Img138 vË 168 50
Img045 SË 196 50 Img139 ¨ÉÊ 170 50
Img046 RA 150 50 Img140 ¨Ë 182 50
Img047 RB 155 50 Img141 ªÀÄÄ 157 50
Img048 UÀ 102 50 Img142 Ã 173 50
Img049 UÁ 160 50 Img143 tÈ 30 50
Img050 V 188 50 Img144 zÉÆ 152 50
Img051 VÃ 171 50 Img145 zË 153 50
Img052 UÀÄ 168 50 Img146 £Á 185 50
Img053 UÀÆ 174 50 Img147 £Ë 155 50
Img054 UÀÈ 181 50 Img148 £ÉÊ 155 50
Img055 UÉ 157 50 Img149 £ÀÄ 152 50
Img056 UÉÃ 167 50 Img150 ¤ 170 50


Img057 UÉÊ 139 50 Img151 zÉ 175 50


Img058 UÉÆ 198 50 Img152 £ÀÈ 167 50
Img059 UÉÆà 53 50 Img153 £À 161 50
Img060 UË 224 50 Img154 zÀ 186 50
Img061 UÀA 26 50 Img155 zÁ 170 50
Img062 UÀB 20 50 Img156 ¢ 164 50
Img063 WÀ 82 50 Img157 zÉÊ 166 50
Img064 X 168 50 Img158 zÀÄ 154 50
Img065 WÀÄ 153 50 Img159 zÀÆ 56 50
Img066 WÀÈ 157 50 Img160 zÀÈ 169 50
Img067 WÉ 194 50 Img161 £ÀÆ 171 50
Img068 WÉÃ 124 50 Img162 xÀ 173 50
Img069 WÉÆ 140 50 Img163 Y 172 50
Img070 WË 157 50 Img164 xÀÆ 157 50
Img071 Y 188 50 Img165 xË 161 50
Img072 ZÀ 155 50 Img166 xÉÆ 155 50
Img073 ZÁ 108 50 Img167 xÉ 364 50
Img074 A 155 50 Img168 xÀÈ 161 50
Img075 ZÀÄ 158 50 Img169 zsÀ 166 50
Img076 ZÀÆ 156 50 Img170 zsÁ 178 50
Img077 ZÀÈ 105 50 Img171 ¢ü 171 50
Img078 ZÉÊ 126 50 Img172 uË 178 50
Img079 ZÉÆ 83 50 Img173 zsÀÄ 165 50
Img080 oÀ 225 50 Img174 zsÀÆ 152 50
Img081 eÁ 205 50 Img175 zsÀÈ 152 50
Img082 eÁ 205 50 Img176 zsÉ 174 50
Img083 F 214 50 Img177 uÉÆ 155 50
Img084 dÄ 207 50 Img178 zsÉÊ 124 50
Img085 dÈ 190 50 Img179 zsÉÆ 154 50
Img086 eÉÊ 227 50 Img180 zsË 156 50
Img087 eË 202 50 Img181 D 151 50
Img088 P 207 50 Img182 E 219 50
Img089 N 167 50 Img183 F 370 50
Img090 lÄ 206 50 Img184 G 155 50
Img091 lÆ 170 50 Img185 H 154 50
Img092 mÉ 50 50 Img186 IÄ 154 50
Img093 mÉÊ 212 50 Img187 J 171 50
Img094 mÉÆ 212 50 Img188 K 366 50

Table 2: Details of datasets for Kannada handwritten character recognition

Number of classes   Training samples   Testing samples   Training samples per class (approx. range)   Testing samples per class (approx. range)
188                 31654              9401              200-500                                      40-50

Table 1 and Table 2 show the number of data samples. Table 1 shows the number of samples present in each class for training and testing, and Table 2 gives the overall number of training and testing samples across the 188 classes.
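If the samples are stored one class per directory (an assumption; the paper does not describe the storage layout), the dataset summarized in Tables 1 and 2 could be loaded with a standard Keras generator as in the sketch below; the paths and batch size are hypothetical.

    # Sketch of loading the class-per-folder data summarized in Tables 1 and 2.
    # The directory layout ("kannada/train/Img001" ... "kannada/test/Img188"),
    # the batch size and the generator choice are illustrative assumptions.
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    IMG_SIZE = (32, 32)   # input dimensions used in Section II
    BATCH_SIZE = 32       # assumed

    datagen = ImageDataGenerator(rescale=1.0 / 255)

    train_gen = datagen.flow_from_directory("kannada/train",
                                            target_size=IMG_SIZE,
                                            batch_size=BATCH_SIZE,
                                            class_mode="categorical")
    test_gen = datagen.flow_from_directory("kannada/test",
                                           target_size=IMG_SIZE,
                                           batch_size=BATCH_SIZE,
                                           class_mode="categorical")
    # train_gen should report roughly 31654 images over 188 classes and
    # test_gen roughly 9401 images, matching Table 2.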


In the presented methodology, classification of handwritten Kannada characters is performed using VGG NET 19, with the architecture as shown in Figure 3. The overall architecture is comprised of five blocks of hidden layers, two dense fully connected layers and an output layer. Initially, the input layer is fed with raw input images of dimension about 32x32 with 3 channels. Five blocks of hidden layers for feature abstraction follow, with blocks 1 and 2 each consisting of two convolution layers and a max pooling layer, and blocks 3, 4 and 5 each comprised of four convolution layers followed by a max pooling layer. Finally, two dense fully connected layers are present, along with an output layer at the end which labels the abstracted features into 188 classes. Table 2 presents the details of the datasets used for training and testing of Kannada handwritten characters with VGG NET 19.

Further, the Devanagari handwritten character set used for transfer learning includes about 92000 data samples across 46 classes. Each data sample is of dimension 32x32 with 3 channels. Training includes the transfer of knowledge from the samples of the Devanagari handwritten character set along with the approximately 31654 training samples of the Kannada handwritten character set.

Figure 3: Architecture of VGG NET 19
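The block composition described above (and depicted in Figure 3) can be written out explicitly as in the sketch below. This is a sketch only: the filter counts (64/128/256/512/512), 3x3 kernels and ReLU activations follow the standard VGG19 configuration and are not stated in the paper, while the 32x32x3 input and the 188-way output follow Section II.

    # Explicit VGG19-style layout matching the description of Figure 3.
    # Filter counts follow the standard VGG19 configuration (assumed here).
    from tensorflow.keras import layers, models

    def conv_block(model, filters, n_convs):
        # n_convs 3x3 convolutions followed by one 2x2 max pooling layer
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, (3, 3), padding="same",
                                    activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))

    model = models.Sequential()
    model.add(layers.Input(shape=(32, 32, 3)))  # 32x32 RGB input (Section II)
    conv_block(model, 64, 2)    # block 1: two convolution layers + max pooling
    conv_block(model, 128, 2)   # block 2: two convolution layers + max pooling
    conv_block(model, 256, 4)   # block 3: four convolution layers + max pooling
    conv_block(model, 512, 4)   # block 4: four convolution layers + max pooling
    conv_block(model, 512, 4)   # block 5: four convolution layers + max pooling
    model.add(layers.Flatten())
    model.add(layers.Dense(4096, activation="relu"))    # dense fully connected layer 1
    model.add(layers.Dense(4096, activation="relu"))    # dense fully connected layer 2
    model.add(layers.Dense(188, activation="softmax"))  # output layer: 188 Kannada classes
    model.summary()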


III. EXPERIMENTAL ANALYSIS

In the proposed method, experimentation is carried out on synthetically generated datasets belonging to 188 classes of the Kannada alphabet set, consisting of 31654 training and 9401 test instances. The statistics of the datasets with respect to each class are shown in Table 1. As an augmentation to the training data, the Devanagari character set of about 92000 instances belonging to 46 classes is employed. VGG19 NET is used for performing classification, carried out over 70 epochs for classification of the 9401 instances of the Kannada character set. In each epoch, 1024 iterations are used for classification of the test instances.
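A training and evaluation loop consistent with this setup might look like the sketch below, continuing the names target_model, train_gen and test_gen from the earlier sketches; the optimizer and loss are assumptions, since the paper does not specify them.

    # Training/evaluation sketch for the reported setup.
    # target_model, train_gen and test_gen come from the earlier sketches.
    # Epochs and iterations per epoch as reported in Section III
    # (the abstract reports validation figures after 10 epochs).
    EPOCHS = 70
    STEPS_PER_EPOCH = 1024

    target_model.compile(optimizer="adam",                 # assumed optimizer
                         loss="categorical_crossentropy",
                         metrics=["accuracy"])

    history = target_model.fit(train_gen,
                               epochs=EPOCHS,
                               steps_per_epoch=STEPS_PER_EPOCH,
                               validation_data=test_gen)

    val_loss, val_acc = target_model.evaluate(test_gen)
    print(f"validated accuracy: {val_acc:.4f}, validation loss: {val_loss:.4f}")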

Figure 4: Performance of VGG NET 19 - Number of epochs vs. loss

Figure 5: Performance of VGG NET 19 - Number of epochs vs. accuracy

Figures 4 and 5 represent the accuracy, validated accuracy and loss obtained. Figure 4 presents the loss versus the number of epochs, and Figure 5 presents the accuracy versus the number of epochs. From Figures 4 and 5 it is evident that the loss obtained is less than 2% when the accuracy is more than 87%. However, the validated accuracy is 73.51% while the validation loss is 16.01%.

IV. CONCLUSION

The idea of employing the knowledge of the Devanagari character set is utilized for classification of the Kannada character dataset. The use of a large amount of training knowledge for the task of classifying a very large number of classes (188) has shown a satisfactory validated accuracy of about 73.51%. As a further measure, improving the resolution of the dataset may show enhanced accuracy. Also, an increase in the number of instances in every class will further improve the recognition rate.

REFERENCES

[1] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521 (2015): 436-444.
[2] Giorgi, John M., and Gary D. Bader. "Transfer learning for biomedical named entity recognition with neural networks." Bioinformatics 34.23 (2018): 4087-4094.
[3] Lee, Ji Young, Franck Dernoncourt, and Peter Szolovits. "Transfer learning for named-entity recognition with neural networks." arXiv preprint arXiv:1705.06273 (2017).
[4] Navarretta, Costanza. "Transfer learning in multimodal corpora." 2013 IEEE 4th International Conference on Cognitive Infocommunications (CogInfoCom). IEEE, 2013.
[5] Cao, Pengfei, et al. "Adversarial transfer learning for Chinese named entity recognition with self-attention mechanism." (2018).
[6] Zoph, Barret, et al. "Transfer learning for low-resource neural machine translation." arXiv preprint arXiv:1604.02201 (2016).
[7] Chacko, Binu P., et al. "Handwritten character recognition using wavelet energy and extreme learning machine." International Journal of Machine Learning and Cybernetics 3.2 (2012): 149-161.
[8] Ye, Peng, et al. "Unsupervised feature learning framework for no-reference image quality assessment." 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012.
[9] Deng, Li. "The MNIST database of handwritten digit images for machine learning research [best of the web]." IEEE Signal Processing Magazine 29.6 (2012): 141-142.
[10] Acharya, Shailesh, Ashok Kumar Pant, and Prashnna Kumar Gyawali. "Deep learning based large scale handwritten Devanagari character recognition." 2015 9th International Conference on Software, Knowledge, Information Management and Applications (SKIMA). IEEE, 2015.
[11] Chen, Li, et al. "Beyond human recognition: A CNN-based framework for handwritten character recognition." 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). IEEE, 2015.
[12] Mendis, Gihan J., Jin Wei, and Arjuna Madanayake. "Deep belief network for automated modulation classification in cognitive radio." 2017 Cognitive Communications for Aerospace Applications Workshop (CCAA). IEEE, 2017.
[13] Siddiqua, Shahzia, C. Naveena, and Sunilkumar S. Manvi. "Recognition of Kannada Characters in Scene Images using Neural Networks." 2019 Fifth International Conference on Image Information Processing (ICIIP). IEEE, 2019.
[14] Kunte, R. Sanjeev, and R. D. Sudhaker Samuel. "A simple and efficient optical character recognition system for basic symbols in printed Kannada text." Sadhana 32.5 (2007): 521.
[15] Chandrakala, H. T., and G. Thippeswamy. "Deep Convolutional Neural Networks for Recognition of Historical Handwritten Kannada Characters." Frontiers in Intelligent Computing: Theory and Applications. Springer, Singapore, 2020. 69-77.
[16] Ravi, P., et al. "Text-Line Extraction from Historical Kannada Document." Frontiers in Intelligent Computing: Theory and Applications. Springer, Singapore, 2020. 276-285.
[17] Cireşan, Dan C., Ueli Meier, and Jürgen Schmidhuber. "Transfer learning for Latin and Chinese characters with deep neural networks." The 2012 International Joint Conference on Neural Networks (IJCNN). IEEE, 2012.
[18] Zhao, Wei. "Research on the deep learning of the small sample data based on transfer learning." AIP Conference Proceedings. Vol. 1864. No. 1. AIP Publishing LLC, 2017.
[19] Tushar, Abdul Kawsar, et al. "A novel transfer learning approach upon Hindi, Arabic, and Bangla numerals using convolutional neural networks." Computational Vision and Bio Inspired Computing. Springer, Cham, 2018. 972-981.
[20] Aneja, Nagender, and Sandhya Aneja. "Transfer Learning using CNN for Handwritten Devanagari Character Recognition." arXiv preprint arXiv:1909.08774 (2019).
[21] Tang, Yejun, et al. "CNN based transfer learning for historical Chinese character recognition." 2016 12th IAPR Workshop on Document Analysis Systems (DAS). IEEE, 2016.
[22] Oquab, Maxime, et al. "Learning and transferring mid-level image representations using convolutional neural networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014.
[23] Asha, K., and H. K. Krishnappa. "Kannada Handwritten Document Recognition using Convolutional Neural Network." 2018 3rd International Conference on Computational Systems and Information Technology for Sustainable Solutions (CSITSS). IEEE, 2018.
