Rochester Institute of Technology

RIT Scholar Works


Presentations and other scholarship

2015

Gesture Recognition with the Leap Motion Controller
Robert McCartney
[email protected]

Jie Yuan
[email protected]

Hans-Peter Bischof
[email protected]

Follow this and additional works at: http://scholarworks.rit.edu/other

Recommended Citation
McCartney, Robert; Yuan, Jie; and Bischof, Hans-Peter, "Gesture Recognition with the Leap Motion Controller" (2015). Accessed from http://scholarworks.rit.edu/other/857

This Conference Proceeding is brought to you for free and open access by RIT Scholar Works. It has been accepted for inclusion in Presentations and
other scholarship by an authorized administrator of RIT Scholar Works. For more information, please contact [email protected].

Gesture Recognition with the Leap Motion Controller


R. McCartney 1, J. Yuan 1, and H.-P. Bischof 1,2
1 Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA
2 Center for Computational Relativity and Gravitation, Rochester Institute of Technology, Rochester, NY, USA

Abstract: The Leap Motion Controller is a small USB device that tracks hand and finger movements using infrared LEDs, allowing users to input gesture commands into an application in place of a mouse or keyboard. This creates the potential for developing a general gesture recognition system in 3D that can be easily set up by laypersons using a simple, commercially available device. To investigate the effectiveness of the Leap Motion controller for hand gesture recognition, we collected data from over 100 participants and then used this data to train a 3D recognition model based on convolutional neural networks, which can recognize 2D projections of the 3D space. This achieved an accuracy rate of 92.4% on held-out data. We also describe preliminary work on incorporating time series gesture data using hidden Markov models, with the goal of detecting arbitrary start and stop points for gestures when continuously recording data.

Keywords: Gesture recognition, CNN, HMM, deep learning

1. Introduction

There was a time when communication with programs like vi [1] was done via the keyboard only. The keyboard was used to input data and to change the execution behavior of the program. The keyboard was a sufficient input device for a one-dimensional system.

When operating systems moved to GUIs, the mouse became a handy way to switch between graphical applications and exert control over them. The first notable applications making use of a mouse came when Microsoft introduced a mouse-compatible version of Word in 1983 and when Apple released the Macintosh 128K with an updated version of the Lisa mouse in 1984 [2].

Visualizations and games moved parts of computing into three-dimensional spaces. The visualizations of these 3D worlds are either projected onto the 2D space of a screen or experienced with stereoscopic viewing devices. Controlling these 3D worlds with a 2D mouse is not easy, and some would say extremely unnatural. The availability of 3D input devices allows for better control of these 3D worlds, but requires gesture recognition algorithms in order to use such devices in a natural way. This paper evaluates different gesture recognition algorithms on a novel dataset collected for such purposes.

2. Problem Description

The inputs coming from a mouse or a keyboard are discrete and have limited interpretation. A mouse-down event is a single event at a given position on the screen and, depending on the environment, carries information with it such as the position of the mouse pointer, the time of the click, and so on. A double or triple click is an event over time, and will only count as such if the click events happen within a predefined window. Apple's Magic Mouse [3] somewhat opened the door to 2D gestures, allowing users to swipe between pages or full-screen applications and to double tap for access to Mission Control.

A keyboard or mouse sends an event only if a key or button is pressed or the mouse is moved. They do not start to send events as your hand approaches the device. In contrast, 3D input devices, like the Leap Motion controller (https://www.leapmotion.com/), start to send frames as soon as they are turned on. These devices send a series of positions in space over time of whatever they detect in their views. The problem becomes how to convert the output of these devices into something meaningful.

The output from motion sensing devices comes in two flavors: high-level and low-level. Low-level output is a series of frames, where each frame contains information on what the device has sensed, such as the number of fingers, fingertip positions, palm position and direction, etc. The frame rate depends on the user settings and compute power, but 60 or more frames per second is typical. High-level output is the interpreted version of the raw frame data. This allows users or application developers to be informed when a particular predefined gesture is recognized.

We are interested in gesture recognition algorithms. Therefore, we are interested in the low-level information in order to interpret it into high-level information for others. The next section will describe the device we have used as our sensor, a relatively new and inexpensive motion sensing device. Then, we will discuss the gestures used and the dataset we captured for such purposes. Following that, we will discuss the particular form of dimensionality reduction and normalization we used on this data. The last sections will discuss the different gesture recognition algorithms we used as well as their results.
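To make the distinction between low-level and high-level output concrete, the following is a minimal sketch of polling raw frames with the Leap SDK v2 Python bindings (the Leap module). The property names shown (palm_position, tip_position) follow our recollection of that SDK and may differ between versions, so treat this as an illustrative assumption rather than a verbatim recipe.

```python
# Sketch: polling low-level frames from the Leap Motion controller.
# Assumes the Leap SDK v2 Python bindings ("Leap" module) are on the path;
# property names follow that SDK and may vary by version.
import time
import Leap

controller = Leap.Controller()
time.sleep(1.0)  # allow the device to connect and begin streaming

for _ in range(120):                 # roughly two seconds of frames at 60 fps
    frame = controller.frame()       # most recent low-level frame
    for hand in frame.hands:
        p = hand.palm_position       # 3D vector in the Leap coordinate system
        print("palm: (%.1f, %.1f, %.1f)" % (p.x, p.y, p.z))
        for finger in hand.fingers:
            t = finger.tip_position
            print("  fingertip: (%.1f, %.1f, %.1f)" % (t.x, t.y, t.z))
    time.sleep(1.0 / 60)
```

A gesture recognizer built on this low-level stream must decide for itself when a meaningful movement starts and stops, which is exactly the problem addressed in the remainder of the paper.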

3. Leap Motion Device

There are many motion sensing devices available in the marketplace. The Leap Motion controller was chosen for this project because of its accuracy and low price. Unlike the Kinect, which is a full body sensing device, the Leap Motion controller specifically captures the movements of a human hand, albeit using similar IR camera technology.

The Leap Motion controller is a very small (1.2 x 3 x 7.6 cm) USB device [4]. It tracks the position of objects in a space roughly the size of the top half of a 1 m beach ball through the reflection of IR light from LEDs. The API allows access to the raw data, which facilitates the implementation of gesture recognition algorithms. A summary of the specifications of the API: language support for Java, Python, JavaScript, Objective-C, C#, and C++; data is captured from the device at up to 215 frames per second; the precision of the sensor is up to 0.01 mm within a perception range of about 1 cubic foot, giving it the ability to identify roughly 7 x 10^9 unique points in its viewing area.

The SDK v2 introduced a skeletal model for the human hand. It supports queries such as the five finger positions in 3-dimensional space, open hand rotation, hand grabbing, pinch strength, and so on. The SDK also gives access to the raw data it sees. Here we use this device to implement and analyze different gesture recognition algorithms on a dataset collected through this API.
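As a brief sketch of the skeletal-model queries mentioned above, the snippet below reads a single frame through the SDK v2 Python bindings. The property names (pinch_strength, grab_strength, palm_normal) are stated from memory of that SDK and should be checked against its documentation; they are assumptions, not quotations from this paper.

```python
# Sketch: querying the SDK v2 skeletal hand model for a single frame.
# Property names (pinch_strength, grab_strength, palm_normal) follow the
# Leap SDK v2 Python bindings and are assumptions to verify against the docs.
import Leap

controller = Leap.Controller()
frame = controller.frame()

for hand in frame.hands:
    print("pinch strength:", hand.pinch_strength)   # 0.0 (open) to 1.0 (pinching)
    print("grab strength:", hand.grab_strength)     # 0.0 (open) to 1.0 (closed fist)
    print("palm normal:", hand.palm_normal)         # orientation of the open hand
    for finger in hand.fingers:
        print("finger tip:", finger.tip_position)   # one of the five finger positions
```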

4. Previous Work

One commonly used method of recognition involves analyzing the path traced by a gesture as a time series of discrete observations, and recognizing these time series with a hidden Markov model [5]. Typically, the discrete states are a set of unit vectors equally spaced in 2D or 3D, and the direction of movement of the recorded object between every two consecutive frames is matched to the closest of these state vectors, generating a sequence of discrete directions of movement for each gesture path [6], [7], [8]. Hidden Markov models have also been used to develop online recognition systems, which record information continuously and determine the start and stop points of a gesture as data is collected in real time [8], [9].

Another class of methods for recognition of dynamic gestures involves the use of finite state machines to represent gestures [10], [11]. Each gesture can be represented as a series of states that represent regions in space where the recorded object may be located. The features of these states, such as their centroid and covariance, can be learned from training data using methods such as k-means clustering. When evaluating a new gesture, as the recorded object travels through the regions specified by these states, the resulting sequences of states are fed into finite state machines representing each of the trained gestures. In this way, gestures whose models are consistent with the input state sequences are identified.

Neural networks have typically been used to recognize static gestures, but recurrent neural networks have also been used to model gestures over time [12], [13]. One of the main advantages of this type of model is that multiple inputs from different sources can be fed into a single network, such as positions for different fingers, as well as angles [13]. Additionally, convolutional neural networks and deep learning models have been used with great success to recognize offline handwriting characters [14], which can be considered analogous to hand gestures under certain representations, as shown in this paper. A similar problem domain to gesture recognition, although in a lower dimensional space, is that of handwritten text recognition, where long short-term memory networks are the current state of the art [15], [16], [17].

5. Dataset

Fig. 1: Leap Motion Visualizer

In order to examine various machine learning algorithms on gestures generated through the Leap Motion controller, we needed a dataset that captured some prototypical gestures. To this end, a simple GUI was created that gave users instructions on how to perform each of a chosen set of 12 hand gestures and provided visual feedback to the participant when the system was in the recording stage. All gestures were performed by holding down the "s" key with the non-dominant hand to record and then using the primary hand to execute the gesture at a distance of 6" to 12" above the top face of the controller. The code for this capture program is located online (https://github.com/rcmccartney/DataCollector).

Students and staff on the RIT campus used the GUI to record their versions of each of 12 gesture types: one finger tap, two finger tap, swipe, wipe, grab, release, pinch, check mark, figure 8, lower case e, capital E, and capital F. The one and two finger taps were vertical downward movements, performed as if tapping a single key or set of keys on an imaginary keyboard. The swipe was a single left-to-right movement with the palm open and facing downwards, while the wipe was the same movement performed back and forth

several times. The grab motion went from a palm-open to a closed-fist position, while the release was performed in the opposite direction. The pinch was performed with the thumb and forefinger going from open and separated to touching. The check mark was performed by pointing just the index finger straight out parallel to the Z axis, then moving the hand in a check motion while traveling primarily in the X-Y plane. The figure 8, lower case e, capital E, and capital F were all similarly performed by the index finger alone, in the visual pattern indicated by their name, in the plane directly above the Leap Motion controller. The native Leap Motion Visualizer shown in Figure 1 was available for each subject to use alongside our collection GUI while performing the gestures if so desired, providing detailed visual feedback of the user's hand during motion.

As each gesture was performed, the Leap Motion API was queried for detailed data that was then appended to the current gesture file. The data was captured at over 100 frames per second, and included information for the hand such as palm width, position, palm normal, pitch, roll, and yaw. Positions for the arm and wrist were also captured. For each finger, 15 different features were collected, such as position, length, width, and direction. In all, we collected 116 features for each frame of the recording, with the typical gesture lasting around 100 to 200 frames, although this average varies greatly by gesture class. Files for each gesture are arranged in top-level folders by gesture type, inside which each participant in the study has an anonymous numbered folder that contains all of their gesture instances for that class. Typically, each user contributed 5 to 10 separate files per gesture class to the dataset, depending on the number of iterations each participant performed.

In all, approximately 9,600 gesture instances were collected from over 100 members of the RIT campus, with the full dataset totaling around 1.5 GB. The data is hosted online for public download (http://spiegel.cs.rit.edu/~hpb/LeapMotion/). Individual characteristics of each gesture vary widely, such as stroke lengths, angles, sizes, and positions within the controller's field of view. Some users had used the Leap Motion before or were comfortable performing gestures quickly after starting, while others struggled with the basic coordination required to execute the hand movements. Thus, there is considerable variation within a gesture class, and identifying a particular gesture given the features captured from the Leap Motion device is not a trivial pattern recognition task.
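The directory layout described above (gesture-type folders containing anonymous numbered participant folders, which in turn contain per-instance files of frames) can be traversed with a short loader such as the sketch below. The file extension and delimiter are assumptions about the on-disk format, since the exact serialization is not restated here; the function name is illustrative.

```python
# Sketch: iterating the dataset layout described above
# (gesture-type folders -> anonymous numbered participant folders -> instance files).
# The ".csv" assumption about file format and the comma delimiter are guesses.
import os
import numpy as np

def load_gesture_instances(root):
    """Yield (gesture_type, participant_id, frames) for every recorded instance,
    where frames is an (n_frames x 116) array of per-frame features."""
    for gesture_type in sorted(os.listdir(root)):
        gesture_dir = os.path.join(root, gesture_type)
        if not os.path.isdir(gesture_dir):
            continue
        for participant_id in sorted(os.listdir(gesture_dir)):
            participant_dir = os.path.join(gesture_dir, participant_id)
            if not os.path.isdir(participant_dir):
                continue
            for fname in sorted(os.listdir(participant_dir)):
                path = os.path.join(participant_dir, fname)
                frames = np.loadtxt(path, delimiter=",")   # assumed per-frame feature rows
                yield gesture_type, participant_id, frames

# Example use: count instances per gesture class.
# counts = {}
# for gesture_type, _, _ in load_gesture_instances("LeapMotionDataset"):
#     counts[gesture_type] = counts.get(gesture_type, 0) + 1
```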
6. Image Creation

In its raw form, the varying temporal length of each gesture and the large number of features make it difficult to apply traditional machine learning techniques to this dataset. Thus, a form of dimensionality reduction and normalization is needed for any learning technique to be effectively applied. For the convolutional neural network (CNN) that we employ in Section 7, this dimensionality reduction took the form of converting each instance of real-valued, variable-length readings into a fixed-size image representation of the gesture.

Fig. 2: One instance example of each of the gestures used for the CNN experiment

Fig. 3: The mean image of the dataset on the left and the standard deviation on the right, used for normalization

CNNs traditionally operate on image data, using alternating feature maps and pooling layers to capture equivariant activations in different locations of the input image. Due to the complex variations that are nevertheless recognizable to a human observer as a properly performed gesture, CNNs offer a way to allow for differences in translation, scaling, and skew in the path taken by an individual's unique version of the gesture. To transform each gesture into constant-sized input for the convolutional network, we created motion images on a black canvas using just the 3-dimensional position data of the index finger over the lifetime of the gesture. That is, for each frame we took the positions reported in the Leap coordinate axes, which vary approximately from -200 to 200 in X and Z and 0 to 400 in Y, and transformed those coordinates into pixel space varying from 0 to 200 in three different planes: XY, YZ, and XZ. For each reported position, the pixels in the 5x5 region centered on the position were activated in a binary fashion.

From this point, each of the three coordinate planes can be used separately or jointly as image input data in the learning model. However, for this first experiment on the dataset we kept only the XY plane of index finger positions and concentrated on those gestures that mainly varied in that plane, as explained below.

Despite being equivariant across feature maps, CNNs still have some difficulty in classification over widely varying positions and orientations of input activations. Thus, we cropped each image to fit the minimum and maximum indices of nonzero activations, and then sampled the resulting pixels to resize each image to a constant 50x50 input size. After resizing, the pixel activations were normalized by subtracting the mean and standard deviation for that pixel across the entire training set, rather than using the statistics within a single image. Note that using only the XY positions of the index finger is a significant simplification of the data contained in an instance of the Leap dataset, but it served to show the applicability of computer vision techniques to the task of gesture recognition. As a result of this, we kept only those gestures that varied in the XY plane for the CNN experiments, namely the check mark, lower-case e, capital E, capital F, and figure 8. Since this subset of the gestures is guided by the index finger in the XY plane, they appear rather well-formed there, but appear as mostly noise in the other two planes, as their appearance in those projections largely depends upon unconscious movements of the hand. Expanding this representation to all three planes of movement for all gesture classes should be sufficiently expressive to broaden the learning algorithm to the entire dataset, and will be explored in future work. An example of each of the gesture classes in this representation after preprocessing is shown in Figure 2. Figure 3 shows the normalization factors used for the dataset, with the mean image on the left and the standard deviation on the right.
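The image-creation pipeline described above can be summarized in the following sketch, which assumes the index-finger tip positions for one gesture are already available as an (n_frames x 3) array. The coordinate ranges, 5x5 binary activation, bounding-box crop, 50x50 resampling, and per-pixel normalization follow the text; the helper names and the nearest-neighbour resampling choice are illustrative.

```python
# Sketch of the motion-image construction described above: project index-finger
# positions to the XY plane, stamp 5x5 binary activations on a 200x200 canvas,
# crop to the nonzero bounding box, resize to 50x50, then normalize each pixel
# with the training-set mean and standard deviation. Helper names are illustrative.
import numpy as np

def gesture_to_xy_image(positions, canvas=200, out=50):
    """positions: (n_frames, 3) index-finger tip coordinates from the Leap API."""
    img = np.zeros((canvas, canvas), dtype=np.float32)
    # Map Leap coordinates (X roughly -200..200, Y roughly 0..400) to 0..canvas-1.
    px = np.clip(((positions[:, 0] + 200.0) / 400.0) * (canvas - 1), 0, canvas - 1)
    py = np.clip((positions[:, 1] / 400.0) * (canvas - 1), 0, canvas - 1)
    for x, y in zip(px.astype(int), py.astype(int)):
        img[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = 1.0   # 5x5 binary stamp
    # Crop to the bounding box of nonzero activations.
    ys, xs = np.nonzero(img)
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Resample to a constant out x out grid (nearest-neighbour for simplicity).
    ry = np.linspace(0, img.shape[0] - 1, out).astype(int)
    rx = np.linspace(0, img.shape[1] - 1, out).astype(int)
    return img[np.ix_(ry, rx)]

def normalize(images):
    """Per-pixel normalization over the whole training set, as described above."""
    mean = images.mean(axis=0)
    std = images.std(axis=0) + 1e-8
    return (images - mean) / std, mean, std
```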
Note that there are other possibilities for generating the images here that we did not pursue, such as removing skew and encoding motion history in gray-scale representations. There are still many forms of variation present in the input activations that are inherent to the users, such as the left-handed version of the check mark shown in Figure 4. While such differences, and other examples of allowable variance in gestures from a given class, are easily and unconsciously accounted for by humans, for instance by two people conversing in American Sign Language, they pose a significant challenge to the classification models that we discuss further in Section 7 and must be accounted for when training such classifiers.

Fig. 4: A left-handed check mark after cropping, sampling, and normalization

7. Models

We have chosen our initial experiments on this dataset using two diverse models for classification of temporal sequences. The first is to convert the data into a fixed image representation as discussed above and use a CNN for classification. The second is to use a hidden Markov model to aid in a time series recognition task.

7.1 Convolutional neural network

Convolutional neural networks are powerful models in computer vision due to their ability to recognize patterns in input images despite differences in translation, skew, and perspective [14], [18], [19], [20]. They can be effective at finding highly complex and nonlinear associations in a dataset. They do so in the context of supervised learning, by allowing the model to update parameters dynamically so as to minimize a cost function between a target value and the observed output of the model. An advantage they have over traditional, fully-connected neural networks is that the learned feature maps are applied with the same parameters to an entire image, drastically reducing the number of parameters to learn without seriously degrading the capacity of the model [14]. This allows for more complex and deeper architectures to be employed without as serious a risk of overfitting the training data.

Human gestures are highly complex, nonlinear, and context-dependent forms of communication with both considerable overlap and great divergence between gesture types. People often perform the same gesture class in highly unique and differing ways, yet to the human brain these are easily recognized as constituting the same meaning. Further, very subtle and small differences exist between gestures that impart greatly differing meanings to the separate classes, yet such differences are not easily defined or separated. Given this type of data, convolutional neural networks have the advantage of learning good features as part of the classification task itself. Thus, we do not need to handcraft features of each valid gesture but allow the model to learn them as a product of minimizing the loss function. The model can thus learn to classify gestures based on the complex interactions between learned features that may not otherwise be easily discerned or discovered.

The convolutional neural networks used in these experiments came from MatConvNet, a toolbox for MATLAB developed by the Oxford Visual Geometry Group [21]. All experiments were run on a GeForce GTX 960 GPU, with 1024 CUDA cores and 2 GB of memory. In addition, NVIDIA's CUDA Deep Neural Network library (cuDNN, https://developer.nvidia.com/cuDNN) was installed to provide the convolution primitives used inside the MatConvNet library. The network consisted of alternating convolution and max pooling layers, as depicted in Figure 5, followed by two layers of a fully-connected neural network with a softmax output. All neurons were rectified linear units, as they can be trained faster than their sigmoid counterparts [18]. The model was trained both with and without dropout, following the techniques described in [22], [23], [24]. See Tables 1 and 2 for the results of training this network with the 5 input gesture classes. The code for this network and for image creation is hosted online (https://github.com/rcmccartney/LeapDeepLearning). Overall the network produced a 92.5% recognition rate on held-out data after training to perfectly fit the input data, with a very modest improvement seen from using dropout with a rate of 50% on the two fully connected layers. This modest gain may be due to the fact that dropout was not applied to the convolutional layers, which in the future could lead to greater improvements in generalization. A few of the misclassified gesture image representations can be seen in Figure 6.

                    Truth
Prediction       E    X    e    F    8
      E         28    0    1    0    0
      X          1   62    2    0    1
      e          2    0   30    1    4
      F          1    2    1   40    0
      8          1    0    1    0   34
Table 1: Confusion matrix for CNN without dropout

                    Truth
Prediction       E    X    e    F    8
      E         28    0    0    0    1
      X          0   63    1    0    1
      e          2    0   30    0    3
      F          2    1    1   41    0
      8          1    0    3    0   34
Table 2: Confusion matrix for CNN with dropout

Fig. 5: A depiction of the CNN topology used

Fig. 6: Examples of misclassified gestures
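The original network was implemented in MatConvNet; as a rough equivalent, the sketch below expresses a comparable topology (alternating convolution and max-pooling layers, two fully-connected layers with rectified linear units, softmax output over the 5 classes, and optional dropout on the fully-connected layers) in PyTorch. The filter counts and kernel sizes are illustrative guesses, since the exact hyperparameters are shown only in Figure 5.

```python
# Rough PyTorch sketch of the kind of topology described above (the paper used
# MatConvNet). Filter counts and kernel sizes are illustrative assumptions;
# only the overall structure (conv/pool pairs, two FC layers, softmax over
# 5 gesture classes, dropout on the FC layers) follows the text.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_classes=5, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 50x50 -> 25x25
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                      # 25x25 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),
            nn.Linear(32 * 12 * 12, 128), nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(128, n_classes),            # softmax is applied via the loss
        )

    def forward(self, x):                         # x: (batch, 1, 50, 50) motion images
        return self.classifier(self.features(x))

# Training would minimize cross-entropy, which applies the softmax internally:
# loss = nn.CrossEntropyLoss()(GestureCNN()(batch_images), batch_labels)
```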
7.2 Time series recognition with HMMs

Though the convolutional neural network performs well on images of the whole gesture, it does not take into account temporal information such as the order in which the strokes are performed. This can be addressed by modeling individual points, or groups of points, as discrete states in a hidden Markov model. However, one of the principal challenges in the definition and recognition of arbitrary gestures in 3D space is the high variability of gestures within the sequence space. For example, many traditional dynamic gesture recognition models have used translations between pairs of consecutive frames to generate a sequence of observations by fitting the translations to the closest of a set of evenly-distributed discrete vectors [6]. These methods work well in 2D, but suffer in 3D because 3D motions tend to be more varied and uncontrolled. Any portion of a single curved motion may be represented by slightly different vector sequences, and these sequences may result in highly distinct observation sequences even though they represent the same intended movement.

To solve this high-variability problem, we propose a method to process a sequence of frames of positional data and summarize them into a shorter and more generalized sequence of lines and curves, which are then fed into a hidden Markov model as discrete states. This method involves first identifying line segments in the sequence of frames by calculating average vectors of consecutive points; from the sequence of average vectors, those within a minimum angle distance are combined into one growing line segment. This line segment is then fit to one of 18 discrete observation states, represented by vectors pointing away from the origin and distributed equally in 3D space. Next, those sequences of points that do not satisfy the criteria above but span some minimum number of frames are likely curved segments. These sequences of points are fit to a sphere using a least squares approximation method [25]. The sphere then defines the centroid of the curve's rotation. To discretize the curve, the normal vector of the rotation is found by taking the cross product of the vectors emanating from the discovered centroid to the two end points of the curve. This normal vector is then fit to a set of six state vectors (clockwise and counterclockwise for each of roll, pitch, and yaw). The sequence of discovered lines and curves then serves as the observation sequence, which is much shorter and more invariant to individual differences between training examples.

The performance of this model was relatively poor, at around 50% recognition for the specified gesture set. We believe this time series model is less robust to sources of error in the data, specifically the combination of very small and very large drawn gesture examples, as well as examples containing large disjointed spaces between consecutive segments of points due to sampling or user error. We hope to address these errors in the future by experimenting with rescaling and re-sampling training gesture paths.

Fig. 7: The set of discretized states describing the motion of consecutive groups of 3D points, including 18 line directions and 6 curve directions

Fig. 8: Example gestures described as sequences of lines (black) and curves (red)
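The line-segment part of the discretization described above is sketched below: displacement vectors between consecutive frames are merged while they stay within a minimum angle of the growing segment, and each finished segment is snapped to the nearest of 18 direction states. The particular 18-vector set used here (6 axis-aligned directions plus 12 in-plane diagonals) and the 20-degree threshold are assumptions, since the paper specifies the set only graphically in Figure 7; the curved-segment handling (sphere fitting and the six roll/pitch/yaw states) is omitted for brevity.

```python
# Sketch of the line-segment discretization described above. The 18-vector
# state set and the angle threshold are assumptions; curves are not handled.
import itertools
import numpy as np

# 6 axis-aligned directions and 12 in-plane diagonals, all unit length (18 total).
_DIRS = [v for v in itertools.product((-1, 0, 1), repeat=3)
         if sum(abs(c) for c in v) in (1, 2)]
STATES = np.array([np.array(v, dtype=float) / np.linalg.norm(v) for v in _DIRS])

def angle(u, v):
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(c, -1.0, 1.0))

def discretize_path(points, max_angle=np.radians(20)):
    """points: (n_frames, 3) positions; returns a list of observation indices."""
    observations, segment = [], None
    for a, b in zip(points[:-1], points[1:]):
        step = b - a
        if np.linalg.norm(step) < 1e-6:
            continue                                  # ignore stationary frames
        if segment is None or angle(segment, step) <= max_angle:
            segment = step if segment is None else segment + step   # grow the segment
        else:
            observations.append(int(np.argmin([angle(segment, s) for s in STATES])))
            segment = step                            # start a new segment
    if segment is not None:
        observations.append(int(np.argmin([angle(segment, s) for s in STATES])))
    return observations                               # fed to the HMM as discrete symbols
```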

8. Future Work

This experiment is the first to use the novel dataset collected from the Leap Motion controller. There is still much to be explored with this dataset, as well as with applying other forms of learning algorithms to our representations of the gestures. Different forms of dimensionality reduction, such as PCA or gradient-based methods, could be used to help deal with the large number of features available per gesture instance. Recurrent neural networks, long short-term memory models in particular, could prove effective at dealing with the varying temporal nature of human gestures. Future work will also expand the scope to encompass the segmentation task as well as the classification task. One particularly interesting avenue of research is in combining the models discussed in Section 7 into a single online recognition engine. The HMM could specialize in segmenting gestures as they occur, using the two hidden states of "in-gesture" and "between-gesture" to distinguish between when a human hand is trying to semantically communicate and when it is just resting. Once segmented, the frames of data from the "in-gesture" state could then be sent to the CNN model for classification. Note that the requirement to segment actual communication from idling is not an issue when using other input devices such as a mouse, and arises here due to the inability to set these 3D devices into non-recording states.
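As one possible illustration of the segmentation idea sketched above, the snippet below fits a two-state Gaussian HMM over a simple per-frame speed feature and returns the contiguous "in-gesture" spans, which could then be rendered as motion images and passed to the CNN. This is only an interpretation of the proposed direction: the hmmlearn library, the speed feature, and the function name are our own choices and are not part of the original experiments.

```python
# Sketch of the proposed two-state segmentation ("in-gesture" vs. "between-gesture")
# over a continuous stream of positions. hmmlearn and the speed feature are
# assumptions chosen for illustration, not part of the paper's experiments.
import numpy as np
from hmmlearn import hmm

def segment_stream(positions):
    """positions: (n_frames, 3) continuous index-finger positions."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1).reshape(-1, 1)
    model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
    model.fit(speed)                               # unsupervised fit of the two states
    states = model.predict(speed)                  # one state label per frame
    moving = int(np.argmax(model.means_.ravel()))  # the higher-speed ("in-gesture") state
    runs, start = [], None
    for i, s in enumerate(states):                 # collect contiguous in-gesture runs
        if s == moving and start is None:
            start = i
        elif s != moving and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(states)))
    return runs   # each (begin, end) span could be imaged and sent to the CNN
```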

9. Conclusions

The Leap Motion controller is a promising device for enabling user-friendly gesture recognition services. Based on our results, the data generated by this device can be accurately classified by representing its 3D gesture paths as sets of 2D image projections, which can then be classified by convolutional neural networks. Here we limited the classification results to gestures performed in the XY plane, but the model can be extended to give equal consideration to all 3 planes of 2D projections, allowing for a wide variety of gesture representations. Despite its good performance, one of the limitations of this model is that it cannot provide online recognition of gestures in real time. As future work we look to incorporate an alternative model, such as a hidden Markov model, as a segmentation method to determine likely start and stop points for each gesture, and then input the identified frames of data into the CNN model for gesture classification.

References

[1] W. Joy and M. Horton. (1977) An introduction to display editing with vi. [Online]. Available: http://www.ele.uri.edu/faculty/vetter/Other-stuff/vi/vi-intro.pdf
[2] A. S.-K. Pang, "The making of the mouse," American Heritage of Invention and Technology, vol. 17, no. 3, pp. 48-54, 2002.
[3] R. Loyola, "Apple's Magic Mouse offers multitouch features," p. 65, Jan. 2010. [Online]. Available: http://search.proquest.com.ezproxy.rit.edu/docview/231461266?accountid=108
[4] F. Weichert, D. Bachmann, B. Rudak, and D. Fisseler, "Analysis of the accuracy and robustness of the Leap Motion controller," Sensors, vol. 13, no. 5, pp. 6380-6393, 2013. [Online]. Available: http://www.mdpi.com/1424-8220/13/5/6380
[5] L. Rabiner and B.-H. Juang, "An introduction to hidden Markov models," ASSP Magazine, IEEE, vol. 3, no. 1, pp. 4-16, 1986.
[6] M. Elmezain, A. Al-Hamadi, J. Appenrodt, and B. Michaelis, "A hidden Markov model-based continuous gesture recognition system for hand motion trajectory," in Pattern Recognition, 2008. ICPR 2008. 19th International Conference on, Dec. 2008, pp. 1-4.
[7] T. Schlömer, B. Poppinga, N. Henze, and S. Boll, "Gesture recognition with a Wii controller," in Proceedings of the 2nd International Conference on Tangible and Embedded Interaction. ACM, 2008, pp. 11-14.
[8] H.-K. Lee and J.-H. Kim, "An HMM-based threshold model approach for gesture recognition," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 21, no. 10, pp. 961-973, 1999.
[9] S. Eickeler, A. Kosmala, and G. Rigoll, "Hidden Markov model based continuous online gesture recognition," in Pattern Recognition, 1998. Proceedings. Fourteenth International Conference on, vol. 2. IEEE, 1998, pp. 1206-1208.
[10] P. Hong, M. Turk, and T. S. Huang, "Gesture modeling and recognition using finite state machines," in Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on. IEEE, 2000, pp. 410-415.
[11] R. Verma and A. Dev, "Vision based hand gesture recognition using finite state machines and fuzzy logic," in Ultra Modern Telecommunications & Workshops, 2009. ICUMT '09. International Conference on. IEEE, 2009, pp. 1-6.
[12] H. Hasan and S. Abdul-Kareem, "Static hand gesture recognition using neural networks," Artificial Intelligence Review, vol. 41, no. 2, pp. 147-181, 2014.
[13] K. Murakami and H. Taguchi, "Gesture recognition using recurrent neural networks," in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1991, pp. 237-242.
[14] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[15] A. Graves, M. Liwicki, S. Fernández, R. Bertolami, H. Bunke, and J. Schmidhuber, "A novel connectionist system for unconstrained handwriting recognition," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 31, no. 5, pp. 855-868, 2009.
[16] A. Graves and J. Schmidhuber, "Offline handwriting recognition with multidimensional recurrent neural networks," in Advances in Neural Information Processing Systems, 2009, pp. 545-552.
[17] A. Graves, "Offline Arabic handwriting recognition with multidimensional recurrent neural networks," in Guide to OCR for Arabic Scripts. Springer, 2012, pp. 297-313.
[18] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems 25, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097-1105. [Online]. Available: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
[19] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[20] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman, "Return of the devil in the details: Delving deep into convolutional nets," arXiv preprint arXiv:1405.3531, 2014.
[21] A. Vedaldi and K. Lenc, "MatConvNet: Convolutional neural networks for MATLAB," CoRR, vol. abs/1412.4564, 2014.
[22] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors," arXiv preprint arXiv:1207.0580, 2012.
[23] G. E. Dahl, T. N. Sainath, and G. E. Hinton, "Improving deep neural networks for LVCSR using rectified linear units and dropout," in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013, pp. 8609-8613.
[24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," J. Mach. Learn. Res., vol. 15, no. 1, pp. 1929-1958, Jan. 2014. [Online]. Available: http://dl.acm.org/citation.cfm?id=2627435.2670313
[25] D. Eberly. (2015) Least squares fitting of data. Geometric Tools, LLC.
