Chiron: Interpreting Signals from Capacitive Patterns and Inertial Sensors for Intuitive Shape Modeling
Ansh Verma
C-Design Lab, Purdue University
West Lafayette, IN 47907
[email protected]

Gabriel Culbertson
C-Design Lab, Purdue University
West Lafayette, IN 47907
[email protected]

Karthik Ramani
C-Design Lab, Purdue University
West Lafayette, IN 47907
[email protected]

Abstract
In this paper we introduce Chiron (abbreviated from Chironomia, the art of using gesticulation or hand gestures to good effect in traditional rhetoric): a wearable device for the hand that reads the digital and analog signals from capacitive sensor patterns and orientation sensors to interpret user intent. We explore two cases: (a) an unconventional, low-cost method for intuitive shape modeling and control, and (b) the ergonomic design of these patterns from conductive ink for reading localized finger interactions (swiping or pinching). We also exploit Chiron's thumb-based interaction mechanism and discuss novel future applications.

Author Keywords
Finger Ergonomics; Gestural Interaction; Capacitive Sensors; Wearable Computing; Cognitive Load.

ACM Classification Keywords
H.5.2. [Information interfaces and presentation]: User Interfaces – Input devices and strategies; Interaction styles; Graphical User Interface; User centered design.
Copyright is held by the owner/author(s).
CHI 2014, Apr 26 - May 01, 2014, Toronto, ON, Canada.
ACM 978-1-4503-2474-8/14/04.
http://dx.doi.org/10.1145/2559206.2581161

Introduction
In human communication, "hand gestures add 'dynamic dimension' to verbal exchanges. They are integrated on actionable, cognitive and ultimately biological levels" (pg. 3) [2]. This explains an involuntary response of the human body: when we speak, we gesticulate more often than we intend. Our motivation is to emulate gesticulation effectively on a digital platform.

Recent advances in depth-sensing technology (such as Leap Motion, SoftKinetic, and Microsoft Kinect: https://www.leapmotion.com; http://www.softkinetic.com; http://www.microsoft.com/en-us/kinectforwindows/) have encouraged the use of vision-based algorithms for hand-pose and finger tracking. However, such systems have two disadvantages: (a) fatigue caused by confined workspaces, and (b) limited robustness against occlusion. Similar issues apply to devices like Microsoft Research's Digits [13], which packages gesture sensing into a compact depth-sensor-based hardware system. Gaming gloves, such as Mattel's Power Glove (http://en.wikipedia.org/wiki/Power_Glove) and The Peregrine (http://theperegrine.com), were developed to emulate controllers but failed to create an impact in the market. Mechdyne's Pinch Glove (http://www.mechdyne.com/touch-and-gesture.aspx) attempted to capture the essence of hand gestures based on the bending of the fingers. Thalmic Labs' Myo band (https://www.thalmic.com/en/myo/) is an early product for issuing gesture commands by interpreting EMG signals from the arm.

Figure 1. Concept Overview.

Our approach is to develop Chiron as an input device that reads these signals in real time. Using low-cost conductive ink and off-the-shelf electronic components, we explore a mapping between user intent and intuitive shape modeling via touch interactions and hand movements (Figure 1). We synergistically couple the touch-based interactions from the capacitive patterns with hand orientations from the inertial sensors, and use these signals to formulate the user's hand posture at any instant. Our algorithm reports changes when the user's hand posture is altered, and we map these changes to actions that develop 3D shape models in an interactive application built with the Unity3D game engine (http://unity3d.com).
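As a rough illustration of this formulation, the sketch below shows one way a posture snapshot and a change detector could be represented; the struct fields, names and dead-band threshold are our own assumptions for the sketch, not the paper's implementation.

```cpp
// Minimal sketch (hypothetical names/thresholds): a hand-posture
// snapshot combining IMU orientation with capacitive touch states,
// plus a change detector that fires when the posture is altered.
#include <cmath>
#include <cstdint>

struct HandPosture {
    float roll, pitch, yaw;   // Euler angles from the IMU (degrees)
    float az;                 // acceleration along z (g)
    uint16_t touchBits;       // one bit per capacitive electrode
};

// Report a posture change when any electrode changes state or the
// orientation moves past a small dead band.
bool postureChanged(const HandPosture& prev, const HandPosture& cur,
                    float angleDeadbandDeg = 2.0f) {
    if (cur.touchBits != prev.touchBits) return true;
    return std::fabs(cur.roll  - prev.roll)  > angleDeadbandDeg ||
           std::fabs(cur.pitch - prev.pitch) > angleDeadbandDeg ||
           std::fabs(cur.yaw   - prev.yaw)   > angleDeadbandDeg;
}
```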
Related Work
Gesture-based interaction methods have been used to control virtual entities. A color glove with a nearest-neighbor lookup has been used to track hands at interactive rates, demonstrating an inexpensive system [1]. Work at the MIT Media Lab [14] showcases the use of Vicon tracking systems (http://www.vicon.com/System/Bonita) to develop tangible interaction with virtual entities via gestures. The Microsoft Kinect has been used to provide an interface for developing pottery-based models [15]. All of these require camera setups that confine the user to the space in front of the camera. A system for creating organic 3D shapes in a semi-immersive virtual environment with tangible tools and the hand [17] uses a stereo display and a low-functionality glove system. As early as 1987, a sensor-based glove was developed for simple pick-and-place manipulation of 3D shape models in a CAD environment [6].
Capacitive sensor technology has also evolved over the past several years. Techniques have been developed to recognize complex configurations of the human hand using Swept Frequency Capacitive Sensing [11]. Similar methods have been used to measure the impedance of the human body to distinguish between multiple users [3]. Patterns of capacitive sensors have been implemented to create an innovative ear-based interaction system [16], and sensors have been augmented into gloves for interaction purposes [4,7-10]. However, few have tried to exploit sensor patterns to derive metaphorical meaning for shape modeling.
Figure 2. Chiron's system: (a) user interaction; (b) capacitive sensor patterns; (c) hardware overview.
CHIRON – The Prototype
Design Rationale
Chiron implements an interaction mechanism driven by the bare thumb. Based on finger ergonomics, sensor patterns are laid on three fingers: (1) the forefinger, (2) the middle finger, and (3) the ring finger (Figure 2b, 2c). Since the forefinger offers the largest and most accessible area to the thumb, we use it for the slider-based input. The middle and ring fingers carry the menu-driven inputs, allocated according to the ergonomic accessibility of the thumb to those areas.
Figure 3. Pipeline Overview.
Hardware
The device comprises an Arduino Nano microcontroller, an MPR121 capacitive-touch multiplexer, an MPU-6050 IMU and a BlueSMiRF Bluetooth modem (Figure 3; component pages: http://arduino.cc/en/Main/arduinoBoardNano; https://www.sparkfun.com/products/9695; https://www.sparkfun.com/products/11028; https://www.sparkfun.com/products/10269), which the user wears on the hand. The locations of these components were chosen to provide ergonomic comfort without compromising intuitive feedback (Figure 2c). The IMU's on-board algorithm outputs six values: the acceleration along the three axes and the Euler angles. These values are interpreted to define the dynamic pose of the user's hand at any instant. Communication between the IMU and the microcontroller, which performs the measurement and signal analysis, is based on the I2C serial protocol.
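As a concrete illustration of this exchange, a minimal Arduino-style sketch for waking the MPU-6050 and reading raw acceleration over I2C might look as follows. The register addresses come from the MPU-6050 datasheet; the update rate and serial output are our own choices, and the Euler-angle fusion that Chiron relies on is omitted here.

```cpp
// Minimal Arduino sketch: wake the MPU-6050 and stream raw
// acceleration over I2C. Orientation fusion is omitted.
#include <Wire.h>

const uint8_t MPU_ADDR = 0x68;   // MPU-6050 default I2C address

// Read one big-endian 16-bit word from the open I2C transfer.
int16_t readWord() {
  int16_t hi = Wire.read();
  int16_t lo = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);              // PWR_MGMT_1 register
  Wire.write(0);                 // clear the sleep bit to wake the device
  Wire.endTransmission(true);
}

void loop() {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);              // ACCEL_XOUT_H: first of 6 accel bytes
  Wire.endTransmission(false);   // repeated start, keep the bus
  Wire.requestFrom(MPU_ADDR, (uint8_t)6, (uint8_t)true);
  int16_t ax = readWord();
  int16_t ay = readWord();
  int16_t az = readWord();
  Serial.print(ax); Serial.print(',');
  Serial.print(ay); Serial.print(',');
  Serial.println(az);
  delay(20);                     // ~50 Hz update
}
```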
Figure 4. Sensor Patterns.
Sensor Pattern
We implement sensor patterns to get input from the user. These signals are interpreted as shape modeling operations using a pre-defined mapping (for example, rubbing the index finger implies scaling of the cross-section). Following the current trend of sketchable electronics, conductive ink (http://www.bareconductive.com) was used to make the capacitive sensor patterns. Two types of patterns were fabricated based on their functionality: (1) a pattern for slider-based recognition and (2) ergonomic menus on the fingers (Figures 2b, 4). The slider-based pattern is constructed from 5 pins, which define the resolution of slider activity, laid out in a two-column matrix (Figure 4a). The algorithm infers the position of the finger from which pins are active. The menu-driven sensors are simple touch points, placed ergonomically according to each finger's usage area (Figure 4b). The MPR121 capacitive touch sensor, a multiplexer for sensing touch events, supports a total of 12 electrodes whose capacitance increases when a finger touches them.
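A minimal sketch of the slider decoding is given below, assuming the widely used Adafruit_MPR121 library; the electrode indices and the centroid simplification are our own choices, not the paper's algorithm.

```cpp
// Sketch of slider decoding on the MPR121 (Adafruit_MPR121 library).
// Electrodes 0..4 are assumed to form the 5-pin slider.
#include <Wire.h>
#include <Adafruit_MPR121.h>

Adafruit_MPR121 cap;

void setup() {
  Serial.begin(115200);
  cap.begin(0x5A);               // MPR121 default I2C address
}

// Return slider position 0..4, or -1 if no slider pin is touched.
// When several adjacent pins are active, report their centroid.
int sliderPosition(uint16_t touched) {
  int sum = 0, count = 0;
  for (int i = 0; i < 5; i++) {
    if (touched & (1 << i)) { sum += i; count++; }
  }
  return count ? sum / count : -1;
}

void loop() {
  int pos = sliderPosition(cap.touched());  // 12-bit touch bitmask
  if (pos >= 0) Serial.println(pos);
  delay(20);
}
```

Reporting the centroid of the active pins tolerates the thumb bridging two adjacent pads, which is likely given the two-column layout.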
Software
The software of the system has three modules: (a) the microcontroller program, (b) mapping, and (c) shape modeling. The latter two, because of their interdependency, are implemented within the Unity3D application. The analog values from the IMU and the digital states of the pins are read by the microcontroller and sent to the modeling application. In the modeling application, changes in these imported values are associated with modeling functionalities (for example, movement of the hand implies extrusion). The shape-modeling functions are then performed on top of a mesh algorithm. By interpreting these changes in value, user intent is mapped to modeling tools.
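One plausible shape for this microcontroller-to-application link is a newline-terminated CSV frame; the field order and names below are our assumption, not a documented Chiron protocol.

```cpp
// Sketch (assumed framing): each frame is one CSV line of Euler
// angles, z-acceleration and the 12-bit touch bitmask. The angle
// and touch values would come from the IMU fusion and MPR121 steps.
#include <Arduino.h>

void sendFrame(float yaw, float pitch, float roll,
               float accelZ, uint16_t touched) {
  Serial.print(yaw);    Serial.print(',');
  Serial.print(pitch);  Serial.print(',');
  Serial.print(roll);   Serial.print(',');
  Serial.print(accelZ); Serial.print(',');
  Serial.println(touched);   // newline terminates the frame
}
```

On the Unity3D side, the mapping module would read one line per rendered frame, split it on commas, and compare successive values to detect the changes described above.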
Usage Scenario
Figure 5. Gestures: (a) scaling gesture; (b) pinch gesture; (c) extrusion gesture.
Chiron recognizes the following actions:
1. Slider action to increment the value.
2. Spatial movement of the hand.
3. Orientation of the hand pose.
4. States of the menu driven electrodes.
For shape modeling, these actions are mapped to the
following operations: (1) Primitive Selection, (2)
Scaling, (3) Extrusion and (4) Pinch.
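This mapping can be pictured as a small dispatch from the sensed state to an operation; the electrode bit assignments below are placeholders we chose for the sketch, as the paper does not specify exact electrode indices.

```cpp
// Illustrative action-to-operation dispatch. Bit assignments are
// placeholders: bits 0..4 slider, 8..10 middle finger, 4..7 menu.
#include <cstdint>

enum class Operation { None, PrimitiveSelection, Scaling, Extrusion, Pinch };

Operation mapAction(uint16_t touched, int sliderPos) {
    if (sliderPos >= 0)      return Operation::Scaling;             // forefinger slider active
    if (touched & (1u << 8)) return Operation::Extrusion;           // top middle-finger electrode
    if (touched & (1u << 9)) return Operation::Pinch;               // second middle-finger electrode
    if (touched & 0x00F0u)   return Operation::PrimitiveSelection;  // ring-finger menu (4 electrodes)
    return Operation::None;
}
```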
Primitive Selection
Since the task of selecting a primitive to extrude is of lower significance, the ring finger houses the menu-driven pattern for this task. It consists of 4 capacitive electrodes; each electrode's touch state is associated with a primitive shape. The user reaches these electrodes with their thumb to select the appropriate primitive (Figure 6b). The interaction is smooth because the contact area available to the thumb on this finger is comparatively large, making the menu easy to understand and use.
Scaling
Scaling of the cross-sectional area is best emulated by a slider mechanism, for which the forefinger is used. Through the slider action, the user can scale the exposed cross-section up or down (Figure 5a).
Extrusion
The topmost electrode on the middle finger activates the extrusion state. The user keeps this electrode active by maintaining contact, and extrudes the object by moving their arm through the air (Figure 5c). To make the mapping intuitive, we use the IMU's acceleration along the z-axis as the variable controlling the depth of extrusion. The user thus feels as though they have picked up the primitive and stretched it through the air, achieving a pinch-and-elongate metaphor.
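A simplified version of this mapping is sketched below: while the extrusion electrode is held, the z-acceleration is integrated twice into a displacement that drives the depth. The gains and reset behavior are our assumptions, and a real implementation would need drift compensation.

```cpp
// Simplified extrusion mapping: double-integrate z-acceleration
// into an extrusion depth while the electrode is held. Raw IMU
// integration drifts quickly; this is a sketch, not a robust filter.
struct ExtrusionState {
    float velocity = 0.0f;   // integrated z velocity (m/s)
    float depth    = 0.0f;   // extrusion depth (m)
};

void updateExtrusion(ExtrusionState& s, bool electrodeActive,
                     float accelZ, float dt) {
    if (!electrodeActive) {              // gesture released: stop integrating
        s.velocity = 0.0f;
        return;
    }
    s.velocity += accelZ * dt;           // first integration: accel -> velocity
    s.depth    += s.velocity * dt;       // second integration: velocity -> depth
    if (s.depth < 0.0f) s.depth = 0.0f;  // extrusion only grows outward
}
```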
Pinch
In typical usage, the pinch command follows extrusion. Activity on the second electrode of the middle finger is associated with this operation. While keeping that electrode active, the user changes the angle of the exposed cross-section by changing the orientation of their hand, as sensed by the IMU (Figure 5b). The user thus experiences a 'grab and rotate' action while performing this operation. Activating the last electrode on the middle finger completes the current modeling event (Figure 6a).
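The pinch mapping can be sketched similarly: here the hand's roll relative to its orientation at the moment of the grab drives the cross-section angle, with clamping limits that are our own choice.

```cpp
// Sketch of the pinch mapping: while the pinch electrode is active,
// roll (from the IMU's Euler angles) relative to the grab pose is
// applied as the cross-section angle. The +/-45 degree clamp is ours.
float pinchAngle(bool electrodeActive, float handRollDeg,
                 float grabRollDeg) {
    if (!electrodeActive) return 0.0f;
    float angle = handRollDeg - grabRollDeg;  // rotation since the grab
    if (angle >  45.0f) angle =  45.0f;       // clamp to a usable range
    if (angle < -45.0f) angle = -45.0f;
    return angle;
}
```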
Implications
Cognitive Load
In a heavily menu-driven application, the user develops a 'split-attention effect' [18], dividing their attention between the task and the control mechanism. By leveraging proprioception and the tactility of the hardware, coupled with a visual mechanism that educates the user about the functionalities (Figure 7), we hypothesize that the user will remain immersed in the task rather than devoting much of their attention to the control mechanism.

We predict the learning time to be short, provided the user develops a "muscle memory" that maps the gestures to the shape operations. Depending on the user's kinesthetic learning and dexterity, this muscle memory may help maximize the germane cognitive load [18]. A future user study will verify these claims.

Figure 7. Screenshots of the application: (a) cylinder (swept); (b) cube (swept); (c) prism (swept); (d) running application.
Figure 6. Gestures: (a) completion gesture; (b) primitive selection; (c) gesture interaction.
Generality
We are working to develop an input device that reads localized finger interactions. The patterning of the sensors gives us a higher resolution of gesture detection, and the easy customizability afforded by conductive ink makes the device well suited to prototyping concepts and quickly remapping them. By developing suitable mapping algorithms, the system can support a variety of applications centered on the user's intent.
Evaluations
The device is presently made for right-handed users, as the dynamics of left-handed use are still to be studied. It is also an entirely sensor-based electronic system, so the chance of signal noise interfering with the data is always high, which may cause annoyance and dissatisfaction for users. Future work on Chiron will address hand-size variation, noise interference and unintended interactions by developing appropriate filters.
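One candidate for such filtering, sketched here under our own parameter choices, is an exponential moving average on the analog IMU channels combined with a consecutive-reading debounce on the touch states.

```cpp
// Sketch of two simple filters: an exponential moving average (EMA)
// for analog IMU channels, and a require-N-consecutive-readings
// debounce for touch states. All parameters are illustrative.
struct Filters {
    float emaAlpha = 0.2f;       // smoothing factor, 0..1
    float emaValue = 0.0f;
    int   stableCount = 0;
    bool  debounced = false;

    float smooth(float raw) {    // EMA over one analog channel
        emaValue += emaAlpha * (raw - emaValue);
        return emaValue;
    }

    bool debounce(bool raw, int threshold = 3) {  // touch debounce
        if (raw == debounced) { stableCount = 0; return debounced; }
        if (++stableCount >= threshold) { debounced = raw; stableCount = 0; }
        return debounced;
    }
};
```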
Potential Improvements and Future Work
The most immediate work is to improve the hardware for comfort, aesthetics and resolution. A flexible, transparent circuit pattern based on polydimethylsiloxane (PDMS) and indium tin oxide (ITO) is being considered to replace the fabric and conductive-ink system.

Here we discussed Chiron for shape modeling. Other applications being worked on are:
•	Augmented Reality (AR) glasses: hardware integration of Chiron with AR glasses (Google, Epson, etc.) for hands-free interaction.
•	Actuated systems: controlling systems based on electro-active polymer actuators for tangible interactions.
In parallel, a different synergistic integration between the capacitive sensor patterns and the inertial measurement unit is being prototyped for easier shape modeling. This may give us deeper insight into intuitive, on-the-fly shape modeling.
Conclusions
In this paper we introduced Chiron and its mechanism for interpreting capacitive sensors and an orientation sensor to develop an intuitive mapping for modeling. The wearable device follows a thumb-based interaction mechanism, and the modeling operations are mapped to it appropriately. A complete flow is implemented, from the point where the user perceives a shape to the creation of what they perceived. We thereby seek new insights into how human hand gestures can be derived from the signals of a sensor pattern, and how the associated actions can be mapped to user intent.
Acknowledgements
We thank our fellow C-Design Lab members who contributed to understanding and building the system, in particular Cecil Piya for his notes on cognitive load.
References
[1] Wang, R. and Popović, J. Real-time hand-tracking with a color glove. ACM Trans. Graph. 28, 3, Article 63 (August 2009).
[2] McNeill, D. Gesture and Thought. University of Chicago Press, 2007.
[3] Harrison, C., Sato, M. and Poupyrev, I. Capacitive fingerprinting: exploring user differentiation by sensing electrical properties of the human body. In Proc. UIST '12. ACM, New York, NY, USA, 537-544.
[4] Bowman, D., Wingrave, C., Campbell, J. and Ly, V. Using Pinch Gloves for both natural and abstract interaction techniques in virtual environments. Technical Report TR-01-23, Computer Science, Virginia Tech, 2001.
[5] Sturman, D. J. and Zeltzer, D. A survey of glove-based input. IEEE Computer Graphics and Applications 14, 1 (Jan. 1994), 30-39.
[6] Zimmerman, T. G., Lanier, J., Blanchard, C., Bryson, S. and Harvill, Y. A hand gesture interface device. In Proc. CHI/GI '87. ACM, New York, NY, USA, 189-192.
[7] Niinimäki, M. and Tahiroglu, K. AHNE: a novel interface for spatial interaction. In CHI '12 Extended Abstracts. ACM, New York, NY, USA, 1031-1034.
[8] Witt, H. and Janssen, T. Comparing two methods for gesture based short text input using chording. In CHI '07 Extended Abstracts. ACM, New York, NY, USA, 2759-2764.
[9] Krishna, S., Bala, S., McDaniel, T., McGuire, S. and Panchanathan, S. VibroGlove: an assistive technology aid for conveying facial expressions. In CHI '10 Extended Abstracts. ACM, New York, NY, USA, 3637-3642.
[10] Blaskó, G. and Feiner, S. Single-handed interaction techniques for multiple pressure-sensitive strips. In CHI '04 Extended Abstracts. ACM, New York, NY, USA, 1461-1464.
[11] Sato, M., Poupyrev, I. and Harrison, C. Touché: enhancing touch interaction on humans, screens, liquids, and everyday objects. In Proc. CHI '12. ACM, New York, NY, USA, 483-492.
[12] Rekimoto, J. SmartSkin: an infrastructure for freehand manipulation on interactive surfaces. In Proc. CHI '02. ACM, New York, NY, USA, 113-120.
[13] Kim, D., Hilliges, O., Izadi, S., Butler, A., Chen, J., Oikonomidis, I. and Olivier, P. Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor. In Proc. UIST '12. ACM, New York, NY, USA, 167-176.
[14] T(ether). Tangible Media Group, MIT Media Lab. http://tangible.media.mit.edu/project/tether/
[15] Vinayak, Murugappan, S., Liu, H. and Ramani, K. Shape-It-Up: hand gesture based creative expression of 3D shapes using intelligent generalized cylinders. Computer-Aided Design 45, 2 (2013).
[16] Lissermann, R., Huber, J., Hadjakos, A. and Mühlhäuser, M. EarPut: augmenting behind-the-ear devices for ear-based interaction. In CHI '13 Extended Abstracts. ACM, New York, NY, USA, 1323-1328.
[17] Schkolne, S., Pruett, M. and Schröder, P. Surface drawing: creating organic 3D shapes with the hand and tangible tools. In Proc. CHI '01. ACM, New York, NY, USA, 261-268.
[18] Hollender, N., Hofmann, C., Deneke, M. and Schmitz, B. Integrating cognitive load theory and concepts of human-computer interaction. Computers in Human Behavior 26, 6 (November 2010), 1278-1288.