
Eye–hand coordination during dynamic visuomotor rotation

2009, Gait & Posture

S72 Abstracts / Gait & Posture 30S (2009) S26–S74

Physical activity identification with a 6-component inertial sensor

A. Kose 1,∗, L. Laudani 2, A. Cereatti 1, M. Donati 3, U. Della Croce 1
1 Department of Biomedical Sciences, University of Sassari, Sassari, Italy; 2 Department of Human Movement and Sport Sciences, University of Rome Foro Italico, Italy; 3 Sensorize srl, Rome, Italy

Introduction: In Western countries, ageing and lifestyle make it crucial to address issues related to poor mobility; quantifying the amount of movement during daily life therefore becomes essential. For monitoring daily-life activities, inertial measurement units (IMUs), which combine miniature tri-axial angular rate and gravity sensors, are a promising tool [1]. The availability of small, lightweight commercial wearable modules with extended battery life allows human movement to be monitored for prolonged periods of time and without space limitations. The aim of this preliminary study was to develop an automatic method for assessing intervals of physical activity during the day using a 6-component (accelerometric and gyroscopic) IMU, as a preliminary processing step towards physical activity identification.

Methods: Movement data were acquired using an IMU (FreeSense, Sensorize®) featuring a tri-axial accelerometer and two bi-axial gyroscopes (acceleration resolution 0.0096 m/s², angular rate resolution 0.2441°/s, unit weight 93 g, unit size 85 mm × 49 mm × 21 mm; Fig. 1). First, the noise level associated with the IMU acceleration signals during static acquisitions was estimated as the absolute value a(ti) of the difference between the magnitude of the measured gravitational acceleration vector, obtained from the three acceleration components, and its nominal value. Similarly, the overall noise affecting the gyroscope signals was assessed by computing the magnitude ω(ti) of the measured angular velocity vector from the three angular velocity signals in the same static acquisitions.
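The two per-sample metrics, and the moving-window activity test applied to them later in the abstract (1 s window, 0.1 s sliding step), can be sketched as follows. This is a minimal illustrative sketch: the function names and the threshold value are assumptions, not part of the abstract.

```python
import numpy as np

G = 9.81  # nominal gravity magnitude, m/s^2


def noise_metrics(acc, gyro):
    """Per-sample metrics described in the abstract.

    acc  : (N, 3) array of accelerations in m/s^2
    gyro : (N, 3) array of angular velocities in deg/s
    Returns a(ti) = | ||acc|| - g | and w(ti) = ||gyro||.
    """
    a = np.abs(np.linalg.norm(acc, axis=1) - G)
    w = np.linalg.norm(gyro, axis=1)
    return a, w


def activity_mask(metric, fs=50.0, win=1.0, step=0.1, threshold=0.5):
    """Flag samples covered by a window whose mean metric exceeds a
    minimum-activity threshold.

    Window size (1 s) and sliding step (0.1 s) follow the abstract;
    the threshold value here is an arbitrary placeholder.
    """
    n, w, s = len(metric), int(win * fs), int(step * fs)
    active = np.zeros(n, dtype=bool)
    for start in range(0, n - w + 1, s):
        if metric[start:start + w].mean() > threshold:
            active[start:start + w] = True
    return active
```

Applying the mask separately to a(ti) and ω(ti) yields the two interval estimates that the abstract then merges when they disagree.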
The data acquisition session was carried out on a single healthy elderly subject (70 yrs) while he performed his typical routine physical activities. The IMU recorded for 24 h at 50 Hz and was attached at waist level, with its x-axis pointing downwards, the y-axis pointing forward and the z-axis pointing to the right. The intervals of physical activity were estimated by applying a moving window (window size 1 s, sliding step 0.1 s) and minimum activity thresholds to a(ti) and ω(ti) [2]. When the two estimates did not match, the interval of activity was defined as the minimum time interval including both. This procedure allowed the signals to be segmented into consecutive parts. To validate the method, the subject was also equipped with a commercial physical activity assessment device (IDEEA, Minisun®) [3].

Results: The rms of a(ti) was within the limits provided by the manufacturer (0.0005 m/s²). The activity intervals were identified with a negligible difference (97% similarity) from those resulting from the validation device (Fig. 2).

Fig. 1. The IMU used.

Discussion: The presented method showed that a single 6-component IMU can be used to distinguish physical activity intervals from inactivity intervals. In the future, appropriate processing techniques will be implemented to identify specific physical activities from the six IMU signals.

References
[1] Brandes M, Schomaker R, Möllenhoff G, Rosenbaum D. Gait Posture 2008;28(1):74–9.
[2] Mathie MJ, Coster ACF, Lovell NH, Celler BG. Med Biol Eng Comput 2003;41:296–301.
[3] Zhang K, Werner P, Sun M, Pi-Sunyer FX, Boozer CN. Obesity Res 2003;11:33–8.

doi:10.1016/j.gaitpost.2009.07.073

Eye–hand coordination during dynamic visuomotor rotation

L. Masia 1,∗, V. Squeri 1, G. Sandini 1, P.
Morasso 1,2
1 Istituto Italiano di Tecnologia, Genoa, Italy; 2 DIST, University of Genoa, Italy

Introduction: For many technology-driven visuomotor tasks, such as tele-surgery, humans face situations in which the frames of reference for vision and action are misaligned [1]. This misalignment needs to be compensated in order to perform the task adequately and with the necessary precision. However, the chance of success and the level of performance of such systems do not depend exclusively on technological elements; they also require the best possible fit with the underlying hand–eye coordination mechanisms and their ability to adapt to a changing environment and to misalignments between the visual and action frames [2]. Thus, in most cases a human subject must engage in a learning/adaptation process when exposed for the first time to a new HCI system, and this implies the emergence of indirect forms of hand–eye coordination. As a result, the mapping between action and perception changes considerably, hand–eye coordination is disturbed, and a relatively long learning curve is required for the brain to adapt to the changed mapping, with a significant reduction in manipulation efficiency. A clear understanding of the computational organization/plasticity of the hand–eye coordination system in such situations is still an open issue.

Fig. 2. Acquisition of 26 s duration. Acceleration a(ti) (upper plot) and angular velocity ω(ti) (lower plot). Grey stripes represent activity intervals identified by the IMU; the activity intervals detected by the validation device (red stripes) are shown at the top of the plot.

Fig. 1. Different unimodal and bimodal experimental conditions.

Fig. 2. Performance of one subject during the different target-sets.

Methods: In this paper we consider a human–computer pointing interface characterized by a time-varying visual transformation.
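The time-varying transformation used in this study is a harmonic rotation (amplitude 0.45 rad, frequency 0.1 Hz, as specified in the conditions below), delivered to the visual and kinaesthetic channels with a configurable starting phase difference. A minimal sketch, in which the function names and the 2-D point representation are illustrative assumptions:

```python
import numpy as np

A = 0.45  # perturbation amplitude, rad (from the abstract)
F = 0.1   # perturbation frequency, Hz (from the abstract)


def harmonic_angle(t, phase_deg=0.0):
    """Harmonic rotation angle at time t with a starting phase offset."""
    return A * np.sin(2 * np.pi * F * t + np.deg2rad(phase_deg))


def rotate(xy, angle):
    """Rotate 2-D points (background, target, end-effector frame) by angle."""
    c, s = np.cos(angle), np.sin(angle)
    return xy @ np.array([[c, -s], [s, c]]).T
```

Under this sketch, the VK+ condition applies the same phase to both channels (phase_deg = 0 for each), while VK− and VKP offset the kinaesthetic channel by 180° and 90°, respectively.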
The goal was to verify to what extent integrating proprioceptive feedback into the interface, with different degrees of temporal coherence with respect to the visual disturbance, could allow the subjects to recover normal performance without a lengthy adaptation process. The task was to track a target displayed on the screen, moving around an eight-shaped Lissajous figure, under six different experimental conditions, as shown in Fig. 1:

• Familiarization (F) phase, characterized by the absence of perturbations.
• Kinesthetic perturbation (K). A kinematic perturbation was applied to the pronation/supination DOF of the device by means of an imposed harmonic rotation of known amplitude (0.45 rad) and frequency (0.1 Hz).
• Visual perturbation (V). The background of the virtual reality scene, the target position and the frame of the end-effector were simultaneously rotated according to the same harmonic input described above.
• Visuo-kinaesthetic perturbations (VK). Kinaesthetic and visual disturbances were delivered to the subject by the wrist robot and the virtual reality display, respectively. The perturbations were delivered with different starting phase differences Δφ: synchronous (VK+: Δφ = 0); opposite (VK−: Δφ = 180°); quadrature (VKP: Δφ = 90°).

Results: Fig. 2 shows the performance of one subject during the different target-sets. The F condition clearly yields the highest tracking performance. When the disturbances were applied in a unimodal way, perturbing either proprioception (K) or vision (V), the subjects were not able to track the target without correcting their path. The same held for the bimodal visuo-kinesthetic target-sets (VK−, VKP); in contrast, tracking performance stabilized and fitted the reference path better in the VK+ condition, in which the kinaesthetic and visual perturbations were input synchronously (Fig. 1).

Discussion: This preliminary investigation showed that tracking movements can adapt to a rotating environment if suitable proprioceptive and visual feedback is provided. The outcomes suggest that if an extra degree of freedom is used to input proprioceptive information on the kinematics of the visual scene, a compensation of the rotational misalignment between wrist-centred and retinocentric references seems to emerge automatically during the visuomotor transformation. We think that this approach can be adopted for the design of more complex HCI systems involving different body parts, for example arm and wrist, or bilateral coordination problems involving the left and right arms.

References
[1] Cunningham HA. Aiming error under transformed spatial mappings suggests a structure for visual-motor maps. J Exp Psychol Hum Percept Perform 1989;15(3):493–506.
[2] Krakauer JW, Ghez C, Ghilardi MF. Adaptation to visuomotor transformations: consolidation, interference, and forgetting. J Neurosci 2005;25(2):473–8.

doi:10.1016/j.gaitpost.2009.07.074

Mean cycle computation: A novel method robust to errors in the detection of foot contact and foot off events

A. Merlo
LAM – Laboratorio Analisi Movimento, Dip. Riabilitazione, AUSL Reggio Emilia, Correggio (RE), Italia

Introduction: Ensemble averaging (EA) of cycle curves obtained from gait analysis is widely used to obtain normative and pathology-specific profiles when a high number of cycles is available. In the ensemble averaging process, gait data are resampled on a 0–100% basis; mean and standard deviation values are then computed at each percentage of the gait cycle (GC). Errors in the identification of the beginning and the end of the cycle cause an undesired shrinking of the resampled curves, with a temporal misalignment of peaks. Thus, the average peak value is reduced and the standard deviation of the average profile varies throughout the cycle, being a function of the curve slope.
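The conventional EA procedure described above can be sketched as follows. This is an illustrative sketch assuming a 101-point 0–100% base and linear interpolation, not any specific implementation from the literature:

```python
import numpy as np


def ensemble_average(cycles, n=101):
    """Classical ensemble averaging: resample each cycle onto a common
    0-100% base, then take the pointwise mean and standard deviation."""
    base = np.linspace(0.0, 1.0, n)
    resampled = np.array([
        np.interp(base, np.linspace(0.0, 1.0, len(c)), c) for c in cycles
    ])
    return resampled.mean(axis=0), resampled.std(axis=0)
```

If the detected cycle boundaries are wrong, the 0–100% bases of individual cycles no longer correspond to the same gait events, which produces exactly the peak attenuation and slope-dependent standard deviation described above.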
In this work, a novel method for the computation of the average profile and its confidence interval (CI) is presented and compared to EA.

Methods: The presented algorithm is based on [1] and uses the Median operator on a Sorted Collection of cycles (MSC) to obtain the average profile, without any resampling. Both the temporal (t − t0) and kinematic (y) data of the available cycles are collected into a matrix M ∈ ℝ^(N×2), where N is the total number of samples over all the collected cycles. M is then sorted by rows, based on the values in the first column. For each consecutive sequence of size NPOINTS (with or without overlap between consecutive sequences), the median value of both coordinates is computed, along with the first and third quartiles of the kinematic variable, which provide the confidence interval. The
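The MSC steps described above (pool all samples, sort by time, take blockwise medians and quartiles) can be sketched as follows. The function name and the use of non-overlapping blocks are illustrative assumptions; the abstract allows overlapping sequences as well.

```python
import numpy as np


def msc_profile(cycles, npoints=50):
    """Median on a Sorted Collection (MSC) average profile.

    cycles : list of (t, y) pairs of 1-D arrays, with t already shifted
             to t - t0 for each cycle.
    Pools all samples into an N x 2 matrix, sorts rows by time, and takes
    the median of both coordinates over consecutive blocks of `npoints`
    samples; the 1st/3rd quartiles of y give the confidence band.
    """
    m = np.concatenate([np.column_stack((t, y)) for t, y in cycles])
    m = m[np.argsort(m[:, 0])]  # sort rows by the temporal coordinate
    blocks = [m[i:i + npoints]
              for i in range(0, len(m) - npoints + 1, npoints)]
    t_med = np.array([np.median(b[:, 0]) for b in blocks])
    y_med = np.array([np.median(b[:, 1]) for b in blocks])
    q1 = np.array([np.percentile(b[:, 1], 25) for b in blocks])
    q3 = np.array([np.percentile(b[:, 1], 75) for b in blocks])
    return t_med, y_med, q1, q3
```

Because no cycle is resampled, a boundary-detection error in one cycle only perturbs where that cycle's samples fall along the pooled time axis, rather than stretching the whole curve as in EA.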