
VibroVision: An On-Body Tactile Image Guide for the Blind

Philipp Wacker, RWTH Aachen University
Chat Wacharamanotham, University of Zurich
Daniel Spelmezan, University of Sussex
Jan Thar, RWTH Aachen University
David A. Sánchez, RWTH Aachen University
René Bohne, RWTH Aachen University
Jan Borchers, RWTH Aachen University

Originally published in: CHI '16 EA: Extended Abstracts of the 2016 ACM SIGCHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, May 7-12, 2016, 3788-3791. DOI: https://doi.org/10.1145/2851581.2890254. Archived at the Zurich Open Repository and Archive (ZORA): https://doi.org/10.5167/uzh-124333

Abstract
Today, persons with a visual impairment use a cane to explore their surroundings and sense objects in their vicinity. While electronic aids have been proposed to aid them, they communicate limited information or require a fixed position. We propose VibroVision, a vest that projects information about the area in front of the wearer onto her abdomen in the form of a two-dimensional tactile image rendered by an array of vibration motors. This vest enables the user to sense features such as shape, position, and distance of objects in front of her.

Author Keywords
Vibrotactile information; depth information; wearable computing; tactile image; visual impairment

ACM Classification Keywords
H.5.2 [Information interfaces and presentation (e.g., HCI)]: User Interfaces

Introduction
Visually impaired persons commonly use white canes to detect obstacles at a range of a few feet around them.
To detect obstacles farther away, electronic aids can give audible or tactile cues (e.g., the Laser Cane [2], vibrotactile belts [4], or sparse arrays (4 × 4 motors) that output a coarse tactile image on the chest [6]). These aids can indicate the direction and distance to an obstacle and do not require the person to actively scan the surroundings. Unlike a cane, with which a blind person can accurately determine the characteristics of an object (e.g., its size and shape), sound cues, vibrotactile cues, and coarse tactile images cannot effectively communicate this information.

Figure 1: Depth information from an object is displayed as a vibrotactile image on the abdomen.

We believe that a 2D tactile display worn on the torso could help blind persons recognize objects better and avoid obstacles more precisely, thereby enabling them to walk alone more securely and confidently. When moving towards an obstacle, or when the obstacle approaches the person, a large tactile image could give continuous feedback about the object sliding across the body, allowing the person to extrapolate more quickly and more accurately where the object might go next. To explore this idea, we designed a wearable vibrotactile array (8 × 16) that maps depth information about objects to large vibrotactile patterns on the abdomen (see Figure 1).

Our work is inspired by the Tactile Television [5], a tactile vision substitution system (TVSS) that enabled blind persons to recognize static and moving objects from tactile images (20 × 20 pins) presented to their back [1]. The images of objects were captured with a camera and impressed on the skin either as outlines or as silhouettes (filled in) [5]. Blind persons quickly learned to interpret and to recognize with high accuracy tactile representations of lines, geometric shapes, and more complex images of everyday objects, and to track moving objects. The TVSS, however, was not wearable, and apart from a few other application scenarios (e.g., gaming [3]), no attempt was made to design a wearable high-resolution tactile system for blind persons.

Design Considerations
With sparse arrays, the amount of information that is simultaneously presented to the person is low, which is most useful for signaling the direction and the approximate distance to an obstacle, or for avoiding running into an object. Large arrays could provide more detail about the object and assist with object identification. However, a large stimulation area on the body may overwhelm the blind person in environments with multiple obstacles and in cognitively demanding situations (e.g., when walking in an unknown area, or when crossing a street).

We aim to explore the fundamental properties of human perception of large two-dimensional tactile sensations on the abdomen. These sensations could represent the tactile image of a static or a moving object. Our goal is to find design guidelines on how to effectively map information about the object (e.g., its distance, size, shape, movement) to a tactile representation, and to explore blind users' ability to perceive and interpret tactile images both in a static situation and in a mobile situation when walking (e.g., to discriminate changes in vibration intensity and in the vibration area that could represent changes in the object's distance, size, or orientation).

Interaction Design
VibroVision comprises a depth camera, a processing unit, and a vibration-rendering vest.
The depth camera captures the scene in front of the wearer. The depth image is processed into vibration levels for an array of motors mounted around the torso of the user (Figure 2). Based on the distance of the object to the wearer, the intensity of the vibration is adjusted: closer objects are indicated with stronger vibration, while objects farther away are represented with weaker vibration. The vibration pattern therefore enables the user to recognize the object as well as the distance to it. As the wearer moves through her environment, the vibrations of each motor adjust accordingly and display a tactile image on the wearer's belly. This can help the user create a mental representation of the surrounding environment and the obstacles in her path.

Our prototype can be adjusted for different body sizes, yet still presses the motor array tightly to the torso. For this, we used a fabric with 2% elastane and 98% cotton as a base fabric. The diameter of the vest is adjustable with velcro strips on the sides of the torso. Figure 3 shows the array of motors inside the vest.

Figure 2: The depth image (top) is mapped to vibration levels (bottom, ranging from no vibration to strong vibration). Closer objects produce stronger vibration. In this scenario, the gradient of vibration represents the tilted body of the person in front of the camera.

Figure 3: The array of vibration motors mounted inside the vest.

Prototype
Vibration rendering: To render a 2D vibration image, we use an array of 16 × 8 Eccentric Rotating Mass (ERM) pager motors. Each motor is oriented to vibrate along the frontal axis of the wearer's body (footnote 1). Each motor was soldered onto a PCB with a 2N7002 N-channel MOSFET amplifying the PWM signal to drive the motor, a flyback diode (1N4148 high-speed diode), and a 100 nF capacitor to reduce electromagnetic interference. The schematic and a photo of a PCB unit are shown in Figure 4. Each motor PCB is encapsulated in a 24 mm × 24 mm square, 10 mm thick 3D-printed housing. As a result, the motors are spaced around 35 mm from each other, which is the two-point threshold for the belly area [7] (footnote 2). The frequency of vibration is between 183–233 Hz, which lies within the range of optimum sensitivity of human skin (50–250 Hz) [7]. To control the vibration level, we use eight NXP PCA9685 drivers, each of which can control 16 motors with individual levels of pulse-width-modulated (PWM) signals.

Footnote 1: An ERM pager motor vibrates on two axes, but we prevent the propagation of vibration along the coronal plane by motor spacing. See the future work section for potential improvements. (ERM coin motors would vibrate on the longitudinal and transverse axes, which is not suitable for our use.)

Footnote 2: The discrimination threshold for vibration may differ from the two-point (touch) threshold due to the propagation of vibration. Unfortunately, we are not aware of such a threshold in the literature.
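The paper does not include driver code, but as a rough illustration, a frame of per-motor intensities could be written to the eight PCA9685 drivers over I2C roughly as follows. The library choice (the legacy Adafruit_PCA9685 Python package), the consecutive I2C addresses, the carrier frequency, and the row-major channel layout are assumptions for the sketch, not details from the paper.

# Illustrative sketch only: pushing a 16 x 8 intensity frame to eight PCA9685
# drivers over I2C. Library, addresses, and channel mapping are assumptions.
import Adafruit_PCA9685

NUM_DRIVERS = 8            # one PCA9685 per group of 16 motors
CHANNELS_PER_DRIVER = 16
PWM_MAX = 4095             # PCA9685 channels have 12-bit resolution

# Assumed: boards strapped to consecutive I2C addresses starting at 0x40.
drivers = [Adafruit_PCA9685.PCA9685(address=0x40 + i) for i in range(NUM_DRIVERS)]
for driver in drivers:
    driver.set_pwm_freq(1000)   # assumed PWM carrier frequency, not given in the paper

def render_frame(frame):
    """frame: 8 rows x 16 columns of intensities in 0..PWM_MAX (0 = motor off)."""
    flat = [value for row in frame for value in row]   # row-major, 128 values
    for index, value in enumerate(flat):
        driver = drivers[index // CHANNELS_PER_DRIVER]
        channel = index % CHANNELS_PER_DRIVER
        # Each channel takes an on-tick and an off-tick in 0..4095; turning on at
        # tick 0 and off at `value` yields the desired duty cycle for that motor.
        driver.set_pwm(channel, 0, min(value, PWM_MAX))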
Scene sensing: An ASUS Xtion PRO LIVE camera is mounted on the front of the vest at chest height to capture objects on the floor as well as at head height. The camera streams depth images to a Raspberry Pi 2, which performs the image processing and signals the motor drivers. The image processing and motor control are coded in Python using the OpenCV library. The original depth image (640 × 480 pixels) is cropped to match the aspect ratio of the vibration motor array. We use the depth information in the range between 0.8 and 2 m from the camera. This range is linearly scaled to our PWM output range (footnote 3), where near objects are mapped to high PWM values. Then, the image is scaled down to 16 × 8 and sent to the motor array. The image processing, depth capturing, and PWM actuation run on separate threads to maximize usage of all of the Raspberry Pi's CPU cores. The effective frame rate of our current image processing pipeline is 10 Hz, without GPU acceleration (footnote 4).

Footnote 3: In our system, we have 2980 possible PWM levels.

Footnote 4: According to our measurement, the depth frame rate on a Raspberry Pi 2 with OpenCV using Python is 18 fps. This could potentially be improved in the future once OpenCV supports GPU acceleration on the Raspberry Pi.
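The processing code itself is not part of the paper; a minimal sketch of the mapping just described might look like the following, assuming depth frames arrive as NumPy arrays of distances in millimetres (typical for OpenNI-style drivers). The centred crop, the handling of missing depth readings, and all names are illustrative assumptions, not the authors' implementation.

# Illustrative sketch of the depth-to-vibration mapping described above.
import cv2
import numpy as np

NEAR_MM, FAR_MM = 800, 2000    # usable depth range: 0.8-2 m (see above)
PWM_LEVELS = 2980              # available PWM levels (footnote 3)
GRID_W, GRID_H = 16, 8         # resolution of the motor array

def depth_to_pwm(depth_mm):
    """Convert one 640 x 480 depth frame (mm) into a GRID_H x GRID_W array of PWM levels."""
    h, w = depth_mm.shape
    # Crop to the 2:1 aspect ratio of the motor array (a centred crop is assumed).
    target_h = w // 2
    top = (h - target_h) // 2
    cropped = depth_mm[top:top + target_h, :].astype(np.float32)

    # Treat missing readings (0) as "far"; the paper lists robustness to gaps in
    # the depth image as future work, so this is only one simple choice.
    cropped[cropped == 0] = FAR_MM

    # Clip to the usable range and scale linearly: near objects map to high
    # intensity, objects at or beyond 2 m produce no vibration.
    clipped = np.clip(cropped, NEAR_MM, FAR_MM)
    intensity = (FAR_MM - clipped) / (FAR_MM - NEAR_MM)

    # Downscale to the motor grid; INTER_AREA averages the pixels in each cell.
    small = cv2.resize(intensity, (GRID_W, GRID_H), interpolation=cv2.INTER_AREA)
    return (small * (PWM_LEVELS - 1)).astype(np.uint16)

In the prototype, this conversion would run on its own thread, with depth capture and PWM actuation on separate threads, as noted above.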
Form factor: Our prototype is embedded in a single vest with a controller box (Raspberry Pi and motor drivers) and battery attached to the lower back, as shown in Figure 5. It can be operated untethered on a 5A LiPo battery embedded in the controller box.

Figure 5: VibroVision in use.

Conclusion and Future Work
VibroVision is a prototype 2D vibrotactile rendering system worn on the wearer's torso. The system aims to enrich blind users' awareness of their environment. The current prototype records depth images in front of the wearer and renders them as 16 × 8 tactile images.

Figure 4: Motor unit PCB (scale bar: 1 cm).

We plan to improve our hardware in two ways. First, the current ERM pager motors vibrate along both the frontal and the transverse axis. In the next version of the prototype, we plan to use Linear Resonant Actuator (LRA) motors that vibrate along a single axis to prevent vibration interference between adjacent output pixels. Second, we plan to reduce the number of cables by adding a controller to each motor unit, so that only one data signal line is needed for all motors in addition to the two power wires. This can be done, e.g., with a Worldsemi WS2811 controller, which has the advantage that no programming of the controller is necessary. However, if one motor unit breaks, no data will reach its successors. Another option is to put a programmable microcontroller on each unit and let all motor units listen to the data signal in parallel. This will improve reliability but increase setup costs. The simplified wiring will also allow switching to conductive textile paths for wiring, reducing the mechanical coupling between motor units and the weight of the vest, thereby improving wearability. As an additional benefit, the control box on the back will become smaller (fewer plugs and no driver PCB needed).

As for the software, a more sophisticated image processing algorithm could make small objects that are close to the wearer more distinct, and the algorithm could be made more robust to gaps in the depth image.

We plan to use an improved prototype to elicit psychophysical functions of tactile image rendering and to tune the mapping from depth input to vibration output. Finally, we plan to test the prototype with blind users in simulated navigation tasks as well as in real-world navigation.

Acknowledgments
This work was funded in part by the German B-IT Foundation.

References
[1] Paul Bach-y-Rita, Carter C. Collins, Frank A. Saunders, Benjamin White, and Lawrence Scadden. 1969. Vision substitution by tactile image projection. Nature 221 (1969), 963–964.
[2] J. M. Benjamin. 1974. The Laser Cane. Bulletin of Prosthetics Research (1974), 443–450.
[3] Sean Benson. 2015. 3D Haptic Vest for Visually Impaired and Gamers. (August 2015). https://hackaday.io/seanbenson
[4] Alvaro Cassinelli, Carson Reynolds, and Masatoshi Ishikawa. 2006. Augmenting spatial awareness with haptic radar. In Proc. of ISWC 2006. IEEE, 61–64.
[5] Carter C. Collins. 1970. Tactile television: mechanical and electrical image projection. IEEE Trans. Human-Mach. Syst. 11, 1 (1970), 65–71.
[6] Dimitrios Dakopoulos, Sanjay K. Boddhu, and Nikolaos Bourbakis. 2007. A 2D vibration array as an assistive device for visually impaired. In Proc. of BIBE '07. IEEE, 930–937.
[7] Eric R. Kandel, James H. Schwartz, Thomas M. Jessell, and others. 2000. Principles of Neural Science. Vol. 4. McGraw-Hill, New York.