Papers by Oussama Metatla
PeerJ Computer Science, 2016
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16, 2016
Journal on Multimodal User Interfaces, 2016
CoDesign, 2015
Methods used to engage users in the design process often rely on visual techniques, such as paper prototypes, to facilitate the expression and communication of design ideas. The visual nature of these tools makes them inaccessible to people living with visual impairments. In addition, while using visual means to express ideas for designing graphical interfaces is appropriate, it is harder to use them to articulate the design of non-visual displays. In this article, we present an approach to conducting participatory design with people living with visual impairments, incorporating various techniques to help make the design process accessible. We reflect on the benefits and challenges that we encountered when employing these techniques in the context of designing cross-modal interactive tools.
This paper reflects on how Web cognition is experienced by blind users employing screen-readers for Web interaction. Many of the differences in Web interaction between sighted users and users of screen-readers arise from the serial way in which Web pages are rendered by screen-readers. We begin by examining the ways in which these differences are brought about through the functionality
Lecture Notes in Computer Science, 2013
Lecture Notes in Computer Science, 2013
Journal of the Audio Engineering Society, 2012
Although research on non-visual access to visually represented information is steadily growing, very little work has investigated how such forms of representation could be constructed through non-visual means. We discuss in this paper our approach for providing audio access to relational diagrams using multiple perspective hierarchies, and describe the design of two interaction strategies for constructing and manipulating such diagrams through this approach. A comparative study that we conducted with sighted users showed that a non-guided strategy allowed for significantly faster interaction times, and that both strategies supported similar levels of diagram comprehension. Overall, the reported study revealed that using multiple perspective hierarchies to structure the information encoded in a relational diagram enabled users to construct and manipulate such information through an audio-only interface, and that combining aspects from the guided and the non-guided strategies could support greater usability.
This paper describes an approach to support non-visual exploration of graphically represented information. We used a hierarchical structure to organize the information encoded in a relational diagram and designed two alternative audio-only interfaces for presenting the hierarchy, each employing a different level of verbosity. We report on an experimental study that assessed the viability of our proposed approach as well as the efficiency and learnability of each interface. Our results show that the relational information encoded in a diagram could be non-visually navigated and explored through a hierarchy, and that substituting verbal descriptions of parts of such information with nonverbal sounds significantly improves performance without compromising comprehension.
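The hierarchical approach described in these abstracts can be illustrated with a small sketch. This is an illustration only, not the authors' system: the edge-list diagram format and the names `build_hierarchy` and `describe` are assumptions made for the example, and the `*` placeholder merely stands in for a nonverbal sound (earcon) in the low-verbosity interface.

```python
# Illustrative sketch: organizing a relational diagram as a hierarchy
# for audio-only navigation, with a verbose (fully verbal) and a
# terse (nonverbal-cue) presentation of each branch.

# A toy relational diagram as (source, relation, target) triples.
edges = [
    ("Sensor", "measures", "Temperature"),
    ("Sensor", "sends-to", "Controller"),
    ("Controller", "adjusts", "Heater"),
]

def build_hierarchy(edges):
    """Group the diagram by node: each node becomes a branch whose
    children are its outgoing relations."""
    tree = {}
    for src, relation, dst in edges:
        tree.setdefault(src, []).append((relation, dst))
    return tree

def describe(tree, node, verbose=True):
    """Render one branch as the text a speech synthesizer would read.
    With verbose=False, the repeated node name is replaced by '*',
    a stand-in for a short nonverbal sound."""
    parts = []
    for relation, dst in tree.get(node, []):
        prefix = node if verbose else "*"
        parts.append(f"{prefix} {relation} {dst}")
    return "; ".join(parts)

tree = build_hierarchy(edges)
print(describe(tree, "Sensor", verbose=True))
print(describe(tree, "Sensor", verbose=False))
```

A user would step through branches of the tree rather than scan the diagram spatially; the verbosity switch mirrors the trade-off the study measured between verbal description and nonverbal cues.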
Developments in Human-Computer Interaction have always exploited screen space, basing solutions around and emphasising visual output. In situations where the user's eyes are occupied, screen space is lacking, or the user has a visual impairment, vision cannot always be relied on for optimum performance, in which case information must be communicated through other means of presentation.
Every day our brains receive and combine information from different senses to understand our environment. For instance, when we both see and hear someone speaking, we associate the words spoken with the speaker. The process of coordinating information received through multiple senses is fundamental to human perception and is known as cross-modal interaction (Driver and Spence 1998).
Although research on non-visual access to visualisations is steadily growing, very little work has investigated strategies for constructing such forms of representation through non-visual means. This paper describes the design of two interaction strategies for constructing and manipulating relational diagrams in audio. We report on a study that compared the two strategies, and discuss their advantages and disadvantages in terms of how efficiently they support the activity of constructing diagrams in an audio-only interface.
We present the design of a cross-modal tool for collaborative diagram editing. The tool was designed to address the challenge of supporting collaborators who access a shared interactive space through different sets of modalities. This was achieved by augmenting a visual diagram editor with auditory and haptic views to allow simultaneous visual and non-visual interaction.
We describe the design of a collaborative cross-modal tool that enables visually-impaired and sighted coworkers to access and edit shared diagrams in real time, and a case study of its use in a real-world workplace environment. Our findings highlight the potential of cross-modal collaboration to improve workplace inclusion, and identify initial challenges and opportunities for cross-modal design.
We survey Digital Audio Workstations (DAWs) and issues relating to their access by visually impaired musicians and sound engineers. Analysis of accessibility problems reveals common issues across a range of systems and audio editing tasks. We outline how results from auditory and haptic display research could be leveraged to address these issues, enabling access to training and jobs for visually impaired people in the creative industries.
The questions involved in the design of an interactive, audio-only computer-based football game are explored. The game design process starts by exploring basic questions such as the size of the playing area, orientation, awareness of teammates and opponents, and basic navigation.
This report presents a series of explorations into the feasibility of using low-cost devices to design support for non-visual interaction with diagrams. These explorations are a follow-up to the Collaborative Cross-modal Interfaces project (CCmI), which explored the potential of using multimodal input and output technologies (audio, haptics, graphics) to improve the accessibility of collaboration between visually-impaired and sighted individuals when using diagrams in the workplace.
We present a series of explorations into the feasibility of using low-cost devices to design support for non-visual interaction with diagrams. These explorations are a follow-up to the Collaborative Cross-modal Interfaces project (CCmI), a Digital Economy Research in the Wild project which investigated the use of multimodal input and output technologies to improve the accessibility of collaboration between visually-impaired and sighted individuals when using diagrams in the workplace.