1998, ACM SIGGRAPH 98 Conference abstracts and applications on - SIGGRAPH '98
…
Proceedings Computer Graphics International
Recent developments in image-based rendering have enabled a representation of virtual environments based on simulated panoramas, which we call virtual panoramas. Current virtual panorama systems do not provide natural, immersive interaction with the environment. We propose a new system that combines hardware and software components to provide such interaction with virtual panoramas. As part of the system we propose a specific representation for the interactions in a virtual panorama; this representation can serve as a basis for the design of a high-level language for creating such environments.
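The abstract does not specify the proposed interaction representation; purely as an illustration of what a declarative representation for virtual panorama interactions could look like, here is a minimal Python sketch in which panorama nodes carry named hotspots bound to actions (all names and fields below are hypothetical, not the authors' design):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Hotspot:
    name: str
    pan_range: tuple            # (min, max) viewing angles in degrees that activate it
    action: str                 # e.g. "goto:lobby" or "play:door_sound"

@dataclass
class PanoramaNode:
    name: str
    image: str                  # path to the cylindrical panorama image
    hotspots: List[Hotspot] = field(default_factory=list)

# A two-node environment: looking at the door in the office jumps to the lobby.
office = PanoramaNode("office", "office.png",
                      [Hotspot("door", (80, 110), "goto:lobby")])
lobby = PanoramaNode("lobby", "lobby.png",
                     [Hotspot("exit", (170, 200), "goto:office")])
environment: Dict[str, PanoramaNode] = {n.name: n for n in (office, lobby)}
```

A high-level authoring language could then be little more than a textual syntax for populating structures of this kind.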
1995
Traditionally, virtual reality systems use 3D computer graphics to model and render virtual environments in real-time. This approach usually requires laborious modeling and expensive special purpose rendering hardware. The rendering quality and scene complexity are often limited because of the real-time constraint. This paper presents a new approach which uses 360-degree cylindrical panoramic images to compose a virtual environment. The panoramic image is digitally warped on-the-fly to simulate camera panning and zooming. The panoramic images can be created with computer rendering, specialized panoramic cameras or by "stitching" together overlapping photographs taken with a regular camera. Walking in a space is currently accomplished by "hopping" to different panoramic points. The image-based approach has been used in the commercial product QuickTime VR, a virtual reality extension to Apple Computer's QuickTime digital multimedia framework. The paper describes the architecture, the file format, the authoring process and the interactive players of the VR system. In addition to panoramic viewing, the system includes viewing of an object from different directions and hit-testing through orientation-independent hot spots.
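The on-the-fly warping that simulates panning and zooming can be illustrated with a short sketch: for each pixel of the desired flat view, compute the viewing ray from the pan angle and field of view and look it up on the image cylinder. This is a minimal nearest-neighbour version for illustration only, not the QuickTime VR implementation (function and parameter names are assumptions):

```python
import numpy as np

def warp_view(pano, pan, fov, out_w, out_h):
    """Resample a 360-degree cylindrical panorama into a flat perspective
    view, simulating a camera that pans (pan, radians) and zooms (fov,
    horizontal field of view in radians). Nearest-neighbour lookup.
    pano: H x W x 3 array covering 360 degrees horizontally."""
    H, W = pano.shape[:2]
    f = (out_w / 2.0) / np.tan(fov / 2.0)   # focal length of the view, in pixels
    r = W / (2.0 * np.pi)                   # radius of the image cylinder, in pixels
    x, y = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                       np.arange(out_h) - out_h / 2.0)
    theta = pan + np.arctan2(x, f)          # horizontal angle of each viewing ray
    u = (theta % (2.0 * np.pi)) / (2.0 * np.pi) * W    # panorama column
    v = H / 2.0 + r * y / np.sqrt(x * x + f * f)       # row where the ray meets the cylinder
    u = np.clip(u.astype(int), 0, W - 1)
    v = np.clip(v.astype(int), 0, H - 1)
    return pano[v, u]

# Example: pan 30 degrees to the right with a 60-degree field of view.
# pano = imageio.imread("office_pano.png")   # hypothetical input image
# view = warp_view(pano, np.radians(30), np.radians(60), 640, 480)
```

Zooming simply changes the field of view (and hence the focal length), while panning shifts the angle at which columns are read from the cylinder.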
Proceedings of the ACM symposium on Virtual reality software and technology 1998 - VRST '98, 1998
We propose a manipulation technique for intuitively controlling the "bird's eye" overview display of an entire large-scale virtual environment in a display system that presents the user with both overviews (global views) and a life-size virtual environment (local view) simultaneously. It enables efficient navigation even in enormous and complicated environments using both global and local views. The motion of the bird's eye viewpoint is interlocked with the relative motion of the user's viewpoint and his/her hand, so the user can control the bird's eye viewpoint through intuitive manipulation. More sophisticated display techniques are obtained from the proposed method by introducing constraints on the parameters of the bird's eye viewpoint. Experimental results show that a combination of the bird's eye overview image and the life-size local image can be displayed to the user in a way that reflects his/her intuitive manipulation.
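One way such interlocking could be realized, assuming tracked head and hand positions each frame, is to displace the bird's eye camera by an amplified copy of the change in the hand's position relative to the head. The class, gain value, and height constraint below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class BirdsEyeController:
    """Illustrative sketch: the bird's-eye viewpoint moves in proportion to
    the motion of the user's hand relative to his/her head, so small hand
    gestures steer the overview camera over a large environment."""

    def __init__(self, position, gain=20.0, min_height=5.0):
        self.position = np.asarray(position, dtype=float)  # bird's-eye camera position
        self.gain = gain                                    # scale from hand motion to overview motion
        self.min_height = min_height                        # constraint: stay above the scene (z-up assumed)

    def update(self, head_pos, hand_pos, prev_head_pos, prev_hand_pos):
        # Hand position relative to the user's viewpoint, now and at the previous frame.
        rel = np.asarray(hand_pos) - np.asarray(head_pos)
        prev_rel = np.asarray(prev_hand_pos) - np.asarray(prev_head_pos)
        # Displace the overview camera by the amplified change in that
        # relative position ("interlocked" motion).
        self.position += self.gain * (rel - prev_rel)
        # Example of a constraint on the bird's-eye parameters: keep a
        # minimum height so the overview never drops into the local view.
        self.position[2] = max(self.position[2], self.min_height)
        return self.position
```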
1993
This paper describes an interface which helps people maintain a sense of spatial context while navigating virtual real-world scenes. First, a single panoramic image of the entire space is constructed from the separate partial, but detailed, images which constitute the original video sampling of the scene. The user can then navigate through this real-world data by manipulating either the panoramic overview or the original detailed views appearing in a separate window. Clicking or dragging the cursor over regions in the panoramic overview updates the corresponding detailed view. Using the panorama in this way frees the user from the traditional linear modes of interacting with virtual real-world scenes. In addition, interacting with the detailed view highlights the corresponding region in the panoramic overview and leaves a "trail" of the user's path through the space. The methods of visualizing and interacting with digital video described in this paper can also be applied to collections of digital video which do not correspond to a physical space, such as standard linear movies.
Figure caption: The overview image coupled with the detailed view. As the user navigates using the detailed view, a single box is highlighted in the panorama showing her location in the space. In trace mode the highlighted swatches correspond to the areas of the room most recently explored.
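A minimal sketch of the coupling between overview and detailed views, assuming each detailed view covers one horizontal slice of the panorama; the class and method names are hypothetical, not the paper's implementation:

```python
class PanoramaBrowser:
    """Each detailed view covers a horizontal slice of the panoramic
    overview; clicking the overview selects a view, and navigating the
    detailed view highlights its slice and records a trail of visits."""

    def __init__(self, num_views, pano_width):
        self.num_views = num_views
        self.pano_width = pano_width
        self.current = 0
        self.trail = []                       # recently explored view indices

    def click_overview(self, x):
        """Map a click at overview column x to the corresponding detailed view."""
        self.current = int(x / self.pano_width * self.num_views) % self.num_views
        self._visit(self.current)
        return self.current

    def navigate_detail(self, step):
        """Step through detailed views; the overview highlight follows."""
        self.current = (self.current + step) % self.num_views
        self._visit(self.current)
        return self.highlight_region()

    def highlight_region(self):
        """Overview pixel range to highlight for the current detailed view."""
        w = self.pano_width / self.num_views
        return (int(self.current * w), int((self.current + 1) * w))

    def _visit(self, view, max_trail=10):
        self.trail.append(view)
        self.trail = self.trail[-max_trail:]  # keep only recent regions ("trace mode")
```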
In this dissertation, the potential of the human body is investigated, with the aim of designing, developing, and analyzing new spatial interaction methods that surpass the performance or application possibilities of currently available techniques. In contrast to desktop interfaces, spatial interaction methods can potentially make use of all six degrees of freedom and are generally referred to as 3D user interfaces (3DUIs). These interfaces find wide applicability in many kinds of Virtual Environments, ranging from setups that allow free movement through a room with large, possibly stereoscopic displays, up to helmet-like or fully encompassing ("immersive") display systems. Due to their experimental character, most of the presented techniques can be labeled unconventional, even though many of them can find great applicability in more traditional work environments. Hence, through investigation of human potential, the design space of 3DUIs can be broadened.
More specifically, the basics of 3D user interfaces and related terminology are explored (chapter 1), after which an extensive and detailed look is taken at the possibilities of the different human "input and output channels," relating the psychophysiological possibilities to technology that currently exists or will be developed in the foreseeable future; a reflection on possible applications is included (chapter 2). In chapter 3, issues specific to designing and developing unconventional 3DUIs are investigated, ranging from the boundaries of human performance and specific human-computer interface matters to social and technical issues.
In chapter 4, a total of seven case studies illuminate multiple sides of designing, developing, and analyzing unconventional techniques, looking at both purely spatial and unconventional setups and at so-called hybrid interface techniques. More specifically, Shockwaves and BioHaptics explore alternative haptic feedback, either through audio and air-based shockwaves or through neuromuscular stimulation. Also dealing with haptics, Tactylus explores the multisensory binding factors of a device using coupled visual, auditory, and vibrotactile feedback. The fourth study, Cubic Mouse, explores a prop-based input (control) device resembling a coordinate system, in order to find specific performance advantages or flaws in comparison to generally used spatial controllers; it thereby makes use of a new spatial trajectory analysis method. The final three studies all focus on hybrid interfaces, integrating 2D and 3D I/O methods. ProViT deals with integrating a PenPC, a spatial pen device, and the Cubic Mouse to control engineering applications, focusing foremost on flow-of-action factors. Capsa Arcana comprises two consoles used in museum applications that integrate MIDI controllers and desktop devices to allow for more interesting and potentially unconventional control. Finally, with the Eye of Ra, a new input device form is presented; the Eye of Ra has been specifically designed for closely combining the control of 2D and spatial actions in medical scenarios. The final chapter concludes the dissertation with a short summary and reflection, including a road map of open issues and fields of further research.
… and Development, 2002. …, 2002
… VR Workshop: New Directions in 3D …, 2005
Proceedings of the 15th Annual Acm Symposium, 2002
This paper describes StyleCam, an approach for authoring 3D viewing experiences that incorporate stylistic elements that are not available in typical 3D viewers. A key aspect of StyleCam is that it allows the author to significantly tailor what the user sees and when they see it. The resulting viewing experience can approach the visual richness and pacing of highly authored visual content such as television commercials or feature films. At the same time, StyleCam allows for a satisfying level of interactivity while avoiding the problems inherent in using unconstrained camera models. The main components of StyleCam are camera surfaces which spatially constrain the viewing camera; animation clips that allow for visually appealing transitions between different camera surfaces; and a simple, unified, interaction technique that permits the user to seamlessly and continuously move between spatial-control of the camera and temporal-control of the animated transitions. Further, the user's focus of attention is always kept on the content, and not on extraneous interface widgets. In addition to describing the conceptual model of StyleCam, its current implementation, and an example authored experience, we also present the results of an evaluation involving real users.
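A rough sketch of the two central ideas, a camera constrained to a parametric surface and a single drag gesture that switches from spatial to temporal control at the surface edge, is given below; the spherical patch, parameter names, and thresholds are assumptions, not StyleCam's actual geometry or code:

```python
import numpy as np

def camera_on_surface(u, v, center=np.array([0.0, 0.0, 0.0]), radius=5.0):
    """A 'camera surface' sketch: the viewing camera is constrained to a
    spherical patch around the content; (u, v) in [0, 1]^2 are the surface
    parameters the user drags over."""
    yaw = (u - 0.5) * np.pi            # +/- 90 degrees around the object
    pitch = (v - 0.5) * (np.pi / 3)    # +/- 30 degrees above/below
    eye = center + radius * np.array([np.sin(yaw) * np.cos(pitch),
                                      np.sin(pitch),
                                      np.cos(yaw) * np.cos(pitch)])
    return eye, center                 # camera position and look-at target

def drag(u, v, t, du, dv, on_surface=True):
    """Unified control: while on a surface the drag is spatial (moves u, v);
    once the user drags past the surface edge, the same drag becomes
    temporal and scrubs an animated transition parameterized by t."""
    if on_surface:
        u, v = np.clip(u + du, 0, 1), np.clip(v + dv, 0, 1)
        if u == 0.0 or u == 1.0:       # reached an edge: hand off to a transition clip
            on_surface, t = False, 0.0
    else:
        t = np.clip(t + du, 0, 1)      # scrub the transition with horizontal drag
        if t >= 1.0:                   # clip finished: land on the next camera surface
            on_surface, u, v = True, 0.5, 0.5
    return u, v, t, on_surface
```

The point of the unified gesture is that the user never switches tools: the same drag that orbits the camera on a surface also plays or reverses the authored transition once the surface boundary is crossed.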
Journal of the Centre for Buddhist Studies, Sri Lanka, 15: 1–22, 2018
In this article I reply to criticism raised by Wynne (2018) of my examination of the two paths theory, according to which the early discourses reflect two conflicting approaches to liberation, one based on intellectual reflection, the other on absorption.