IJESRT
INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH
TECHNOLOGY
STUDY PAPER ON EDUCATION USING VIRTUAL REALITY
Anamika Modi*, Ayush Jaiswal, Princy Jain
Computer Science & Engineering, Acropolis Institute of Technology and Research, Indore (M.P.) 452001, India
DOI:
ABSTRACT
This report provides a short study of the field of virtual reality, highlighting application domains, technological
requirements, and currently available solutions. In today's market, virtual reality plays a crucial role for people. In
several countries it is already used to create realistic learning experiences, not only for school children but also in
higher education. In this paper, we study the technologies used in virtual reality.
KEYWORDS: Virtual reality, application domain, technologies.
INTRODUCTION
Virtual reality is among today's newest technologies: it makes a person feel that what is happening in the
surroundings is real. It combines the latest software and hardware to immerse us in a digitally created space, so that
with modern computing machines and advanced software we feel present in a real environment. Virtual reality thus
provides a different way to see and experience information. For example, many of the games found in malls convey
the corresponding atmosphere: if we are playing a car racing game and the car collides, we get the same feeling as
in a real collision.
HISTORY
The first traces of virtual reality came from the world of science fiction. Stanley G. Weinbaum’s "Pygmalion's
Spectacles" is recognized as one of the first works of science fiction that explores virtual reality. Morton Heilig
wrote in the 1950s of an "Experience Theatre." He built a prototype of his vision, dubbed the Sensorama, in 1962,
along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). In
1968, Ivan Sutherland, with the help of his student Bob Sproull, created what is widely considered to be the first
virtual reality and augmented reality (AR) head-mounted display (HMD) system.
REQUIREMENTS
The goal of virtual reality is to put the user in the loop of a real-time simulation, immersed in a world that can be
both autonomous and responsive to the user's actions. The requirements for virtual reality applications are defined by
analyzing the needs in terms of input and output channels for the virtual world simulator.
User Input
The input channels of a virtual reality application are those with which humans emit information and interact with
the environment. We interact with the world mainly through locomotion and manipulation, and we communicate
information mostly by means of voice, gestures, and facial expressions. Gestural communication as well as
locomotion make full body motion analysis desirable, while verbal communication with the computer or other users
makes voice input an important option. As stated in the 1995 US National Research Council Report on Virtual
Reality, because human beings constitute an essential component of all synthetic environment (SE) systems, there
are very few areas of knowledge about human behavior that are not relevant to the design, use, and evaluation of
these systems.
Discussion
The analysis of the requirements in terms of input and output channels has highlighted fidelity and performance
requirements for the bare simulation of existence of synthetic objects. Successful virtual reality applications must
combine new input and output devices in ways that provide not only such an illusion of existence of synthetic
objects, but also natural metaphors for interacting with them. An ACM CHI workshop on the challenges of
3D interaction [16] has identified five types of characteristics that 3D user interfaces must have to exploit
the perceptual and spatial skills of users.
These are summarized as follows:
1. Multiple/integrated input and output modalities.
2. Functional fidelity.
3. Responsiveness.
4. Affordances.
5. Appeal to mental representation.
Sensory Feedback
Our sense of physical reality is a construction derived from the symbolic, geometric, and dynamic information
directly presented to our senses. The output channels of a virtual reality application thus correspond to our senses:
vision, touch and force perception, hearing, smell, and taste. Sensory simulation is therefore at the heart of virtual reality
technology.
Visual Perception
Vision is generally considered the most dominant sense, and there is evidence that human cognition is oriented
around vision [21]. High quality visual representation is thus critical for virtual environments. The major aspects of
the visual sense that have an impact on display requirements are the following:
1. Depth perception: stereoscopic viewing is a primary human visual mechanism for perceiving depth.
However, because human eyes are located on average only 6.3 centimeters apart, the geometric benefits of
stereopsis are lost for objects more distant than about 30 meters, and it is most effective at much closer
distances (see the sketch after this list). Other primary cues (eye convergence and accommodation) and
secondary cues (e.g. perspective, motion parallax, size, texture, shading, and shadows) are essential for far
objects and of varying importance for near ones.
2. Accuracy and field-of-view: the total horizontal field of vision of both human eyes is about 180 degrees
without eye/head movement and 270 degrees with fixed head and moving eyes. The vertical field of vision is
typically over 120 degrees. While the total field is not necessary for a user to feel immersed in a visual
environment, 90 to 110 degrees are generally considered necessary for the horizontal field of vision [49];
when considering accuracy, the central fovea of a human eye has a resolution of about 0.5 minutes of arc [20].
3. Critical fusion frequency: visual simulations achieve the illusion of animation by rapid successive
presentation of a sequence of static images. The critical fusion frequency is the rate above which humans
are unable to distinguish between successive visual stimuli. This frequency is proportional to the luminance
and the size of the area covered on the retina [9, 23]. Typical values for average scenes are between 5 and
60 Hz [49]. A rule of thumb in the computer graphics industry suggests that below about 10-15 Hz, objects
will not appear to be in continuous motion, resulting in distraction [27]. High-speed applications, such as
professional flight simulators, require visual feedback frequencies of more than 60 Hz [4].
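To make the first point concrete, the sketch below (an illustration we add here, not code from the paper) computes the vergence angle for the quoted 6.3 cm interocular distance and shows how quickly the depth information carried by stereopsis decays with distance; the per-meter derivative approximation IPD/d² is our own simplification.

```python
import math

# Illustrative sketch (not from the paper): how binocular disparity decays
# with distance, using the 6.3 cm interocular distance and ~0.5 arcmin
# foveal resolution quoted above.
IPD_M = 0.063            # average interocular distance, meters
FOVEAL_RES_ARCMIN = 0.5  # approximate foveal resolution, minutes of arc

def vergence_angle_arcmin(distance_m: float) -> float:
    """Angle subtended at the two eyes by a point at the given distance."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m))) * 60

def disparity_change_arcmin_per_m(distance_m: float) -> float:
    """Approximate change in vergence angle per meter of depth (IPD / d^2)."""
    return math.degrees(IPD_M / distance_m ** 2) * 60

for d in (0.5, 2.0, 10.0, 30.0):
    print(f"{d:5.1f} m: vergence {vergence_angle_arcmin(d):8.2f} arcmin, "
          f"change {disparity_change_arcmin_per_m(d):8.2f} arcmin/m")
# Around 20-30 m the per-meter change drops below FOVEAL_RES_ARCMIN, which
# is consistent with the ~30 m limit on stereopsis cited above.
```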
Sound Perception
Analyzing crudely how we use our senses, we can say that vision is our privileged means of perception, while
hearing is mainly used for verbal communication, to get information from invisible parts of the world, or when vision
does not provide enough information. Audio feedback must thus be able to synthesize sound and to position sound
sources in 3D space, and it can be linked to a speech generator for verbal communication with the computer. In
humans, the auditory apparatus is most efficient between 1000 and 4000 Hz, with a drop in efficiency as the sound
frequency becomes higher or lower [49]. The synthesis of a 3D auditory display typically involves the digital
generation of stimuli using location-dependent filters. In humans, spatial hearing is performed by evaluating
monaural cues, which are the same for both ears, as well as binaural cues, which differ between the two eardrum
signals. In general, the distance between a sound source and the two ears is different for sound sources outside the
median plane.
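As an illustration of one binaural cue, the sketch below estimates the interaural time difference for a distant source; the spherical-head (Woodworth) approximation and the head radius are our assumptions, not figures taken from the paper.

```python
import math

# Sketch of one binaural cue: the interaural time difference (ITD) for a
# distant source, using the classical Woodworth spherical-head
# approximation. The model and head radius are illustrative assumptions.
HEAD_RADIUS_M = 0.0875  # typical head radius
SPEED_OF_SOUND = 343.0  # m/s in air

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a far source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

for az in (0, 15, 45, 90):
    print(f"azimuth {az:3d} deg -> ITD {itd_seconds(az) * 1e6:6.1f} microseconds")
# A 3D audio renderer applies such location-dependent delays (with matching
# spectral filters) to each ear's signal to place the source in space.
```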
Position, Touch and Force Perception
While the visual and auditory systems are only capable of sensing, the haptic sense is capable both of sensing what
is happening around the human being and of acting on the environment. This makes it an indispensable part of many
human activities, and thus, in order to provide the realism needed for effective applications, VR systems need to
provide inputs to, and mirror the outputs of, the haptic system. The primary input/output variables for the haptic
sense are displacements and forces. Haptic sensory information is distinguished as either tactile or proprioceptive
information. The difference between these is the following: suppose the hand grasps an object; tactile information
then describes the skin's sensations of contact (pressure and texture at the fingertips), while proprioceptive
information describes the sensed positions of the fingers and the forces exerted during the grasp.
Olfactory Perception
There exist specialized applications where olfactory perception is important. One of these is surgical simulation,
which needs to provide the proper olfactory stimuli at the appropriate moments during the procedure. Similarly, the
training of emergency medical personnel operating in the field should bring them into contact with the odors that
would make the simulated environment seem more real and which might provide diagnostic information about the
injuries that the simulated casualty is supposed to have incurred [22]. The main problem in simulating the human
olfactory system is, indeed, that a number of questions about how it works remain unanswered.
ENABLING TECHNOLOGY: HARDWARE
Currently, a range of devices, including hand measurement hardware, head-mounted displays, 3D audio systems,
and speech synthesis or recognition systems, is available on the market. At the same time, many research labs are
working on defining and developing new devices, such as tactile gloves and eye-tracking devices, and on improving
existing devices such as force feedback devices, head-mounted displays, and tracking systems.
Position Tracking
Head tracking is the most valuable input for promoting the sense of immersion in a VR system. The types of trackers
developed for the head can also be mounted on glove or body-suit devices to provide tracking of a user's hand or
some other body part. Many different technologies can be used for tracking. Mechanical systems measure change in
position by physically connecting the remote object to a point of reference with jointed linkages; they are quite
accurate, have low lag, and are good for tracking small volumes, but they are intrusive due to tethering and subject
to mechanical wear. Magnetic systems couple a transmitter producing magnetic fields with a receiver capable of
determining the strength and angles of the fields; they are quite inexpensive and accurate and accommodate larger
ranges, up to the size of a small room, but they are subject to field distortion and electromagnetic interference.
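As a small illustration of the lag/jitter trade-off these trackers face, the sketch below smooths noisy position samples with a one-pole low-pass filter; the filter constant and the sample values are made up for the example.

```python
from dataclasses import dataclass

# Sketch of smoothing noisy tracker samples (e.g., under magnetic field
# distortion) with a one-pole low-pass filter; alpha trades jitter
# against added lag.
@dataclass
class Vec3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def smooth(prev: Vec3, sample: Vec3, alpha: float = 0.3) -> Vec3:
    """alpha=1 passes raw samples (no lag, full jitter); a small alpha
    is smooth but laggy."""
    return Vec3(
        prev.x + alpha * (sample.x - prev.x),
        prev.y + alpha * (sample.y - prev.y),
        prev.z + alpha * (sample.z - prev.z),
    )

# usage: feed each raw head-position sample as it arrives, once per frame
pose = Vec3()
for raw in (Vec3(0.00, 1.60, 0.00), Vec3(0.02, 1.61, -0.01), Vec3(0.01, 1.59, 0.00)):
    pose = smooth(pose, raw)
    print(pose)
```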
Eye Tracking
Eye trackers work somewhat differently: they do not measure head position or orientation but the direction in which
the user's eyes point out of the head. This information is used to determine the direction of the user's gaze and
to update the visual display accordingly. The approach can be optical, electroocular, or electromagnetic. The first
of these, optical, uses reflections from the eye's surface to determine eye gaze. Most commercially available eye
trackers are optical; they usually illuminate the eye with IR LEDs, generating corneal reflections.
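The sketch below illustrates the calibration step such optical trackers typically rely on: fitting a map from the measured pupil-to-corneal-reflection offset to screen coordinates while the user fixates known targets. The linear model and the sample data are our illustrative assumptions.

```python
import numpy as np

# Sketch of gaze calibration for an optical eye tracker: fit a linear map
# from the pupil-glint offset to screen coordinates using known targets.
def fit_gaze_map(offsets: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Least-squares fit of [offset_x, offset_y, 1] -> screen (u, v)."""
    design = np.column_stack([offsets, np.ones(len(offsets))])
    coeffs, *_ = np.linalg.lstsq(design, targets, rcond=None)
    return coeffs

def gaze_point(coeffs: np.ndarray, offset: np.ndarray) -> np.ndarray:
    return np.append(offset, 1.0) @ coeffs

# calibration: the user fixates four known screen corners
offsets = np.array([[-0.2, -0.1], [0.2, -0.1], [-0.2, 0.1], [0.2, 0.1]])
targets = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
coeffs = fit_gaze_map(offsets, targets)
print(gaze_point(coeffs, np.array([0.0, 0.0])))  # -> roughly screen center [0.5 0.5]
```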
Full Body Motion
There are two kinds of full-body motion to account for: passive motion and active self-motion. The first, used to
simulate vehicles, is quite feasible with current technology. The usual practice is to build a "cabin" that represents the
physical vehicle and its controls, mount it on a motion platform, and generate virtual window displays and motion
commands in response to the user's operation of the controls. Such systems are usually specialized for a particular
application (e.g., flight simulators), and they represented the first practical VR applications, used for military and
pilot training.
Visual Feedback
Humans are strongly oriented to their visual sense: they give precedence to the visual system if there are conflicting
inputs from different sensory modalities. Visual displays used in a VR context should guarantee stereoscopic vision
and the ability to track head movements and continually update the visual display to reflect the user’s movement
through the environment. In addition, the user should receive visual stimuli of adequate resolution, in full color, with
adequate brightness, and high-quality motion representations.
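A minimal sketch of the stereoscopic step implied here: each frame, the tracked head pose yields two eye positions, offset by half the interocular distance, and the scene is rendered once per eye. The vector conventions below are assumptions for the example; rendering itself is left abstract.

```python
import numpy as np

# Sketch of the per-frame stereo step: derive one position per eye from the
# tracked head pose, using the interocular distance quoted in the Visual
# Perception section, then render the scene once from each position.
IPD = 0.063  # meters

def eye_positions(head_pos: np.ndarray, right_axis: np.ndarray):
    """Left and right eye positions from the head position and the
    head's 'right' direction (taken from tracked orientation)."""
    half = (IPD / 2.0) * right_axis / np.linalg.norm(right_axis)
    return head_pos - half, head_pos + half

head = np.array([0.0, 1.6, 0.0])    # tracked head position, meters
right = np.array([1.0, 0.0, 0.0])   # head's right direction
left_eye, right_eye = eye_positions(head, right)
print(left_eye, right_eye)          # render the scene once from each position
```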
Haptic Feedback
At the current time, tactile feedback is not supported in practical use; that is, tactile systems are not in everyday use
by end users (as opposed to developers). Tactile stimulation can be achieved in a number of different ways. Those
presently used in VR systems include mechanical pins activated by solenoids, piezoelectric crystals in which
changing electric fields cause expansion and contraction, shape-memory alloy technologies, voice coils vibrating to
transmit low-amplitude, high-frequency vibrations to the skin, several kinds of pneumatic systems (air jets, air rings,
bladders), and heat-pump systems.
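The paragraph above covers actuator hardware; on the software side, one common way to drive force-feedback devices (a standard technique, not one described in this paper) is penalty-based rendering, sketched below with illustrative constants.

```python
# Sketch of penalty-based force rendering: penetration into a virtual
# surface is resisted by a spring-damper force. Constants are illustrative.
STIFFNESS = 800.0  # N/m
DAMPING = 2.0      # N*s/m

def contact_force(penetration_m: float, velocity_m_s: float) -> float:
    """Force pushing the probe out of the surface; zero when out of contact.
    Damping only opposes inward motion, to avoid a 'sticky' surface."""
    if penetration_m <= 0.0:
        return 0.0
    return STIFFNESS * penetration_m + DAMPING * max(-velocity_m_s, 0.0)

# usage: haptic loops typically run near 1 kHz for stable, crisp contact
print(contact_force(0.002, -0.05))  # probe 2 mm inside, moving inward -> ~1.7 N
```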
Sound Feedback
The commercial products available for the development of 3-D sounds (sounds apparently originating from any
point in a 3-D environment) are very different in quality and price. They range from low-cost, PC-based, plug-in
technologies that provide limited 3-D capabilities to professional quality, service-only technologies that provide true
surround audio capabilities.
ENABLING TECHNOLOGY: SOFTWARE
The difficulties associated with achieving the key goal of immersion have led research in virtual environments to
concentrate far more on the development of new input and display devices than on higher-level techniques for 3D
interaction. It is only recently that interaction with synthetic worlds has tried to go beyond straightforward
interpretation of physical device data [17].
Man-machine communication
Interactive programs have to establish bidirectional communication with humans. Not only do they have to let
humans modify information, but they also have to present it in a way that makes it simple to understand, indicates
what types of manipulations are permitted, and makes it obvious how to perform them. As noted by Marcus [26], awareness of
semiotic principles, in particular the use of metaphors, is essential for researchers and developers in achieving more
efficient, effective ways to communicate to more diverse user communities. As a common vocabulary is the first
step towards effective communication, user-interface software development systems should assist developers by
providing implementations of standard interaction metaphors.
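As a sketch of what "providing implementations of standard interaction metaphors" might look like in such a development system, the toolkit-style interface below is hypothetical; all class and method names are our own.

```python
from abc import ABC, abstractmethod

# Hypothetical toolkit interface illustrating reusable interaction
# metaphors that a development system could ship.
class InteractionMetaphor(ABC):
    """Contract that every reusable interaction metaphor implements."""

    @abstractmethod
    def begin(self, target: object) -> None: ...

    @abstractmethod
    def update(self, hand_pose: tuple) -> None: ...

    @abstractmethod
    def end(self) -> None: ...

class GrabMetaphor(InteractionMetaphor):
    """Direct manipulation: the grabbed object follows the tracked hand."""

    def begin(self, target: object) -> None:
        self.target = target

    def update(self, hand_pose: tuple) -> None:
        print(f"move {self.target} to {hand_pose}")  # would set the object's transform

    def end(self) -> None:
        self.target = None

# usage: the application composes standard metaphors instead of re-implementing them
grab = GrabMetaphor()
grab.begin("teapot")
grab.update((0.1, 1.2, -0.4))
grab.end()
```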
Iterative construction
Good user interfaces are “user friendly” and “easy to use”. These are subjective qualities, and, for this reason, as
stated by Myers, the only reliable way to generate quality interfaces is to test prototypes with users and modify the
design based on their comments.
Parallel programming
Interactive applications have to model user interaction with a dynamically changing world. In order for this to be
possible, applications must handle, within a short time, real-world events that are generated in an order that is not
known before the simulation is run. Thus, user-interface software is inherently parallel, and some form of
parallelism, from quasi-parallelism to pseudo-parallelism to true parallelism, has to be used for its development.
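A minimal sketch of this pseudo-parallelism: input events arrive asynchronously on their own thread while the simulation loop polls for them without blocking. The event names and timings are illustrative.

```python
import queue
import threading
import time

# Sketch of pseudo-parallel event handling: a device thread produces events
# at its own rate; the simulation loop polls without blocking.
events: "queue.Queue[str]" = queue.Queue()

def input_device() -> None:
    """Stands in for a tracker or glove delivering events asynchronously."""
    for name in ("grab", "move", "release"):
        time.sleep(0.01)
        events.put(name)

threading.Thread(target=input_device, daemon=True).start()

for frame in range(30):       # the simulation loop runs at its own rate
    try:
        print("handling", events.get_nowait())
    except queue.Empty:
        pass                  # no input this frame; keep simulating
    time.sleep(0.005)         # stands in for advancing the world state
```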
CONCLUSION
Virtual environment technology has been developing over a long period, and offering presence simulation to users
as an interface metaphor to a synthesized world has become the research agenda for a growing community of
researchers and industries. Considerable achievements have been obtained in the last few years, and we can finally
say that virtual reality is here, and is here to stay. More and more research has demonstrated its usefulness both from
the evolutionary perspective of providing a better user interface and from the revolutionary perspective of enabling
previously impossible applications. Examples of application areas that have benefited from VR technology are
virtual prototyping, simulation and training, telepresence and teleoperation, and augmented reality. Virtual reality
has thus finally begun to shift away from the purely theoretical and towards the practical. Nonetheless, writing
professional virtual reality applications remains an inevitably complex task, since it involves the creation of a
software system with strict quality and timing constraints dictated by human factors. Given the goals of virtual
reality, this complexity will probably always be there.
REFERENCES
[1] Applied virtual reality. In SIGGRAPH Course Notes 14. ACM SIGGRAPH, 1998.
[2] APPINO, P., LEWIS, J. B., KOVED, L., LING, D. T., RABENHORST, D. A., AND CODELLA, C. F. An
architecture for virtual worlds. Presence: Teleoperators and Virtual Environments 1, 1 (1992), 1–17.
[3] BIER, E. A., STONE, M. C., PIER, K., BUXTON, W., AND DEROSE, T. Toolglass and Magic Lenses: The
see-through interface. In Computer Graphics (SIGGRAPH ’93 Proceedings) (Aug. 1993), J. T. Kajiya, Ed.,
vol. 27, pp. 73–80.
[4] BRYSON, S. T., AND JOHAN, S. Time management, simultaneity and time-critical computation in
interactive unsteady visualization environments. In IEEE Visualization ’96 (Oct. 1996), IEEE. ISBN 0-89791-864-9.
[5] BUTTOLO, P., OBOE, R., AND HANNAFORD, B. Architectures for shared haptic virtual environments.
Computers and Graphics 21, 4 (July–Aug. 1997), 421–432.
[6] CARD, S. K., ROBERTSON, G. G., AND MACKINLAY, J. D. The information visualizer, an information
workspace. In Proceedings of ACM CHI’91 Conference on Human Factors in Computing Systems (1991),
Information Visualization, pp. 181–188.
[7] CODELLA, C., JALILI, R., KOVED, L., LEWIS, J. B., LING, D. T., LIPSCOMB, J. S., RABENHORST,
D. A., WANG, C. P., NORTON, A., SWEENEY, P., AND TURK, G. Interactive simulation in a
multiperson virtual world. In Proceedings of ACM CHI’92 Conference on Human Factors in Computing
Systems (1992), Tools & Architectures for Virtual Reality and Multi-User, Shared Data, pp. 329–334.
[8] CONNER, D. B., SNIBBE, S. S., HERNDON, K. P., ROBBINS, D. C., ZELEZNIK, R. C., AND VAN
DAM, A. Three-dimensional widgets. Computer Graphics 25, 2 (Mar. 1992), 183–188.
[9] DAVSON, H. Physiology of the Eye, fifth ed. Pergamon Press, New York, NY, USA, 1994.
[10] FALBY, J. S., ZYDA, M. J., PRATT, D. R., AND MACKEY, R. L. NPSNET: Hierarchical data structures
for real-time three-dimensional visual simulation. Computers and Graphics 17, 1 (Jan.–Feb. 1993), 65–69.
[11] GOBBETTI, E. Virtuality Builder II: Vers une architecture pour l’interaction avec des mondes synthétiques.
PhD thesis, Swiss Federal Institute of Technology, Lausanne, Switzerland, 1993.
[12] GOBBETTI, E., AND BALAGUER, J.-F. VB2: An architecture for interaction in synthetic worlds. In
Proceedings of the ACM SIGGRAPH Symposium on User Interface Software and Technology (Conference
held in Atlanta, GA, USA, 1993), Virtual Reality, ACM Press, pp. 167–178.
[13] GOMEZ, J. E., CAREY, R., FIELDS, T., AND VAN DAM, A. Why is 3D interaction so hard, and what
can we really do about it? Computer Graphics 28, Annual Conference Series (1994).
[14] GOULD, J. D., AND LEWIS, C. Designing for usability: key principles and what designers think.
Communications of the ACM 28, 3 (Mar. 1985).
[15] HAND, C. Survey of 3D interaction techniques. Computer Graphics Forum 16, 5 (Dec. 1997).
[16] HERNDON, K., VAN DAM, A., AND GLEICHER, M. The challenges of 3D interaction: A CHI’94
workshop. SIGCHI Bulletin 26, 4 (Oct. 1994).
[17] HERNDON, K. P., ZELEZNIK, R. C., ROBBINS, D. C., CONNER, D. B., SNIBBE, S. S., AND VAN
DAM, A. Interactive shadows. In Proceedings of the ACM Symposium on User Interface Software and
Technology (1992), 3D User Interfaces.
[18] HUDSON, T. C., LIN, M. C., COHEN, J., GOTTSCHALK, S., AND MANOCHA, D. V-COLLIDE:
Accelerated collision detection for VRML. In VRML 97: Second Symposium on the Virtual Reality
Modeling Language (New York City, NY, Feb. 1997), R. Carey and P. Strauss, Eds., ACM SIGGRAPH /
ACM SIGCOMM, ACM Press. ISBN 0-89791-886-x.
[19] JACOBSON, L., DAVIS, C., LAUREL, B., MAPLES, C., PESCE, M., SCHLAGER, M., AND TOW, R.
Cognition, perception and experience in the virtual environment: Do you see what I see? In Proceedings
SIGGRAPH (1996).
[20] JAIN, A. Fundamentals of Digital Image Processing. Prentice-Hall, Englewood Cliffs, NJ 07632, USA,
1989.
[21] KOSSLYN, S. Image and Brain: The resolution of the imagery debate. MIT Press, Cambridge, MA, USA,
1994.
[22] KRUEGER, M. W. Olfactory stimuli in virtual reality for medical applications. In Interactive Technology
and the New Paradigm for Healthcare (1995).
[23] LANDIS, C. Determinants of the critical flicker-fusion threshold. Physiological Review 34 (1954).
[24] LESTON, J., RING, K., AND KYRAL, E. Virtual Reality: Business Applications, Markets and
Opportunities. Ovum, 1996.
[25] MACKINLAY, J. D., ROBERTSON, G. G., AND CARD, S. K. The perspective wall: Detail and context
smoothly integrated. In Proceedings of ACM CHI’91 Conference on Human Factors in Computing Systems
(1991), Information Visualization.