Augmented Reality
1. User: The most essential part of augmented reality is its user. The
user can be a student, a doctor, or an employee. The user is responsible
for the creation of AR models.
2. Interaction: Interaction is the process between the device and the
user: an action performed by one entity produces a response or a
follow-up action from the other.
Contd…
Link:
https://www.youtube.com/watch?v=QMATJIlKnyE
Marker-Based Augmented Reality
•In this type, predefined visual markers (such as QR codes) trigger
augmented experiences.
•Imagine scanning a QR code during a self-guided tour to learn
more about a historical site.
•The app recognizes the marker and overlays relevant information
based on its programming.
•These markers are also known as fiducial markers.
Example of Marker-Based AR
1) Google Maps AR Navigation: Imagine walking down the street
while your phone overlays real-time directions onto your view.
This system uses GPS and visual recognition to guide you.
Simultaneous Localization and Mapping (SLAM)
1) Range Measurement
• All SLAM solutions include some kind of device or tool that
allows a robot or other vehicle to observe and measure the
environment around it.
2) Data Extraction
• After the range measurement, the SLAM system must have software
that interprets the collected data.
• All of these “back-end” solutions serve the same essential purpose:
they take the sensory data gathered by the range-measurement
device and use it to identify landmarks within an unknown
environment.
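The data-extraction step above can be sketched very roughly. The example below is a toy illustration, not a SLAM back-end: it scans a 1D range reading for sharp discontinuities, which often mark the edges of objects, and converts those polar readings into candidate landmark coordinates. The threshold and scan values are invented for illustration.

```python
import math

def extract_landmarks(ranges, angles, jump_threshold=0.5):
    """Return (x, y) points where the range reading jumps sharply."""
    landmarks = []
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump_threshold:
            r, a = ranges[i], angles[i]
            # Convert the polar reading (range, bearing) to Cartesian.
            landmarks.append((r * math.cos(a), r * math.sin(a)))
    return landmarks

# Illustrative scan: a wall at ~2 m with an object edge partway through.
scan = [2.0, 2.0, 2.0, 1.0, 1.0, 2.0]
angles = [math.radians(d) for d in range(0, 60, 10)]
print(extract_landmarks(scan, angles))
```

A real back-end would instead fit features across many scans and associate them between frames, but the core idea is the same: turn raw range data into a small set of stable landmarks.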
Depth Sensing
•Structured Light: Projects patterns onto the scene and analyzes their
deformation to compute depth.
•Time-of-Flight (ToF): Measures the time taken for light to travel to
the object and back.
•Stereo Vision: Compares images from two cameras to calculate depth.
•Depth from Monocular Images: Infers depth from a single image
using machine learning or geometric cues.
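Two of the techniques above reduce to simple formulas: Time-of-Flight halves the round-trip travel time of light, and rectified stereo uses depth = f × B / d (focal length times baseline over disparity). The sketch below illustrates both; the sensor numbers are illustrative, not from any specific device.

```python
SPEED_OF_LIGHT = 3.0e8  # metres per second

def tof_depth(round_trip_time_s):
    """Time-of-Flight: light travels to the object and back, so halve it."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereo vision: depth = f * B / d for a rectified camera pair."""
    return focal_px * baseline_m / disparity_px

# A 20 ns round trip corresponds to 3 m of depth.
print(tof_depth(20e-9))            # 3.0
# f = 700 px, baseline = 0.1 m, disparity = 35 px -> 2 m.
print(stereo_depth(700, 0.1, 35))  # 2.0
```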
Use Cases of Depth Sensing
1) Object Occlusion:
• Occlusion refers to accurately rendering virtual objects behind
real-world objects.
• For instance, consider placing a virtual object (let's call it
"Andy") near a wooden trunk. Without occlusion, Andy might
overlap unrealistically with the trunk.
• By leveraging depth information, we can render Andy with
proper occlusion, making it seamlessly blend into its
surroundings.
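The occlusion test itself is a per-pixel depth comparison: a virtual pixel is drawn only where it is closer to the camera than the real scene. A minimal sketch, assuming a real depth map and a constant-depth virtual object (all values in metres, invented for illustration):

```python
def occlusion_mask(real_depth, virtual_depth):
    """Return True where the virtual pixel should be visible."""
    return [[v < r for r, v in zip(rrow, vrow)]
            for rrow, vrow in zip(real_depth, virtual_depth)]

# Real scene: a wall at 3 m on the left, a trunk at 1.5 m on the right.
real = [[3.0, 3.0, 1.5, 1.5]]
# Virtual object ("Andy") placed at 2 m across the same row.
virtual = [[2.0, 2.0, 2.0, 2.0]]
print(occlusion_mask(real, virtual))  # [[True, True, False, False]]
```

Where the mask is False, the trunk is nearer than Andy, so those pixels keep the camera image and Andy appears to stand behind the trunk.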
Contd…
2) Scene Transformation:
• Depth enables us to create immersive scenes where virtual
elements interact with real-world objects.
• Imagine rendering virtual snowflakes settling on the arms and
pillows of a user's couch or casting a living room in misty fog.
3) Distance and Depth of Field:
• The Depth API provides per-pixel distance information that can drive
depth cues.
• By measuring distances, we can apply depth-of-field effects, such as
blurring the background or foreground of a scene.
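A depth-of-field effect starts by deciding which pixels fall outside the in-focus band. The sketch below is a simplified stand-in for that step (a real renderer would then apply a blur whose strength grows with distance from the focus plane); the distances and tolerance are illustrative.

```python
def blur_mask(depths, focus_m, tolerance_m=0.5):
    """True where a pixel lies outside the in-focus depth band."""
    return [abs(d - focus_m) > tolerance_m for d in depths]

# Focus at 2 m: the foreground at 0.5 m and background at 5 m get blurred.
row = [0.5, 1.8, 2.0, 2.3, 5.0]
print(blur_mask(row, focus_m=2.0))  # [True, False, False, False, True]
```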
Machine Learning in AR