CHAPTER 1
INTRODUCTION
1.1. 3D Television
Recently there have been rapid advancements in 3D techniques and technologies. Hardware has both
improved and become considerably cheaper, making real-time and interactive 3D available to the hobbyist,
as well as to the researcher. Old techniques have been improved, and new ones have been developed. 3D
television sets are becoming steadily cheaper and more widely available to the average user. Numerous 3D systems are granted patents each year, but very few move beyond the prototype stage and become commercially viable. This chapter focuses mainly on the different technologies for displaying 3D content and on some details of human depth perception, the so-called depth cues.
Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by roughly the distance between a person's pupils. If we imagine projecting an object point in the scene along each eye's line of sight onto a flat background screen, the horizontal offset between the two projections (the disparity) describes the location of that point in depth.
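As a rough numerical illustration (a minimal Python sketch; the focal length, pixel coordinates and 65 mm baseline are assumed example values, not taken from any particular camera rig), the depth of a point can be recovered from that horizontal offset via the standard relation depth = focal length x baseline / disparity:

    # Sketch: recovering depth from stereo disparity (assumed example values).
    # For a rectified side-by-side camera pair, a point that appears at x_left in
    # the left image and x_right in the right image has disparity d = x_left - x_right,
    # and its depth is z = f * B / d.

    def depth_from_disparity(x_left_px, x_right_px, focal_length_px, baseline_m):
        disparity = x_left_px - x_right_px          # horizontal shift in pixels
        if disparity <= 0:
            raise ValueError("point must have positive disparity")
        return focal_length_px * baseline_m / disparity

    # Example: cameras 65 mm apart (roughly the human interpupillary distance),
    # focal length 1000 px, and a point shifted by 20 px between the two views.
    print(depth_from_disparity(520, 500, focal_length_px=1000, baseline_m=0.065))  # ~3.25 m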
The addition of graphical elements (such as a scoreboard, timer, or logo) to a 3D picture must place the synthesized elements at a suitable depth within the frame, so that viewers can comfortably view the added elements as well as the main picture. This requires more powerful computers to calculate the correct appearance of the graphical elements.
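One common geometric rule of thumb for such placement is sketched below (an illustration only; the eye separation and viewing distance are assumed values): for eye separation e and viewing distance D, a horizontal on-screen offset of p = e * (Z - D) / Z between the left-eye and right-eye renderings makes an overlay appear at distance Z from the viewer.

    # Sketch: choosing the on-screen parallax for a graphic overlay so that it
    # appears at a chosen depth (viewer geometry values below are assumptions).
    # For eye separation e and viewing distance D, an element perceived at
    # distance Z from the viewer needs an offset of p = e * (Z - D) / Z between
    # the left-eye and right-eye images.

    def overlay_parallax_m(eye_separation_m, screen_distance_m, perceived_distance_m):
        return eye_separation_m * (perceived_distance_m - screen_distance_m) / perceived_distance_m

    # Example: 65 mm eye separation, viewer 2.5 m from the screen.
    print(overlay_parallax_m(0.065, 2.5, 2.5))   # 0.0      -> overlay sits in the screen plane
    print(overlay_parallax_m(0.065, 2.5, 2.0))   # negative -> overlay floats in front of the screen
    print(overlay_parallax_m(0.065, 2.5, 10.0))  # positive -> overlay appears behind the screen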
3D TVs alternate rapidly between images shot from two different perspectives. In active shutter glasses, LCDs over each eye alternate between clear and opaque in sync with the TV. You see through only one eye at any given moment, but the alternation happens fast enough that you perceive a single 3D image.
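The following toy sketch illustrates that frame-sequential timing (the 120 Hz refresh rate is an assumed, typical figure; real sets coordinate the glasses through an infrared or radio sync signal):

    # Sketch: the left/right alternation of frame-sequential 3D with active
    # shutter glasses (the 120 Hz refresh rate is an assumed, typical value).
    REFRESH_HZ = 120  # 60 left-eye + 60 right-eye frames per second

    def open_eye(frame_index):
        # Even frames show the left-eye view, odd frames the right-eye view;
        # the glasses black out the other lens in sync with the display.
        return "left" if frame_index % 2 == 0 else "right"

    for frame in range(4):
        t_ms = frame * 1000 / REFRESH_HZ
        print(f"t={t_ms:.1f} ms: {open_eye(frame)} shutter open")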
A 3D TV set renders stereoscopic 3D content using active or passive eyeglass technology. Active 3D TVs use shutter glasses that synchronize with the TV set to open and close each lens at the right time. Passive 3D TVs use less costly polarized glasses that separate the stereo frames by polarization. Although they have yet to become mainstream, there are also 3D technologies that do not require glasses at all (autostereoscopic displays).
A three-dimensional (3D) television (TV) is a device that is capable of displaying special video
recordings that contain extra visual information. Various techniques are used to create three-
dimensional video recordings, each of which corresponds to different 3D TV technology. These
special televisions can then use that extra visual information to create a somewhat realistic image
that either appears to have depth or seems to project into a three-dimensional space in front of the
TV set. Some 3D TV technology is built right into the television, while other TV sets are referred to
as "3D ready" since they require additional equipment to render a three-dimensional image.
The concept of 3D imaging has been around since at least the 1890s, when the first patent was filed for a
three-dimensional film process. Test reels of 3D footage were produced in the early part of the 20th century,
though a different process was used to film the popular 3D movies of the 1950s. Three-dimensional
television also dates to the early part of the 20th century, though it wasn't until the 21st century that new
technologies and distribution systems were introduced to create the modern 3D TV.
1.2. Types of 3D TV
3D TVs fall into two main categories based on the equipment included with the device. A 3D-ready television can produce three-dimensional images only if extra equipment is purchased and installed. This usually means a 3D signal adapter that plugs into the television and one or more sets of active shutter glasses. The adapter is then used to activate the glasses at the proper intervals to display the 3D image. Other televisions do not require this adapter, as they come equipped with the hardware needed to operate the glasses.
3D television certainly isn't new; most of us have experienced the ritual of donning cheap polarised cardboard or plastic glasses to watch the latest action flick at the cinema. The underlying technology goes back much further, to the 1920s. Television manufacturers have, however, been inspired by the more recent success of 3D movies at the cinema, such as Beowulf. The home entertainment industry is set on making 3D TV the next big thing in the living room, and all the major consumer electronics companies, including Panasonic, Samsung, Toshiba, Sharp, LG, Philips and Sony, are getting in on the action.
Quantum entanglement is the state where two systems are so strongly correlated that gaining
information about one system will give immediate information about the other no matter how far apart
these systems are. This phenomenon baffled scientists like Einstein, who called it “spooky action at a distance” because it appeared to violate the rule that no information can be transmitted faster than the speed of light.
When two particles are entangled and a measurement or change is made on one of them, the effect shows up in the correlations observed on the other, seemingly instantaneously, even when the particles are very far apart.
For example, it is possible to prepare two particles in a single quantum state such that when one is observed to be spin-up, the other will always be observed to be spin-down, and vice versa, despite the fact that quantum mechanics makes it impossible to predict which outcome will be observed for either particle. As a result, measurements performed on one system seem to instantaneously influence other systems entangled with it.
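A small simulation makes the point concrete. The sketch below (plain NumPy, not any particular quantum SDK) prepares the two-qubit state (|01> - |10>)/sqrt(2) and samples joint measurements; the two outcomes are always opposite, yet each individual outcome is random:

    # Sketch: sampling measurements of the Bell state (|01> - |10>)/sqrt(2),
    # whose two qubits are always observed with opposite values (read 0 as
    # "spin-up" and 1 as "spin-down"), using a plain state-vector simulation.
    import numpy as np

    rng = np.random.default_rng(0)
    state = np.zeros(4, dtype=complex)          # amplitudes for |00>, |01>, |10>, |11>
    state[1], state[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)

    probs = np.abs(state) ** 2                  # Born rule: outcome probabilities
    probs /= probs.sum()                        # guard against rounding error
    outcomes = rng.choice(4, size=10, p=probs)  # simulate 10 joint measurements
    for o in outcomes:
        a, b = (o >> 1) & 1, o & 1              # bit values of qubit A and qubit B
        print("A:", "up" if a == 0 else "down", "  B:", "up" if b == 0 else "down")
    # Every sample shows opposite results for A and B, yet neither result alone
    # is predictable -- so the correlation cannot be used to send a message.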
Scientists are now trying to use this concept to transfer quantum states from one point to another and to link qubits inside quantum processors. Although the correlations appear instantaneous, entanglement by itself cannot carry usable information faster than light. In quantum computers, changing the state of an entangled qubit is immediately reflected in the measurement correlations of its paired qubit.
Depth perception is the ability to see things in three dimensions (including length, width and depth),
and to judge how far away an object is.
For accurate depth perception, you generally need to have binocular (two-eyed) vision. In a process
called convergence, our two eyes see an object from slightly different angles and our brain compares and
processes the two sets of information to form a single image. When both eyes see clearly and the brain
processes a single image effectively, it is called stereopsis.
The main depth cues include:
Vergence - the inward or outward rotation of the eyes needed to fixate on an object.
Accommodation - the change in the focus of the eye's lens with viewing distance.
Stereopsis (binocular disparity) - the slight difference between the images seen by the two eyes.
Occlusion - the overlapping of one object by another.
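As a small numerical illustration of the vergence cue listed above (the 65 mm interpupillary distance is an assumed, typical value):

    # Sketch: the vergence angle (how much the two eyes rotate inward) for an
    # object at a given distance, assuming a typical 65 mm interpupillary distance.
    import math

    def vergence_angle_deg(distance_m, ipd_m=0.065):
        # Both eyes rotate toward the fixation point; the full angle between the
        # two lines of sight is 2 * atan(ipd / (2 * distance)).
        return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

    for d in (0.3, 1.0, 3.0, 10.0):
        print(f"{d:5.1f} m -> {vergence_angle_deg(d):5.2f} degrees")
    # The angle shrinks rapidly with distance, which is why vergence is a useful
    # depth cue only for relatively near objects.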
CHAPTER 2
The stereoscope was first invented by Sir Charles Wheatstone in 1838. It showed that when two pictures are viewed stereoscopically, they are combined by the brain to produce 3D depth perception. The stereoscope was improved by Louis Jules Duboscq, and a famous stereoscopic picture of Queen Victoria was displayed at The Great Exhibition in 1851. In 1865 the Kinematoscope was invented. In the late 1890s, the British film pioneer William Friese-Greene filed a patent for a 3D movie process. On 10 June 1915, former Edison Studios chief director Edwin S. Porter and William E. Waddell presented tests in red-green anaglyph to an audience at the Astor Theater in New York City, and in 1922 the first public 3D movie, The Power of Love, was shown.
Stereoscopic 3D television was demonstrated for the first time on 10 August 1928, by John Logie Baird in his
company's premises at 133 Long Acre, London. Baird pioneered a variety of 3D television systems using
electro-mechanical and cathode-ray tube techniques. The first 3D TV was produced in 1935, and
stereoscopic 3D still cameras for personal use had already become fairly common by the Second World
War. Many 3D movies were produced for theatrical release in the US during the 1950s just when television
started to become popular. The first such movie was Bwana Devil from United Artists, which could be seen all
across the US in 1952. One year later, in 1953, came the 3D movie House of Wax which also
featured stereophonic sound. Alfred Hitchcock produced his film Dial M for Murder in 3D, but for the
purpose of maximizing profits the movie was released in 2D because not all cinemas were able to display
3D films. In 1946 the Soviet Union also developed 3D films, with Robinzon Kruzo being its first full-length
3D movie. People were excited to view the 3D movies, but were put off by their poor quality. Because of
this, their popularity declined quickly. There was another attempt in the 1970s and 1980s to make 3D
movies more mainstream with the releases of Friday the 13th Part III (1982) and Jaws 3-D (1983).[6]
Matsushita Electric (now Panasonic) developed a 3D television that employed an active shutter 3D
system in the late 1970s. They unveiled the television in 1981, while at the same time adapting the
technology for use with the first stereoscopic video game, Sega's arcade game SubRoc-3D (1982). 3D film
showings became more popular throughout the 2000s, culminating in the success of 3D presentations
of Avatar in December 2009 and January 2010.
Though 3D movies were generally well received by the public, 3D television did not become popular until after the CES 2010 trade show, when major manufacturers began selling a full lineup of 3D televisions following the success of Avatar. Shortly thereafter, consumer and professional 3D camcorders were released to the public by Sony and Panasonic. These used two lenses, one for each eye. According to DisplaySearch, 3D television shipments totaled 41.45 million units in 2012, compared with 24.14 million in 2011 and 2.26 million in 2010. In late 2013, the number of 3D TV viewers started to decline, and by 2016 development of 3D TV was limited to a few premium models. Production of 3D TVs ended in 2016.
The practical uses of quantum computers are still being researched and tested. In the future, it is possible
that quantum computers will be able to solve problems that have been impossible to solve before. For
example, they have the potential to be used for modeling molecules or predicting how a molecule will
behave under different conditions.
We should also remember that a quantum computer is not simply a faster version of a regular computer; its advantage comes from algorithms that exploit superposition and entanglement. For problems without a suitable quantum algorithm, "running" a program on a quantum computer offers no speedup over a regular computer, but for the right problems the results arrive dramatically faster. Quantum computers will allow for the storage and processing of data in ways that classical machines cannot practically match. They also enable more complex calculations than traditional computers and can therefore solve problems that would take years on a traditional computer.
Some experts believe that they could be used to evaluate complex formulas that are impractical to compute today, which would make them an invaluable tool in medical science, AI technologies, aeronautical engineering, and so on. So far, quantum computing has been applied to optimization problems that are too complex for traditional computer models. It has also been used to study protein folding and drug interactions within the body.
Quantum computers are powerful machines that work on the principles of quantum mechanics. They use qubits rather than bits to represent data, and a register of qubits can hold a superposition of many values at the same time. Quantum computers may eventually be able to break much of the encryption we rely on today, and quantum computing is already changing the world of cybersecurity. Because they can explore many computational paths in superposition, quantum computers can, for certain problems, find solutions much faster than classical computers. They are expected to disrupt many industries, such as finance, healthcare, and education.
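A quick back-of-the-envelope sketch shows why this is hard to imitate classically: describing an n-qubit register requires 2^n complex amplitudes (the memory figures below assume 16 bytes per amplitude):

    # Sketch: why simulating qubits classically gets hard -- an n-qubit register
    # is described by 2**n complex amplitudes, all of which a quantum computer
    # manipulates at once.
    import numpy as np

    def uniform_superposition(n_qubits):
        dim = 2 ** n_qubits
        # Equal superposition of every n-bit value, as produced by a Hadamard
        # gate on each qubit of |00...0>.
        return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

    for n in (3, 10, 30):
        dim = 2 ** n
        print(f"{n:2d} qubits -> {dim:>13,} amplitudes "
              f"({dim * 16 / 1e9:.3f} GB as complex128)")
    print(uniform_superposition(3))  # 8 equal amplitudes of 1/sqrt(8)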
While it is still unclear how big an impact quantum computing will have on marketing in the future, there are already some significant uses happening now. One example is ad targeting, where companies can analyze customer behavior with astounding precision by processing large amounts of data.
CHAPTER 3
SIMILAR TECHNOLOGIES
For some computer scientists, the solution lies in building a quantum computer, a device that takes advantage of the inexplicable weirdness of atomic-level physics. The only downside? Quantum computers require cold, carefully tended environments that are difficult and costly to maintain with current technology. But Massimiliano Di Ventra, a physicist and computer scientist at the University of California, San Diego, has an entirely different solution. He and a team of his colleagues have designed and built the first brain-like computer prototype that bypasses certain structural limits of our modern electronics. Called the memcomputer, it is the first computer to store and process information simultaneously.
According to Di Ventra, despite his new technology's futuristic promise, "memcomputers can be built
with standard technology and operate at room temperature. This puts them on a completely different level
of simplicity and costs in manufacturing compared to quantum computers."
In short, a big problem with modern computers is that they store data and solve problems with it in two
entirely different areas: the memory, and the central processing unit (CPU). And all that shuffling
back and forth takes its toll, says Di Ventra. "To make a quick comparison: our own brain expends about 20 watts to perform 10^16 operations per second," he says, while a supercomputer would require 10 million times more power to do the same number of operations. "A big chunk of that power is wasted in the back and forth transfer of information between the CPU and the memory," says Di Ventra. Di Ventra's memcomputer sprang out of an easy-to-understand thought experiment from the 1970s. To build his memcomputer, Di Ventra and his colleagues had to physically rebuild and reprogram a computer from its most basic components. Rather than classical silicon transistors (the building blocks that combine to form all electronics), at the core of Di Ventra's machine are what he calls memprocessors. Di Ventra's simple computer uses six of them.
Here's how they work. A classical transistor's job basically boils down to one thing: either letting energy through or not, depending on what it's been told to do. A memprocessor does this exact same job, but it also physically changes some of its properties ("such as its [electrical] resistance," says Di Ventra) depending on how much energy is trying to move through. Even when the memprocessor loses power, it stores that change. In this way, while functioning fully as a classical, data-crunching processor, memprocessors can also be coded to store resistance-laden information at the same time. No more back and forth.
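The sketch below gives a toy model of such a memory-dependent element (the resistance values and update rule are invented for illustration and are not Di Ventra's actual device equations): its resistance depends on the charge that has passed through it, and that state survives a power-down.

    # Sketch: a toy memristive element in the spirit of a memprocessor -- its
    # resistance depends on the total charge that has flowed through it, and
    # that state persists when the power is removed. Numbers are illustrative.
    class ToyMemristor:
        def __init__(self, r_off=16000.0, r_on=100.0, q_sat=1e-3):
            self.r_off, self.r_on, self.q_sat = r_off, r_on, q_sat
            self.charge = 0.0                      # state variable: integrated current

        def resistance(self):
            # Resistance drifts from r_off toward r_on as charge accumulates.
            x = min(self.charge / self.q_sat, 1.0)
            return self.r_off + (self.r_on - self.r_off) * x

        def apply_voltage(self, volts, seconds):
            current = volts / self.resistance()    # Ohm's law at the present state
            self.charge += current * seconds       # the element "remembers" this
            return current

    m = ToyMemristor()
    for _ in range(5):
        m.apply_voltage(1.0, 0.5)
        print(f"{m.resistance():8.1f} ohms")       # resistance falls and stays put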
Classical computing relies, at its ultimate level, on principles expressed by Boolean algebra, usually described in terms of the seven standard logic gates, though it is possible to get by with only three operations (AND, NOT, and COPY). Data must be processed in an exclusive binary state at any point in time, that is, either 0 (off/false) or 1 (on/true). These values are binary digits, or bits. The millions of transistors and capacitors at the heart of computers can only be in one state at any point. While the time each transistor or capacitor needs to hold a 0 or 1 before switching states is now measurable in billionths of a second, there is still a limit to how quickly these devices can be made to switch state. As we progress to smaller and faster circuits, we begin to reach the physical limits of materials and the threshold for classical laws of physics to apply.
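As a small illustration of how far a minimal gate set goes (a sketch, not tied to any particular hardware), XOR and OR can be composed from nothing more than AND and NOT:

    # Sketch: classical logic really does reduce to a tiny gate set. Here OR and
    # XOR are built from nothing but AND and NOT (COPY is just reusing a value).
    def NOT(a):     return 1 - a
    def AND(a, b):  return a & b
    def OR(a, b):   return NOT(AND(NOT(a), NOT(b)))             # De Morgan
    def XOR(a, b):  return OR(AND(a, NOT(b)), AND(NOT(a), b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", XOR(a, b))   # prints the familiar XOR truth table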
The quantum computer, by contrast, can work with a two-mode logic gate set: XOR and a mode we will call Q01 (the ability to change a 0 into a superposition of 0 and 1, a logic gate which cannot exist in classical computing). In a quantum computer, a number of elemental particles such as electrons or photons can be used (in practice, success has also been achieved with ions), with either their charge or polarization acting as a representation of 0 and/or 1. Each of these particles is known as a quantum bit, or qubit, and the nature and behavior of these particles form the basis of quantum computing. The two most relevant aspects of quantum physics are the principles of superposition and entanglement.
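A minimal state-vector sketch of these two kinds of gate (plain NumPy rather than a quantum SDK; here the Q01 mode is taken to be the Hadamard gate and the quantum XOR is the CNOT) shows superposition and entanglement arising from just two operations:

    # Sketch: Hadamard (superposition) followed by CNOT (quantum XOR) turns |00>
    # into an entangled Bell state. Amplitudes are ordered |00>, |01>, |10>, |11>.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard on one qubit
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],                    # control = first qubit
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.array([1, 0, 0, 0], dtype=complex)     # start in |00>
    state = np.kron(H, I) @ state                     # superpose the first qubit
    state = CNOT @ state                              # entangle the two qubits
    print(state.round(3))                             # ~[0.707 0 0 0.707]: a Bell state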
CHAPTER 4
APPLICATION AND LIMITATIONS
Cybersecurity
With so much personal and business data now stored and processed online, systems are becoming even more vulnerable to cyber threats. Quantum computing, with the help of machine learning, can help in developing techniques to combat these cybersecurity threats. Additionally, quantum computing can help in creating stronger encryption methods, a field known as quantum cryptography.
Financial Modelling
For the finance industry, finding the right mix of investments based on expected returns, the associated risk, and other factors is important for surviving in the market. To achieve that, Monte Carlo simulations are continually run on conventional computers, which in turn consumes an enormous amount of computing time. By applying quantum technology to perform these massive and complex calculations, however, companies could not only improve the quality of the solutions but also reduce the time needed to develop them. Because financial firms handle billions of dollars, even a tiny improvement in the expected return can be worth a great deal to them. Algorithmic trading is another potential application, where a machine uses complex algorithms to automatically trigger share dealings by analyzing market variables, an advantage especially for high-volume transactions.
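For context, the classical workload looks like the sketch below (all portfolio weights, returns, and volatilities are made-up assumptions); quantum amplitude estimation targets the same expectation values with quadratically fewer samples:

    # Sketch: the kind of Monte Carlo estimate run on classical hardware today
    # (all figures are invented). A quantum amplitude-estimation routine aims at
    # the same quantity with roughly the square root of the sample count.
    import numpy as np

    rng = np.random.default_rng(42)
    weights = np.array([0.5, 0.3, 0.2])            # hypothetical portfolio weights
    mean_returns = np.array([0.06, 0.08, 0.12])    # assumed annual expected returns
    volatility = np.array([0.10, 0.15, 0.25])      # assumed annual volatilities

    n_scenarios = 100_000
    # Draw independent normal return scenarios for each asset (no correlations,
    # purely for illustration) and combine them with the portfolio weights.
    scenarios = rng.normal(mean_returns, volatility, size=(n_scenarios, 3))
    portfolio_returns = scenarios @ weights

    print(f"expected return ~ {portfolio_returns.mean():.4f}")
    print(f"5% value-at-risk ~ {np.percentile(portfolio_returns, 5):.4f}")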
Logistics Optimisation
Improved data analysis and robust modeling will enable a wide range of industries to optimize the logistics and scheduling workflows associated with their supply-chain management. Operating models need to continuously calculate and recalculate optimal routes for traffic management, fleet operations, air traffic control, freight, and distribution, and doing this well can have a major impact on those applications. Conventional computing is usually used for these tasks; however, some of them can become too complex for an ideal classical computing solution, whereas a quantum approach may be able to handle them. Two common quantum approaches that can be used to solve such problems are quantum annealing and universal quantum computers. Quantum annealing is an advanced optimization technique that is expected to surpass traditional computers. In contrast, universal quantum computers, which are capable of solving all types of computational problems, are not yet commercially available.
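Problems of this kind are typically handed to an annealer as a QUBO (quadratic unconstrained binary optimisation). The sketch below builds a deliberately tiny, invented route-selection QUBO and solves it by classical brute force, which is exactly the step an annealer is meant to accelerate at scale:

    # Sketch: a tiny QUBO instance of the sort a quantum annealer would be given,
    # solved here by classical brute force. Coefficients are invented purely for
    # illustration: x[i] = 1 means "use route i", the linear terms are route
    # costs, and the quadratic terms penalise routes sharing a congested road.
    from itertools import product

    linear = {0: 3.0, 1: 2.0, 2: 4.0}             # cost of using each route
    quadratic = {(0, 1): 5.0, (1, 2): 1.5}        # penalty for conflicting pairs

    def qubo_energy(x):
        e = sum(c * x[i] for i, c in linear.items())
        e += sum(c * x[i] * x[j] for (i, j), c in quadratic.items())
        e -= 6.0 * sum(x)                          # reward for each route covering demand
        return e

    best = min(product((0, 1), repeat=3), key=qubo_energy)
    print(best, qubo_energy(best))                 # lowest-energy assignment wins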
Weather Forecasting
Currently, the process of analyzing weather conditions with traditional computers can sometimes take longer than the weather itself does to change. A quantum computer's ability to crunch vast amounts of data in a short period could lead to much better weather-system modeling, allowing scientists to predict changing weather patterns quickly and with excellent accuracy, something which is essential at a time when the world is undergoing climate change.
Weather forecasting involves many variables, such as air pressure, temperature, and air density, which makes accurate prediction difficult. The application of quantum machine learning can help improve pattern recognition, which in turn will make it easier for scientists to predict extreme weather events and potentially save thousands of lives a year. With quantum computers, meteorologists will also be able to generate and analyze more detailed climate models, which will provide greater insight into climate change and ways to mitigate it.
Quantum computing offers many potential benefits to the organizations of tomorrow. This new
conceptualization of computing power will result in three main benefits: increases in computing power,
advances in security, and the ability for firms to use the sci-fi concept of teleportation. Each of these
opportunities can overcome the limitations of the current computational paradigm.
Quantum Computation: Increase in Computing Power
Utilizing quantum parallelism, a quantum computer can factor huge numbers that are currently infeasible to analyze on a classical computer. For example, factoring a number with 400 digits would take the existing fastest supercomputers billions of years, whereas a quantum computer could obtain the answer within a year. Quantum computers are therefore well suited to searching unsorted databases and performing difficult mathematical calculations that are impractical on semiconductor computers.
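The factoring claim rests on Shor's algorithm. The sketch below shows its number-theoretic skeleton for the toy case N = 15, with the period found by naive classical search rather than by the quantum Fourier transform (which is the only step a quantum computer actually speeds up):

    # Sketch: the classical skeleton of Shor's factoring algorithm, with the
    # period found by brute force instead of quantum period-finding.
    from math import gcd

    def factor_via_period(n, a):
        # Find the order r of a modulo n: the smallest r with a**r % n == 1.
        r, value = 1, a % n
        while value != 1:
            value = (value * a) % n
            r += 1
        if r % 2 != 0:
            return None                     # need an even period; retry with another a
        half = pow(a, r // 2, n)
        p, q = gcd(half - 1, n), gcd(half + 1, n)
        return (p, q) if 1 < p < n else None

    print(factor_via_period(15, 7))         # (3, 5): the period of 7 mod 15 is 4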
Advances in Security:
Linked with the first benefit, the increase in computing power, comes the possibility of advances in computing security. Quantum cryptography allows two parties to establish a shared secret key over a quantum channel and thus secure their communication. The technical side of quantum cryptography requires a tremendous amount of physics knowledge; the basic idea is that quantum mechanics will not allow any eavesdropper to obtain the private key undetected. The two legitimate parties reveal a random subset of the key bits and check the error rate to test for eavesdropping. In so doing, even though eavesdropping is not prevented, any attempt to break into the communication channel, regardless of how subtle and complicated, will be detected.
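The sketch below mimics that check in the style of the BB84 protocol (a purely classical simulation with made-up parameters; it reproduces the statistics only, since the real guarantee comes from quantum measurement disturbing intercepted states):

    # Sketch: a toy BB84-style key exchange showing how revealing a random subset
    # of the key detects eavesdropping.
    import random

    random.seed(1)
    N = 2000
    alice_bits  = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.randint(0, 1) for _ in range(N)]
    bob_bases   = [random.randint(0, 1) for _ in range(N)]

    def error_rate(eavesdrop):
        received = []
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
            if eavesdrop and random.randint(0, 1) != a_basis:
                bit = random.randint(0, 1)      # Eve's wrong basis randomises the bit
            if b_basis != a_basis:
                bit = random.randint(0, 1)      # Bob's wrong-basis results are random
            received.append(bit)
        # Keep only positions where Alice's and Bob's bases matched, then compare
        # a revealed sample: a sizeable error rate signals an eavesdropper.
        kept = [(a, b) for a, b, x, y in zip(alice_bits, received, alice_bases, bob_bases) if x == y]
        sample = kept[: len(kept) // 4]
        return sum(a != b for a, b in sample) / len(sample)

    print(f"error rate without Eve: {error_rate(False):.3f}")  # ~0.0
    print(f"error rate with Eve:    {error_rate(True):.3f}")   # ~0.25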
Teleportation:
Perhaps the most astounding of the claimed benefits of quantum computing is teleportation, the favoured local transportation mechanism in Star Trek episodes. Teleportation is the capability to make an object or a person disintegrate in one place while a perfect replica appears in another. In physics, teleportation was long not taken seriously because of the uncertainty principle. According to the uncertainty principle, the duplicating process disturbs or destroys the original object: the more closely an object is scanned for duplication, the more it is disturbed, and the detailed information about how the duplication proceeds and how the original is destroyed cannot all be recovered. Therefore, a point is reached where one cannot extract enough information from the original to make a perfect replica.
Software Limitations
Even if we could somehow remove the hardware limitations, it would still be very difficult for developers to design algorithms for quantum computers, because along with knowledge of computer science they also need to understand quantum physics. While a classical algorithm can be developed along the lines of the Turing machine, an algorithm for quantum computers has to be based on raw physics, with no simple formulas linking it to classical logic. The critical issue in such a design is always scalability: designing a program so that it can operate on larger data with more processing power. Very little guidance is available for developing such algorithms for quantum computing, so most of the development is intuitive. Most known quantum algorithms come with provisos and specific assumptions that limit their practical applicability, and it becomes difficult to develop models that have a significant impact on machine learning. A third limitation of quantum computing is that the number of qubits one can have on a quantum circuit is limited. Though these limitations apply to quantum computing in general, combining it with fields such as machine learning can attract more attention and push the field in the right direction.
CHAPTER 5
SUMMARY OF STUDY
Quantum computing is a technology that uses the laws of quantum mechanics to solve problems that are too complex for classical computers. In classical computers, we use bits to store information, and each bit can take a value of 0 or 1. A qubit, by contrast, can represent 0, 1, or a superposition of both values, so a register of qubits can represent combinations of 0s and 1s simultaneously. While a bit stores a single binary digit, qubits store combinations of binary digits, which is what allows certain quantum algorithms to run dramatically faster than their classical counterparts and to move far more information per step. When a problem is solved on a classical computer, the bits approach it essentially by hit and trial: one candidate value is considered at a time, and that kind of parallelism is not available. When the same problem is solved with quantum computing, it can be approached with quantum parallelism, holding many candidate values in superposition at once (for example, two qubits can represent all four two-bit values simultaneously) and reaching a solution at a faster pace. For example, if we want to design a drug by combining elements through hit and trial (a brute-force search), a classical computer must work through every case one by one, which takes far too long, whereas a quantum computer could explore the possibilities much more efficiently.
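To put rough numbers on that comparison (the search-space size and the "correct" combination below are arbitrary assumptions), a classical brute-force search checks candidates one by one, while Grover's algorithm needs on the order of the square root of that many quantum queries:

    # Sketch: the brute-force (hit-and-trial) search described above, with a note
    # on the query counts involved. The target value is an arbitrary made-up choice.
    import math

    search_space = 1_000_000
    target = 761_329                    # the one "correct" combination we are after

    queries = 0
    for candidate in range(search_space):
        queries += 1
        if candidate == target:
            break

    print(f"classical brute force: {queries:,} checks")
    print(f"Grover's algorithm:    about {round(math.sqrt(search_space)):,} queries")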
REFERENCES
[1] Vladimir Silva, Practical Quantum Computing for Developers, 2018 edition.
[3] Surya Teja Marella and Hemanth Sai Kumar Parisa, "Introduction to Quantum Computing," submitted 23 August 2020, reviewed 18 September 2020, published 29 October 2020. DOI: 10.5772/intechopen.94103. https://www.intechopen.com/chapters/73811
[4] Priya Pedamkar, "Qubits vs Bits," https://www.educba.com/qubits-vs-bits/