
Before and after cinema: reconnecting the virtual with the analog

It is now beginning to seem more and more that the cinema, rather than being an enduring medium with fixed boundaries and practices, was merely one (remarkably stable) configuration that nonetheless dominated twentieth-century media. Under the pressure of new and ever-advancing technologies, this configuration is finally starting to mutate and break apart. Various pre-cinematic media technologies (nickelodeons, stereoscopes, panoramas, etc.) are suddenly taking on a new significance. First, they are providing the prototypes for various new mediums like virtual reality (VR) and augmented reality (AR). Second, they can point to as-yet-unexplored avenues of research, as old devices are transmuted by digital technologies. Finally, there are fascinating parallels between the pre-cinematic era and our own post-cinematic moment, both of which provide unforeseen openings for idiosyncratic invention and artistic production.

Perry Hoberman
School of Cinematic Arts, Division of Media Arts and Practice
University of Southern California
[email protected]

Presented at SECAC 2016, Roanoke, Virginia, 10/21/16, on the panel Vision Machines and Pre-Cinematic Optical Devices
http://www.secacart.org

First, a caveat: this talk will focus primarily on virtual reality, but I should emphasize that VR is only one of a myriad of techniques, technologies and practices that are radically transforming cinema, media and culture. Other key technologies, just to name a few, would include drones, 3D printing, LED arrays, laser projection, biosensors, and all manner of artificial intelligence.

The 19th century saw a wide assortment of optical toys and devices, with little distinction made between their status as serious scientific demonstrations and as frivolous popular amusements. The discovery that images could be more than static, fixed representations - that they could be projected, magnified, animated, combined, reflected - was both a revelation and a source of continuing fascination. And while an account of these devices can be fashioned that organizes them along a linear path of increasing technological sophistication and practical application, leading inevitably to their culmination in the invention of publicly projected motion pictures, this is hardly the only (or the most fruitful or accurate) way to understand them. Each device was likely experienced primarily as a self-sufficient novelty, and only dimly (if at all) as a harbinger of some as-yet-undefined, more advanced (or even universal) medium.

If we take a look at the current state of virtual reality, we can get an idea of the sort of multiple life led by a developing device technology: each VR app exists as an immediate (hopefully compelling) immersive experience; it locates itself somewhere in the current state of the art (it is hopefully better than what has come before); and it offers tantalizing hints of what’s coming in the future. These hints often lead us to jump to conclusions about where VR is ultimately headed, usually toward a vision of lightweight (or non-existent) eyewear that can provide an experience indistinguishable from reality, with full mobility, natural interaction, ultra-high resolution and dynamic range, and so on. I want to argue that this vision is both predictable and premature, and that it threatens to close off myriad paths and possibilities in the name of a market-friendly standardization.
We’ve seen this pattern before, with parallels both to the era when the cinema was being invented and formulated and to more recent histories such as the botched rollout of stereoscopic 3D over the past decade.

The most direct and obvious ancestor of virtual reality is the stereoscope, and in fact VR wholly subsumes the stereoscope as a key part of its apparatus. As Jonathan Crary discusses in Techniques of the Observer, the stereoscope was part of a broad reconfiguration of perception in the early 19th century, with a breaking down of the division between subject and object, a new understanding of perception based on physiology (binocular vision), and a new integration of man and machine (a device for viewing). The invention of the stereoscope by Charles Wheatstone in 1838 represented a realization that visual representation could be matched to the particulars of human physiology, specifically binocular vision - an insight that had been available but unexplored for hundreds of years. With the (nearly simultaneous but independent) invention of photography and the improved stereoscope designs of David Brewster (1849) and Oliver Wendell Holmes (1861), the stereoscope became a mass medium that dominated the second half of the 19th century, and was arguably the most “immersive” experience before VR.

Virtual reality results from a similar insight into the relationship between visual media and human physiology: the realization that a visual display can be matched to much more than binocular vision - it can also take into account peripheral vision, head rotation, proprioception, and the position and motion of the body - and that this display can be coordinated with stimuli for the other senses, especially hearing. These realizations were prefigured in numerous science fiction narratives, and were then prototyped starting in the 1960s by Ivan Sutherland. But to become more than an expensive high-end technology used in academia, industry and the military, VR had to await the development of a number of key technological components: small high-resolution displays, miniaturized sensors such as gyroscopes and accelerometers, powerful cameras and machine vision techniques, and so on.

Much of this technology first became available in one key device, the iPhone 4, introduced in 2010, which included a high-resolution (Retina) display, a 3-axis gyroscope and other sensors, as well as multiple cameras. The realization that the screen’s resolution and size made it suitable as the display for a stereoscope, and that the sensors made it workable as a platform for VR, led quickly to developments such as the Hasbro my3D viewer, as well as our own work in USC’s Institute for Creative Technologies MxR Lab (FOV2GO, VR2GO), which ultimately led to Google Cardboard and Gear VR.
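To make this lineage concrete, here is a minimal sketch (in Python with NumPy) of the basic move that turns a phone into a head-tracked stereoscope: the orientation sensors supply a head rotation, and the scene is rendered from two virtual cameras offset by half an interpupillary distance to either side of the head. The 63 mm interpupillary distance, the y-up / forward-is-minus-z convention and the function names are assumptions made for illustration; this is not the API of any particular SDK.

import numpy as np

IPD = 0.063  # assumed interpupillary distance in meters (~63 mm)

def head_rotation(yaw, pitch, roll):
    """Rotation matrix built from sensor-reported head angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])  # yaw about the vertical axis
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # pitch about the side axis
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])  # roll about the view axis
    return Ry @ Rx @ Rz

def eye_poses(head_pos, yaw, pitch, roll):
    """Positions and a shared forward vector for the left and right virtual cameras."""
    R = head_rotation(yaw, pitch, roll)
    right_axis = R @ np.array([1.0, 0.0, 0.0])  # the head's local "right"
    forward = R @ np.array([0.0, 0.0, -1.0])    # the head's local "ahead"
    left_eye = head_pos - right_axis * (IPD / 2)
    right_eye = head_pos + right_axis * (IPD / 2)
    return (left_eye, forward), (right_eye, forward)

# Example: the viewer turns their head 30 degrees to the left.
left_pose, right_pose = eye_poses(np.zeros(3), np.radians(30.0), 0.0, 0.0)
print("left eye at", left_pose[0].round(3), "looking along", left_pose[1].round(3))
print("right eye at", right_pose[0].round(3), "looking along", right_pose[1].round(3))

# Each frame, the scene is rendered twice from these two poses and shown side by side
# behind the viewer's lenses: the sensors are what make this stereoscope responsive to
# the head, which is the step the 19th-century stereoscope itself never took.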
Meanwhile, further parallels can be drawn between two key discoveries: one that led to the development of cinema, the other that led to our current post-cinematic era. A common misconception is that, prior to the invention of cinema, image media was entirely devoid of motion, which was absent until the discovery that it could be synthesized by rapidly presenting a series of still images in sequence. On the contrary, moving images date back to at least 1659, when Christiaan Huygens invented the magic lantern. During the two centuries preceding cinema, image motion was produced by numerous methods: by glass slides containing moving parts and clockwork mechanisms, by moving or spinning the projector, by dissolving between successive images, by projecting on fabric and smoke, by slides with multiple layers that could be moved independently, by adjusting the optical elements and the light source, and so on. But none of these methods was universal: specific techniques had to be used to produce specific motions.

The discovery by Joseph Plateau in 1828 that an illusion of motion could be produced by a rapid succession of still images was an entirely different animal: now any and all varieties of motion could be simulated by a single standardized technique. Combined with the invention of photography, followed by chronophotography and then cinematography, this illusory motion (as opposed to the direct presentation of moving elements in the pre-cinema era) quickly led to the cinema as we know it: a kind of universal medium that can make claims to represent the whole of human experience. However, I would guess that initially, motion by rapid image sequence just seemed like one technique among many; its full implications would only reveal themselves over time, as the technique was refined and improved by numerous inventions and innovations, most crucially the invention of celluloid roll film in about 1889.

Returning to the 20th century, and the development of digital computation, we can draw some analogies to the history just described. Like the discovery of apparent motion that led to the cinema, analog-to-digital conversion has facilitated a universal system of exchange in which any image can be duplicated, transformed, enhanced, recombined, synthesized and otherwise altered. But it would be an obvious mistake to think that these kinds of operations were absent before digital media. There were countless manual, optical, mechanical and chemical methods for working with analog images, facilitating sophisticated special effects and techniques. However, like the pre-cinematic methods for creating motion, analog image operations required specific methods for specific effects. And, like the initial discovery of illusory motion, the early use of digital techniques and technologies fit into and complemented an existing landscape of highly refined analog methods, hardly seeming like the unstoppable force that would eventually replace every last one of them, ushering us into a media landscape where, essentially, anything that can be imagined can be represented.
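To make the contrast with effect-specific analog methods concrete, here is a minimal sketch (in Python with NumPy; the tiny random “image” and the particular operations are invented for illustration) of what this universality looks like in practice: once an image is an array of numbers, copying, flipping, superimposing and tinting are all the same kind of arithmetic on the same data.

import numpy as np

# A tiny stand-in for a digitized image: height x width x RGB, values in 0.0-1.0.
image = np.random.default_rng(0).random((4, 4, 3))

duplicate = image.copy()                      # a perfect copy, with no generational loss
mirrored = image[:, ::-1, :]                  # an optical flip becomes an index operation
brightened = np.clip(image * 1.2, 0.0, 1.0)   # exposure adjustment becomes multiplication
composite = 0.5 * image + 0.5 * mirrored      # superimposition becomes a weighted sum
tinted = np.clip(image * np.array([1.0, 0.8, 0.8]), 0.0, 1.0)  # a color bias per channel

# Every "effect" above is just arithmetic on the same array; no new chemistry, optics or
# machinery is needed for each one, which is the sense in which digitization is universal
# where analog methods were effect-specific.
print(duplicate.shape, composite.shape, brightened.dtype, tinted.dtype)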
So we find ourselves at what appears to be a key moment in the history of media and media technology, in which several distinct possible futures present themselves, with the likelihood of one or another of them coming to pass depending at least partially on the kinds of explanations and strategies we embrace concerning our understanding of the recent and more remote past. The development of cinema into a dominant force starting in the early twentieth century more or less wiped out many of the media technologies, devices, inventions and platforms that preceded it throughout the nineteenth century. Some of these survived as toys and novelties (notably, the stereoscope became the View-Master), but there was little chance of any of these formats reaching the kind of critical mass that they would have needed to remain culturally relevant.

As an example drawn from personal experience: as a child I owned a battery-powered Super-8 movie viewer, along with a collection of 8mm reels from Castle Films, mostly abridged (and silent) comedies and cartoons. Being able to view movies as a personal, private experience was completely unlike watching television or going to the cinema. The viewer was hand-cranked, and the variable ramping between still and motion pictures never lost its fascination for me. But there was no context for this experience; it was just a toy, like the View-Master and its seven-picture discs, which I also owned and treasured.

One can only conjecture as to why single-user, lens-equipped devices like stereoscopes and other personal image viewers have rarely been seen as a viable narrative medium on a par with screen-based media. They are somehow disreputable, perhaps because they cut us off from shared experience. They are seen as uncomfortable, fatiguing, even dangerous. We can see these attitudes in the rejection of 3D (complaints about having to wear glasses), and even today in the not-uncommon rejection of VR. In any case, I would argue that personal viewers were always a potentially rich and promising media type, and that many avenues were left unexplored due to the lack of the kind of sustained interest and attention enjoyed by screen-based media. We can see aspects of our current situation that could easily lead to a similar homogenization of media and a corresponding neglect of other promising, less-traveled avenues.

The field of virtual reality is currently exploding, with major players vying for position, along with a steady stream of new devices, techniques and applications, accompanied by numerous attempts to establish standards, lay down rules, and police the boundaries of this nascent medium. There will undoubtedly be winners and losers as the field goes through a familiar process of innovation and competition, with particular platforms, models and applications attaining various positions of dominance while others fade away to irrelevance. But for those of us who see this medium as more than just another business opportunity, there is little to be gained, and much to be lost, by encouraging this kind of survival-of-the-fittest cage match between the various flavors of immersive media.

Nonetheless, there are plenty of so-called experts dismissing particular techniques and technologies as inferior, worthless or simply falling outside the boundaries of “true” VR. The rhetoric of “best practices” is commonly used to make sweeping statements and prohibitions that are often little more than self-serving opinions and self-fulfilling prophecies. While we may aspire to the full immersion, interactivity, ubiquity and invisibility of something like the (fictional) holodeck, it doesn’t follow that VR that falls short of this ideal is necessarily inferior, or doesn’t stand an equal chance of being developed into a distinct platform with its own conventions and grammar. For example, there is currently an explosion of interest in 360 video, designed to be viewed on an HMD and promoted as “cinematic VR”, with a steady stream of new cameras, production and post-production software, applications and movies.
These movies may allow only limited interactivity (or none at all), they may be stereoscopic or 2D, they may be essentially just traditional cinema with the addition of full immersion; and yet even if that’s all they are, it’s still enough to make them a radical departure from the cinema as we know it, requiring the invention of new strategies and techniques, new modes of composition and storytelling, even new kinds of stories and narratives. And yet there is no shortage of pundits declaring that 360 movies are not true VR, that they shouldn’t be taken seriously, that they may even be dangerous because audiences exposed to them will be disappointed and put off from more advanced types of VR.

This is nonsense. I would not be at all surprised if non-interactive 360 cinematic VR develops into a full-fledged platform in itself, with its own specific cinematic language and celebrated examples, even masterpieces. This is not to say that more “traditional” VR, based on real-time textured geometry and running in a 3D graphics engine, won’t also lead to compelling experiences and even classics.

All VR shares certain characteristics that make it a radical departure from cinema and other screen-based media.

First, the frame is gone. Composition, understood as the arrangement of elements within a rectangular window, is no longer relevant. All the tricks of the trade - negative space, carefully placed props to guide the eye, visual balance achieved by time-honored compositional strategies like the golden rectangle - have to be reconsidered, reformulated or even abandoned.

Next, you are present, all the time. You are in the exact center of the image, whether or not anything or anyone in the narrative acknowledges you. The familiar voyeuristic stance of the cinema, with its ever-shifting positions of subjectivity and objectivity, is gone. If a character looks at the camera, they are looking at you. You are embedded in the space, every bit as much as anything else in the image.

Third, everything is life-sized. Screen-based media lets us imagine that the actual size of anything we see represented is not dependent on its measurement onscreen. This is one of the things that makes montage possible; we have no trouble accepting that a cut to a closeup of an actor’s face doesn’t imply that they’ve suddenly become a giant. Essentially, the perceived size of an object in cinema is determined by a subtle interplay between our understanding of object scale, the distance from viewpoint to object, and overall image size. In VR, however, objects are in the same space as you are, and can only be understood in terms of that space, which fixes their scale in relation to that common space. A smaller-than-expected object is perceived as a scale model, and a sudden closeup would be interpreted as the object ballooning to giant size. (A small numerical sketch of this geometry follows at the end of this passage.)

Fourth, virtual space is real space. That is, there is a one-to-one correspondence between every three-dimensional point that we perceive in VR and a point in real space; even if that correspondence is subjective, temporary and subject to change, at any given moment a virtual environment is coextensive with the actual space that I inhabit as I experience it.

These are just a few of the facets of VR that define it as a medium; but I want to emphasize that these characteristics are present in all HMD-based VR, even if the application doesn’t allow for interactivity, free navigation or any other presumed prerequisite for “true” VR.
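To put some numbers on the third point, here is a minimal worked example (in Python; the 25 cm “face” and the viewing distances are invented for illustration) of the geometry involved. On a screen, a closeup simply devotes more of the frame to the object; in VR, the visual angle an object subtends is fixed by its actual size and distance, so matching a closeup’s visual angle means the object really is closer, or really has grown.

import math

def angular_size_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

face = 0.25  # an assumed 25 cm tall face

print(angular_size_deg(face, 3.0))  # ~4.8 degrees: the face seen from 3 meters away
print(angular_size_deg(face, 0.5))  # ~28 degrees: roughly a cinematic closeup framing

# To present that closeup in VR without moving the viewer, the face itself would have to
# grow until it subtends the same angle at 3 meters:
required_size = 2 * 3.0 * math.tan(math.radians(28.0 / 2))
print(required_size)                # ~1.5 meters: the face reads as ballooning to giant size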
In short, this should be a time to encourage open exploration and experimentation. We should resist the temptation to close off possibilities in a misguided bid for aesthetic and academic consistency and respectability, or for economic success. After a decades-long effort to lay the foundations for VR, we have only recently witnessed the arrival of viable, reliable, low-cost development platforms, and we should grant ourselves the time, the space and the freedom to push the boundaries of this nascent medium as far as they can go before it settles down (as it probably and inevitably will) into a clearly defined (and marketed) medium.

©2016 Perry Hoberman