Preferred Citation: Henderson, Brian, and Ann Martin, editors. Film Quarterly: Forty Years - A Selection. Berkeley: University of California Press, c1999. http://ark.cdlib.org/ark:/13030/ft5h4nb36j/


 
PART FIVE—
TECHNOLOGIES



The changing technologies of film production and of film/video screening have been the subject of a number of Film Quarterly articles over the years. The topics treated in these pieces are necessarily of interest to the "mad keen" film lovers who read Film Quarterly . The art to which they devote so much of their time is almost constantly changing, both in the mode of its production and in the systems of delivery that convey it. Change occasions queries, not unmixed with anxiety, in any devotee. Will a changed cinema still love us? Will we still love it?

Charles Shiro Tashiro's article on videophilia is unusually well informed. A graduate of the UCLA film production program, he was at the time of writing a Ph.D. student in Critical Studies at the University of Southern California. Moreover, as a former producer for the Criterion Collection of videodiscs, including their edition of Lawrence of Arabia , Tashiro understands the technologies involved first-hand. He concedes the usefulness of home systems for the close study of films, but notes that "this apparent windfall has usually been embraced with little attention to the technical issues raised by the movement of a text from one medium to another or to the consequences of film evaluation based on video copies."

To enhance this argument, Tashiro provided Film Quarterly with stills of three formats of Lawrence: scanned, letterbox, and full film frame. The contrast between the second and the third is revealing because the depth evident in the full film frame is almost completely absent in the letterbox version. The value that Charles Barr saw in CinemaScope—a greater sense of depth than the conventional frame—is in fact negated by video or laserdisc letterboxing, which refocuses attention on the flatness of the image and hence accentuates the composition rather than, as Barr argued, effacing it.

The pieces by Charles Eidsvik and Jean-Pierre Geuens concern what Eidsvik calls "changes in film technology in the age of video." Eidsvik's exceptionally clear-eyed view is that "there is very little that is esthetically revolutionary in
the new technologies, and nothing that would upset the basic film-making power structure. Changes have been conservative, a defense against inroads and threats brought by very rapidly evolving video technologies." Within this framework, Eidsvik discusses new improved film stocks, including high-speed ones, which, supplemented by six-layer emulsions and also by advances in postproduction sound enhancements, make possible low light-level filming and, indeed, night-for-night filming. Eidsvik argues that, oddly, these changes have not affected film style. Venturing into narrative theory, however, he suggests that they may have "a discernible effect" on story and plot construction, because they allow "freer use of the kinds of settings that can be easily shown," rather than left as gaps. This is a stimulating suggestion, but later he notes more cautiously that the developments he has discussed have made "story construction a bit different in potential." Indeed, one of the conclusions of the piece is that "large theoretical claims must be put on hold," to which Eidsvik adds, very sensibly, that theory "must limit itself to a little bit of history at a time." The developments that Geuens elaborates in such elegant detail and cultural depth were anticipated by Eidsvik:

The most obtrusive technical change outside of the area of special effects in the last decade has been in camera movement. The Steadicam, Louma-type crane, Camrail, and jibbed dolly systems that have allowed us our current period-style of perpetually moving cameras are all consequences of fitting video viewfinders to film cameras, thus making them remote-controllable.

Geuens's article on the video assist begins with a discussion of Martin Heidegger on technology and proceeds to quote the techno-perspectives of Andrew Feenberg and Herbert Marcuse as well. He traces the prehistory of movie camera viewfinders up until 1936, when the Arnold and Richter Company of Germany introduced continuous reflex viewing with its new Arriflex 35mm camera. He also discusses Peeping Tom (1960); Vivian Sobchack; Gilles Deleuze; independent filmmakers, including Direct Cinema practitioners, who direct and photograph their own films; Heidegger again; and Emmanuel Levinas. He then surveys the history of the use of video to support film production, which culminates in the introduction of video assist. As in Geuens's earlier article on the Steadicam, it is only after carefully building theoretical and historical contexts that he allows himself some doubt. Video assist would not have benefited Ingmar Bergman very much, but the films of James Cameron or Robert Zemeckis would have made little sense without it. In their films, "the device itself is no more than an advanced representative" of the other technologies that will be introduced in postproduction. Toward
the end of the video assist piece, Geuens quotes with approval Eidsvik's account of new technologies as the industry's "defensive maneuvers" with regard to video technologies.

Stephen Prince's article contains a wealth of information about another important new technology: digital imaging or, as it is called in the industry, computer-generated imagery (CGI). Prince conducted telephone interviews with a number of practitioners in the field, research that adds significantly to the value and detail of his discussions. He distinguishes between films in which the conspicuous use of digital processes makes them evident to viewers—True Lies, Jurassic Park , and Forrest Gump —and others that use digital processes of which the viewer is unaware. In both categories, however, the ability of CGI to simulate movement, location, lighting, and other features creates the perceptual patterns of photographically realistic cinema.

Unlike the writers of the articles discussed above, Prince is not critical of the new technologies he discusses, either for their role in the cinema-video competitions or for their other functions in the media industry and in the national and global economies. He turns his research in a different direction—toward film theory and toward overcoming what has been called by some a realist versus formalist opposition in film theory. Given that the line between real and not real will be increasingly blurred, Prince asks, "How should we understand digital imaging in theory? How should we build theory around it?" His answer, developed at length and cogently, is what he calls a correspondence-based model of cinematic representation: film shares many of the perceptual codes that structure our everyday perception. Although CGI by definition has different origins than photographic-based imagery, the two equally correspond to our normal perception, and hence are both experienced as perceptually realistic.

Michael Dempsey's two-page manifesto against colorization is the classic statement on this galling issue. It is also classical in that one could analyze its superb rhetoric as one does a Cicero oration. He begins on a moment of rest after exhausting conflicts: "Whatever gamuts American movies have had to run during production, once made they are supposed to be secure." Studio interference, the hobbling of censorship, and other compromises impede the making of films; but there are also postrelease hazards such as pan-and-scan prints, the fading of color prints, and, most serious of all, the colorization of black-and-white films by those who own them. These include Ted Turner, who owns the MGM film library; the Hal Roach Company; and Color Systems Technology—all of whom "produce new prints of black-and-white movies with color added." (As in Cicero, the perpetrators of scandalous behavior are named.) After answering the arguments of the colorizers such as that the makers of black-and-white films
could not afford color, Dempsey lists the agencies and individuals that are working to preserve black-and-white films.

Dempsey argues that the colorizers are motivated by greed and concludes, "But talking to the colorizers about things like moods of elation and reconciliation is pointless." Here the writer doubts the power of his words and arguments to have any effect on those responsible for damaging the nation's film heritage. This is known in classical rhetoric as an aporia, a point in an oration or brief in which the writer questions how to continue. This doubt itself can be used against the writer's opponents and may also reorient the argument as a whole. Thus the failure of words or arguments to have any effect upon the colorizers is at the same time his most damning indictment of them. Since money is all they understand, moreover, the writer urges his readers not to screen or broadcast colorized films: don't buy them, rent them, or watch them.



Colorization

Michael Dempsey

Vol. 40, no. 2 (Winter 1986–87): 2–3.



Whatever gamuts American movies have had to run during production, once made they are supposed to be secure. This, naturally, has not been the case. Circulating prints of Chaplin, Keaton, and Laurel and Hardy silent comedies have been corrupted with cutesy, moronic noises. TV stations concoct pan-and-scan prints of wide-screen films, destroying their compositions. Color negatives of the past three decades are subject to fading.

Now our film heritage has a new nemesis: "colorization." Using computers, such entrepreneurs as Ted Turner (who now owns the MGM film library, which he bought as fodder for his Atlanta "super station"), the Hal Roach Company, and Color Systems Technology produce new prints of black-and-white movies with color added.

Various rationales have been advanced for this disgusting cultural vandalism: black-and-white films can't draw huge TV audiences; many video store customers turn up their noses at them; "the kids" aren't interested. Shrugging "philosophically," some apologists point out that the original black-and-white negatives remain untouched. Others would protect the "classics" (these sensitive souls know all the classics intimately, of course) but let the colorizers have, say, Republic Pictures potboiler Westerns or Abbott and Costello comedies. Besides, one defense of colorization runs, most American studio pictures were shot in black-and-white only because color was too expensive. The clear implication is that black-and-white is a primitive form of cinematography which "lacks" color, and now these technocrat/hustlers will correct that deficiency. One of them, Earl Glick, the board chairman of Hal Roach Studios, has even had the gall to state that his colorizers have improved Joseph Walker and Joseph Biroc's black-and-white work on It's a Wonderful Life .

Color may have been too costly for most American studio movies during the 1930s and 1940s, but once black-and-white photography was chosen, the movies were designed, costumed, and lit accordingly. However, even bothering to refute arguments like these grants them undeserved dignity when in fact
they are just contemptuous coverups for the one and only motive behind this rush to colorization: raw greed.

And a rush it is. Already, colorized cassettes of, for example, Yankee Doodle Dandy, The Maltese Falcon, Topper , and It's a Wonderful Life are not only flooding video stores, they are also inexorably driving the black-and-white originals into the ghettos of occasional museum or revival theater screenings in cities where such forums exist. If this situation is not reversed, no American black-and-white motion picture may ever again live in regular showings as its makers intended.

Defenders of black-and-white movies are not sitting idle. The Directors Guild has decried colorization on artistic and cultural grounds and has gone to court over the issue of copyright infringement. RKO has done the same in an effort to protect the films produced under its own name. Numerous directors, among them Billy Wilder, John Huston, Fred Zinnemann, Woody Allen, Martin Scorsese, Bertrand Tavernier, Nicholas Meyer, Peter Hyams, Martha Coolidge, and Frank Capra, have expressed outrage. James Stewart has eloquently described the grief he felt when he tried and failed to watch a colorized print of It's a Wonderful Life to the end. Having seen a colorized effigy of this movie's climax, I can testify that if this is how the picture is going to be presented from now on, then It's a Wonderful Life , in effect, no longer exists; the added color annihilates the mood of elation and reconciliation that Frank Capra and his collaborators originally sought and achieved.

But talking to the colorizers about things like moods of elation and reconciliation is pointless. Whether you are an individual viewer or a more influential person (say, a buyer or a programmer for television), the urgent message is the same: don't screen or broadcast colorized films, don't rent them, don't buy them, don't watch them. We are dealing with people who are unreachable by cultural, artistic, or social appeals because they don't care about anything except money. Therefore, let us hurt them in the way most painful to their shriveled sensibilities, by depriving them of every dollar that we can. If we do not, their bottomless avarice will deprive us and future generations of infinitely more.

[The above views are passionately endorsed by the Film Quarterly editorial board.]



Machines of the Invisible:
Changes in Film Technology in the Age of Video

Charles Eidsvik

Vol. 42, no. 2 (Winter 1988–89): 18–23.



Until the early 1970s, critical discussion of film technology and practice was a preserve monopolized by film-makers and by theorists such as André Bazin and Jean Mitry who were in close contact with film-making communities and often served as intellectual spokesmen for views commonly held by film-makers. The film-making community, in trade journals such as American Cinematographer and J.S.M.P.T.E ., traded secrets, discussed craft, and celebrated its lore, myths, and mystique. Theorists and historians such as Bazin and Mitry—Mitry was himself a film-maker—built film-makers' perspectives into their views of how new technology catalyzes change in film history. This view, which permeates Mitry's Esthétique et psychologie du cinéma and can also be found in essays such as "The Myth of Total Cinema" by Bazin, posits an "Idealist" and "technologically determinist" view of history, with film technology allowing film-makers ever greater potential for recreating reality.[1] Though technological determinism is an understandable belief among film-makers, whose jobs depend on machines and for whom belief in technological determinism is anxiety-lessening, the position is hardly intellectually respectable.[2] Once Althusserian Marxism began to explore relationships between ideology and technology, an attack on Idealist and technologically determinist positions was inevitable. The attack, led by Jean-Louis Comolli, J.-L. Baudry, and Stephen Heath, attempted to critique technology within a "materialist" approach to cinema. Soon joined by feminist film critics such as Teresa de Lauretis, the analysis of technology and ideology has become a mainstream approach to technology at least within academe.

Though the academics (and Comolli himself) are prone to gaffes when discussing specific technological practice,[3] one cannot quarrel with their intentions or intelligence. Nevertheless, insofar as the job of historians is to account for change, their approach has little future, not because their methods—the search, for example, for codes to which technology speaks—are weak, but because they have chosen to write and work from the "position of the spectator," from what can be seen and heard on movie screens, rather than on
"tainted" film-maker-generated technical histories.[4] This would be fine except for a simple problem. Not only do most movies, in George Lellis's terms, "seek to hide the methods by which they produce their illusions,"[5] new production practices often are deliberately made invisible and inaudible to film spectators. Information on new technical practices is only briefly hinted at in trade journals but primarily is passed on through actual film-making.

If the last decade is any indication of how change occurs in cinema, a lot goes on below the realm of the easily perceivable. The central fact of recent cinema is the film industry's attempt to survive in the face of overwhelming competition from video. Film can compete with video only as a producing and large-screen exhibition medium. As a producing medium it can compete only on the basis of "quality," with quality defined as something film is not trying to achieve but already has . Technological innovation has largely served the purpose of making that quality either "better" or easier to achieve, but not different basically from the quality that already exists. New technology thus has expanded what can be filmed but not (deliberately at any rate) how we are meant to see films. Except in the area of special effects, an area in which mainstream film-makers have been able to use the old Hollywood ploy of turning big budgets and technical prowess into a publicity stunt, conceptually conservative technological innovation has been the norm. In understanding this innovation, perhaps the only relevant theorist would be Michel Foucault, whose approach to power struggles is relevant to just about any study of technical change.[6]

But the power struggle has been basically defensive. In the last decade, the majority of technical developments in the film industry have been aimed at facilitating extant production practices rather than at changing the "look" or sound of commercial films. Just about every new product has been advertised as something that makes film-making cheaper and easier, usually by allowing smaller crews or less schlepping of equipment on location. For each problem to be solved—light levels needed for shooting, the problem of equipment weight, problems of camera mobility, or the difficulty of getting good sound on location—different companies have offered competing solutions. For low-light filming, for example, Kodak, Fuji, and Agfa have offered faster film stocks; Zeiss, Angenieux, Cooke, and Panavision have offered faster, sharper lenses; and various makers of lighting equipment have developed lights and light-control equipment that require little electricity and are highly portable. Alone, each new technology has had little effect. But in aggregate, the dozens of new technical possibilities made available have radically altered the construction and implied worlds of commercial narrative films. In terms of David Bordwell's "style-syuzhet-fabula" triad,[7] the technical developments have had surprisingly little effect on style, but a discernible effect on syuzhet (plot) construction. This has occurred because the new technologies allow more on-location film-making
control, and thus freer use of the kinds of settings that can easily be shown, rather than left as syuzhet gaps, in fiction films. I will return to this issue later, after a review of the major recent changes in film technology.

How film-makers get images has been directly affected by changes in film-stock technology, lenses, and cameras meant for location use. But the primary change in visuals has been indirectly created through Automated Dialogue Replacement (ADR) in postproduction. ADR masks its own existence so well that it is not audibly detectable to a film viewer. It has been radically liberating as a catalyst for other shifts in technical practice.[8]

The most important of the visual-technology shifts has been in an expected area, film-stock technology.[9] Until the mid-1970s Eastman's 5254/7254 negative was standard for narratives; when the new 5247/7247 stock came in, films changed visually and film-making got easier: the stock had such fine grain and wide exposure latitude (7 to 10 stops of light acceptance) that it became a new standard, one still more or less prevalent. Since then Eastman, in addition to unpublicized refinements of 5247, has produced three generations of high-speed stock, a fine-grained and contrastier replacement for 7247 in 16mm (7291), a daylight-balanced version of 5247 for use with the new high-efficiency "metalhalide" arc lamps known as "HMIs," and a stock designed purely for matte work in special effects film-making. Though the newest high-speed stocks have six-layer emulsions[10] and flattened-molecule technology (which combine to allow high-speed film-making without visible grain), each stock intercuts smoothly with the basic "47." The new high-speed stocks are rated at ASA 320 (compared to 47's ASA 100) and can be rated faster, even without extended lab development ("pushing"). For example, Full Metal Jacket was shot with the film rated at ASA 800.[11] To the viewer, almost nothing has changed in a decade. But because of the increase in film speed without increase in grain, now very low-light scenes can be filmed easily; because of compatible tone and grain-structure architecture, interiors and even night exteriors are similar in "look." Eastman, Fuji, and Agfa stocks can coexist as stylistic variants even within a single film without a viewer noticing.[12] The effect has been on the kinds of shots that can be incorporated into narratives smoothly. Night-for-night filming is now relatively easy, provided the new "superspeed" lenses are used.
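
To put the speed figures above in terms of exposure, film speed is logarithmic: each doubling of the ASA rating buys one additional stop. The short sketch below is purely illustrative and not part of the original article; the function name is invented, and the only inputs are the ASA values already cited.

    import math

    def stops_gained(old_asa, new_asa):
        # Each doubling of ASA (ISO) speed corresponds to one additional stop of exposure.
        return math.log2(new_asa / old_asa)

    print(stops_gained(100, 320))   # basic "47" (ASA 100) to the high-speed stocks (ASA 320): about 1.7 stops
    print(stops_gained(100, 800))   # to the ASA 800 rating used on Full Metal Jacket: exactly 3 stops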

Low-light filming problems also were "solved" by lens and lighting manufacturers unobtrusively. Quicker and sharper zoom and prime lenses enhance the possibilities of fast stock without introducing their own "look." Where it used to take 100 footcandles of light to get sharp images a decade ago (because older lenses were only really sharp stopped down) now 25 footcandles or even 10 is common. (In Eastman's demonstration film for film-to-video transfer techniques, one romantic candle-lit scene is lit with only one ordinary candle; it looks fine.) Not only is frying no longer an occupational hazard for actors;
syuzhet construction now has very few light limitations. And because lighting problems in narrative film-making are in good part problems in schlepping lights and light-control equipment, and in getting juice to the lights, more efficient units such as HMIs have become popular. (An HMI is around five times as efficient as a tungsten lamp, twice as efficient as a carbon arc, and is daylight-temperature.) Quicker lighting set-ups with less generated heat and smaller electricity requirements expand location possibilities.
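
A back-of-the-envelope comparison suggests why these efficiency figures matter on location. In the sketch below, only the drop from 100 to 25 footcandles and the five-to-one HMI advantage come from the passage above; the 10,000-watt tungsten package is an invented, hypothetical figure used solely for illustration.

    # Hypothetical location lighting load, before and after the changes described above.
    tungsten_watts_old = 10_000    # invented figure: a tungsten package for a 100-footcandle set-up
    light_reduction = 100 / 25     # faster stocks and lenses: roughly a quarter of the light now needed
    hmi_efficiency = 5             # an HMI delivers about five times the light per watt of a tungsten lamp

    hmi_watts_new = tungsten_watts_old / (light_reduction * hmi_efficiency)
    print(hmi_watts_new)           # 500.0 watts -- a load that small generators and house circuits can carry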

The additional location flexibility made possible by new visual tools made location work cheaper and easier; it also made story construction a bit different in potential. More low-light locations could be used, and they could be used in new ways. The city night locations of a Desperately Seeking Susan or After Hours were predicated on the new tools and stocks. Certainly night exteriors are not new; the ease with which they can be put into films is.

Complementing and accelerating the changes brought by stocks, lenses, and lights are post-production sound developments. ADR, based on "insert" electronic technology (which "ramps" the onset of the bias tone so that sounds can be inserted in a track without pops or other recording artifacts), makes it possible to clean up location sound tracks or unobtrusively to replace location sound entirely in post-production. Now so ubiquitous that almost every feature film lists ADR credits, the art of sound replacement and remixing is an unsung but central contemporary film- and video-making craft. But except for the remarkable intelligibility of dialogue made possible by ADR and new versions of sound tools such as radio microphones, the main effect of new sound technology has been to free up crews on location. No longer is a take spoiled by bad sound; no longer need a boom shadow be in the way; no longer must the sound of a moving camera be so carefully masked. But the use or non-use of ADR in a scene is undetectable.

Curiously, a by-product of ADR has been in characterization and acting styles. Actors such as Robert DeNiro now often just mumble their lines on-location, and depend on ADR sessions to get the right intonation and subtextual subtlety into the final film.[13] Before ADR, Europeans (such as Bergman) frequently "matched" dialogue either because of bad recording conditions or to re-do performance nuances;[14] with ADR this technique has become common, even everyday. Actors who do not have to project their voices can present different aspects of character than those who must be heard clearly by location microphones. Potentially this could cause large shifts in story and character construction. But the trick to the technique working is for the actor and film-maker not to get caught by viewers. Acting styles have changed since ADR. But only those within the industry know how or why.

The cameras used to shoot also have changed, and similarly, it is impossible to tell what camera has been used in any recent normal-format film. Which of the
four generations of Arri 35BL or two generations of Moviecam or myriad generations of Panavision/Panaflex cameras a film was shot with is in no way visible. (Similarly it is impossible to tell what camera recent 16mm films were shot on.) Each generation in each manufacturer's line has become quieter, more reliable, more adaptable to video viewfinders, and more versatile, particularly for location filming, but no recent camera has advertised its existence to the viewer.

The most obtrusive technical change outside of the area of special effects in the last decade has been in camera movement. The Steadicam, Louma-type crane, Camrail, and jibbed dolly systems that have allowed us our current period-style of perpetually moving cameras are all consequences of fitting video viewfinders to film cameras, thus making them remote-controllable. The earliest uses of these tools were obtrusive: in Bound for Glory when the camera glided through a crowd smoothly and in ways not conceivable with a boom or dolly, the effect was startling; so was the camera smoothness in An Unmarried Woman when the camera went up flights of stairs with the actors; so were the hallway and maze and stairs moving-camera scenes in The Shining . But the Steadicam has become just part of current film technique, and the different devices for moving a camera by remote control are used in films almost interchangeably, usually without calling attention to themselves. The basic principle behind all the devices is that a camera can be moved more freely if its 50-lb. weight can be separated from the weight of the operator and focus-puller. Remote control and video taps solve the problem: in the Steadicam by physically isolating the camera from the "handholding" operator; in the Louma and jib-based rigs by putting the controls at a console and locating the camera at the end of some sort of boom, with mechanical, hydraulic, or electronic servocontrol systems that allow manipulation of all camera controls.

Are developments in moving-camera technology revolutionary? They seemed so in the 1970s; now the situation is less clear. As the mobile camera became more common, the stylization apparent in a film such as The Shining has blended into a repertory of mobile-camera/stationary-camera paradigms. But these paradigms are not so much the consequence of technologically created opportunity as of an economics- and video-driven loss of other esthetic options. A decade ago, a film-maker could use the edges of the frame as part of compositional graphics—to lead the eye, to counterbalance other visual elements. But now cable and video distribution is the financial heart of the media storytelling business, so film-makers have to keep essential information away from the edges of the screen, and have to forget about using the graphic potentials of 2.35:1, 1.85:1 or 1.66:1 frame formats. All films must be composed for what the Europeans call "amphibious" life, for viewability both on theater and on television screens. Without control of the shape or edge of frames, visual control must be done kinetically—especially because TV screens do not carry enough visual information for long-held static shots to retain viewer attention. Glance Esthetics, our contemporary period-style,
has almost completely replaced Gaze Esthetics, in which film-makers left time for the viewer to contemplate the mise-en-scène . Glance Esthetics (perhaps seen in purest form on music videos) requires the moving camera. But it seems far less than obvious how one might analyze stylistic changes forced by economic changes that themselves reflected new technologies and broader-scale power struggles within society. And the longer the new camera-moving technologies are with us, the less radical they seem—the more they seem mere successors to the dolly-shot esthetics championed by Max Ophuls and a whole batch of New Wave film-makers.

The sum of the technical shifts in the last decade has been to increase the possibilities of location film-making and to free film-makers from some logistical and financial production hassles. Though it would take statistical analysis to prove or disprove my impression that location exterior (and especially low-light) scenes are much more common now than they were a decade ago, and that they now more frequently form parts of the syuzhet rather than syuzhet gaps, the major drawback to such scenes (their cost) has been lessened. The film industry's ability "to turn the world into a story" (to use Mitry's famous phrase) has been increased in that more kinds of "natural" scenes can now be appropriated for fiction. But there is very little that is esthetically revolutionary in the new technologies, and nothing that would upset the basic film-making power structure. Changes have been conservative, a defense against inroads and threats brought by very rapidly evolving video technologies. Pressure from the outside rather than forces within the film industry has given us the new toys we work with on location. Each of these toys also plays to the extant power structure within the film-making community.

To grasp how new technology functions it is perhaps helpful to outline the economic and professional interests each technical shift favors. Low-budget film-makers, pushed out of the "industrial" market by video, mostly switched to video, bankrupting a lot of small 16mm equipment manufacturers and labs in the process. But at the higher budget levels, the mystique of film quality was promoted heavily by everyone from Eastman on down. Those with the most to lose by competition with video pushed the new technologies hardest. Craftspeople with a life invested in film technique were eager to try any new film tool that would make them more competitive. Rental houses could make money renting out new "top of the line" tools that changed quickly enough to keep film-makers from wanting to buy them, but still were rentable because they "worked like" older tools. Equipment manufacturers exploited film-makers' desire to survive and the willingness of rental houses to buy their stuff as a nudge to bring out "ever-better" tools. Accommodation was made to eventual video use by promoting the use of film as an originating medium and accepting the reality of Rank Cintel or Bosch video transfer. Driven by the nightmare of hearing the
phrase "we could just as well have done it on tape," the film community made its internal power accommodations and promoted its mystique of quality in order to survive in the higher-budget ends of the industry. In a weird sense, the threat from video was approached with a triage mentality. What was irrevocably lost was simply accepted—films would be transferred to and shown on video. What was seen as "working" all right without change, that is, film's basic rhetoric and "tradition of quality," was deliberately not undermined by technical shifts but instead was reinforced. What was changed were production practices and technology. The changes made here were meant to expand the domain of fictionalized establishment practices into areas in which video could not compete well, such as location film-making. Video has (at least at present) real problems in dealing with on-location light contrasts. Film's light acceptance range makes it unbeatable on-location. A battery of technical changes were gradually instituted so that film's on-location advantages could be maximized. The last decade has been a power struggle between factions in the entertainment industry. Technology has simply been a tool for gaining or retaining financial power.

What can be said about the relationships between change in cinema and technical change on the basis of recent film history? Nothing very global. There have been some changes in what we see and hear and how actors act. But each change, as it came, was so subtle, so well masked, that no major change ever was "felt" by audiences. The film industry's defensive maneuvers of the 1970s and 1980s are far different from the flaunting of color, 3-D, and wide-screen in the 1950s (and judging by industry finances, far more successful). But the changes that have occurred are still not fully played out, so to argue either the parallels or differences between the last decade and preceding ones would be to deny the complexity (and complex approaches to the craft) of film as a technological medium and art form. In film, as Ingmar Bergman put it, "God is details." So large theoretical claims must be put on hold, or at least balanced with one another in recognition of the different perspectives from which cinema can be seen.

The basic problem in theorizing about technical change in cinema is that accurate histories of the production community and its perspectives, as well as of the technological options that face film-makers, must precede the attempt to theorize. And theory itself must limit itself to a little bit of history at a time. It is not that we do not need theory that can help us understand the relationships between larger social and cultural developments, ideology, technical practice, and the history of cinema. Rather it is that whatever we do in our attempts to theorize, we need to welcome all the available sources of information, from all available perspectives, tainted or not, and try to put them in balance. Anything less than that approach lessens us as students of cinema by denying the complexity of the art we study.



Videophilia:
What Happens When You Wait for It on Video

Charles Shiro Tashiro

Vol. 45, no. 1 (Fall 1991): 7–17.



Since the early 1980s, there has been a steady increase in the revenue generated by marketing of theatrical films on videocassette and disc. This mass dissemination has been a boon to those interested in close study of film texts as well as to those simply interested in owning a copy of their favorite films. However, this apparent windfall has usually been embraced with little attention to the technical issues raised by the movement of a text from one medium to another or to the consequences of film evaluation based on video copies.

This discussion is meant as a broad overview of home video, and much of it is relevant to both videocassette and videodisc. However, I have concentrated on the latter, since it has evolved into the "quality" video medium, with a greater focus on duplicating the cinematic experience and an increased sensitivity to the technical requirements of film. (As a former producer for the Criterion Collection, including their edition of Lawrence of Arabia , I have some insight into the factors that go into disc production.) In particular, more attention to visual matters has popularized the transfer of wide-screen films at full horizontal width, with the resulting "letterbox" shape.[1] Videodisc publishers' attempted fidelity to film originals, the theoretical problems raised by such an attitude, and its relevance to film viewing and analysis are the focus of this paper.

The Videodisc Medium

To some extent, videodiscs would appear to be the film enthusiast's dream come true. They are light, portable, easy to store. With the growth of the market, a larger catalogue of titles is available.[2] While not cheap, the retail price is well below fees for print rental, not to mention the astronomical sums for purchase. Moreover, discs are (at least in theory) permanent, unlike either videotape or film, which deteriorate with each use.



Film never wears out faster than when run through a flatbed editing machine, the condition best suited for close analysis. Videotape offers fast-forward and rewind, but is much slower than the nearly instantaneous access available with videodisc players. Consumer-level VCRs, in addition, cannot offer the true freeze frame that a CAV videodisc offers. Disc players can also interact with computers and offer higher picture resolution than most commercially available tape gauges.[3] And there is, finally, the greater attention paid to the video transfer true of at least some videodisc publishers.

Still, with videodiscs there are trade-offs and underlying ideological assumptions. For example, unlike compact audiodisc players (a related technology), which usually have a feature to play songs in random order, videodisc players cannot randomly "scramble" the chapter encoding included on some discs. Presumably this lack of scrambling ability is based on the assumption that the film viewer will not be interested in mixing up the linear flow of the narrative. The players also do not have a feature to play sound at anything other than regular speed, which obviously assumes that only the picture is worthy of multispeed analysis.[4]

These features are designed into (or out of) the medium. Some are more beneficial to the user than others; all are ideologically dictated. But the limitations of the machinery itself and the assumptions that go into its design must be considered (if only in the background) in any discussion of the use of discs for pedagogical, analytical, or substitute cinematic viewing purposes. We must also consider the strategies of moving the text from film to video.

Transfer/Translation

The term "film-to-video" transfer is itself an ideological mask. Its connotation of neutral movement from one location (projection in a theater) to another (viewing at home) hides the reconfiguration of the text in new terms. A more accurate expression would be "translation," with its implicit admission of a different set of governing codes. While film and video share common technical concerns (contrast, color, density, audio frequency response, etc.), their means of addressing those concerns differ. The conscientious film-to-video transfer is designed to accentuate the similarities and minimize the differences, but the differences end up shaping the video text.

We might call the ease of translating a particular film to video its "videobility." A film with high videobility translates relatively easily, perhaps even gaining in the process. (Which is to say that there are elements in the film that come through more clearly on video. Subtlety of performance, intricacy of design, for example, may be lost in the narrative drive of the one-time-only cinematic setting, but enhanced at home.) A film of low videobility translates with more difficulty. There are two components to videobility: technical and experiential. Technical differences of image between film and video center around three issues: 1) brightness and contrast range, 2) resolution, and 3) color.[5] As for the sound, a sound track mixed for theatrical exhibition may, when transferred to video, have tracks that will not balance "properly" at home. (For example, dialogue tracks may be drowned out by ambience tracks, etc.)[6]

[Figure: Lawrence of Arabia (1962)]

Consider the following hypothetical example. A young couple, with their baby daughter, sits next to a window covered by horizontal blinds. Next to the window is an open doorway, leading out into a garden ripe with daffodils in summer sunlight. A butterfly flits across the flowers, attracting the attention of the baby, dressed in a bright red dress. She toddles out into the sun to chase the butterfly as her parents remain in the alternating shadows and shafts of light caused by the horizontal blinds. The mother looks at the father, then says "I think it's time we called it quits" at just the moment their daughter, as she reaches for the butterfly, trips and falls giggling into the flowers.



As we work to translate this image into video, problems arise immediately. First, there is the brightness range between the garden in sunlight and the parents in shade. Film records this juxtaposition without difficulty. But as the telecine operator exposes the video for the father and mother, the baby, butterfly, and flowers disappear into a white blaze; correcting for the baby, the parents disappear into murky shadow.

A choice has to be made, but which is more important? Attention to narrative would dictate exposing for the most significant action. Reasoning that the overall film is about the couple's divorce, the operator decides that the line "I think it's time we called it quits" is more important and thus chooses to expose for the interior. The baby's giggle seems to come out of nowhere; even if the juxtaposition between the line and the baby's giggling were not there, letting the flowers go to blazes runs the risk of losing the sensual detail. This detail may not dominate a film, but its cumulative effect is certainly a powerful influence on our perception.

The operator decides to make an overall adjustment in contrast to bring all the brightness ranges into midrange, thus making the image more "acceptable" to video. As a result, the alternating light and shadow are readable as a pattern and the baby in the flowers reappears out of the white sun.

Just about everything is visible now, but the sacrifice has been to change all the tonal values into the middle greys. Vividness of color and detail are lost, and the image looks as if it's been washed with a dirty towel. (As an example of just such a "dirty towel" transfer, see the video release of Joseph Losey's Don Giovanni .) The video image is acceptable within the limitations of the medium but unsatisfactory as a reproduction of the film image. In other words, the overall contrast of the image can be "flattened" to conform to the technical limitations of video, but the visual impact has been flattened as well. Thus, films photographed in a low-key or contrasty manner might be said to have low videobility because of the difficulty in reproducing their visual styles.
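
The "flattening" described here can be pictured as a toy tone curve that pulls every value toward middle grey. The sketch below is schematic only: the 0-100 scale, the strength setting, and the scene values are invented for illustration, and no actual telecine works this simply.

    def flatten(tone, strength=0.5, mid=50.0):
        # Pull a tonal value (0 = black, 100 = white) toward middle grey.
        # strength = 0 keeps the film's full contrast; strength = 1 turns everything grey.
        return mid + (tone - mid) * (1 - strength)

    shadow_under_blinds = 10    # the parents in the slatted shade
    sunlit_garden = 95          # the baby, the butterfly, the daffodils

    print(flatten(shadow_under_blinds))   # 30.0 -- the shadows are lifted and lose their depth
    print(flatten(sunlit_garden))         # 72.5 -- the white blaze is tamed, but the sunlight dulls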

But there is another problem with our scene. The horizontal blinds read perfectly well on film because of its resolving power. But on video, they produce a distracting dance as the pixels inadequately resolve the differences between the blinds and intervening spaces. In other words, film can read the interstices between the blinds and reproduce that difference; video, trying to put both the blind and the space into the same pixel, cannot. (This is why TV personalities do not wear clothing with finely detailed weave or patterns.) The only way to compensate for this "ringing" effect is to throw the image slightly out of focus.
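
The "distracting dance" is at bottom a sampling problem: when the slat spacing approaches the pixel spacing, each pixel averages slat and gap together, and a small shift in framing changes the resulting greys. The sketch below is a crude stand-in for real scanning, offered only as an illustration; the scene values and the box-average are invented assumptions.

    # One scanline across venetian blinds: 1 = slat, 0 = gap, repeating every three units.
    scene = [1, 1, 0] * 40

    def pixelize(signal, pixel_width, phase=0):
        # Average the scene over each coarse "video pixel," starting at the given offset.
        s = signal[phase:]
        return [sum(s[i:i + pixel_width]) / pixel_width
                for i in range(0, len(s) - pixel_width + 1, pixel_width)]

    print(pixelize(scene, 1)[:9])      # fine "film" grid: the 1, 1, 0 rhythm of slats and gaps survives
    print(pixelize(scene, 4)[:9])      # coarse "video" grid: muddled greys in a false, wider pattern
    print(pixelize(scene, 4, 2)[:9])   # shift the framing two units and the greys change -- the shimmer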

The resolving power of the film image is almost always greater than that of video. It is this greater resolution that enables the film image to be projected great distances. It is also this resolution that allows the greater depth
and sensory detail that we associate with the filmgoing experience. Therefore, a film dependent on the accumulation of fine details also has low videobility. (For example, in the MGM/UA letterboxed video release of Ben-Hur's chariot race, the thousands of spectators become a colorful flutter; the spectacle of Lawrence of Arabia is also significantly reduced by the low resolution of background detail.)

And what about color? Although photography and video color reproduction are fundamentally different (one is a subtractive process, the other additive), it is the limitations of the video image that present the greatest problems, particularly the handling of saturated reds. Too vibrant or dense, and the signal gets noisy. But since red is often used to attract attention, it cannot be muted too much in video without violating visual design. Thus, color balance on the baby's dress would have to be performed carefully to allow the red to "read" without smearing. (For examples of dissonant reds, see Juliet's ball dress in the Paramount Home Video release of Zeffirelli's Romeo and Juliet; also note the scenes inside HAL's brain in the MGM/UA release of 2001: A Space Odyssey .)

On the other hand, the relative imprecision of video does have some advantages, or, at least, it can be exploited. For example, optical effects in film, such as dissolves, "announce" themselves because of a noticeable shift in visual quality as the optical begins. This shift results from the loss of a generation involved in the production of the optical effect. To some extent, because the video image lacks the same resolution, the differences between the first-generation film image and the second-generation optical image can be lessened. In effect, the transfer takes advantage of video's inferior resolving power to make the first-generation image look more like the second-generation image.

There are other problems, though, that result indirectly from the relatively low fidelity of the video image when compared with the high-fidelity sound reproduction possible with only a modest home stereo. Classical narrative is structured on the notion of synchronization between image and sound. This synchronization has a temporal component: we expect words to emerge from lips at the moment they form the letters of those words; when a bomb goes off, we expect to hear an explosion, etc. But there is also a qualitative component to synchronization. A big image of an explosion should be loud; a disjuncture occurs if the audio "image" remains large when that big image is reduced to a small screen. Imagine attending the opera and sitting in the last row of the upper balcony but hearing the music as if sitting in orchestra seats.

Home stereo is not equal to a theater. But subjectively, it is much closer in effect to the theatrical experience than a television image is to a projected film image. Moreover, when the sound tracks maintain some aspects of theatrical viewing/hearing that are easy to maintain in audio but impossible to duplicate in picture, we're once again conscious of the differences, not only between picture and sound, but between video and film. For example, in the opening scene of Blade Runner , a spinner (flying car) appears in the background, flies toward the foreground, then disappears camera left. As it retreats into the distance behind us, the sound continues (at least in those theaters equipped with surround stereo), fading into the distance, even though the image is no longer on the screen.

[Figure: A spinner in Blade Runner]

When this effect is duplicated in the Criterion Collection's letterboxed edition of the film, the audio decay of the spinner goes on too long or not long enough, depending on where you've placed your speakers. While the speakers can be moved, doing so runs the risk of throwing other sounds out of "synch." Even if it doesn't affect other sounds, however, the labor of moving speakers around for each viewing session takes the home video experience a long way from the passive enjoyment of sitting in a darkened theater, allowing yourself to be worked over by sight and sound.

"Improving" the film original by correcting optical effects, "fudging" the video when it can't handle the superior resolving power of film images, "flattening" the contrast ratio in order to produce an image that registers some version of the information contained in the original, together with audio that by its technical superiority reinforces our awareness of the video image—at what point do these differences produce a product no longer a suitable signifier of the film signified? Colorizing, for example, while damned as an obvious distortion
of the film, can also be defended as improving the original. Is the conscientious transfer any less of a distortion? Preserving the "original" film text may prove as elusive a goal as the "unobtrusive" documentary camera.

The Disintegrating Text:
Videodiscs as Classical Ruins

Reconstructions of ancient architecture can be attempted from the fragments scattered across a landscape. But a rebuilt Parthenon is still a product of the archaeology that researched it. Videocassettes and discs are like large shards—hints of the original. But discs are not just the ruins of their forebears, they are the guns that destroy the temple by taking the archaeological process further, breaking the flow of a film into sides, segmenting the programming into "chapters," halting it altogether with freeze frames, encouraging objective analysis.

Film viewing, of course, is not genuinely continuous, since a feature film is divided into several reels. The theatrical experience, however, represses the disruption of reel breaks by quick changeovers of projectors, producing an illusion of continuous action. Videotape maintains that flow, at least for average length films. Discs cannot,[7] and publishers are thus faced with the problem of where to break the narrative. The decisions are governed by two concerns: 1) length limitations of the side—one hour for a CLV (Constant Linear Velocity) disc, 30 minutes for a CAV (Constant Angular Velocity) disc, and 2) suitability of the break.

Choice of breaks is not as simple as it might seem. For example, with a 119-minute film, it is not just a matter of putting 60 minutes on one side, and 59 minutes on another. If the 60-minute mark occurs in the middle of dialogue or a camera movement, then the break has to be pushed back to the previous cut. If there's an audio carryover over that cut (particularly a music cue), then other problems arise. If aesthetic considerations suggest going back before the 59-minute mark, it will no longer be possible to fit the film on a single disc, which means a rise in production costs. Faced with such an alternative, aesthetic considerations become secondary. (For example, consider the break between sides one and two on the Criterion CAV Lawrence of Arabia , which occurs in the middle of a dissolve between Lawrence and Tafas in the desert and their retrieval of water from a well. This break subverts the linkage function of a classical dissolve, here intended to bridge two disparate times and locations. On the disc, the desert and the well remain distant, separated by the time necessary to change sides.)
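
The decision described here is essentially a small scheduling problem: take the latest clean cut that still fits on the side, then check whether the remainder fits on the other side. The sketch below is a deliberately simplified illustration; the cut points and running times are invented, and real disc mastering weighs audio carryovers, dissolves, and cost in ways a few lines of code cannot capture.

    SIDE_LIMIT = 60.0   # minutes per CLV side (30.0 for CAV)

    def choose_break(clean_cuts, film_length, side_limit=SIDE_LIMIT):
        # Latest clean cut at or before the side limit; report whether the rest fits on side two.
        candidates = [cut for cut in clean_cuts if cut <= side_limit]
        if not candidates:
            raise ValueError("no usable cut before the side runs out")
        break_point = max(candidates)
        fits_on_one_disc = (film_length - break_point) <= side_limit
        return break_point, fits_on_one_disc

    # A hypothetical 119-minute film whose clean cuts near the hour mark fall at 57.5 and
    # 62 minutes, because the 60-minute point lands in the middle of a dissolve.
    print(choose_break([12.0, 31.5, 57.5, 62.0, 88.0], 119.0))
    # (57.5, False): pushed back before the 59-minute mark, the remaining 61.5 minutes no
    # longer fit on the second side -- the point at which aesthetics yield to cost.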

The disruption to narrative is inevitable, though, wherever the break is placed. It can be ignored , but it cannot be overcome . The jolt created by the side breaks becomes an integral part of the text. Moreover, the passive watching of
the theatrical experience is replaced by one involving labor, however minimal (getting up and switching sides), encouraging a literal, physical interaction with the medium. (Imagine what it would be like if in the middle of every theatrical viewing you had to wait a few seconds before the film continued; imagine further what it would be like if you were responsible for continuing the experience.) This physical interaction involves the proletarianization of the video viewer by forcing him/her to become, in effect, a projectionist. And any suppression of the knowledge of technology thus requires a conscious activity: we cannot pretend that the discourse will proceed without us, because it won't until we get off the couch and flip sides.

This fragmentation of the viewing experience gets reinforced by the chapter encoding (although most discs are still produced without chapters). By their very name, chapters call attention to the hybrid nature of the medium. The obvious comparison is with a book. But book chapters are chosen by their authors; however much they segment the narrative, that choice arises at the moment of composition. As such, they are an integral part of the book's form.

Videodisc chapters are not cinematic composition, they are videodisc imposition. They aren't chosen at the point of film production, but after the fact, a voice from outside the text.[8] While common sense might lead one to expect chapters to be equivalent to the cinematic "sequence," in fact they often do not conform to any breakdown of the cinematic action, and there is no single pattern or rationale for their placement. They do, however, encourage the user to think of the text as something other than an unrolling, uninterruptible narrative. (For this reason, at least one well-known producer/director refuses to allow chapter encoding on disc releases of his films.)

Furthermore, while the chapter metaphor evokes books, their function is more similar to the track or cut of an LP or CD. The visual appearance of the videodisc, obviously intended to evoke the LP,[9] reinforces this hybrid association. Scenes or segments of the film end up getting treated like individual pop songs on a record or CD: no longer related to their immediate surroundings, they become isolated as discrete units. Chapter stops run like a mine field under the linear development of classical narrative. Fans of a film no longer have to sit through the parts they don't like; they can jump to their favorite scenes, in whatever order they choose. Imagine how different an experience it would be to enter a movie theater and be able to skip the tedious parts or scramble the order of the reels. "I came for the waters . . ." zip "We'll always have Paris." And I suspect even the viewer interested only in watching the movie will use the chapter encoding for quicker access. Isn't boredom one of the consequences of the repeated viewings encouraged by home video? The significance of chapters is that viewers are beginning to think in these terms, to feel in control of a film's tedium.



If chapters evoke books and records, freeze frames turn a film into a sequence of stills or paintings. In so doing, they further destroy linear development. A single CAV side contains 54,000 frames. That's 54,000 possible points of fixation, alternative entries into an imagistic imaginary. The film's characters and story can be discarded in favor of new narratives inspired by the images. Just as photographs and paintings arrest our gaze and inspire us to invent, so too the frozen film image, isolated in time, loses its context and creates a new one.

With motion removed, the film image becomes subject to a different critical discourse. No longer is it enough to talk about an image getting us from point A to B (the narrative prejudice). Criticism of the image's frozen form, composition, lighting, and color is invited. Individual images can be subjected to the standards of photography and painting. Of course, few film images can withstand such scrutiny, since most are composed in movement.

Of course cinema cannot be reduced to its still frames and the semiotic system of cinema cannot be reduced to the systems of painting or of photography. Indeed, the cinematic succession of images threatens to interrupt or even to expose and to deconstruct the representation system which commands static paintings or photos. For its succession of shots is, by that very system, a succession of views.[10]

To the extent that they encourage a criticism based on alternative codes, freeze frames threaten the very basis of classical narrative, in effect reversing the semiotic power relationship noted by Dayan.

"You Have to See It in a Theater"

An undergraduate film-appreciation class at the University of Southern California is taught on the basis that films, in order to be understood fully, must be seen under theatrical conditions. Great expense is taken to obtain good prints; screenings occur in a large facility analogous to first-run theaters in the Los Angeles area. Stress is placed on the larger-than-life aspect of filmgoing. And yet, when a scheduled film is unavailable in 35mm, dirty, murky, 16mm prints are used. Is this part of the theatrical experience?

Yes, although in ways not likely to be on the minds of anyone prejudiced toward theatrical exhibition. This attitude implies that theatrical viewing conditions, even at their worst, are preferable to viewing a decent video version at home. But film exhibition is subject to a range of factors—print quality, film gauge, optical vs. magnetic sound, stereo vs. mono, screen size, aspect ratio,


362

figure

High "videobility":  The Wizard of Oz

the quality of the reproductive machinery—beyond the control of the consumer. So—which violates the film more, a good video or a bad print?

Most video transfers are made from technically superior film sources. At their best, the resulting tapes or discs have a uniform gloss that is generally not true of theatrical prints outside of initial runs. The benefit of this uniformity is a standardization of presentation, dependent only on the hardware used for reproduction. Of course, all forms of standardization involve loss as well as gain. The variability of theatrical projection can have unintended benefits, when elements not noticed in one circumstance show up under others. But it seems unlikely that anyone would prefer a scratchy, inaudible reduction print made from a third generation negative to a video copy made carefully from an early generation source.

Earlier, I introduced the concept of videobility to describe the ease of translating a film into video. But videobility involves more than just questions of whether or not a decent video image can be produced. Some films have high videobility (The Wizard of Oz probably seems more familiar on video than in a theatrical screening, since most of us know the film through television broadcast). Others strike us as impossible to imagine on video without significant loss (Bondarchuk's War and Peace , for example). Is there, then, something in the viewing experience that depends on theatrical conditions for the full effect of a given film? Or, more properly, what does video lack that film possesses that makes the theatrical experience "essential"?


363

In "The Work of Art in the Age of Mechanical Reproduction," Walter Benjamin wrote that

The cult of the movie star, fostered by the money of the film industry, preserves not the unique aura of the person but the "spell of the personality," the phony spell of a commodity.[11]

Is the quasi-religious aspect of film viewing, induced by capital or not, "phony"? If a film succeeds in moving us to ecstasy, does it matter in experiential terms whether or not it is a "true" sensation? The ecstatic component of (some) filmgoing cannot be dismissed, particularly when discussing it in relationship to home video. For this religious aspect of filmgoing is clearly lacking in home video viewing.

One obvious reason for this lack is the difference in scale. As the cliché has it, film is larger than life, television smaller. And yet the difference between video and film experiences is not scale as such , but the depth that greater size gives to film's sensory extravagance. It is that richness, sensual saturation, and euphoria that video cannot duplicate. But if video is excluded from the Dionysian, it gives access to the excess that creates ecstasy through the capacity to repeat, slow, freeze, and contemplate. Savoring replaces rapture.

Letterboxing, Mon Amour

The problem of scale has, from the first, been linked to the related issue of aspect ratio. CinemaScope and other wide-screen processes were developed (along with high-fidelity stereo sound) with the purpose of overwhelming viewers with an experience not available on their televisions at home. On the other hand, sale of broadcast and video rights of theatrical features represents a lucrative source of revenue, necessitating a means of squeezing wide-screen images into the TV frame. But you cannot fill the TV frame without either cutting off the edges of the film picture or, through anamorphic compression, turning the films into animated El Grecos.

In recent years, there has been a growing interest in maintaining the theatrical aspect ratio for video viewing. Unfortunately, this interest has bred the fallacious notion that there is a single "correct" aspect ratio. In fact, it is the rule , not the exception, that there is no single "correct" aspect ratio for any wide-screen film. For example, during photography, it is common for directors and cinematographers to "hard matte" some, but not all, of their shots if they expect to exhibit in 1.85 or 1.66. If you examine the negative, some shots will be matted for 1.66, say, and others left at full frame 1.33. Which is "correct"?


364

figure

Full reproduction of  Lawrence  frame, without matte

Frequently, too, the ratio of photography will be altered when a film changes gauges. A film might be shot in nonanamorphic 70mm at 2:1, reduced to anamorphic 35mm at 2.35:1, then reduced to 16mm at 1.85:1. (The Lawrence of Arabia disc, for example, was produced from a 35mm source, meaning a slight loss of vertical information.) Then there are those processes, like VistaVision, that were designed to be shown at different ratios. As if that weren't complex enough, most projectionists show everything at 2:1. Is "correct" based on intention, gauge, exhibition, breadth of distribution, amount of visual information, . . .?

While it might be more prudent to think of an "optimal" aspect ratio, rather than a "correct" one, who should choose the optimum? Asking the director or cinematographer perpetuates the auteurist mystique while assuming that the film-maker knows best how a film should be watched at home. This approach further assumes that these people are best equipped to translate film images into video images. To privilege film technicians, then, subordinates video to film.

Prior to the involvement of the film's technicians, optimality was visually determined by concentrating on significant dramatic action and sacrificing composition and background detail (by cropping the edges of the frame). When composition made such reframing impossible (when, for example, two conversing characters occupied opposite edges of the frame), then a "pan-and-scan" optical movement was made; or the frame was edited optically into two shots.


365

figure

"Full frame" TV image with pan-and-scan

figure

"Full frame" TV image with letterboxed full film frame


366

Pan-and-scan transfers are performed largely to preserve narrative and to approximate the theatrical experience by keeping the entire television frame filled. There is an implicit assumption that the vertical dimensions of the film frame must be maintained. In effect, pan-and-scan transfers privilege the television (thus subordinating film to video): it is more important to fill the TV frame than to maintain cinematic composition. Letterboxing, which maintains the full horizontal dimension of the wide-screen image, reverses that priority by preserving the cinematic framing.

But a transformation occurs in maintaining composition. (If it didn't, letterboxing wouldn't be controversial.) In his essay "CinemaScope: Before and After," Charles Barr writes:

But it is not only the horizontal line which is emphasized in CinemaScope. . . . The more open the frame, the greater the impression of depth: the image is more vivid, and involves us more directly.[12]

If Barr is correct, letterboxing, by merely maintaining the horizontal measurement of the 'Scope frame, cannot duplicate the wide-screen experience. Letterboxing equates the shape of the CinemaScope screen with its effect .

In fact, while letterboxing subordinates the TV screen to cinematic composition, it simultaneously reverses that hierarchy. If film is usually considered larger and grander than TV, wide-screen film letterboxed in a 1.33 TV frame subjects film to television aesthetics by forcing the film image to become smaller than the TV image. Thus, in the act of privileging film over video, video ends up dominant. (The movement from 70mm theatrical exhibition to 19-inch home viewing is one long diminuendo of cinematic effect.)

Moreover, letterboxing is an ambiguous process, with all the resistance ambiguity encounters. A letterboxed image is neither film nor TV. Its diminished size makes it an impossible replacement for the theatrical experience; at the same time, the portentous black bands at the top and bottom of the screen remind the video viewer not only of the "inferiority" of the video image to the film original (it can only accommodate the latter by shrinking it) but also of a lack. What is behind those black bars? Edward Branigan makes the point that the frame is "the boundary which actualizes what is framed" and that "representation is premised upon, and is condemned to struggle against, a fundamental absence."[13]

The absent in film is everything outside the visual field. In a letterboxed transfer of 'Scope films, the matte hides the bottoms and tops of the outgoing and incoming frames. Viewing the film without the matte would make it impossible for us not to be aware of the "cinematicness" of the image, since we


367

would be viewing frame lines in addition to the picture. The mattes for "flat" wide-screen films (1.85:1 and 1.66:1) frequently blot out production equipment such as microphones, camera tracks, and so on, that the director or cinematographer assumed would be matted out in projection.

Both frame lines and extraneous equipment are part of the repressed production process. To see them ruptures the classical diegesis. And the fact that such a violence to our normal cinematic experience is necessary in video would call attention once again to the differences between the media. A double exposure of ideology would occur: of the repressed aspects of cinematic projection (frame lines, equipment)[14] and of the presumed neutrality of the transfer procedure.

Yet there is no useful alternative to letterboxing.[15] Form and composition are important; useful analysis of films on video cannot be performed when 43 percent of the image has been cropped, and certainly no one can claim to have seen(!) the film on video under such circumstances. If maintaining the horizontal length of the image creates the fiction that the cinematic experience has been approximated, it is nonetheless a fiction worthy of support. Besides, letterboxing introduces aesthetic effects of its own.
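A rough check of that figure, assuming a 2.35:1 'Scope frame and a 1.33:1 television raster (a simplification that ignores overscan and minor variations in both standards):

\[
1 - \frac{1.33}{2.35} \approx 0.43
\]

Read one way, this is the fraction of the original picture width that a full-height pan-and-scan transfer discards; read the other way, it is the fraction of the television raster that a letterboxed transfer leaves black. The 43 percent lost to cropping and the 43 percent surrendered to the mattes are the same number, seen from opposite sides of the trade-off.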

The frame created by the matte contributes one more effect toward treating the cinematic image as an object of analysis. Just as the frame of a painting directs our gaze toward the painting enclosed, so too the letterbox calls attention to the aesthetic qualities of the image framed. But that may be the problem; if people object to letterboxing, it's because it turns their classical narratives into formalist galleries. (Consider how the ponderously pseudo-epic qualities of Lawrence of Arabia get lost in a background blur on video, refocusing attention on the flatness of the image and compositional precision.) In fact, letterboxing does precisely the opposite of what Barr likes about wide-screen:[16] it ends up accentuating composition, rather than effacing it.

"I'll Wait for It on Video"

Who, after becoming used to the flexibility of home video, has not wanted to fast-forward past bits of a boring or offensive theatrical film? Doesn't this desire suggest a transformation of the cinematic experience by home video? What we once might have endured, we now resent. Hollywood continues to offer plodding, linear narratives wilted with halfhearted humanism as the staple of its production. But doesn't our itchy, reflexive reaching for the remote control suggest a complete saturation by classical narrative?

Whether we like it or not, home video turns us all into critics. Instead of being engulfed by an overwhelming image that moves without our participation,


368

we're able to subject film texts to our whims. And by allowing the viewer greater insight into an object of cultural production, home video starts to break the hold of individual texts and, possibly, of cinema in general. This conscious participation in film viewing can only be helped by the widespread dissemination of film texts, even in hybrid form.

We're back to Benjamin again. Having a good reproduction of the Mona Lisa does not substitute for the actual painting but it "enables the original to meet the beholder halfway," and in so doing, the copy "reactivates the object reproduced."[17] Well-produced home video performs the same function for film texts, which, the "phoniness" of the theatrical experience notwithstanding, are invested with an aura by classical practices of obfuscation, suppression, and capitalist investment in the commodity of the image. As home video allows us to meet the film text halfway, it does to film what film-makers have done to the world for years: turns it into an object for control.

At the same time, a conscious video consumer must confront the reality that home video is a luxury, that the possession of the equipment results from a position of privilege, thus perpetuating the very economic relations that active viewership (might) help undermine. Does this reality turn any video viewing into a guilty pleasure? One answer to this dilemma may reside in the writings of Epicurus, whose philosophy of pleasure derived from moral calculation may be the best guide for the aware consumer:

The flesh perceives the limits of pleasure as unlimited and unlimited time is required to supply it. But the mind, having attained a reasoned understanding of the ultimate good of the flesh and its limits . . . supplies us with the complete life, and we have no further need of infinite time: but neither does the mind shun pleasure.[18]

Videodiscs make us into proletarians and encourage criticism through physical interaction and segmentation. But, produced with care to maintain some aspect of the scopic pleasure of the cinematic image, they make possible a connoisseurship of form that theatrical viewing discourages. Videodisc viewing sits at an awkward juncture between criticism and experience, analysis and ecstasy, progress and privilege. As we participate in this ambiguous vacillation between oppositions, we become a post-modern contradiction: the Proletarian Epicure.

Moreover, home video gives us a means of almost literally "deconstructing" films, helping us remake them to our own ends. Even those who deny their proletarian position by viewing these film/videos in a linear fashion end up, as they change sides or put the VCR in pause, participating in the creation of an


369

alternative text. Videodiscs, as a hybrid medium dedicated to reproducing an experience alien to them, standardize, fragment, commodify, objectify, and segment that experience. You can "wait for it on video," but "it," like Godot, will never arrive, because the discs' high-tech insouciance offers, despite their truckling to the capitalist realities, a revolutionary hope: the destruction of classical cinema.


370

Through the Looking Glasses:
From the Camera Obscura to Video Assist

Jean-Pierre Geuens

figure

The original video assist apparatus put together by Bruce Hill in 1970

Vol. 49, no. 3 (Spring 1996): 16–26.


373

The studio is finally quiet. The actors are restless. The crew is ready. "Sound." "Camera." The slate is taken. A voice calls "Action." A voice? Is this really the director, "with his back to the actors,"[1] looking at the scene on a little video monitor? Isn't the director, at least the solid Hollywood professional of old, supposed to sit just next to the camera, facing the action? What's happening here?

Following the trajectory that led from the old-fashioned parallax viewfinders to the contemporary use of video-assist technology, I will argue that "looking through the camera" is never a transparent activity, that each configuration has distinctive features whose design and implementation resonate beyond the actual use of the device. In his still seminal essay "The Question Concerning Technology,"[2] Martin Heidegger warned us that "technology is no mere means,"[3] that the adoption of a new method of production often expresses more than the simple substitution of one tool by another. In Andrew Feenberg's words, "modern technology is no more neutral than medieval cathedrals or the Great Wall of China; it embodies the values of a particular civilization. . . ."[4] Herbert Marcuse is even more radical. For him, "specific purposes and interests of domination are not foisted upon technology 'subsequently' and from the outside; they enter the very construction of the technical apparatus. Technology is always a historical social project: in it is projected what a society and its ruling interests intend to do with men and things."[5] Thus, as far as the camera is concerned, the very appearance of a novel gizmo could itself be significant of cultural or economic changes that have taken place in the film industry prior to the use of the new technology and, in turn, the actual practice of the supplemental device may help shape a different kind of cinema.

In the first years of cinema, getting access to the image that was to be recorded on film was no easy matter. The early cameras could never provide such necessary information. Indeed, not only the pioneer cameras of the 1890s and the


374

1900s but also the first truly professional cameras used by Hollywood—the Bell and Howell 2709 and the Mitchell Standard Model—had to resort to peepholes, miscellaneous finders, magnifying tubes, swinging lens systems, and rack-over camera bodies to give any information at all about the image produced by the lens.[6] At best, the operators were allowed to survey the scene before or after actually shooting it. Crucially missing from their arsenal was the capability to check on exact framing, focusing, lighting, depth of field, and perspective while filming. Although a lens could be precisely focused on an actor's position ahead of time, what happened during the shot, especially if there was any movement, remained a mystery. The operators, in effect, were shooting blind. As they watched through the parallax viewfinder on the side of the camera, a device that produced but a pallid, lifeless, uninviting substitute for the real thing peeked at seconds earlier, they remained outsiders to what was truly going on inside the apparatus. In a way, the mystery of what happened inside the camera during the shooting acted as a synecdoche for the further magic that would be worked on the film in the lab, where it was to be chemically treated and its content at last exposed to view. Only at the screening of the dailies could one know for sure whether the scene was good or needed to be reshot. Such a daunting situation therefore required steady professional types and, indeed, this is how the "operative cameramen" were described by their peers in the American Society of Cinematographers: "They must be ever on the watch that no unexpected or unplanned action by the players or background changes from the originally planned movement and lighting on the set, occur during shooting. They sit behind the camera, like the engineer at his throttle, ever watching for danger signals."[7] These brave men behind the camera, despite their vigilance, thus stood in a hermeneutic relation to their instrument. The otherness of the machine remained unassailed, its viewing apparatus a numinous, hermetic object standing as a third party between the operator and the world. The best one could do was stand next to the thing, maybe controlling its mishaps or its surges, but, throughout, acknowledging the actual film process as a thorough enigma.

The situation changed in 1936, when the Arnold and Richter Company of Germany introduced continuous reflex viewing with its new Arriflex 35mm camera. The solution was truly elegant: by mirroring the side of the shutter that was facing the lens and tilting it at a 45-degree angle, the light that was not used by the film when the latter was intermittently moving inside the camera was now made available to the operator for viewing purposes. Suddenly, the deficiencies that had marred the early camera systems were eliminated as operators, looking through the lens during the filming, gained maximum control over the images they were shooting. In fact, the smoothness of the Arriflex


375

solution hid a paradox. Even though the operator may believe he or she sees what the film gets, technically speaking one never actually witnesses the same instant of time that is recorded on film because of the fluctuating movement of the shutter—when the operator gets the light, the film does not, and vice versa. More importantly, this means that the access to the lens is punctuated by the blinking presence/absence of the mirrored shutter. In my view, this flickering implies more than a simple technical chink; it radically transforms the linkage between the operator, the camera, and the world by literally embodying the eye within the technology of the apparatus itself.

Indeed, if we go back to the early years of still photography for a moment, there was always a sense of awe when the operator's head finally disappeared under a large black cloth in order to take the picture. "What do you have there: a girlfriend?" a model asked of Michael Powell's protagonist in Peeping Tom (1960), a comment that clearly exposes the prurience of the act. In a similar fashion, on the motion picture set, the view through the reflex viewfinder quickly became fetishized, the actual practice exceeding the useful aspect of checking on the parameters of the scene. Crew hierarchy determined who got to take a peek. Yet the static image one could witness when the camera was at rest had finally little to do with what happened during the real shooting, when the operator alone received the full force of the system. Then the impact was truly stirring; due to the saccadic nature of the shutter's rotation, the effect on the eye was nothing less than phantasmagoric. Because the other eye of the operator remained closed during the filming, the flickering light on the ground glass became thoroughly hypnotic, even addictive.[8] For the time of the shot, with only one eye opened onto the phantastic spectacle on the little screen, the operator was very much lost in another world, a demimonde, a netherworld not unlike a dream screen for the wakened.

It is not so much that the frame provides for the operator a "synoptic center of the film's experience of the world it sees," as Vivian Sobchack has suggested,[9] but that what is being seen and the way it is seen combine in bringing forth a unique experience for the person at the camera. Let us consider for a moment the ramifications of what is actually taking place. A scene is rehearsed, then shot a number of times until the director declares him/herself satisfied. Through it all, the same general actions are performed with little change by the actors (basic gestures are duplicated, more or less the same lines enunciated) and the crew moves in sync with the action—a swinging of the boom that keeps abreast of an actor; a short dolly movement that accompanies an action; a dimming of the light level at the proper moment by an electrician; a change of focus by the camera assistant; and, for the operator, maybe a pan or other small readjustment that keeps the scene within the frame. What we have here, then, is no less than a ritual, a ceremony of sorts that also involves repetition, reenactment, and


376

specific gestures carried out by "practitioners specially trained."[10] On a macro scale, the effect of a ritual is to bond a group, to create a sense of communitas when all participants find themselves sharing an experience. And, characteristically, this is a well-known effect experienced by all in a film crew as the constant repetition of specific actions, performed with only minimal variations, gives each member the sense both of cultic participation in a grand project and of sharing in a larger, collective identity. For the operator who most intimately experiences it all as an eye mesmerized by the spectacle on the little screen, the effect is even more hallucinatory. The sense of time is altered; there is no past or future any more, only a flux, a duration, an endless synchronic moment with actions many times repeated, an epiphany punctuated only by "eternal poses," to use Gilles Deleuze's descriptive words.[11] Because it stands outside mechanical time and physical space, the experience recalls the "oceanic" early moments of life. During that moment, the operator, neither here nor there, stands liminally between two worlds. As he or she merges, to some extent, with the phantom action on the little screen, a communion takes place that integrates the self within an ideal reality. Not surprisingly a certain Ekstase can be reached. The effect then is not unlike that of a trance in a ritual, an experience that also momentarily transforms the individual. No wonder that, after the shot, different members of the crew turn toward the operator and ask: "How was it?"

Others have been sensitive to the reflex feature of the camera for distinct reasons as well. For instance, independent film-makers functioning simultaneously as directors and operators have worked both in fiction and in documentary. Among others, Nina Menkes, Ulrike Ottinger, and Werner Schroeter have always insisted on controlling the camera. In their kind of moviemaking, it makes a lot of sense not just to be present but also to participate in the moment of creation and help deliver the scene through the camera. For Direct Cinema practitioners, however (Richard Leacock, D. A. Pennebaker, and Albert Maysles in the heyday of the movement), the situation is somewhat different. As the subject here belongs not to fiction but to the real world, and the situation, by choice, cannot be rehearsed, there is no question of experiencing a ritual. Instead, the film-maker and the camera seem to merge into one persona that absorbs the scene and responds to it. For the Drew team, for instance, not only is the scene "unscripted, it's unrehearsed . . . for the first time the camera is a man. It sees, it hears, it moves like a man."[12] In other words, through its heartbeat, the pulse of its shutter, the camera now breathes as a human being. And the film-maker, operating like the expert craftsperson of old, carves up the world for the benefit of the viewer, dereifying the structures of daily life, eventually revealing what was either unseen or just obscure moments before. In this case, therefore, the look through the camera functions very much as an example of what Heidegger refers


377

to as techne , the Greek practice of the craftsperson which brings forth poesis through the work. For Heidegger, the decisive factor is not the tool itself but the "unconcealment" of the world that results from its use.[13]

However, because the rigid division of labor in the Hollywood cinema forbids it, the typical director is almost never the person behind the camera (often sitting, instead, just underneath it). For him or her, therefore, nothing really changed in the substitution of the original apparatus by the reflex camera. The director remains exterior to the camera's process. After orchestrating everybody else's actions, the director gauges the results of the take instantly, in vivo , by gut instinct. Precisely because such directors do not look through a viewing screen during the filming, they literally function as metteurs-en-scène: their scene indeed is the stage, the space where fellow beings move about. What they must be sensitive to is the human intercourse at hand, the social space between people, the presence of objects as well as the flesh of the individuals. All the senses of a director are imbricated in this evaluation. Although the scene is shot in pieces and staged to be captured in a certain way on film, the dramatic action has a reality of its own. It is thus experienced by the director as (to borrow another notion from Heidegger) Zuhandenheit , the ready-at-hand, an involvement with the world through technique that actually supersedes the use of the equipment.[14] Expressly because the director is not looking through the camera, the technology associated with directing remains somewhat in the background, only a subordinate accessory. For Heidegger, what is experienced in this fashion is ontologically quite different from what could be observed through Vorhandenheit , the present-at-hand, the contemplation of a decontextualized subject matter. Directors functioning in the traditional mode thus depend mostly on human rather than exclusively cinematic skills: this does not feel right, that timing is a little off, this character would not really do that.

Furthermore, if we listen to Emmanuel Levinas for a moment, when a face-to-face encounter among human beings takes place, the contact involves more than a mere recording of an action by the eyes.[15] It embodies the most fundamental mode of being-in-the-world. A face, for Levinas, expresses the vulnerability of the being; it is an appeal, a call. The face solicits a human contact beyond cold rationality or calculative thinking. Its sheer presence impinges on the other person's autocratic tendencies. In this light, the director's "vision" of the scene becomes compounded by his/her own presence among the actors. Sharing a unique moment of time, the director becomes thoroughly wedded to the players as fellow human beings who carry their load of pain or distress. Can the director in these conditions (to recall well-known cases in our cinema) remain unaware of the wooden leg of one actor even if it remains off-camera? Can the director not respond to the cancer that is eating up this other actor? Even if we abandon these dramatic examples, is it really possible for the director to leave entirely behind the


378

lunch shared with some actor, the conversations that went on, the hopes that were disclosed or the fears that were expressed? To go back to Heidegger, the director here does more than take a look (Sicht ) at the scene professionally; emotion is involved as well (Rücksicht ), a look that involves sympathy, concern, and responsibility. Furthermore, the sharing of a human space and the mutual recognition that takes place between people automatically involve moral claims. One individual temporarily gives something of him/herself to another. Trust matters deeply. Ethics are involved. As a result, the director functions both as a participant in a shared exchange and as a shaman who guides others through a difficult process of shedding off. For such a director, the scene clearly takes place in front of his or her eyes, not behind where the camera is. After the take, the information that originates from the crew is certainly important, but it is purely technical in nature: did the action remain in focus, was the pan smooth, did the mike get in the shot, was the jolt to the dolly noticeable?

A radical departure from this long-standing mode of directing came about as a result of Jerry Lewis's introduction of video as a guide for the director to evaluate the quality of a take. There were of course good reasons for Lewis to do so: this was a logical solution to the problem of the actor/director, who was otherwise unable to check his performance. Buster Keaton would surely have been an ardent practitioner of the new technology. What Lewis did was elegant in its simplicity: he positioned a video camera as close as possible to the film camera, allowing him to view what he had just shot on playback. Although the technology was primitive and the equipment, at the time, heavy and cumbersome, Lewis persevered, and others eventually picked up on the idea. As early as 1968, some motion picture cameras that incorporated plumbicon tubes in the viewfinder (thus splitting the light that normally would go to the operator alone) were used to film a tennis championship in Australia.[16] The next year, videotape playback was used in the film Oliver! (Carol Reed, 1968) to check on the lip sync or the movement of performers. If the tape showed the actor or dancer to be in sync after all, it saved the retake of a difficult and expensive dance number.[17] By most accounts, though, credit for the integration of the video "camera" within the motion picture camera by means of a pellicle (a thin, partial mirror that split the light coming to the operator) goes to Bruce Hill, an engineer/tinkerer who had worked at Fairchild and Mitchell.[18] By 1970, working independently, Hill had modified a Mitchell BNCR and used a one-inch videotape recording and playback system by Ampex. The resulting image could be observed on a 17-inch monitor. A variation of this package was used for the helicopter sequences of The Towering Inferno (John Guillermin and Irwin Allen, 1974), the first time such a device was used by the Hollywood establishment.


379

Not surprisingly, directors shooting commercials were the first to embrace the new technique, for in their work in particular it is very important to check on the exact placement of a product in relation to many other coordinates. With the help of video, minute details could be discussed between representatives of the advertising agency and the technicians. Today, practically all commercial productions use video assist and playback on the set. In contrast, feature directors were distinctly slower in adopting the new apparatus: only 20 percent or so of the productions in the early 1980s used video assist. And although today most do, no more than 40 percent of the shoots bother with a playback system.[19]

On the surface, the use of video assist on the set provided only positive benefits for the director and the crew. For directors, being able to see the picture of the scene being rehearsed meant gaining back some of the control that historically had been lost to operators. For a crew, the advantages could be measured in terms of efficiency. During a shoot, questions keep flying to the operator: is the boom in the shot, where is the frame line, do I need to prop that area, are these people in the shot, how high do I need to light that wall, etc.? A lot of production time is lost as the operator attempts to make clear the parameters of the shot to the gaffer, the assistant director, the boom person, or the prop master. Once video assist becomes available and a large monitor is provided for the various crew members, all they have to do is look at it to answer their own questions. In a similar fashion, the light split itself can be subdivided so as to provide a mini-image to the operator's assistant or the dolly grip. It might be more practical indeed for these technicians to look at an image on a monitor than at the scene itself to decide exactly when to initiate a rack focus or a dolly movement. All of these advantages end up saving time, and thus money, for the production.

There were, however, some technical mishaps that initially limited the appeal of the novel apparatus. The early grievances were mostly concerned with the disappearance of the director, who might be locked in a trailer loaded with equipment and who would communicate with the crew and actors only through a loudspeaker. Helen Hayes, for instance, was heard complaining about such a "disembodied voice" when working on Raid on Entebbe (Marvin Chomsky, 1977). And Garrett Brown grumbled that, when he was shooting the maze scene in The Shining (Stanley Kubrick, 1980), "Stanley mostly remained seated at the video screen, and we sent a wireless image from my camera out to an antenna on a ladder and thence to the recorder,"[20] in effect forcing Brown to go back and forth between the maze and the trailer, quite a distance away, just to find out if the take was good. That problem was eventually worked out when directors were able to use the monitor on the set itself. Another difficulty involved the operator: as the video system taps the light


380

that would normally go to the eye of the camera person, a loss of clarity can be experienced by the operator, in effect making the job more difficult. One reason black-and-white taps have been traditionally preferred over color models is that the former could function with much less light intake compared to the latter. A new color tap though, the CEI Color IV, is said to be almost as economical as the black-and-white models and is thus gaining in popularity. Flickering was another "annoyance" that marred some of the viewing. But there are now new models, such as those factory-installed by Arriflex on its new 535 camera, which incorporate a totally flicker-free tap. Although more traditional directors of photography, such as Haskell Wexler, have indicated their preference for a video image that reproduces the flicker of the motion picture camera, most directors of photography shooting commercials go for the enhanced version, perhaps to soothe the apprehension of clients or agency people who might wonder about the misfiring on the monitor.[21] A fourth difficulty concerned the matching of the image received on the monitor to specific aspect ratios when shooting wide-screen or when using an anamorphic lens. Here the solutions could be makeshift in nature (paper tape can be applied directly on the monitor so as to delimit the 1.85:1 aspect ratio), or electronic (monitors can now switch easily from a squeezed to an unsqueezed image). Finally, using videotape playback after each take may slow down the impetus of the crew because it interrupts everyone's activity—a situation that has limited the use of that particular technique. It might indeed be cheaper to redo a shot immediately than to break the momentum of the cast and crew. For this reason, videotape playback, when used at all, is looked at only after several takes have been shot so as to minimize the disruption.

Moving now from a technical to a cultural evaluation of video assist, we focus on its similarities to the camera obscura, a tool used by many painters in the seventeenth century to replace or supplement their own human viewpoint. Significantly, in both machines, the observer (the painter or the director) no longer confronts the world directly but looks instead at an image formed through an optical contraption. In other words, a mediation is taking place. If the technology remains transparent to its user, he or she, in the words of Svetlana Alpers, "is seen attending not to the world and its replication in [an] image, but to . . . the quirks of [a] device."[22] In his analysis of Vermeer's work, Daniel A. Fink has pointed to a number of optical phenomena directly related to the use of a camera obscura.[23] They are all consequential for the image being produced. For example, whereas in daily life the eye continually refocuses as it engages objects located at different distances, the camera obscura equipped with a lens forces the operator to view the scene through a single plane of focus, in effect making some objects sharper than others. Likewise,


381

figure

Margo, with Burgess Meredith, looking through the camera
(Winterset , 1936)

whereas Vermeer's contemporaries represented relatively large and sharp mirror images of objects, very much like the eyes would see them, Vermeer's own mirror reflections are comparatively small and slightly out of focus, as they would appear through a lens focused on a different plane. All in all, Fink points out ten such "distortions" introduced by the instrument used by Vermeer.

In the same manner, today, the limitations of video keep interfering with the work of directors of photography because of the differences between what is seen on the monitor and what will be in fact recorded on film. The main culprits here are the lack of resolution of the video image and the fact that its contrast ratio does not match that of the film stock. Shadow detail, for instance, does not show up on the monitor, a situation that inevitably creates doubt about the handling of the lighting scheme. For the same reason, directors have been known to complain when low light levels simply make it too dark for them to see the expressions of the players on the monitor. And, when using color, everyone frets about the differences between the colors on the set, those on the monitor, and those that will show up on film. In addition, directors of photography have noted that the usual size of the monitor (typically a 9-inch set) used by the director may also make it less likely that action will take place in the background in a long shot or even on the sides of the frame, as the miniaturized or peripheral action would not play well on such small screens.


382

The action therefore often ends up enlarged and more centered. Beyond this, if the movie is going to be cut digitally, it makes little sense for the production to pay for regular film dailies. As a result, the director will not be aware of the large-screen effect of the film until it is prepared for release in the theaters: a definite drawback. Lastly, the fact that the scene is observed through video technology as opposed to film may have consequences of its own. Film images' fascinatingly rich appearance originates in the random distribution of the silver molecules on the film surface. Each individual frame in effect configures the subject slightly differently. When played back, the scene is reconstructed twenty-four times per second, bringing forth more "livingness" to the eye of the spectator than any single frame could provide on its own. In contrast to this, as Vivian Sobchack describes it, "electronic technology atomizes and abstractly schematizes the analogic quality of the photographic and cinematic into discrete pixels and bits of information that are then transmitted serially . . .,"[24] a design responsible for the "sameness" of the electronic image. In other words, a picture so constituted may not prompt the kind of investment associated with the older technology. And this in turn may produce a viewing situation for the director that demands quick renewal and change, shorter scenes, a point of view that Charles Eidsvik has described as "glance esthetics" in lieu of the older, more traditional "gaze esthetics."[25]

Looked at another way, employee relations on the set have also gone through a subtle restructuring. The operator is no longer the sole source of vision. Someone is now watching over the very guardian of the sight. The situation is not unlike a contemporary version of Taylorism, where work is carefully meted out into distinct components that can be precisely measured through scientific management techniques. Early in the century, for example, Frank Gilbreth, a disciple of Frederick Taylor, determined through the use of photographs a bricklayer's ideal working position. He then attempted to enforce this position on other bricklayers, thus hoping to eliminate minor but wasteful divergences from the more effective stand.[26] However, as work is rationalized and systematized, a subtle de-skilling of the worker's craft occurs. In fact, the craft is no longer trusted at face value; it is verified through technology until it matches very precisely the demands of management. Andrew Feenberg put it this way: whereas earlier "the craftsman possessed the knowledge required for his work as subjective capacity . . . mechanization transforms this knowledge into an objective power owned by another."[27] On the set, then, the camera operator ceases to function as an independent agent who is counted on to execute a difficult move. He/she becomes merely the mechanical arm of the director. The operator, having lost some of the creativity associated with his/her own work, is thus transformed into a semiautomaton. The change eliminates the trust in someone's craft. It reinforces the industrial aspect of film-making, the manufacturing


383

of a marketable commodity where the picture represents the surplus value of the labor performed by the operator.

Another characteristic shared by the camera obscura and video assist is the apparent objectivity and finality of the image they provide. Because the scene was captured by an optical device, the camera obscura's picture was thought to be necessarily truer to the model than that obtained through traditional human effort. In a similar fashion, the contemporary film director imagines gaining access to the truth of the scene when he or she abandons the actors and watches the take, no longer face-to-face from underneath the camera but indirectly on the monitor. After all, isn't this image the very picture that is being simultaneously recorded on film, the one that will be seen later by the viewers? As Jonathan Crary puts it, in each situation "the observer . . . is there as a disembodied witness to a mechanical and transcendental re-presentation of the objectivity of the world."[28] As a result, the camera obscura and video assist can be said to incorporate within their machinery the Cartesian ideal of the partition between pure body sensations and the mind, with the latter, the true self, inspecting the observations gleaned by the senses. Paul Ricoeur best described this mode of thinking when he called it "a vision of the world in which the whole of objectivity is spread out like a spectacle on which the cogito casts its sovereign gaze."[29] What is at stake here is the authority of an ideal observer, removed from the scene, someone who is no longer operating as a body-in-the-world sharing a space/time continuum with the actors. The latter, instead, are objectified, appropriated for the director's use. As it plucks the scene out of that common, human context, video assist fragments the total experience specific to the traditional directing mode. What takes place in fact duplicates the calculative thinking of the traditional scientific experiment that first sets measurable goals for itself, then authenticates their presence in an ensuing test, thus "guaranteeing the certainty and the exactness"[30] of the project as a whole. Similarly, the contemporary director ends up verifying on the monitor what he/she expects to find there in the first place. The attention, in other words, is on what Heidegger called Vorhandenheit , the foregrounding of technology, of the actual, of what has been worked out during the rehearsals, at the expense of the film still as a project (his notion of Zuhandenheit ), a potential, something not quite yet there, something that remains a becoming, that is still in flux. The present-at-hand, what is already there, takes precedence over what is still outstanding, what could still be created. A metaphysics of presence-through-the-image in effect dominates the day.

What I am suggesting here is that getting access to the image is not an automatic panacea for the director. To illustrate this point, let us look at two films produced and directed by Francis Ford Coppola. On the one hand,


384

figure

Conrad Hall behind the camera
(Tell Them Willie Boy Is Here , 1969)

Apocalypse Now (1979) emerged from complete chaos and three years of shooting—perhaps the ultimate example of "how not to make a film"—as a masterpiece. On the other hand, One from the Heart (1982) was conceived most rationally with the help of the latest electronic wizardry available at the time. From the very beginning of the production, an audiotape of the actors' read-through of the script was combined with storyboard images and temporary music to help the pre-visualization of the film as a whole. Polaroids of the actors' early rehearsals then replaced the drawings, followed by videotapes of the scenes shot on location in Las Vegas. As a result, long before a single foot of film was actually shot, "the whole movie could be seen at any time,"[31] by anyone involved in the film. Furthermore, when the film was ultimately shot in a Hollywood studio, Coppola could watch each take with music and sound effects. And he "was able, at the beginning of each production day, to view an edited version of the previous day's shooting, complete with music and sound effects."[32] The idea was to be able to handle immediately any kink in a scene, any difficulty with the pacing within or between scenes. As each segment of the project could be looked at, analyzed, dissected, film-making in effect became a totally rational enterprise, with the director-engineer at the helm calculating, quantifying, mastering the impact of each and every effect. This total involvement with the ever-present image, the absolute elimination of the mystery of shooting, should have produced the most successful film ever. What Coppola forgot, though, in his all-out effort at demagicizing the film process,


385

figure

James Cameron looking at the video assist
(True Lies , 1994)

is that, paraphrasing Maurice Merleau-Ponty, the director's "vision is not a view upon the 'outside,' a merely physical-optical relation with the world."[33] Just as a poem cannot be said to exist in its words per se, a film does not reside solely in an image that can be observed on a little screen. During the shooting, it remains instead a becoming, an opening, a possibility that may or may not be realized later on. A film, in other words (still paraphrasing Merleau-Ponty), is as much in the "intervals" between images as it is in the pictures themselves.[34] Fleeing the mystery of creation, the challenges and the claims involved in a face-to-face transaction, the contemporary director thus functions as a distant subject who masters and objectifies others through the supremacy of technique. The lingering of the body in time and space has been replaced by what Nietzsche called an Apollonian frenzy with the eye.[35] The mise-en-scène has turned into mere mise-en-image, a soulless play of isolated, context-free commodities.

Technology must never be accepted at face value. It is not because the science is there that the invention or the use of a machine will automatically follow suit. And not all novel techniques are successfully adopted by the practitioners in the field. One cannot, David F. Noble reminds us, ever simply state that the best existing technology is automatically being used at all times. Instead, we must always replace that assumption with more probing questions: "The best


386

technology? Best for whom? Best for what? Best according to what criteria, what visions, according to whose criteria, whose visions?"[36] Hence, insofar as video assist is concerned, what are the historical conditions that made its use so widespread? When Jerry Lewis used it, there was little interest on the part of other directors in emulating him. Yet, not so many years afterward, in remarkable unison, most American directors ended up adopting his method. What happened in between that brought about this radical change? The answer lies in what Charles Eidsvik has called "the film industry's defensive maneuvers of the 1970s and the 1980s,"[37] when, to repel the thrust of both television as a competing source of entertainment and videotape as a contending recording medium, changes were made in the kind of cinema that was produced. The writing in effect pushed the plots "into areas in which video could not compete well. . . ."[38] Practically speaking, this meant that the small movies, the psychological films, the non-action pictures, were abandoned to television. Conversely, the theatrical experience was redefined as the larger-than-life action spectacle. Although Eidsvik identifies location film-making as the main beneficiary of these changes, location per se did not prove itself enough of a draw to sell the real movie in the theater over the TV movie of the week. More was needed, and camera pyrotechnics were quickly enlisted to divert and bedazzle the spectator's eyes. These technological advances, however, also eroded the traditional control of the director on the set. First, the very mobility of the Steadicam created a dilemma for the director.[39] What was he or she supposed to do: run after the Steadicam operator or remain ineffectually behind? Second, the Louma crane isolated the camera at the extreme end of its reach, all the time maneuvered from afar by an operator working at a console. Third, cable contraptions of one type or another followed, flying the camera far above the scene. Fourth, helicopters equipped with gyrostabilized systems further extended the reach of the apparatus. Finally, the ease of digital technology pushed film-making toward ever more complex and demanding composite images. All in all, as the "scene" became less and less accessible, directors had no choice but to look at a remote image on a video monitor.

"In choosing our technology," Feenberg suggests, "we become what we are, which in turn shapes our future choices."[40] And so it is that a scene that required an improbable camera position would also benefit from graphic action and various kinds of pyrotechnics, traditional or otherwise—all situations that incidentally also showed up best on the monitors. In other words, whereas it is unlikely that the cinema of Ingmar Bergman would have significantly benefited from the use of video assist, that of Jim Cameron or Robert Zemeckis makes little sense without it. In this type of film-making, in fact, the device itself is no more than an advanced representative of other, more intensive technologies that will later on


387

enhance the surface appeal of the film in postproduction: digital-image processing, digital editing, digital sound enhancing, etc. And in turn this superior technology, this dazzling maneuverability, this extraordinary display of breathtaking technique is widely advertised, thus staking new claims for the global hegemony of Hollywood. The fire power of the contemporary American film may be less physically destructive than that of the old gunboat, but it nevertheless forces its superiority on the technologically backward national cinemas of Europe, Asia, and elsewhere, threatening their very survival.

For the new American director, however, success speaks for itself and money speaks best of all. Hence, no rejection of the power of technology should be expected. Needed or not, video assist is here to stay, not because it is necessarily the best tool for the job, but because, more than ever, we implicitly trust a machine more than ourselves to tell us about the world. As the objectification of the world through the domination of technique is pushed one notch further, the cogito of old can be said to have been given a more contemporary twist: video, ergo est .


388

True Lies:
Perceptual Realism, Digital Images, and Film Theory

Stephen Prince

figure

Digital compositing in  Forrest Gump

Vol. 49, no. 3 (Spring 1996): 27–37.

Thanks to Carl Plantinga and Mark J. P. Wolf for their helpful suggestions on an early version of this paper.


393

Digital imaging technologies are rapidly transforming nearly all phases of contemporary film production. Film-makers today storyboard, shoot, and edit their films in conjunction with the computer manipulation of images. For the general public, the most visible application of these technologies lies in the new wave of computer-generated and -enhanced special effects that are producing images—the watery creature in The Abyss (1989) or the shimmering, shape-shifting Terminator 2 (1991)—unlike any seen previously.

The rapid nature of these changes is creating problems for film theory. Because the digital manipulation of images is so novel and the creative possibilities it offers are so unprecedented, its effects on cinematic representation and the viewer's response are poorly understood. Film theory has not yet come to terms with these issues. What are the implications of computer-generated imagery for representation in cinema, particularly for concepts of photographically based realism? How might theory adapt to an era of digital imaging?

Initial applications of special-effects digital imaging in feature films began more than a decade ago in productions like Tron (1982), Star Trek II: The Wrath of Khan (1982), and The Last Starfighter (1984). The higher-profile successes of Terminator 2, Jurassic Park (1993), and Forrest Gump (1994), however, dramatically demonstrated the creative and remunerative possibilities of computer-generated imagery (CGI).

Currently, two broad categories of digital imaging exist. Digital-image processing covers applications like removing unwanted elements from the frame—hiding the wires supporting the stunt performers in Cliffhanger (1993), or erasing the Harrier jet from shots in True Lies (1994) where it accidentally appears. CGI proper refers to building models and animating them in the computer. Don Shay, editor of Cinefex , a journal that tracks and discusses special-effects work in cinema, emphasizes these distinctions between the categories.[1]

As a consequence of digital imaging, Forrest Gump viewers saw photographic images of actor Gary Sinise, playing Gump's amputee friend and fellow Vietnam veteran, being lifted by a nurse from a hospital bed and carried, legless, through three-dimensional space. The film viewer is startled to realize that the representation does not depend on such old-fashioned methods as tucking or tying the actor's limbs behind his body and concealing this with a loose-fitting costume. Instead, Sinise's legs had been digitally erased from the shot by computer.

Elsewhere in the same film, viewers saw photographic images of President Kennedy speaking to actor Tom Hanks, with dialogue scripted by the film's writers. In the most widely publicized applications of CGI, viewers of Steven Spielberg's Jurassic Park watched photographic images of moving, breathing, and chomping dinosaurs, images which have no basis in any photographable reality but which nevertheless seemed realistic. In what follows, I will be assuming that viewers routinely make assessments about the perceived realism of a film's images or characters, even when these are obviously fictionalized or otherwise impossible. Spielberg's dinosaurs made such a huge impact on viewers in part because they seemed far more life-like than the miniature models and stop-motion animation of previous generations of film.

The obvious paradox here—creating credible photographic images of things which cannot be photographed—and the computer-imaging capabilities which lie behind it challenge some of the traditional assumptions about realism and the cinema which are embodied in film theory. This essay first explores the challenge posed by CGI to photographically based notions of cinematic realism. Next, it examines some of the problems and challenges of creating computer imagery in motion pictures by drawing on interviews with computer-imaging artists. Finally, it develops an alternate model, based on perceptual and social correspondences, of how the cinema communicates and is intelligible to viewers. This model may produce a better integration of the tensions between realism and formalism in film theory. As we will see, theory has construed realism solely as a matter of reference rather than as a matter of perception as well. It has neglected what I will term in this essay "perceptual realism." This neglect has prevented theory from understanding some of the fundamental ways in which cinema works and is judged credible by viewers.

Assumptions about realism in the cinema are frequently tied to concepts of indexicality prevailing between the photographic image and its referent. These, in turn, constitute part of the bifurcation between realism and formalism in film theory. In order to understand how theories about the nature of cinematic images may change in the era of digital-imaging practices, this bifurcation and these notions of an indexically based film realism need to be examined.

This approach to film realism—and it is, perhaps, the most basic theoretical understanding of film realism—is rooted in the view that photographic images, unlike paintings or line drawings, are indexical signs: they are causally or existentially connected to their referents. Charles S. Peirce, who devised the triadic model of indexical, iconic, and symbolic signs, noted that "Photographs, especially instantaneous photographs, are very instructive, because we know that in certain respects they are exactly like the objects they represent . . . they . . . correspond point by point to nature. In that respect then, they belong to the second class of signs, those by physical connection."[2]

In his analysis of photography, Roland Barthes noted that photographs, unlike every other type of image, can never be divorced from their referents. Photograph and referent "are glued together."[3] For Barthes, photographs are causally connected to their referents. The former testifies to the presence of the latter. "I call 'photographic referent' not the optionally real thing to which an image or sign refers but the necessarily real thing which has been placed before the lens without which there would be no photograph."[4] For Barthes, "Every photograph is a certificate of presence."[5]

Because cinema is a photographic medium, theorists of cinema developed concepts of realism in connection with the indexical status of the photographic sign. Most famously, André Bazin based his realist aesthetic on what he regarded as the "objective" nature of photography, which bears the mechanical trace of its referents. In a well-known passage, he wrote, "The photographic image is the object itself, the object freed from the conditions of time and space which govern it. No matter how fuzzy, distorted, or discolored, no matter how lacking in documentary value the image may be, it shares, by virtue of the very process of its becoming, the being of the model of which it is the reproduction; it is the model."[6]

Other important theorists of film realism emphasized the essential attribute cinema shares with photography of being a recording medium. Siegfried Kracauer noted that his theory of cinema, which he subtitled "the redemption of physical reality," "rests upon the assumption that film is essentially an extension of photography and therefore shares with that medium a marked affinity for the visible world around us. Films come into their own when they record and reveal physical reality."[7] Like Bazin, Stanley Cavell emphasized that cinema is the screening or projection of reality because of the way that photography, whether still or in motion, mechanically (that is, automatically) reproduces the world before the lens.[8]

For reasons that are alternately obvious and subtle, digital imaging in its dual modes of image processing and CGI challenges indexically based notions of photographic realism. As Bill Nichols has noted, a digitally designed or created image can be subject to infinite manipulation.[9] Its reality is a function of complex algorithms stored in computer memory rather than a necessary mechanical resemblance to a referent. In cases like the slithery underwater creature in James Cameron's The Abyss , which began as a wireframe model in the computer, no profilmic referent existed to ground the indexicality of its image. Nevertheless, digital imaging can anchor pictured objects, like this watery creature, in apparent photographic reality by employing realistic lighting (shadows, highlights, reflections) and surface texture detail (the creature's rippling responses to the touch of one of the film's live actors). At the same time, digital imaging can bend, twist, stretch, and contort physical objects in cartoonlike ways that mock indexicalized referentiality. In an Exxon ad, an automobile morphs into a tiger, and in a spot for Listerine, the CGI bottle of mouthwash jiggles, expands, and contracts in an excited display of enthusiasm for its new formula.[10]

figure

Jurassic Park:  Not the real T. Rex

In these obvious ways, digital imaging operates according to a different ontology than do indexical photographs. But in less obvious ways, as well, digital imaging can depart from photographically coded realism. Objects can be co-present in computer space but not in the physical 3-D space which photography records. When computer-animated objects move around in a simulated space, they can intersect one another. This is one reason why computer animators start with wireframe models which they can rotate and see through in order to determine whether the model is intersecting other points in the simulated space. Computer-simulated environments, therefore, have to be programmed to deal with the issues of collision detection and collision response.[11]
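
The collision bookkeeping described above can be illustrated with a minimal Python sketch; the class names, radii, and time step are illustrative inventions, not the software used on any production discussed here. Each animated body is reduced to a bounding sphere, every pair is tested for interpenetration after each simulation step, and overlapping bodies are pushed apart as a rudimentary collision response.

    import math

    class Body:
        """A crude stand-in for an animated creature: a bounding sphere with velocity."""
        def __init__(self, x, y, z, radius, vx=0.0, vy=0.0, vz=0.0):
            self.pos = [x, y, z]
            self.vel = [vx, vy, vz]
            self.radius = radius

    def detect_collision(a, b):
        """Collision detection: do the two bounding spheres interpenetrate?"""
        return math.dist(a.pos, b.pos) < a.radius + b.radius

    def respond(a, b):
        """Collision response: push the spheres apart along the line between their centers."""
        dist = math.dist(a.pos, b.pos) or 1e-9
        overlap = (a.radius + b.radius) - dist
        if overlap <= 0:
            return
        # Unit vector from a toward b; move each body half the overlap apart.
        direction = [(bb - aa) / dist for aa, bb in zip(a.pos, b.pos)]
        for i in range(3):
            a.pos[i] -= direction[i] * overlap / 2
            b.pos[i] += direction[i] * overlap / 2

    # One simulation step for a tiny "herd" of two bodies.
    herd = [Body(0, 0, 0, 1.0, vx=1.0), Body(1.5, 0, 0, 1.0, vx=-1.0)]
    for body in herd:
        body.pos = [p + v * 0.04 for p, v in zip(body.pos, body.vel)]  # advance ~1/24 s
    if detect_collision(*herd):
        respond(*herd)  # keep the creatures from passing through one another

Production systems use far more elaborate geometry, but the two steps the passage names, detecting the overlap and then simulating a response to it, are the same.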



The animators who created the herd of gallimimus that chases actor Sam Neill and two children in Jurassic Park were careful to animate the twenty-four gallis so they would look like they might collide and were reacting to that possibility.[12] First, they had to ensure that no gallis actually did pass into and through one another, and then they had to simulate the collision responses in the creatures' behaviors as if they were corporeal beings subject to Newtonian space.

In other subtle ways, digital imaging can fail to perform Kracauer's redemption of physical reality. Lights simulated in the computer don't need sources, and shadows can be painted in irrespective of the position of existing lights. Lighting, which in photography is responsible for creating the exposure and the resulting image, is, for computer images, strictly a matter of painting, of changing the brightness and coloration of individual pixels. As a result, lighting in computer imagery need not obey the rather fixed and rigid physical conditions which must prevail in order for photographs to be created.
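
A minimal sketch may make the point concrete; it assumes nothing about any studio's tools, only that a digitized frame is an array of pixel values. "Lighting" here is literally arithmetic on those values: one rectangle is brightened into a highlight and another darkened into a shadow, with no light source computed anywhere.

    import numpy as np

    # A stand-in for a digitized frame: height x width x RGB, values between 0.0 and 1.0.
    frame = np.full((480, 640, 3), 0.5)

    def paint_region(img, top, left, height, width, gain):
        """'Lighting' as pixel painting: scale brightness in a rectangle; no source is modelled."""
        region = img[top:top + height, left:left + width]
        img[top:top + height, left:left + width] = np.clip(region * gain, 0.0, 1.0)

    paint_region(frame, 100, 200, 80, 120, gain=1.6)  # a painted-in highlight
    paint_region(frame, 300, 150, 60, 220, gain=0.4)  # a shadow cast by nothing at all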

One of the more spectacular digital images in True Lies is a long shot of a chateau nestled beside a lake and surrounded by the Swiss Alps. The image is a digital composite, blending a mansion from Newport, Rhode Island, water shot in Nevada, and a digital matte painting of the Alps.[13] The compositing was done by Digital Domain, a state-of-the-art effects house created by the film's director, James Cameron. The shot is visually stunning—crisply resolved, richly saturated with color, and brightly illuminated across Alps, lake, and chateau.

Kevin Mack, a digital effects supervisor at Digital Domain who worked on True Lies as well as Interview with the Vampire , points out that the image is unnaturally luminant.[14] Too much light is distributed across the shot. If a photographer exposed for the lights in the chateau, the Alps would film too dark, and, conversely, if one exposed for the Alps in, say, bright moonlight, the lights in the chateau would burn out. The chateau and the Alps could not be lit so they'd both expose as brightly as they do in the image. Mack points out that the painted light effects in the shot are a digital manipulation so subtle that most viewers probably do not notice the trickery.

Like lighting, the rendering of motion can be accomplished by computer painting. President Kennedy speaking to Tom Hanks in Forrest Gump resulted from two-dimensional painting, made to look like 3-D, according to Pat Byrne, Technical Director at Post Effects, a Chicago effects house that specializes in digital imaging.[15] The archival footage of Kennedy, once digitized, was repainted with the proper phonetic mouth movements to match the scripted dialogue and with highlights on his face to simulate the corresponding jaw and muscle changes. Morphs were used to smooth out the different painted configurations of mouth and face.[16]
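
In outline, and only in outline, the workflow described above can be sketched as a mapping from scripted phonemes to keyed mouth shapes, with interpolation ("morphs") filling the frames between keys. The phoneme table and the single "openness" parameter below are hypothetical simplifications; the actual work involved repainting many facial features per frame.

    # Hypothetical sketch of phoneme-driven mouth animation with morphing between key shapes.
    # Mouth shapes are reduced to a single "openness" value per phoneme.

    VISEMES = {"AA": 0.9, "EE": 0.5, "OO": 0.6, "M": 0.05, "S": 0.2, "REST": 0.1}

    def keyframes_for_line(phonemes, frames_per_phoneme=4):
        """Assign each scripted phoneme a target mouth shape at a given frame."""
        return [(i * frames_per_phoneme, VISEMES.get(p, VISEMES["REST"]))
                for i, p in enumerate(phonemes)]

    def morph(keys):
        """Linearly interpolate ('morph') mouth openness between successive keyed shapes."""
        frames = []
        for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
            for f in range(f0, f1):
                t = (f - f0) / (f1 - f0)
                frames.append(v0 + t * (v1 - v0))
        frames.append(keys[-1][1])
        return frames

    # A toy phoneme string standing in for a line of scripted dialogue.
    mouth_track = morph(keyframes_for_line(["AA", "M", "AA", "REST"]))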

When animating motion via computer, special adjustments must be made precisely because of the differences between photographically captured reality and the synthetic realities engineered with CGI. Credible computer animation requires the addition of motion blur to simulate the look of a photographic image. The ping-pong ball swatted around by Forrest Gump and his Chinese opponents was animated on the computer from a digitally scanned photographic model of a ping-pong ball and was subsequently composited into the live-action footage of the game (the game itself was shot without any ball). The CGI ball seemed credible because, among other reasons, the animators were careful to add motion blur, which a real, rapidly moving object passing in front of a camera will possess (as seen by the camera which freezes the action as a series of still frames), but which a key-framed, computer-animated object does not.
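
A standard way to supply that blur, sketched here in deliberately toy form, is temporal supersampling: render the moving object at several instants while the virtual shutter is "open" and average the results, so the object smears across the frame as a filmed object would. The one-dimensional "image," shutter interval, and velocity below are illustrative only.

    import numpy as np

    def render(ball_x, width=32, radius=1.5):
        """Stand-in renderer: a 1-D 'image' in which pixels covered by the ball are lit."""
        xs = np.arange(width)
        return (np.abs(xs - ball_x) <= radius).astype(float)

    def motion_blurred_frame(x_start, velocity, shutter_open=1 / 48, samples=16):
        """Average sub-frame renders across the interval the virtual shutter is open,
        so a fast-moving CG object smears the way a filmed object would."""
        accum = np.zeros(32)
        for i in range(samples):
            t = (i + 0.5) / samples * shutter_open   # instants within the open-shutter interval
            accum += render(x_start + velocity * t)  # the object keeps moving between samples
        return accum / samples                       # pixel values between 0 and 1: the blur streak

    # A ball stand-in moving 200 pixels per second leaves a streak rather than a crisp disc.
    frame = motion_blurred_frame(x_start=5.0, velocity=200.0)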

In these ways, both macro and micro, digital imaging possesses a flexibility that frees it from the indexicality of photography's relationship with its referent.[17] Does this mean, then, that digital-imaging capabilities ought not to be grouped under the rubric of a realist film theory? If not, what are the alternatives? What kind of realism, if any, do these images possess?

In traditional film theory, only one alternative is available: the perspective formulated in opposition to the positions staked out by realists like Kracauer, Bazin, and Cavell. This position, which might be termed the formalist outlook, stresses cinema's capacity for reorganizing, and even countering and falsifying, physical reality. Early exponents of such a position include Rudolf Arnheim, Dziga Vertov, and Sergei Eisenstein. In his discussion of classical film theory, Noël Carroll has pointed out this bifurcation between the camps of realism and formalism and linked it to an essentializing tendency within theory, a predilection of theorists to focus on either the cinema's capability to photographically copy physical reality or to stylistically transcend that reality.[18]

This tension in classical theory between stressing the ways film either records or reorganizes profilmic reality continues in contemporary theory, with the classical formalist emphasis upon the artificiality of cinema structure being absorbed into theories of the apparatus, of psychoanalysis, or of ideology as applied to the cinema. In these cases, cinematic realism is seen as an effect produced by the apparatus or by spectators positioned within the Lacanian Imaginary. Cinematic realism is viewed as a discourse coded for transparency such that the indexicality of photographic realism is replaced by a view of the "reality-effect" produced by codes and discourse. Jean-Louis Baudry suggests that "Between 'objective reality' and the camera, site of inscription, and between the inscription and the projection are situated certain operations, a work which has as its result a finished product."[19] Writing about the principles of realism, Colin MacCabe stresses that film is "constituted by a set of discourses which . . . produce a certain reality."[20]

Summarizing these views, Dudley Andrew explains, "The discovery that resemblance is coded and therefore learned was a tremendous and hard-won victory for semiotics over those upholding a notion of naive perception in cinema."[21] Where classical film theory was organized by a dichotomy between realism and formalism, contemporary theory has preserved the dichotomy even while recasting one set of its terms. Today, indexically based notions of cinema realism exist in tension with a semiotic view of the cinema as discourse and of realism as one discourse among others.

In some of the ways just discussed, digital imaging is inconsistent with indexically based notions of film realism. Given the tensions in contemporary film theory, should we then conclude that digital-imaging technologies are necessarily illusionistic, that they construct a reality-effect which is merely discursive? They do, in fact, permit film artists to create synthetic realities that can look just like photographic realities. As Pat Byrne noted, "The line between real and not-real will become more and more blurred."[22] How should we understand digital imaging in theory? How should we build theory around it? When faced with digitized images, will we need to discard entirely notions of realism in the cinema?

The tensions within film theory can be surmounted by avoiding an essentializing conception of the cinema stressing unique, fundamental properties[23] and by employing, in place of indexically based notions of film realism, a correspondence-based model of cinematic representation. Such a model will enable us to talk and think about both photographic images and computer-generated images and about the ways that cinema can create images that seem alternately real and unreal. To develop this approach, it will be necessary to indicate, first, what is meant by a correspondence-based model and, then, how digital imaging fits within it.

An extensive body of evidence indicates the many ways in which film spectatorship builds on correspondences between selected features of the cinematic display and a viewer's real-world visual and social experience.[24] These include iconic and noniconic visual and social cues which are structured into cinematic images in ways that facilitate comprehension and invite interpretation and evaluation by viewers based on the salience of represented cues or patterned deviations from them. At a visual level, these cues include the ways that photographic images and edited sequences are isomorphic with their corresponding real-world displays (e.g., through replication of edge and contour information and of monocular distance codes; in the case of moving pictures, replication of motion parallax; and in the case of continuity editing, the creation of a screen geography with coherent coordinates through the projective geometry of successive camera positions). Under such conditions, empirical evidence indicates that naive viewers readily recognize experientially familiar pictured objects and can comprehend filmed sequences, and that continuity editing enhances such comprehension.[25]



At the level of social experience, the evidence indicates that viewers draw from a common stock of moral constructs and interpersonal cues and percepts when evaluating both people in real life and represented characters in the media. Socially derived assumptions about motive, intent, and proper role-based behavior are employed when responding to real and media-based personalities and behavior.[26] As communication scholars Elizabeth Perse and Rebecca Rubin have pointed out, "'people' constitutes a construct domain that may be sufficiently permeable to include both interpersonal and [media] contexts."[27]

Recognizing that cinematic representation operates significantly, though not exclusively, in terms of structured correspondences between the audiovisual display and a viewer's extra-filmic visual and social experience enables us to ask about the range of cues or correspondences within the image or film, how they are structured, and the ways a given film patterns its represented fictionalized reality around these cues. What kind of transformations does a given film carry out upon the correspondences it employs with viewers' visual and social experience? Attributions of realism, or the lack thereof, by viewers will inhere in the ways these correspondences are structured into and/or transformed by the image and film. Instead of asking whether a film is realistic or formalistic, we can ask about the kinds of linkages that connect the represented fictionalized reality of a given film to the visual and social coordinates of our own three-dimensional world, and this can be done for both "realist" and "fantasy" films alike. Such a focus need not reinstate indexicality as the ground of realism, since it can emphasize falsified correspondences and transformation of cues. Nor need such a focus turn everything about the cinema back into discourse, into an arbitrarily coded reorganization of experience. As we will see, even unreal images can be perceptually realistic. Unreal images are those which are referentially fictional. The Terminator is a represented fictional character that lacks reference to any category of being existing outside the fiction. Spielberg's dinosaurs obviously refer to creatures that once existed, but as moving photographic images they are referentially fictional. No dinosaurs now live which could be filmed doing things the fictionalized creatures do in Jurassic Park . By contrast, referentially realistic images bear indexical and iconic homologies with their referents. They resemble the referent, which, in turn, stands in a causal, existential relationship to the image.[28]

A perceptually realistic image is one which structurally corresponds to the viewer's audiovisual experience of three-dimensional space. Perceptually realistic images correspond to this experience because film-makers build them to do so. Such images display a nested hierarchy of cues which organize the display of light, color, texture, movement, and sound in ways that correspond with the viewer's own understanding of these phenomena in daily life.



figure

Forrest Gump:  Computer-generated crowd

Perceptual realism, therefore, designates a relationship between the image or film and the spectator, and it can encompass both unreal images and those which are referentially realistic. Because of this, unreal images may be referentially fictional but perceptually realistic.

We should now return to, and connect this discussion back to, the issue of digital imaging. When lighting a scene becomes a matter of painting pixels, and capturing movement is a function of employing the correct algorithms for mass, inertia, torque, and speed (with the appropriate motion blur added as part of the mix), indexical referencing is no longer required for the appearance of photographic realism in the digital image. Instead, Gump's ping-pong ball and Spielberg's dinosaurs look like convincing photographic realities because of the complex sets of perceptual correspondences that have been built into these images. These correspondences, which anchor the computer-generated image in apparent three-dimensional space, routinely include such variables as surface texture, color, light, shadow, reflectance, motion speed and direction.

Embedding or compositing computer imagery into live action, as occurs when Tom Hanks as Gump "hits" the CG ping-pong ball or when Sam Neill is "chased" by the CG gallimimus herd, requires matching both environments. The physical properties and coordinates of the computer-generated scene components must be made to correspond with those of the live-action scene. Doing this requires precise and time-consuming creation and manipulation of multiple 3-D perceptual cues. Kevin Mack, at Digital Domain, and Chris Voellmann, a digital modeller and animator at Century III Universal Studios, point out that light, texture, and movement are among the most important cues to be manipulated in order to create a synthetic reality that looks as real as possible.[29]

To simulate light properties that match both environments, a digital animator may employ scan-line algorithms that calculate pixel coloration one scan line at a time, ray tracing methods that calculate the passage of light rays through a modelled environment, or radiosity formulations that can account for diffuse, indirect illumination by analyzing the energy transfer between surfaces.[30] Such techniques enable a successful rendering[31] of perceptual information that can work to match live-action and computer environments and lend credence and a sense of reality to the composited image such that its computerized components seem to fulfill the indexicalized conditions of photographic realism. When the velociraptors hunt the children inside the park's kitchen in the climax of Jurassic Park , the film's viewer sees their movements reflected on the gleaming metal surfaces of tables and cookware. These reflections anchor the creatures inside Cartesian space and perceptual reality and provide a bridge between live-action and computer-generated environments. In the opening sequence of Forrest Gump , as a CG feather drifts and tumbles through space, its physical reality is enhanced by the addition of a digitally painted reflection on an automobile windshield.
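
Whatever the overall strategy (scan-line, ray-traced, or radiosity-based), each of these renderers must eventually decide how bright a surface point appears under a given light. The fragment below sketches the simplest such rule, Lambertian diffuse shading, with made-up positions and intensities; it is meant only to indicate the kind of calculation involved, not the code used at ILM or Digital Domain.

    import math

    def lambert_shade(point, normal, light_pos, light_intensity, albedo):
        """Diffuse ('Lambertian') shading: brightness falls off with the cosine of the angle
        between the surface normal and the direction toward the light, and with distance."""
        to_light = [l - p for l, p in zip(light_pos, point)]
        dist = math.sqrt(sum(c * c for c in to_light))
        to_light = [c / dist for c in to_light]
        cos_theta = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
        return albedo * light_intensity * cos_theta / (dist * dist)

    # Shade one point on a CG surface lit by a single virtual light (illustrative values).
    pixel_value = lambert_shade(point=(0, 0, 0), normal=(0, 1, 0),
                                light_pos=(2, 4, 1), light_intensity=60.0, albedo=0.7)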

To complete this anchoring process, the provision of information about surface texture and movement is extremely important and quite difficult, because the information provided must seem credible. Currently, many of the algorithms needed for convincing movement either do not exist or are prohibitively expensive to run on today's computers. The animators and renderers at Industrial Light and Magic used innovative software to texture-map[32] skin and wrinkles onto their dinosaurs and calibrated variations in skin jostling and wrinkling with particular movements of the creatures. However, while bone and joint rotation are successfully visualized, complex information about the movement of muscles and tendons below the skin surface is lacking.
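
Texture mapping itself reduces, at its simplest, to a lookup: every rendered surface point carries coordinates into a stored image of surface detail, and the renderer reads and filters that image at those coordinates. The toy texture and bilinear filter below are illustrative; the software described above layered far more on top of this, including the motion-dependent wrinkling of the dinosaurs' skin.

    import numpy as np

    def sample_texture(texture, u, v):
        """Texture mapping at its simplest: given surface coordinates (u, v) in [0, 1],
        look up the stored detail image with bilinear filtering."""
        h, w = texture.shape[:2]
        x, y = u * (w - 1), v * (h - 1)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
        bottom = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
        return top * (1 - fy) + bottom * fy

    # A toy 4x4 greyscale "skin" texture; each rendered surface point pulls its detail from it.
    skin = np.random.default_rng(0).random((4, 4))
    detail = sample_texture(skin, u=0.37, v=0.81)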

Kevin Mack describes this limit in present rendering abilities as the "human hurdle"[33] —that is, the present inability of computers to fully capture the complexities of movement by living organisms. Hair, for example, is extremely difficult to render because of the complexities of mathematically simulating properties of mass and inertia for finely detailed strands.[34] Chris Voellmann points out that today's software can create flexors and rotators but cannot yet control veins or muscles.

Multiple levels of information capture must be successfully executed to convincingly animate and render living movement because the viewer's eye is adept at detecting inaccurate information.[35] These levels include locomotor mechanics—the specification of forces, torques, and joint rotations. In addition, "gait-specific rules"[36] must be specified. The Jurassic Park animators, for example, derived gait-specific rules for their dinosaurs by studying the movements of elephants, rhinos, Komodo dragons, and ostriches and then making some intelligent extrapolations. Beyond these two levels of information control is the most difficult one—capturing the expressive properties of movement. Human and animal movement cannot look mechanical and be convincing; it must be expressive of mood and affect.

As the foregoing discussion indicates, available software and the speed and economics of present computational abilities are placing limits on the complexities of digitally rendered 3-D cues used to integrate synthetic and live-action objects and environments. But the more important point is that present abilities to digitally simulate perceptual cues about surface texture, reflectance, coloration, motion, and distance provide an extremely powerful means of "gluing" together synthetic and live-action environments and of furnishing the viewer with an internally unified and coherent set of cues that establish correspondences with the properties of physical space and living systems in daily life. These correspondences in turn establish some of the most important criteria by which viewers can judge the apparent realism or credibility possessed by the digital image.

Obvious paradoxes arise from these judgments. No one has seen a living dinosaur. Even paleontologists can only hazard guesses about how such creatures might have moved and how swiftly. Yet the dinosaurs created at ILM have a palpable reality about them, and this is due to the extremely detailed texture-mapping, motion animation, and integration with live action carried out via digital imaging. Indexicality cannot furnish us with the basis for understanding this apparent photographic realism, but a correspondence-based approach can. Because the computer-generated images have been rendered with such attention to 3-D spatial information, they acquire a very powerful perceptual realism, despite the obvious ontological problems in calling them "realistic." These are falsified correspondences, yet because the perceptual information they contain is valid, the dinosaurs acquire a remarkable degree of photographic realism.

In a similar way, President Kennedy speaking in Forrest Gump is a falsified correspondence which is nevertheless built from internally valid perceptual information. Computer modelling of synthetic visual speech and facial animation relies on existing micro-analyses of human facial expression and phonetic mouth articulations. The digital-effects artist used these facial cues to animate Kennedy's image and sync his mouth movements with the scripted dialogue. At the perceptual level of phonemic articulation and facial register, the correspondences established are true and enable the viewer to accept the photographic and dramatic reality of the scene. But these correspondences also establish a falsified relationship with the historical and archival filmic records of reality. The resulting image is perceptually realistic but referentially unreal, a paradox that present film theory has a hard time accounting for.

The profound impact of digital imaging, in this respect, lies in the unprecedented ways that it permits film-makers to extend principles of perceptual realism to unreal images. The creative manipulation of photographic images is, of course, as old as the medium of photography. For example, flashing film prior to development or dodging and burning portions of the image during printing will produce lighting effects that did not exist in the scene that was photographed. The tension between perceptual realism and referential artifice clearly predates digital imaging. It has informed all fantasy and special-effects work where film-makers strive to create unreal images that nevertheless seem credible. What is new and revolutionary about digital imaging is that it increases to an extraordinary degree a film-maker's control over the informational cues that establish perceptual realism. Unreal images have never before seemed so real.

Digital imaging alters our sense of the necessary relationship involving both the camera and the profilmic event. The presence of either is no longer an absolute requirement for generating photographic images that correspond to spatio-temporally valid properties of the physical world. If neither a camera nor an existent referent is necessary for the digital rendition of photographic reality, the application of internally valid perceptual correspondences with the 3-D world is necessary for establishing the credibility of the synthetic reality. These correspondences establish bridges between what can be seen and photographed and that which can be "photographed" but not seen.

Because these correspondences between synthetic environments and real environments employ multiple cues, the induced realism of the final CG image can be extraordinarily convincing. The digital-effects artists interviewed for this essay resisted the idea that any one cue was more important than others and instead emphasized that their task was to build as much 3-D information as possible into the CG image, given budgetary constraints, present computational limitations, and the stylistic demands of a given film. With respect to the latter, Kevin Mack pointed out that style coexists with the capability for making the CG images look as real as possible. The Swiss chateau composite in True Lies discussed earlier exemplifies this tension.

The apparent realism of digitally processed or created images, then, is a function of the way that multiple levels of perceptual correspondence are built into the image. These establish reference points with the viewer's own experientially based understanding of light, space, motion, and the behavior of objects in a three-dimensional world. The resulting images may not contain photographable events, but neither do they represent purely illusory constructions. The reliability or nonreliability of the perceptual information they contain furnishes the viewer with an important framework for evaluating the logic of the screen worlds these images help establish.

The emphasis in contemporary film theory has undeniably shifted away from naive notions of indexical realism in favor of an attention to the constructedness of cinematic discourse. Yet indexicality remains an important point of origin even for perspectives that reincorporate it as a variant of illusionism, of the cinema's ability to produce a reality-effect. Bill Nichols notes that "Something of reality itself seems to pass through the lens and remain embedded in the photographic emulsion," while also recognizing that "Digital sampling techniques destroy this claim."[37] He concludes that the implications of this "are only beginning to be grasped,"[38] and therefore limits his recent study of the filmic representation of reality to non-digitized images.

Digital imaging exposes the enduring dichotomy in film theory as a false boundary. It is not as if cinema either indexically records the world or stylistically transfigures it. Cinema does both. Similarly, digital-imaging practices suggest that contemporary film theory's insistence upon the constructedness and artifice of cinema's discursive properties may be less productive than is commonly thought. The problem here is the implication of discursive equivalence, the idea that all cinematic representations are, in the end, equally artificial, since all are the constructions of form or ideology. But, as this essay has suggested, some of these representations, while being referentially unreal, are perceptually realistic. Viewers use and rely upon these perceptual correspondences when responding to, and evaluating, screen experience.

These areas of correspondence coexist in any given film with narrative, formal, and generic conventions, as well as intertextual determinants of meaning. Christopher Williams has recently observed that viewers make strong demands for reference from motion pictures, but in ways that simultaneously accommodate style and creativity: "We need films to be about life in one way or another, but we allow them latitude in how they meet this need."[39] Thus, Williams maintains that any given film will feature "the active interplay between the elements which can be defined as realist, and the others which function simultaneously and have either a nonrealist character (primarily formal, linguistic or conventional) or one which can be called anti-realist because the character of its formal, linguistic or conventional procedures specifically or explicitly tries to counteract the cognitive dimensions we have linked with realism."[40] Building 3-D cues inside computer-generated images enables viewers to correlate those images with their own spatio-temporal experience, even when the digitally processed image fails in other ways to obey that experience (as when the Terminator morphs out of a tiled floor to seize his victim). Satisfying the viewer's demand for reference permits, in turn, patterned or stylish deviations from reference.



figure

Computer imaging in  The Mask

Stressing correspondence-based transformational abilities enables us to maintain a link, a relationship, between the materials that are to be digitally transformed (elements of the 3-D world) and their changed state, as well as providing a means for preserving a basis for concepts of realism in a digitized cinema. Before we can subject digitally animated and processed images, like the velociraptors stalking the children through the kitchens of Jurassic Park , to extended meta-critiques of their discursive or ideological inflections (and these critiques are necessary), we first need to develop a precise understanding of how these images work in securing for the viewer a perceptually valid experience which may even invoke, as a kind of memory trace, now historically superseded assumptions about indexical referencing as the basis of the credibility that photographic images seem to possess.

In the correspondence-based approach to cinematic representation developed here, perceptual realism, the accurate replication of valid 3-D cues, becomes not only the glue cementing digitally created and live-action environments, but also the foundation upon which the uniquely transformational functions of cinema exist. Perceptual realism furnishes the basis on which digital imaging may be carried out by effects artists and understood, evaluated, and interpreted by viewers. The digital replication of perceptual correspondence for the film viewer is an enormously complex undertaking, and its ramifications clearly extend well beyond film theory and aesthetics to encompass ethical, legal, and social issues. Film theory will need to catch up to this rapidly evolving new category of imaging capabilities and grasp it in all of its complexity. To date, theory has tended to minimize the importance of perceptual correspondences, but the advent of digital imaging demonstrates how important they are and have been all along. Film theory needs now to pay closer attention to what viewers see on the screen, how they see it, and the relation of these processes to the larger issue of how viewers see. Doing this may mean that film theory itself will change, and this essay has suggested some ways in which that might occur. Digital imaging represents not only the new domain of cinema experiences, but a new threshold for theory as well.

