Essays on Theater and the Arts

I found myself checking up on the parts of a horse the other day. It was after the Daily News had carried an AP story about some new prehistoric art found in the Périgueux region of France: engravings thought to predate the Lascaux cave paintings by 10,000 years. It was a burial ground of some sort, apparently: a number of human skeletons had been found in the cave as well.

The version of the story Newsday carried included a quote from an official of the French Ministry of Culture: “The presence of graves in a decorated cave is unprecedented.” But the drawings in the Daily News photograph didn’t look like decorations; they looked like sketchpad studies. They were partial (a mane here, a hoof or fetlock there, an idea of musculature) and unarranged, all piled on top of one another as though the artist hadn’t wanted to look for a blank space on the wall for fear of missing whatever he was trying to capture from memory or life.

Only one of the figures in the photograph, a horse, was recognizable. It seemed curiously realistic, so realistic that for a moment I wondered if the drawings might be a hoax. It wasn’t stylized enough for prehistoric art, I thought, and it seemed too interested in anatomical detail. This was no flat, undifferentiated geometric shape with characteristics one might interpret as equine; this was a proper horse, drawn in profile, fully articulated, and almost in perspective, complete with all the things a horse should have. You could make out every element of horse physiognomy: upper and lower muzzle, nostril, even the soft, fat, jowly part that covers a muscle I now know to be called the masseter.

There’s nothing to say that primitive artwork has to be more stylized than it is realistic. Or, to put it another way, there’s no reason to think that art wasn’t realistic before it was stylized—any more than there is to think it impossible that a more advanced technology than ours once existed a long time ago in a galaxy far, far away. I mention the Périgueux horse because I’ve been thinking about realism and views of reality in the context of some of the summer’s more and less obviously cheesy movies. Mostly I’ve been trying to figure out why the picture of a world proposed by Steven Spielberg’s A.I. bothered me so much.

When it comes to matters of realism and stylistic form, it’s always interesting to find out what we are and aren’t prepared to accept. Detail is what tends to create problems. I remember once, some years ago, getting laughed at when I objected to something at the end of a horror picture. The werewolf-hero had been cornered by the SWAT team and would be blown away in a matter of moments, but first the heroine wanted to wish him a fond farewell and stepped into the line of fire. I said it was “ridiculous—unrealistic.” The friends I was watching the video with thought it hilarious that I hadn’t objected to the premise of the picture as “ridiculous” or “unrealistic,” but only that one small aspect.

We tend to hold different art forms to different standards of verisimilitude. We demand more literal truth from the narrative and dramatic arts, say, than from the graphic arts. When the Metropolitan Museum of Art held an exhibit of late Renaissance drawings earlier this year, you didn’t hear museum-goers finding a lot of fault with Correggio because some of the pictures deviated from natural truth. You didn’t notice anyone pointing critically, saying, “Look at the way that Madonna is holding the child! It’s ridiculous! No mother would hold a baby that way, it would slide right off her lap!” The point was the folds of her dress and the way they draped over her leg: these would have been obscured if the artist had taken the actual real-life weight of an actual real-life baby into account.

It’s artistry itself, as often as not, that leads us to ignore some discrepancy between the truth as it’s depicted in a work of art and the way things are. If you go to see Kenneth Lonergan’s Lobby Hero at the John Houseman Theater, there may come a point when you find yourself noticing something slightly unrealistic about the play.

Set in the foyer of a Manhattan high-rise, it concerns the relationship between a young security guard who works the graveyard shift at the apartment building, his supervisor, and two cops, one of whom is having an affair with a tenant in the building. You’d have to look hard to find a visual stage truth as compelling as the way the shadow of an adjacent building on Allen Moyer’s set cuts off the sunlight from the sunken area just outside on the pavement, exactly the way the buildings surrounding those badly designed East Side high-rises always do.

You know that building, you can visualize the whole exterior just from the way Mark McCullough has lighted that tiny sliver of stage, and the characters are equally well observed. All the same, it’s bound to occur to you that in the entire course of the two nights the play spans, no one other than the characters in the play crosses the lobby. It’s unimportant. The truths contained in the characters’ expectations and treatment of one another are more interesting than the convention we’re being asked to accept—just as the folds in the drapery are more interesting than the bulk of the baby in Correggio’s drawing.

Sometimes what prompts us to accept a glitch in verisimilitude is the arrival of a new technique, a way of expressing something that couldn’t have been expressed before in a particular medium. I remember that some years back, when the Met was holding one of its exhibitions of fifth-century sculpture, there was a particularly wonderful piece of signage pointing out that the famous marble relief from the Acropolis of Nike adjusting her sandal is fundamentally unrealistic although it represents, at the same time, an important moment in the development of “realism.” The way the sculpture captures the fall of the cloth over the goddess’s body is lifelike beyond anything that marble had hitherto managed to express. Still, the curator noted, a cloth that fell exactly so—that showed the outline of the body as the one in the statue does—would have to be gossamer-thin, and fabric of that weight wouldn’t drape well. In order to express what he wanted to express, the artist had had to create another reality in which both a garment and the object it veils are visible at the same time.

One of this summer’s cinematic talking points is a movie that uses computer-generated images of actors instead of real actors. It’s fascinating for the space of about ten minutes because of the precise way in which it doesn’t work. The moving figures that act out the story seem like neither actors nor animations, merely like an attempt to ape a simulation of life. Animation takes static images (illustrations) and breathes life into them. (Its wit, historically, resided in its ability to assign human attributes to nonhuman entities—objects and animals—thereby commenting on humanity.) But the suggestion of life is dependent on spontaneity. The creators of Final Fantasy didn’t have that to work with, so they had to fall back on facial and gestural cliché: this expression for fear, that pose for anger or grief. For all its technical prowess, Final Fantasy turned out to be a throwback to the most primitive style of silent movie acting.

Of course, it’s caused a certain amount of consternation in the entertainment industry. The fear is that if such methods are found to be “successful,” computer images will gradually come to replace real actors on the screen. Interestingly, this real-life development actually mirrors the major plot point of A.I., Spielberg’s long-awaited movie about a boy-robot who develops mortal longings. The film, which Spielberg developed from an idea that Stanley Kubrick had researched for years before turning the project over to the younger director, posits a postapocalyptic future (some polar icecaps have melted, drowning all of the world except for a significant portion of New Jersey) in which human beings have so perfected the art of simulating humanity that the only thing left for a self-respecting Promethean to explore is whether a robot can be programmed to love and thereby become more “human.”

It’s odd that in the role of the boy-robot, David, Spielberg chose to cast Haley Joel Osment, the child actor whose passion in The Sixth Sense played so well against Bruce Willis’s trademark lack of affect. In A.I., the young actor is required first to simulate lack of affect himself and then, as David’s adoptive mother utters the words that program him to love her for all time, to simulate recently acquired artificial affect.

Actually, there are a number of curious things about A.I., not least of which is the widely noted “schizoid” quality that critics have enjoyed attributing to the Spielberg/Kubrick dichotomy. The movie keeps presenting us with recognizable tropes, situations arising out of the singular plot, which we think will develop in a way that explores what it means to be human. (That’s Spielberg the bard, the king of genre, the arch-storyteller.) But these setups keep petering out, wandering off into tough-minded existential gloom. (That’s Kubrick the genius, the redoubtable intellect.)

Watching A.I., I found myself prey to the American Werewolf in London syndrome, willing to entertain the premise but stumbling over details. I was prepared to accept a world of punishingly planned parenthood serviced by a race of humanoid robots created to lick the resources problem (it beats rationing). But I kept wondering why the couple in the movie, David’s adoptive parents, are so inexplicably wealthy. They live in a huge, beautifully appointed house, miles from anyone else, and can afford to have their birth son cryogenically frozen until such time as a cure is found for whatever it is that ails him.

And why is it that they appear to have no friends? Where are all the other people in this world? It seems inhabited entirely by people who work at the robot plant. Apart from them, the only human beings are the rabble—the crowds of ugly, sweaty people who frequent the roving demolition festivals called Flesh Fairs (carnivals; get it?) where antiquated, damaged, or otherwise unwanted robots are ritually trashed. They’re part theme park, part slave mart, part revival meeting, part public execution, and the unkempt folk who attend them are there to exorcise their fears of extinction.

The friend who came with me to see A.I. remarked on how the mob that turns on the carnival manager, rallying to defend the robot child because he is a child, is acting out of sentiment, not humanity. There’s nothing noble or uplifting about the scene; it simply substitutes mawkish savagery and brutality for the self-interested sort. I doubt that was Spielberg’s intention, but then the whole movie is sort of one big glitch in verisimilitude. It’s a portrait of a society trying to make lifelike beings, drawn by a man who has been so removed from real life for so long that he doesn’t remember what it looks like. Or, rather, two such men—the one who conceived the project and the one who carried it out.

At least the movie based on a computer game knows that it’s junk. Ironically (or perhaps predictably), it carries the same message as the Spielberg epic: what makes us human is our dreams. But Spielberg here is being either disingenuous or naive: his point, surely, is that what exalts the human race is movies, not dreams themselves but dream-makers like himself and Kubrick. The whole movie is a series of self-absorbed allusions to Spielberg and Kubrick—their humanity, their achievement, their work. I think it’s telling (and more fraught with worrying connotations than anything in Final Fantasy) that the most lifelike and compelling performance in A.I. comes from a computer-animated teddy bear.

§62 · July 21, 2001 · Film
