Artificial Affect

I found myself checking up on the parts of a horse the other day. It was after the Daily News had carried an AP story about some new prehistoric art found in the Perigueux region of France—engravings thought to predate the Lascaux cave paintings by 10,000 years. It was a burial ground of some sort, and the version of the story that Newsday carried included a quote from an official of the French Ministry of Culture: “The presence of graves in a decorated cave is unprecedented.”

But the drawings in the Daily News photograph didn’t look like decorations; they looked like sketchpad studies—partial (a mane here, a hoof there, an idea of musculature) and unarranged. They were all on top of one another as though the artist hadn’t wanted to take time to find a blank space on the wall for fear of missing whatever he was trying to capture from memory or life.

Only one of the figures in the photograph—a horse—was recognizable. It seemed curiously realistic, so realistic that for a moment I wondered if the drawings might be a hoax. It didn’t seem stylized enough for prehistoric art. This was no flat, geometric artifact with characteristics one might interpret as equine; it was a proper horse, fully articulated, drawn in profile, and almost in perspective, complete with all the things a horse should have. You could make out every element of horse physiognomy: upper and lower muzzle, nostril, even the soft, fat, jowly part that covers a muscle I now know to be called the masseter.

There’s nothing to say that primitive artwork has to be more stylized than it is realistic. Or, to put it another way, there’s no reason to think that art wasn’t realistic before it was stylized—any more than there is to think it impossible that a more advanced technology than ours once existed a long time ago in a galaxy far, far away. I mention the Perigueux horse because I’ve been thinking about realism and views of reality in the context of some of the summer’s more and less obviously cheesy movies. Mostly I’ve been trying to figure out why the picture of a world proposed by Steven Spielberg’s A.I. bothered me so much.

When it comes to matters of realism and stylistic form, it’s always interesting to find out what we are and aren’t prepared to accept. Detail is what tends to create problems. I remember once, some years ago, getting laughed at when I objected to something at the end of a horror picture. The werewolf-hero had been cornered by the SWAT team and would be blown away in a matter of moments, but first the heroine wanted to wish him a fond farewell and stepped into the line of fire. I said it was “ridiculous—unrealistic.” The friends I was watching the video with thought it hilarious that I hadn’t objected to the premise of the picture as “ridiculous” or “unrealistic,” but only that one small aspect.

We tend to hold different art forms to different standards of verisimilitude. We demand more literal truth from the narrative and dramatic, say, than the graphic arts. When the Metropolitan Museum of Art held an exhibit of late Renaissance drawings earlier this year, you didn’t hear museum-goers finding a lot of fault with Correggio because some of the pictures deviated from natural truth. You didn’t notice anyone pointing critically, saying, “Look at the way that Madonna is holding the child! It’s ridiculous! No mother would hold a baby that way, it would slide right off her lap!” The point was the folds of her dress and the way they draped over her leg: these would have been obscured if the artist had taken the actual weight of a real-life baby into account.

It’s artistry itself, as often as not, that leads us to ignore some discrepancy between the truth as it’s depicted in a work of art and the way things are. If you go to see Kenneth Lonergan’s Lobby Hero at the John Houseman Theater (it reopened there in May and runs through Sept. 2), there may come a point when you find yourself noticing a particular unrealistic aspect of the play. Set in the foyer of a Manhattan high-rise, it concerns the relationship between four characters: a young security guard who works the graveyard shift at the apartment building, his supervisor, and two cops, one of whom is having an affair with a tenant in the building.

You’d have to look hard to find a visual stage truth as compelling as the way the shadow of an adjacent building on Allen Moyer’s set cuts off the sunlight from the sunken area just outside on the pavement, exactly the way the buildings surrounding those badly designed East Side high-rises always do. You know that building, you can visualize the whole exterior just from the way Mark McCullough has lighted that tiny sliver of stage, and the characters are equally well observed. All the same, it’s bound to occur to you that in the entire course of the two nights in which the play is set, no one other than the characters in the play crosses the lobby.

It’s unimportant. The truths contained in the characters’ expectations and treatment of one another are more interesting than the convention we’re being asked to accept—just as the folds in the drapery are more interesting than the bulk of the baby in Correggio’s drawing.

Sometimes what prompts us to accept a glitch in verisimilitude is the arrival of a new technique, a way of expressing something that a particular medium couldn’t have expressed before. I remember that some years back, when the Met was holding one of its exhibitions of fifth-century sculpture, a wonderful bit of signage pointed out that the famous statue of Nike bending down to fasten her sandal both represents an important moment in the development of “realism” and is at the same time fundamentally unrealistic. The way the sculpture captures the fall of the cloth over the goddess’ body is lifelike beyond anything that marble had hitherto managed to express. Still, the curator noted, cloth that fell exactly that way, showing the outline of the body as the statue’s does, would have to be gossamer-like, and fabric that light wouldn’t drape well. To express what he wanted to express, the artist had had to create another reality in which both a garment and the object it veils are visible at the same time, thereby anticipating certain schools of modern art by a couple of millennia.

One of this summer’s cinematic talking points is Final Fantasy, a movie based on a video game that uses computer-generated images of actors instead of real ones. It’s fascinating for the space of about ten minutes because of the precise way in which it doesn’t work. The moving figures that act out the story seem like neither actors nor animations, merely like an attempt to ape a simulation of life.

Animation—which we still use almost in its literal and etymological sense—takes nonliving entities and breathes life into them. Its wit historically resided in its ability to assign human attributes to nonhuman objects and creatures, thereby commenting on humanity. But the suggestion of life is dependent on spontaneity. The creators of Final Fantasy didn’t have that to work with, so they had to fall back on facial and gestural cliché: this expression for fear, that pose for anger or grief. For all its technical prowess, Final Fantasy turned out to be a throwback to the most primitive styles of stage and silent-movie acting.

Nevertheless, it’s caused a certain amount of consternation in the entertainment industry. The fear is that if such methods are found to be “successful,” computer images will gradually come to replace real actors on the screen. Interestingly, this development echoes a major plot point in A.I., Spielberg’s long-awaited movie about a boy-robot who develops mortal longings.

The film, which Spielberg developed from an idea that Stanley Kubrick had researched for years before turning the project over to the younger director, posits a postapocalyptic future (some polar icecaps have melted, drowning the entire globe except for a large part of New Jersey) in which human beings have so perfected the art of simulating humanity that the only thing left for a self-respecting Promethean to explore is whether a robot can be programmed to love and thereby become more “human.”

It’s odd that Spielberg chose to cast Haley Joel Osment, the child actor whose passion in The Sixth Sense was so moving and played so well against Bruce Willis’s trademark lack of affect, as the boy-robot David. In A.I., the young actor is himself required to simulate lack of affect and later, as David’s adoptive mother utters the words that program him to love her for all time, to simulate recently acquired artificial affect.

Actually, there are a number of curious things about A.I., not least of which is the widely noted “schizoid” quality that critics have enjoyed attributing to the Spielberg/Kubrick dichotomy. The movie’s singular plot keeps presenting us with recognizable tropes that we think will develop in such a way as to explore what it means to be “human.” (That’s Spielberg the bard, king of genre, storyteller extraordinaire.) But these setups keep petering out, wandering off into tough-minded existential gloom. (That’s Kubrick, genius and redoubtable intellect.)

Watching A.I., I found myself prey to the American Werewolf in London syndrome, willing to entertain the premise but stumbling over details. I was prepared to accept a world of punishingly planned parenthood serviced by a race of humanoid robots. But I kept wondering why the couple in the movie, David’s adoptive parents, have no friends and why they are so inexplicably wealthy. They live in a huge, beautifully appointed house, miles from anyone else, and can afford to have their birth son cryogenically frozen until such time as a cure is found for whatever it is that ails him.

Where are all the other people in this world? Apart from the factory workers who operate the robot plant, the only human beings in the picture are the rabble—the crowds of ugly, sweaty people who frequent the roving demolition festivals called Flesh Fairs (carnivals—get it?) where antiquated, damaged, or otherwise unwanted robots are ritually trashed. The Flesh Fairs are part theme park, part slave market, part revival meeting, part public execution, and the unkempt folk who attend them are there to exorcise their fears of extinction.

The friend who came with me to see A.I. pointed out that there’s nothing noble or uplifting about the climactic scene in which the mob turns on the carnival manager, rallying to defend the robot child because he is a child. The sequence simply replaces self-interested savagery and brutality with mawkish savagery and brutality. I doubt that was Spielberg’s intention, but then the whole movie is sort of one big glitch in verisimilitude. It’s the portrait of a society trying to make lifelike beings, drawn by a man who has been so removed from real life for so long that he doesn’t remember what it looks like. Or, rather, two such men—the reclusive genius who conceived the project and the commercial giant who completed it.

At least the movie based on a computer game knows that it’s junk. Ironically (or perhaps predictably), it carries the same message as the Spielberg epic: what makes us human is our dreams. But Spielberg here is being either disingenuous or naive: his point, surely, is that what exalts the human race is movies, not dreams themselves but dream-makers like himself and Kubrick. The whole picture is a series of self-absorbed allusions to Spielberg and Kubrick—their humanity, their achievement, their work. It’s telling (and potentially more worrying than anything in Final Fantasy) that the most compelling and lifelike performance in A.I. comes from a computer-animated teddy bear.

New York Press, July 24, 2001