Dagger John and the Triumph of the Irish



From New York Press, March 25, 2003

Among the publishing sensations of 1836 was a book by one Maria Monk entitled Awful Disclosures, which purported to be her memoir of life in a Montreal nunnery. Hot stuff by early 19th-century standards, Monk’s book claimed that all nuns were forced to have sex with priests and that the “fruit of priestly lusts” were baptized, murdered, and carried away for secret burial in purple velvet sacks. Nuns who tried to leave the convent were whipped, beaten, gagged, imprisoned, or secretly murdered. Maria claimed to have escaped with her unborn child.

In fact, Maria had never been a nun. She was a runaway from a Catholic home for delinquent girls, and her child’s father was no priest, but merely the boyfriend who had helped her escape. Nevertheless, Awful Disclosures became an overnight bestseller, echoing as it did the most popular anti-Catholic slanders of the day and reflecting the savage hatred of the Irish with which they went hand in hand. It was this cultural climate that helped make John Joseph Hughes, fourth bishop and first archbishop of New York, what one reporter called “the best known, if not exactly the best loved, Catholic bishop in the country.”

John Hughes was an Irishman, an immigrant and a poor farmer’s son. Though intelligent and literate, he had little formal education before he entered the seminary. He was complicated: warm, impulsively charitable, vain (he wore a wig) and combative (he once admitted to “a certain pungency of style” in argument). No man accused him of sainthood; many found him touched with greatness. He built St. Patrick’s Cathedral and founded America’s system of parochial education; he once threatened to burn New York to the ground. Like all archbishops and bishops, Hughes placed a cross in his signature. Some felt it more resembled a knife than the symbol of the redemption of the world, and so the gutter press nicknamed him “Dagger John.” He probably loved it.

Born on June 24, 1797 in Annaloghan, County Tyrone, Hughes later observed that he’d lived the first five days of his life on terms of “social and civil equality with the most favored subjects of the British Empire.” Then he was baptized a Catholic. British law forbade Catholics to own a house worth more than five pounds, hold the King’s commission in the military or receive a Catholic education. It also forbade Roman Catholic priests to preside at Catholic burials, so that—as William J. Stern noted in a 1997 article in City Journal—when Hughes’s younger sister Mary died in 1812, “the best [the priest] could do was scoop up a handful of dirt, bless it, and hand it to Hughes to sprinkle on the grave.”

In 1817, Hughes emigrated to America. He was hired as a gardener and stonemason by the Reverend John Dubois, rector of St. Mary’s College and Seminary in Emmitsburg, Maryland. Believing himself called to the priesthood, Hughes asked to be admitted to the seminary. Father Dubois rejected him as lacking a proper education.

Hughes had met Mother Elizabeth Ann Bayley Seton, a convert to Catholicism who had become a nun after her husband’s death and occasionally visited St. Mary’s. She saw something in the Irishman that Dubois had not and asked the rector to reconsider. So Hughes began his studies in September 1820, graduating and receiving ordination to the priesthood in 1826. He was first assigned to the diocese of Philadelphia.

Anti-Catholic propaganda was everywhere in the City of Brotherly Love. Hughes’s temperament favored the raised fist more than the turned cheek. So when, in 1829, a Protestant newspaper attacked “traitorous popery,” Hughes denounced its editorial board of Protestant ministers as “clerical scum.” And after scores of Protestant ministers fled the 1834 cholera epidemic, which Nativists blamed on the Irish, Hughes ridiculed the ministers—”remarkable for their pastoral solicitude, as long as the flock is healthy….”

In 1835, Hughes won national fame when he debated John Breckenridge, a prominent Protestant clergyman from New York. Breckenridge conjured up the Inquisition, proclaiming that Americans wanted no such popery, no loss of individual liberty. Hughes described the Protestant tyranny over Catholic Ireland and the scene at his sister’s grave. He said he was “an American by choice, not by chance…born under the scourge of Protestant persecution” and that he knew “the value of that civil and religious liberty, which our…government secures for all.” The debate received enormous publicity, making Hughes a hero among many American Catholics. It was noted in Rome.

Dubois, who had left St. Mary’s to become bishop of New York, suffered a series of blows to his health. Hughes was barely forty. Nevertheless, in January 1838, he was appointed co-adjutor bishop—assuring him the succession to Dubois—and was consecrated in the old St. Patrick’s Cathedral on Mott Street. To the older man, it was a terrible humiliation to see a man he had deemed unqualified for the priesthood succeed him. When Dubois died, in 1842, he was buried at his request beneath the doorstep of Old St. Patrick’s Cathedral so the Catholics of New York might step on him in death as they had in life.

Hughes’s first order of business was to gain control of his own diocese. Under state law, most Catholic churches and colleges were owned and governed by boards of trustees—laymen, elected by a handful of wealthy pew holders (parishioners who couldn’t afford pew rents couldn’t vote), who bought the property and built the church. When, in 1839, the trustees of Old St. Patrick’s Cathedral had the police remove from the premises a new Sunday school teacher whom Dubois had appointed, Hughes called a mass meeting of the parish. He likened the trustees to the British oppressors of the Irish, thundering that the “sainted spirits” of their forebears would “disavow and disown them, if…they allowed pygmies among themselves to filch away rights of the Church which their glorious ancestors would not yield but with their lives to the persecuting giant of the British Empire.” He later said that by the time he had finished speaking, many in the audience were weeping like children. He added, “I was not far from it myself.”

The public schools were then operated by the Public School Society, a publicly funded but privately managed committee. The Society favored “non-denominational” moral instruction, reflecting a serene worldview that Protestantism was a fundamental moral code and the basis of the common culture. In fact, as Hughes biographer Father Richard Shaw pointed out, “the entire slant of the teaching was very much anti-Irish and very much anti-Catholic.” The curriculum referred to deceitful Catholics, murderous inquisitions, vile popery, Church corruption, conniving Jesuits and the pope as the anti-Christ of Revelations.

Bishop Dubois had advised Catholic parents to keep their children out of the public schools to protect their immortal souls. But Hughes understood the need for formal education among the poor. He demanded that the Public School Society allocate funds for Catholic schools: “We hold…the same idea of our rights that you hold of yours. We wish not to diminish yours, but only to secure and enjoy our own.” He concluded by warning that should the rights of Catholics be infringed, “the experiment may be repeated to-morrow on some other.”

On October 29, 1840, a public hearing was held at City Hall, with numerous lawyers and clergymen representing the Protestant establishment and Hughes the Catholics. Hughes opened with a three-and-a-half-hour spellbinder. The Protestants spent the next day and a half insulting Hughes as an ignorant ploughboy and demonizing Catholics “as irreligious idol worshippers, bent on the murder of all Protestants and the subjugation of all democracies,” according to historian Ray Allen Billington. The City Council denied his request.

With elections less than a month away, Hughes created his own party, Carroll Hall, named for the only Catholic signer of the Declaration of Independence. He ran a slate of candidates to split the Democratic vote, thereby punishing the Democrats for opposing him. The Democrats lost by 290 votes. Carroll Hall had polled 2,200.

In April 1842 the Legislature replaced the Public School Society with elected school boards and forbade sectarian religious instruction. When the Whigs and Nativists had the King James version declared a non-sectarian book, Hughes set about establishing what has become the nation’s major alternative to public education, a privately funded Catholic school system. He would create more than 100 grammar and high schools and help found Fordham University and Manhattan, Manhattanville and Mount St. Vincent Colleges.

Anti-Catholicism had gained legitimacy by the 1840s. Now the Nativist movement included not only Protestant fundamentalists who saw Catholicism as Satan’s handiwork, but also intellectuals—like Mayor James Harper, of the Harper publishing house—who considered Catholicism incompatible with democracy. All hated the Irish. Harper’s described the “Celtic physiognomy” as “simian-like, with protruding teeth and short upturned noses.” Their cartoonist, Thomas Nast, caricatured the Irish accordingly.

Between May and July of 1844, Nativist mobs in Philadelphia, summoned to “defend themselves against the bloody hand [of the Pope],” ransacked and leveled at least three churches, a seminary and nearly the entire Catholic residential neighborhood of Kensington. When Hughes learned that a similar pogrom, beginning with an assault upon Old St. Patrick’s Cathedral, was planned in New York, he called upon “the Catholic manhood of New York” to rise to the defense of their churches, and he armed them. A mob that stoned the stained glass windows of the cathedral found the building full of riflemen, and the violence went no further. Hughes later wrote that there had not been “a [Catholic] church in the city…not protected with an average force of one to two thousand men—cool, collected, armed to the teeth….”

Invoking the conflagration that kept Napoleon from using Moscow as his army’s winter quarters, Hughes warned Mayor Harper that if one church were attacked, “should one Catholic come to harm, or should one Catholic business be molested, we shall turn this city into a second Moscow.” New York’s buildings were largely wooden, and the city had burned twice in the previous century. There were no riots.

On July 19, 1850, Pope Pius IX created the archdiocese of New York, a development reflecting the growth of both the city’s Catholic population and the influence of Hughes himself. Having received the white woolen band of an archbishop from the hands of the Supreme Pontiff, Hughes embarked on a new project, “…a cathedral…worthy of our increasing numbers, intelligence and wealth as a religious community.” On August 15, 1858, before a crowd of 100,000, he laid the cornerstone of the new St. Patrick’s Cathedral at 5th Avenue and 51st Street. He would not see it finished. On January 3, 1864, death came for the archbishop.

After Maria Monk gave birth to a second illegitimate child, her Protestant champions quietly abandoned her. She became a prostitute, was arrested for pickpocketing and died in prison. Her book is still in print.



Comedy Lite


Yasmina Reza’s Life (x) 3
at Circle in the Square

David Ives’s Polish Joke
at Manhattan Theater Club

I picked up a copy of Yasmina Reza’s Life (x) 3 a couple of days after seeing the play at Circle in the Square and was flabbergasted to discover that the opening scene between John Turturro and Helen Hunt is actually funny. Sonia and Henry are arguing about how to deal with their recalcitrant six-year-old.

HENRY: He wants a biscuit.
SONIA: He’s just cleaned his teeth.
HENRY: He’s asking for a biscuit.
SONIA: He knows very well there’s no biscuits in bed.
HENRY: You tell him.
SONIA: Why didn’t you?
HENRY: Because I didn’t know there were no biscuits in bed.

Sonia goes offstage to lay down the law. The child begins to cry and is still wailing when she re-enters. Henry suggests bringing him a slice of apple, but Sonia puts the kibosh on that, too. Henry goes offstage and comes back a moment later.

HENRY: He’s agreed to the slice of apple.
SONIA: He’s not having any apple, he’s not having anything, you don’t eat in bed, the subject is closed.
HENRY: You tell him. I said yes to the apple, I thought the apple was a possibility. If you’re saying no, go and tell him yourself.
SONIA: Take him a slice of apple and tell him you’re doing it behind my back. Tell him I said no and you’re only doing it because you said yes, but that I mustn’t find out because I’m radically opposed to any kind of food in bed.

I’m not going to make any noises about not needing to have small children to find this funny. I am going to suggest that it doesn’t take Donald Sinden and Maggie Smith to make it play. It probably could use Brit actors, though, and they’d probably have to be performing Christopher Hampton’s translation from the French as written, not an Americanized version of the thing.

Problems of style, idiom, and cultural context tend to rear their heads when you decide that a West End hit will be best served by a cast of American movie stars and a dumbed-down script that Americanizes everything. Take the biscuit-cookie dichotomy, for instance. “Biscuit,” which appears in Hampton’s script, is a light-comedy word. It’s neutral. It allows us to get where the dialogue is going and focus on the politics of the situation. “Cookie,” the word that’s substituted in the production at Circle, isn’t neutral. It’s a joke word, almost a punchline in and of itself. Turturro is cute saying it—anyone would be. We’re derailed by the cuteness, and it doesn’t really matter what comes after.

Or take the subject of the conversation itself. Americans discussing questions of how to negotiate with their children isn’t satire, it’s sitcom. All Americans negotiate with their children, and all Americans argue about it. We take child-rearing seriously. Europeans don’t make such an issue out of it—that is, only a certain class of pseudo-sophisticated European does, probably in emulation of Americans. Spoken in American accents, the opening scene tells us nothing about Sonia and Henry. Spoken in Estuary accents, it would give us a hint about who they are.

Actually, though, the main thing you’d need in order for the opening of Life (x) 3 to be funny would be not to have Ms. Hunt anywhere in the equation. A limited actress to begin with, Ms. Hunt has taken to allowing the persona that made her America’s sweetheart (unaccountably, I think) to obtrude on every role she performs. It’s hard to believe that her mannerisms—the squint she levels at fellow actors, the whiny, strained, prosaically uninflected voice—ever seemed refreshing or even benign. She wields them now like weapons, encased in an impenetrable air of moral righteousness.

The actress playing the mother in the opening scene of Reza’s play should be able to forgo moral ascendancy. Within moments, she will be yelling for the child to “Shut the fuck up” and irrationally baiting him with visions of dessert delicacies. But Ms. Hunt brings the same emotional freight and crusading spirit to Life (x) 3 that she brought to the role of the doting mother in As Good as It Gets. It’s the same performance, for Pete’s sake, and it’s tiresome beyond belief.

In order to find anything witty or interesting in Reza’s play, you kind of have to redirect it in your head. Life (x) 3 is a slight, unassuming comedy—part gentle farce, part moth-like speculation on human behavior—that shows us the same disastrous dinner party three times in three different ways. Sonia and Henry and their guests, Hubert and Inez, are no more or less two-dimensional than the characters in a Noel Coward play. Like the foursome in Private Lives, like the theatrical family in Hay Fever and their pompous, conventional guests, Reza’s characters exist pretty much to be rude to each other. The fun lies in watching well-dressed, over-educated people misbehave. There really isn’t much more to it than that.

This is fragile stuff. Even the justification for the life-in-triplicate gimmick is gossamer-thin. Henry and Hubert are both astrophysicists, a circumstance that allows for passing references to cosmology and metaphysics (the phrase “modification of a presumed reality” comes up). But there’s no attempt to delineate a process of behavioral causality. The characters’ characters just change from scene to scene. Thus, one version of Henry is high-strung and self-dramatizing. Another is modest and self-assured. One version of Sonia is overtly hostile to Hubert, who is more successful than Henry and might help him to advance his career. In another version, Sonia and Hubert are contemplating a weekday tryst.

One of the things that’s been said about the play is that the variations are in the wrong order, and certainly the conceit seems haphazard in this production. Reading Life (x) 3, though, you realize that there may in fact be a progression. In each scene, the characters seem to become less simplistic, less oriented toward the exigencies of a specific genre. They lose their formulaic edge and take on, instead, a tinge of idiosyncrasy.

My guess is that each of the four actors in the play is effectively supposed to become three completely different people, and that our pleasure and fascination should derive from the artistry with which they do so. Reza is herself an actress (she played Inez in the original Paris production), and I suspect that Life (x) 3 is fundamentally a play about theater, just as Art was. People were wrong-headed in dismissing that first play because it seemed to raise long-settled controversies about painting.

The purpose of the white-on-white canvas—the reason why it had to be that and not anything else—was so that we would spend the evening watching actors pretending to respond to something when it looked to us like nothing was there at all. The white-on-white canvas meant that the actors in Art were doing the opposite of what actors in a play normally do. Instead of pretending to respond to a nothing that looks like something, they were responding to a something that looked like nothing. It was all a metaphor for theater—or for the metaphor that theater is itself. The closing speech in which images of whiteness vanish into nothingness—the clouds, the snow, the solitary skier who becomes “a man who moves across a space and disappears”—was a picture of the stage.

Subtext is, unfortunately, not an option with three-fourths of the New York cast of Life (x) 3. (The exception is Linda Emond, whom one could watch for hours.) As a result, the play becomes less and less interesting as the evening wears on. John Turturro is a god. I adore him, but light social comedy isn’t his key, and Brent Spiner has nowhere near the requisite edge of menace to bring off the role of Hubert. Matthew Warchus, who staged the production at London’s National Theater (where he had Mark Rylance, Harriet Walter and Imelda Staunton to work with, forsooth), must simply have thrown up his hands.

The guy you want playing the unpleasant but overwritten Hubert in Life (x) 3 is probably Walter Bobbie, the subtle, urbane, versatile actor who is appearing just now in David Ives’s Polish Joke to the great delight of Manhattan Theater Club audiences. Mr. Bobbie, I recall, did an elegant turn in a minor role in a minor Shaw play some years ago, then played Nicely-Nicely Johnson in the Jerry Zaks revival of Guys and Dolls, and spent the next ten years as artistic director of the Encores! concert series at City Center—too long, I feel, as it seems to have kept him off the stage. Mr. Bobbie is rare among American stage actors in his ability to project intellectual feeling without playing it as an abstraction (he’s like Kevin Spacey in this regard), and he finds nuance in the most intriguing and unexpected places, wild comic cameos and archetypal characters, more like a novelist than an actor. He’s at his best in Ives’ extremely funny if frustrating play.

It concerns a young man of Polish descent (Malcolm Gets) named Jasiu (pronounced “Yashoo”), who goes through life pretending to be an Irishman—or who already has gone through life doing that; it’s not really clear. The play begins with a lovely, wry monologue in which Mr. Gets—memory-play-style—proposes a vision of two kinds of ethnic doom, real and perceived. The speech gives way to a flashback in which we witness the childhood conversation from which he apparently derived all philosophy, a backyard chat with his beer-guzzling Uncle Roman (Richard Ziman) in which the latter had laid out for him the practical and existential pitfalls of being Polish.

The beauty of the writing in this opening sequence—and it’s one of the funniest set-pieces being done on a New York stage just now—suggests that the rest of the play will show us the consequences of this discussion. In fact, though, Ives falls into telling-not-showing mode, and from there until a poignant final scene, the play consists of Mr. Gets narrating things we’d like to see performed, while a succession of surreal, Christopher Durang–like nightmare episodes illustrate the same joke over and over again. It’s not that the scenes aren’t funny, but the excellent ensemble cast—which includes Nancy Opel and Nancy Bell, in addition to Mr. Ziman and Mr. Bobbie—are only allowed to become embodiments of ideas we’ve already heard expressed.

The director, John Rando, has orchestrated the whole thing joyously before a wonderfully inventive Loy Arcenas set that symbolically captures the image of the world as the hero sees it. But it’s tough to stay funny at the pitch of frenetic wackiness that Ives has striven for, and the longueurs begin to outweigh the pleasure of the witty one-liners. The joke version of Daniel Deronda you thought you saw coming never materializes. Instead, there’s a tiresome thread about a tiresome girl whom Gets leaves in his search for an identity, and he ultimately discovers (imagine!) the nobility of being Polish.

Mr. Gets is as endearing and piquant as ever, and Nancy Opel is, as usual, amazingly funny. And Mr. Ziman has one or two delightful moments toward the end of the play when Jasiu comes to see Roman, now dying, and discovers that his uncle doesn’t even remember the conversation in which he imparted the precepts the boy went on to live by.

There’s a truth there, but Ives skirts around it, giving us instead an ersatz moral—that all people everywhere are really the same. It seems a missed opportunity. Surely the point here is something more complicated—almost ineffable—about the haphazard way in which our childhood selves receive and interpret information, giving them a construction or importance that the grownups perhaps never intended.

New York Press, April 16, 2003

The Conservative Case Against George W. Bush


Theodore Roosevelt, that most virile of presidents, insisted that, “To announce that there should be no criticism of the president, or that we are to stand by the president, right or wrong, is not only unpatriotic and servile, but is morally treasonable to the American people.” With that in mind, I say: George W. Bush is no conservative, and his unprincipled abandonment of conservatism under the pressure of events is no statesmanship. The Republic would be well-served by his defeat this November.

William F. Buckley’s recent retirement from the National Review, nearly half a century after he founded it, led me to reflect on American conservatism’s first principles, which Buckley helped define for our time. Beneath Buckley’s scintillating phrases and rapier wit lay, as Churchill wrote of Lord Birkenhead, “settled and somewhat somber conclusions upon… questions about which many people are content to remain in placid suspense”: that political and economic liberty were indivisible; that government’s purpose was protecting those liberties; that the Constitution empowered government to fulfill its proper role while restraining it from the concentration and abuse of power; and that its genius lay in the Tenth Amendment, which makes explicit that the powers not delegated to government are reserved to the states or to the people.

More generally, American conservatives seek what Lord Acton called the highest political good: to secure liberty, which is the freedom to obey one’s own will and conscience rather than the will and conscience of others. Any government, of any political shade, that erodes personal liberty in the name of social and economic progress must face a conservative’s reasoned dissent; for allowing one to choose between right and wrong, between wisdom and foolishness, is the essential condition of human progress. Although sometimes the State has a duty to impose restrictions, such curbs on the liberty of the individual are analogous to a brace, crutch, or bandage. However necessary in the moment, they are best removed as soon as possible, as they tend to weaken and to cramp. Thus American conservative politics championed private property, an institution sacred in itself and vital to the well-being of society. It favored limited government, balanced budgets, fiscal prudence, and avoidance of foreign entanglements.

More subtly, American conservatism viewed human society as something of an organism in itself. This sense of society’s organic character urged the necessity of continuity with the past, with change implemented gradually and with as little disruption as possible. Thus, conservatism emphasized the “civil society”—the private voluntary institutions developed over time by passing the reality test (i.e., because they work) such as families, private property, religious congregations and neighborhoods—rather than the State. In nearly every sense, these institutions were much closer to the individuals who composed them than the State could ever be. They had the incidental and beneficial effect of protecting one’s personal liberty against undue intrusion from governments controlled by fanatics and busybodies—the phenomenon Edmund Burke presciently termed “armed ideologies”—and thus upheld our way of life as flying buttresses supported a Gothic cathedral.

But the policies of this administration self-labeled “conservative” have little to do with tradition. Rather, they tend to centralize power in the hands of the government under the guise of patriotism. If nothing else, the Bush administration has thrown into question what being a conservative in America actually means.

Forty years ago, when Lyndon Johnson believed the United States could afford both the Great Society and the Vietnam War, conservatives attacked his fiscal policies as extravagant and reckless. Ten years ago, the Republican Party regained control of Congress with the Contract with America, which included a balanced-budget amendment to restore fiscal responsibility. But today, thanks to tax cuts and massively increased military spending, the Bush administration has transformed, according to the Congressional Budget Office, a ten-year projected surplus of $5.6 trillion into a deficit of $4.4 trillion: a turnaround of $10 trillion in roughly 32 months.

The Bush Administration can’t even pretend to keep an arm’s length from Halliburton, the master of the no-bid government contract. Sugar, grain, cotton, oil, gas, and coal: These industries enjoy increased subsidies and targeted tax breaks not enjoyed by less well-connected industries. The conservative Heritage Foundation blasts the administration’s agricultural subsidies as the nation’s most wasteful corporate welfare program. The libertarian Cato Institute has called the administration’s energy plan “three parts corporate welfare and one part cynical politics…a smorgasbord of handouts and subsidies for virtually every energy lobby in Washington” that “does little but transfer wealth from taxpayers to well-connected energy lobbies.” And the Republican Party’s Medicare drug benefit, the largest single expansion of the welfare state since Johnson’s Great Society, was designed to appeal to senior citizens who, as any competent politician knows, show up at the polls.

None of this is conservative, though it is in keeping with the Bush family’s history. Kevin Phillips, whose 1969 classic The Emerging Republican Majority outlined the policies that would lead to the election of President Reagan, describes in his American Dynasty the Bush family’s rise to wealth and power through crony capitalism: the use of contacts obtained in public service for private profit. Phillips argues that the Bushes don’t disfavor big government as such: merely that part of it which regulates business, maintains the environment, or aids the needy. Subsidizing oil-well drilling through tax breaks, which made George H. W. Bush’s fortune, or bailing out financial institutions, such as Neil Bush’s bankrupt Silverado Savings and Loan, however, is a good thing.

This deficit spending also helps Bush avoid the debate on national priorities we would have if these expenditures were being financed through higher taxes on a pay-as-you-go basis. After all, we’re not paying the bill now; instead, it will come due far in the future, long after today’s policy-makers are out of office. And this debt is being incurred just as the baby boomers are about to retire. In January 2004, Charles Kolb, who served in the Reagan and George H. W. Bush White Houses, testified before Congress that, at a time when demographics project more retirees and fewer workers, projected government debt will rise from 37 percent of the economy today to 69 percent in 2020 and 250 percent in 2040. This is the sort of level one associates with a Third World kleptocracy.

Even worse than this extravagance are the administration’s unprecedented intrusions into our constitutional privacy rights through the Patriot Act. If it does not violate the letter of the Fourth Amendment, it violates its spirit. To cite two examples, the FBI has unchecked authority through the use of National Security Letters to require businesses to reveal “a broad array of sensitive information, including information about the First Amendment activities of ordinary Americans who are not suspected of any wrongdoing.” Despite the Fourth Amendment’s prohibition on unreasonable search and seizure, the government need not show probable cause: It does not need to obtain a warrant from a judge. And who can trust any law enforced by John Ashcroft, who single-handedly transformed a two-bit hubcap thief like José Padilla first into a threat to national security and then, through his insistence that Padilla, an American citizen, could be held without charges, into a Constitutional crisis?

All this stems from Bush’s foreign policy of preemptive war, which encourages war for such vague humanitarian ends as “human rights,” or because the United States believes another country may pose a threat to it. Its champions seem almost joyously to anticipate a succession of wars without visible end, with the invasion of Iraq merely its first fruit: former Bush appointee Richard Perle, from his writings on foreign policy, would have us war against nearly every nation that he defines as a rogue. The ironic consequence of this policy to stabilize the world is greater instability. It reminds me of the old FDR jingle from the Daily Worker:

I hate war, and so does Eleanor,
But we won’t feel safe until everybody’s dead.

To be sure, there’s more than enough blame to go around with the Congress’ cowardly surrender to the Executive of its power to declare war. The Founding Fathers, who knew war from personal experience, explicitly placed the war power in the hands of the Congress. As James Madison wrote over 200 years ago:

The Constitution expressly and exclusively vests in the Legislature the power of declaring a state of war… The separation of the power of declaring war from that of conducting it is wisely contrived to exclude the danger of its being declared for the sake of its being conducted.

But since the Korean War (which the Congress defined as a “police action” to avoid using its war powers), war has been waged without its formal declaration. Thus Congressional power atrophies in the face of flag-waving presidents. Perhaps Congress is too preoccupied with swilling from the gravy trough that our politics has become to recall its Constitutional role as a co-equal branch of government, guarding its powers and privileges against executive usurpation. The Congress has forgotten that the men who exacted Magna Carta from King John at sword point instituted Parliament to restrain the executive from its natural tendency to tax, spend, and war.

Moreover, there is nothing conservative about war. As Madison wrote:

Of all the enemies to public liberty war is, perhaps, the most to be dreaded, because it comprises and develops the germ of every other. [There is an] inequality of fortunes, and the opportunities of fraud, growing out of a state of war, and…degeneracy of manners and of morals…No nation could preserve its freedom in the midst of continual warfare.

By contrast, business, commerce, and trade, founded on private property, created by individual initiative, families, and communities, have done far more to move the world forward than war. Yet faith in military force and an arrogant belief that American values are universal values still mold our foreign policy nearly a century after Woodrow Wilson, reelected with a promise of keeping America out of World War I, broke faith with the people by engineering a declaration of war within weeks of his second inauguration.

George W. Bush’s 2000 campaign supposedly rejected Wilsonian foreign policy both by articulating the historic Republican critique of foreign aid and by explicitly criticizing Bill Clinton’s nation-building. Today, the administration insists we can be safe only by compelling other nations to implement its vision of democracy. This used to be called imperialism. Empires don’t come cheap; worse, “global democracy” requires just the kind of big government that conservatives abhor. When the Wall Street Journal praises the use of American tax dollars to provide electricity and water services in Iraq, something we used to call socialism, either conservatism has undergone a tectonic shift or the paper’s editors are being disingenuous.

This neo-conservative policy rejects the traditional conservative notion that American society is rooted in American culture and history—in the gradual development of American institutions over nearly 230 years—and cannot be separated from them. Instead, neo-conservatives profess that American values, which they define as democracy, liberty, free markets, and self-determination, are “universal” rather than particular to us, and insist they can and should be exported to ensure our security.

This is nonsense. The qualities that make American life desirable evolved from our civil society, created by millions of men and women using the freedom created under limited constitutional government. Only a fool would believe they could be spread overnight with bombs and bucks, and only a fool would insist that the values defined by George W. Bush as American are necessarily those for which we should fight any war at all.

Wolfowitz, Perle, and their allies in the Administration claimed the Iraqis would greet our troops with flowers. Somehow, more than a year after the president’s “Mission Accomplished” photo-op, a disciplined body of well-supplied military professionals is still waging war against our troops, their supply lines, and our Iraqi collaborators. Indeed, the regime we have just installed bids fair to become a long-term dependent of the American taxpayer under U.S. military occupation.

The Administration seems incapable of any admission that its pre-war assertions that Iraq possessed weapons of mass destruction were incorrect. Instead, in a sleazy sleight of hand worthy of Lyndon Johnson, the Administration has retrospectively justified its war with Saddam Hussein’s manifold crimes.

First, that is a two-edged sword: If the crimes of a foreign government against its people justify our invasion, there will be no end of fighting. Second, the pre-war assertions were dishonest: Having decided that Iraq possessed weapons of mass destruction, the policymakers suppressed all evidence that it did not. This immorality is thrown into high relief by the war’s effect on Iraqi civilians. We have no serious evidence of any connection between Iraq and 9/11. Dropping 5000-pound bombs on thousands of people who had nothing to do with attacking us is as immoral as launching airplanes at an American office building.

To sum up: Anything beyond the limited powers expressly delegated by the people under the Constitution to their government for certain limited purposes creates the danger of tyranny. We stand there now. For an American conservative, better one lost election than the continued empowerment of cynical men whose abuse of power unrestrained by principle is based upon the compromise of conservative beliefs. George W. Bush claims to be conservative. His administration’s unwholesome intrusion into domestic life and personal liberty, and the local governments that imitate it, suggest otherwise. George W. Bush is no friend of limited, constitutional government—and no friend of freedom. The Republic would be better served by his defeat in November.

New York Press, August 4, 2004

Artificial Affect


I found myself checking up on the parts of a horse the other day. It was after the Daily News had carried an AP story about some new prehistoric art found in the Perigueux region of France—engravings thought to predate the Lascaux cave paintings by 10,000 years. It was a burial ground of some sort, and the version of the story that Newsday carried included a quote from an official of the French Ministry of Culture: “The presence of graves in a decorated cave is unprecedented.”

But the drawings in the Daily News photograph didn’t look like decorations; they looked like sketchpad studies—partial (a mane here, a hoof there, an idea of musculature) and unarranged. They were all on top of one another as though the artist hadn’t wanted to take time to find a blank space on the wall for fear of missing whatever he was trying to capture from memory or life.

Only one of the figures in the photograph—a horse—was recognizable. It seemed curiously realistic, so realistic that for a moment I wondered if the drawings might be a hoax. It didn’t seem stylized enough for prehistoric art. This was no flat, geometric artifact with characteristics one might interpret as equine; it was a proper horse, fully articulated, drawn in profile, and almost in perspective, complete with all the things a horse should have. You could make out every element of horse physiognomy: upper and lower muzzle, nostril, even the soft, fat, jowly part that covers a muscle I now know to be called the masseter.

There’s nothing to say that primitive artwork has to be more stylized than it is realistic. Or, to put it another way, there’s no reason to think that art wasn’t realistic before it was stylized—any more than there is to think it impossible that a more advanced technology than ours once existed a long time ago in a galaxy far, far away. I mention the Perigueux horse because I’ve been thinking about realism and views of reality in the context of some of the summer’s more and less obviously cheesy movies. Mostly I’ve been trying to figure out why the picture of a world proposed by Steven Spielberg’s A.I. bothered me so much.

When it comes to matters of realism and stylistic form, it’s always interesting to find out what we are and aren’t prepared to accept. Detail is what tends to create problems. I remember once, some years ago, getting laughed at when I objected to something at the end of a horror picture. The werewolf-hero had been cornered by the SWAT team and would be blown away in a matter of moments, but first the heroine wanted to wish him a fond farewell and stepped into the line of fire. I said it was “ridiculous—unrealistic.” The friends I was watching the video with thought it hilarious that I hadn’t objected to the premise of the picture as “ridiculous” or “unrealistic,” but only that one small aspect.

We tend to hold different art forms to different standards of verisimilitude. We demand more literal truth from the narrative and dramatic, say, than the graphic arts. When the Metropolitan Museum of Art held an exhibit of late Renaissance drawings earlier this year, you didn’t hear museum-goers finding a lot of fault with Correggio because some of the pictures deviated from natural truth. You didn’t notice anyone pointing critically, saying, “Look at the way that Madonna is holding the child! It’s ridiculous! No mother would hold a baby that way, it would slide right off her lap!” The point was the folds of her dress and the way they draped over her leg: these would have been obscured if the artist had taken the actual weight of a real-life baby into account.

It’s artistry itself, as often as not, that leads us to ignore some discrepancy between the truth as it’s depicted in a work of art and the way things are. If you go to see Kenneth Lonergan’s Lobby Hero at the John Houseman Theater (it reopened there in May and runs through Sept. 2), there may come a point when you find yourself noticing a particular unrealistic aspect of the play. Set in the foyer of a Manhattan high-rise, it concerns the relationship between four characters: a young security guard who works the graveyard shift at the apartment building, his supervisor, and two cops, one of whom is having an affair with a tenant in the building.

You’d look hard to find a visual stage truth as compelling as the way the shadow of an adjacent building on Allen Moyer’s set cuts off the sunlight from the sunken area just outside on the pavement, exactly the way the buildings surrounding those badly designed East Side high-rises always do. You know that building, you can visualize the whole exterior just from the way Mark McCullough has lighted that tiny sliver of stage, and the characters are equally well observed. All the same, it’s bound to occur to you that in the entire course of the two nights in which the play is set, no one other than the characters in the play crosses the lobby.

It’s unimportant. The truths contained in the characters’ expectations and treatment of one another are more interesting than the convention we’re being asked to accept—just as the folds in the drapery are more interesting than the bulk of the baby in Correggio’s drawing.

Sometimes what prompts us to accept a glitch in verisimilitude is the arrival of a new technique, a way of expressing something that a particular medium couldn’t have expressed before. I remember that some years back, when the Met was holding one of its exhibitions of fifth-century sculpture, a wonderful bit of signage pointed out that the famous statue of Nike bending down to fasten her sandal both represents an important moment in the development of “realism” and is at the same time fundamentally unrealistic. The way the sculpture captures the fall of the cloth over the goddess’ body is lifelike beyond anything that marble had hitherto managed to express. Still, the curator noted, cloth falling exactly that way, showing the outline of the body as the one in the statue does, would have to be gossamer-like, and fabric that light wouldn’t drape well. To express what he wanted to express, the artist had had to create another reality in which both a garment and the object it veils are visible at the same time, thereby anticipating certain schools of modern art by a couple of millennia.

One of this summer’s cinematic talking points is Final Fantasy, a movie based on a video game, which uses computer-generated images of actors instead of real ones. It’s fascinating for the space of about ten minutes because of the precise way in which it doesn’t work. The moving figures that act out the story seem like neither actors nor animations, merely like an attempt to ape a simulation of life.

Animation—which we still use almost in its literal and etymological sense—takes nonliving entities and breathes life into them. Its wit historically resided in its ability to assign human attributes to nonhuman objects and creatures, thereby commenting on humanity. But the suggestion of life is dependent on spontaneity. The creators of Final Fantasy didn’t have that to work with, so they had to fall back on facial and gestural cliché: this expression for fear, that pose for anger or grief. For all its technical prowess, Final Fantasy turned out to be a throwback to the most primitive styles of stage and silent-movie acting.

Nevertheless, it’s caused a certain amount of consternation in the entertainment industry. The fear is that if such methods are found to be “successful,” computer images will gradually come to replace real actors on the screen. Interestingly, this development echoes a major plot point in A.I., Spielberg’s long-awaited movie about a boy-robot who develops mortal longings.

The film, which Spielberg developed from an idea that Stanley Kubrick had researched for years before turning the project over to the younger director, posits a postapocalyptic future (some polar icecaps have melted, drowning the entire globe except for a large part of New Jersey) in which human beings have so perfected the art of simulating humanity that the only thing left for a self-respecting Promethean to explore is whether a robot can be programmed to love and thereby become more “human.”

It’s odd that Spielberg chose to cast Haley Joel Osment, the child actor whose passion in The Sixth Sense was so moving and played so well against Bruce Willis’s trademark lack of affect, as the boy-robot David. In A.I., the young actor is himself required to simulate lack of affect and later, as David’s adoptive mother utters the words that program him to love her for all time, to simulate recently acquired artificial affect.

Actually, there are a number of curious things about A.I., not least of which is the widely noted “schizoid” quality that critics have enjoyed attributing to the Spielberg/Kubrick dichotomy. The movie’s singular plot keeps presenting us with recognizable tropes that we think will develop in such a way as to explore what it means to be “human.” (That’s Spielberg the bard, king of genre, storyteller extraordinaire.) But these setups keep petering out, wandering off into tough-minded existential gloom. (That’s Kubrick, genius and redoubtable intellect.)

Watching A.I., I found myself prey to the American Werewolf in London syndrome, willing to entertain the premise but stumbling over details. I was prepared to accept a world of punishingly planned parenthood serviced by a race of humanoid robots. But I kept wondering why the couple in the movie, David’s adoptive parents, have no friends and why they are so inexplicably wealthy. They live in a huge, beautifully appointed house, miles from anyone else, and can afford to have their birth son cryogenically frozen until such time as a cure is found for whatever it is that ails him.

Where are all the other people in this world? Apart from the factory workers who operate the robot plant, the only human beings in the picture are the rabble—the crowds of ugly, sweaty people who frequent the roving demolition festivals called Flesh Fairs (carnivals—get it?) where antiquated, damaged, or otherwise unwanted robots are ritually trashed. The Flesh Fairs are part theme park, part slave market, part revival meeting, part public execution, and the unkempt folk who attend them are there to exorcise their fears of extinction.

The friend who came with me to see A.I. pointed out that there’s nothing noble or uplifting about the climactic scene in which the mob turns on the carnival manager, rallying to defend the robot child because he is a child. The sequence simply replaces self-interested savagery and brutality with mawkish savagery and brutality. I doubt that was Spielberg’s intention, but then the whole movie is sort of one big glitch in verisimilitude. It’s the portrait of a society trying to make lifelike beings, drawn by a man who has been so removed from real life for so long that he doesn’t remember what it looks like. Or, rather, two such men—the reclusive genius who conceived the project and the commercial giant who completed it.

At least the movie based on a computer game knows that it’s junk. Ironically (or perhaps predictably), it carries the same message as the Spielberg epic: what makes us human is our dreams. But Spielberg here is being either disingenuous or naive: his point, surely, is that what exalts the human race is movies, not dreams themselves but dream-makers like himself and Kubrick. The whole picture is a series of self-absorbed allusions to Spielberg and Kubrick—their humanity, their achievement, their work. It’s telling (and potentially more worrying than anything in Final Fantasy) that the most compelling and lifelike performance in A.I. comes from a computer-animated teddy bear.

New York Press, July 24, 2001

The Way of the Perfect Samurai


He wrote near the end that his life was divided into four rivers: writing, theater, body, and action. He memorialized all of it through photographs. Some were conventional. When Yukio Mishima came to New York with his wife for a belated honeymoon in 1960, they were photographed on the Staten Island ferry and before the Manhattan skyline, like any tourist couple.

A bodybuilder for the last two decades of his life, Mishima let his love of self-display cross into exhibitionism. Thus, the beautiful, homoerotic photographs: Mishima in a fundoshi, a loincloth, kneeling in new-fallen snow with a dai katana, the great sword of a samurai, or posing as Guido Reni’s St. Sebastian (complete with arrows). He even posed for Barakei (roughly, “Death by Roses”), a magnificently produced luxury book of extraordinary nude photographs, and was somehow disturbed by the letters he consequently received from various admirers requesting still bolder portraits—after all, he was a family man with a wife and two children.

Perhaps the four rivers joined in his most famous photograph: Mishima stripped to the waist, his chest bulging with muscle and gleaming with sweat, his brows knotted and eyes glaring, wielding a massive, two-handed, three-foot-long dai katana. It was an elegant weapon, made by the legendary 17th-century swordsmith Seki no Magoroku, and kept razor-sharp. About his head is a hachimaki, a white headband bearing the Rising Sun and a medieval samurai slogan, “Serve the Nation for Seven Lives.”

Yukio Mishima first came to New York in 1951 at twenty-five. Within the previous two years, he had published two outstanding novels, Confessions of a Mask and Forbidden Colors. The critics hailed him as a genius. He spent ten days in the city, going to the top of the Empire State Building, seeing Radio City Music Hall and the Museum of Modern Art, catching Call Me Madam and South Pacific. New York did not appeal to him: he found it, according to biographer John Nathan, “like Tokyo five hundred years from now.”

Mishima was born Kimitake Hiraoka, the eldest son of a middle-class family. Before he was two months old, his paternal grandmother took him from his parents and kept him until he was twelve. Her ancestors had been samurai, related by marriage to the Tokugawa, who were shoguns. She was chronically ill and unstable, yet she loved theater and took him to the great classics, such as The 47 Ronin, a magnificent celebration of feudal allegiance, of loyalty and honor even unto death, and perhaps the most stirring Kabuki play.

Through her family connections, Mishima entered the elite Peers’ School, and by fifteen he was publishing in serious literary magazines. He took the pen name Yukio Mishima to escape his father’s persecution (his father, a Confucian, considered fiction mendacity and destroyed his son’s manuscripts whenever possible). In 1944 he graduated as valedictorian and received a silver watch from the Emperor. His luck held: he failed an army induction physical and thus survived the Second World War.

From the beginning, Mishima’s productivity was stunning: in 1948, he published thirteen stories, a first novel, a collection of novellas, two short plays and two critical essays. On November 25, 1948, after retiring from a nine-month career at the Finance Ministry, he began his first major novel, Confessions of a Mask. Mishima brilliantly evokes his closeted protagonist’s awareness of being different and sense of unique shame. Within two years, Mishima revisited this theme in Forbidden Colors, now noting homosexuality’s ubiquity. Spending all that time in gay bars, taking notes, can do that to you. Besides, homosexuality occupies a different place in Japanese culture than it does in ours. During the two centuries before Japan reopened to the West, some of its most flamboyant heroes were bisexual picaros whose panache and courage on the battlefield were equaled by delicacy and endurance in a diversity of intimate situations.

In July 1957, after Alfred A. Knopf published his Five Modern Noh Plays, Mishima returned to New York. (He told his biographer John Nathan that Knopf dressed “like the King in an operetta, or a whiskey trademark.”) Mishima was interviewed by The New York Times, met Christopher Isherwood, Tennessee Williams, and their friends, saw eight Broadway shows, and went several times to the New York City Ballet.

He returned to Japan to find a wife, which was not as easy as one might think. Although marriages were still often arranged, and he was one of Japan’s most distinguished men of letters, Mishima’s affect was apparently not particularly attractive. (A weekly magazine had polled Japan’s young women on the question, “If the Crown Prince and Yukio Mishima were the only men remaining on earth, which would you prefer to marry?” More than half the respondents preferred suicide.) Nevertheless, his marriage to Yoko Sugiyama proved successful. They stopped in New York on their belated honeymoon, where he saw two of his plays performed in English at the cutting-edge Theatre de Lys. They had two children and he was an attentive, devoted father.

The family lived in a house Mishima had ordered built in the Western manner. It has been described as Victorian colonial, perhaps because the language lacks words to better describe it. “For Mishima,” Nathan explains, “the essence of the West was late baroque, clashing colors, garishness…” He describes Mishima assuring his horrified architect: “I want to sit on rococo furniture wearing Levi’s and an aloha shirt; that’s my ideal of a lifestyle.”

From 1965 to 1970, he worked on his four-volume epic, The Sea of Fertility (Spring Snow, Runaway Horses, The Temple of Dawn, and The Decay of the Angel). “The title,” he said, “is intended to suggest the arid sea of the moon that belies its name.” It is his masterpiece, as he knew it would be.

At first glance, in taking as his theme the transformation of Japanese society over the past century, Mishima seems to be revisiting the tired, even trite conflict between traditional values and the spiritual sterility of modern life. One might better define this work as a lyric expression of longing, which he apparently believed was the central force in life: that longing led one to beauty, whose essence is ecstasy, which results in death. His fascination with death is erotic: he was drawn to it as most of us are drawn to the company and the touch of the beloved.

In his essay “Sun and Steel,” he wrote of “a single, healthy apple…at the heart of the apple, shut up within the flesh of the fruit, lurks the core in its wan darkness, tremblingly anxious to find some way to reassure itself that it is a perfect apple. The apple certainly exists, but to the core this existence as yet seems inadequate… Indeed, for the core, the only sure mode of existence is to exist and to see at the same time. There is only one method of solving this contradiction. It is for a knife to be plunged deep into the apple so that it is split open and the core is exposed to the light… Yet then the existence of the cut apple falls to pieces; the core of the apple sacrifices existence for the sake of seeing.”

Mishima stood about five feet, two inches. He glowed with charisma and an undeniable, disturbing sexuality. Every memoir testifies to his extraordinary energy. He was brilliant and witty, even playful. He had self-knowledge and a keen irony, and his own absurdities were often its target. He became politically active on the extreme right and in 1968 organized the Shield Society, which became his elegantly uniformed private army.

Both Japanese and Westerners testified to his extraordinary empathy—his ability to understand and respond to others. Thus his genius for conversation: the man who loved discussing the Japanese classics, Oscar Wilde, or the dozen shades of red differentiated in the Chinese spectrum could also discuss weightlifting or kendo or a thousand other subjects, each gauged to his listener. He could make his companion feel that he or she was the most important person in the world to him, which was a useful gift for a man who understood that he lived behind masks, or in a series of compartments, and that no one knew him whole.

In November of 1970, Yukio Mishima was forty-five. He’d published thirty novels, eighteen plays, twenty volumes of verse, and twenty volumes of essays; he was an actor and director, a swordsman and bodybuilder, a husband and father. He spoke three languages fluently; he had gone around the world seven times, modeled in the nude, flown in a fighter jet, and conducted a symphony orchestra. During the previous evening, he had told his mother that he had done nothing in his life that he had wanted to do.

On November 25, twenty-two years to the day from beginning Confessions of a Mask, he led a party of four members of the Shield Society to a meeting with the commanding general of the Eastern Army of the Japanese Self-Defense Force. He had finished revising the manuscript of The Decay of the Angel only that day; it was on a table in the front hall of his house, ready for his publisher’s messenger.

At army headquarters, armed only with swords and daggers, Mishima and his men took the commanding general hostage. They demanded that the troops be assembled outside the building to hear Mishima speak. A little before noon, with 800 soldiers milling about, Mishima leaped to the parapet of the building, dressed in the Shield Society’s uniform. About his head was a hachimaki, the traditional headband. He began speaking, but the police and television helicopters drowned out many of his words. He spoke of the national honor and demanded that the army join him in restoring the nation’s spiritual foundations by returning the Emperor to supreme power.

He had once said, “I come out on stage determined to make the audience weep and instead they burst out laughing.” It held true now: the soldiers shouted that he was a bakayaro, an asshole. After a few minutes, he gave up. He cried out three times, “Tenno Heika Banzai” (“Long Live the Emperor”), and stepped back.

He loved Jocho Yamamoto’s classic Hagakure, an 18th-century instruction manual for the warrior. Jocho states, “The way of the samurai is death…the perfect samurai will, in a fifty-fifty life or death crisis, simply settle it by choosing immediate death.”

Mishima had fantasized about kirijini—going down fighting against overwhelming odds, sword in hand. Now he kicked off his boots and removed his uniform until he wore only a fundoshi loincloth. He sat down on the carpet and took a dagger, a yoroi-doshi, in his right hand. He inhaled deeply. Then his shoulders hunched as he drove the blade into his abdomen with great force. As his body attempted to force out the weapon, he grasped his right hand with his left and continued cutting. The blood soaked the fundoshi. The agony must have been unimaginable. Yet he completed the cut. His head collapsed to the carpet as his entrails spilled from his body.

He had instructed Morita, his most trusted follower, “Do not leave me in agony too long.” Now, Morita struck down with Mishima’s dai katana. He was inept: the beheading required three strokes. Then Morita took his own life.

Mishima’s motives remain the subject of speculation: madness, burnout, or fatal illness. Some whispered that he might have enjoyed the pain. Others suggested he and Morita had committed shinju, a double love-suicide. Some argued esthetics. A reading of Sun and Steel suggests that suicide was the logical completion of his search for beauty. Others take him seriously. Perhaps it was a matter of honor, and his death the most sincere protest he could muster against modern life.

To this day, thousands of Japanese observe the anniversary of his suicide.

New York Press, May 9, 2000

Imagining Ahab

Next Tuesday, as part of a weekly movie series at Symphony Space, John Huston’s 1956 film version of Moby Dick will be shown in a double bill with John Ford’s The Searchers. The date is November 21, and I keep wondering whether Isaiah Sheffer, the artistic director of Symphony Space, knew when he made up the program that he was scheduling Moby Dick for the 180th anniversary of the incident that probably inspired it, give or take a few hours: the sinking of a Nantucket whaler by an enraged sperm whale

Relatives of Frank William “Billy” Tyne, who captained the ill-fated Andrea Gail and went down with six of his crew during the brutal 1991 storm off New England, are thundering mad at George Clooney’s portrayal of their kin.

In a lawsuit filed in U.S. District Court in Orlando, Fla., against Warner Bros., Tyne’s family claims that the movie “falsely depicted” Tyne as “emotionally aloof, reckless, excessively risk-taking, self-absorbed, emasculated, despondent, obsessed and maniacal.” (New York Post, August 29, 2000)

Next Tuesday, as part of a weekly movie series at Symphony Space, John Huston’s 1956 film version of Moby Dick will be shown in a double bill with John Ford’s The Searchers. The date is November 21, and I keep wondering whether Isaiah Sheffer, the artistic director of Symphony Space, knew when he made up the program that he was scheduling Moby Dick for the 180th anniversary of the incident that probably inspired it, give or take a few hours: the sinking of a Nantucket whaler by an enraged sperm whale in the South Pacific. The name of the ship was the Essex, and she went down on November 20, 1820.

The Essex disaster is the subject of Nathaniel Philbrick’s In the Heart of the Sea: the Tragedy of the Whaleship Essex, up for a National Book Award this week. I gather that Philbrick’s competition in the category of nonfiction is the great literary critic Jacques Barzun. All the same, it will be a shame if In the Heart of the Sea doesn’t win. Not only is it a thumping good read, more so even than your average first-rate humdinger of a sea-disaster story; it’s also an interesting piece of cultural criticism. In Philbrick’s book, everything one has never really understood about Moby Dick—why Ahab was kicking up such a rumpus, all that stuff about good and evil, and Calvinism and paganism, the incessant jokes about cannibals, even the footnotes and digressions—is all made intelligible through being put in the context of the Nantucket whaling industry.

Reviews of the book largely focused on two elements of the story, playing up the sensational aspect and oversimplifying a literary point. In order to survive, the crew of the Essex, adrift for three months with only food and water enough for half that time, had been forced to resort to cannibalism, actually eating the bodies of their dead shipmates. Worse still, they had, at one point, gone so far as to sacrifice one of their number, drawing lots to determine who the victim and his executioner would be. It’s a haunting, horrifying tale; but almost more compelling than the story itself are Philbrick’s insights into why it so resonated with people at the time, Melville among them.

The publicity material that accompanies the book describes the Essex incident as the inspiration for the ending of Moby Dick. In fact, Philbrick suggests (if he doesn’t come right out and say so) that the Essex story must have been a thematic starting point for the whole novel. More shocking even than the means by which the men of the Essex sought to survive was the unprecedented phenomenon of a whale attacking a ship. Such a thing had never happened before. Nantucketers went after whales, not the other way around. That was how it was supposed to be.

rathjen_kent_2_200

Even when whales did fight back, moreover, they did so in a predictable, time-honored fashion—with their jaws and tails. But this whale had rammed the ship with its head—not once but twice—and had gnashed its teeth “as if distracted with rage and fury,” as first mate Owen Chase wrote in his account of the ordeal. Chase thought the whale’s behavior a result of cool reasoning, that it had attacked the ship in the manner “calculated to do us the most injury,” knowing that the combined speeds of two objects would be greatest in a head-on collision and the impact therefore most destructive.

The image of the enraged, vengeful whale is the cornerstone of Moby Dick, of course. It’s what lies behind Ahab’s quest and his obsession, a point so obvious that it only needs to be made in passing.
(“To be enraged with a dumb thing, Captain Ahab, seems blasphemous,” says Starbuck, but mostly so that Ahab can answer, “I’d strike the sun if it insulted me.”) Ahab regards Moby Dick much as the crew of the Essex seem to have viewed the whale that attacked them, as a creature capable of rational action. One of the fascinating questions Philbrick raises, though, if only by implication, is where so weirdly modern a notion could possibly have come from. The Nantucketers who had harvested whales for generations, he points out, saw their vocation as part of the Divine Plan. But to ascribe rage to an object of one’s own aggression is to come perilously close to admitting a sense of guilt. You cannot, after all, discern anger or moral outrage in a fellow being unless you also grant it a point of view.

I am on something of a Melville kick just now. It started back in the summer when I went to see The Perfect Storm. That put me in a really foul mood, and I had to rent the Huston Moby Dick as an antidote. Not that I held any brief for Sebastian Junger’s book—I hadn’t read it. There are just certain themes and tropes that I expect to be moved by, and when I’m not, I know I am in the presence of fools: scenes of someone pulling away from land while someone else is left on shore; a chorus of voices singing “For Those in Peril on the Sea”; shots of a wall of names; glimpses of Leonard Craske’s famous statue commemorating the fishermen of Gloucester. You know the one: doughty mariner at the wheel, braced against the wind and peering out to sea above a fragment of the 107th Psalm (“They that go down to the sea in ships, that do business in the great waters; These see the works of the Lord and his wonders in the deep….”).

craske

The Perfect Storm had all of those elements, but it was imbued with a sort of “Wreck-of-the-Hesperus” mentality—the kind of thinking according to which, say, if the young lady lashed to the mast in the Longfellow poem had not been possessed of “a bosom white as the hawthorn buds that ope in the month of May,” the whole incident would have been somehow less worthy of our attention. This was a movie that thought the Gloucester men lost at sea in a 1991 hurricane had to be played by movie stars like George Clooney and Mark Wahlberg in order for us to take an interest in them. It assumed that in order for their story to be tragic or poignant it would have to be established that one of them had a girlfriend and a mother who were going to miss him, that another one had a little boy who would be sad without his daddy, that a third, who never seemed to have much luck with women, had (irony of ironies) just met one with whom things might have worked out. I couldn’t understand it. How could you make a sea-disaster story so unutterably boring—particularly one based on a real-life incident? “Truth is boring,” said a friend, and I had no answer. It was only later that I realized what I ought to have said: “Truth is never boring; that’s reality you’re thinking of.”

The Perfect Storm got me started on Melville, but there were other things. I read the Philbrick book. Then Elizabeth Hardwick’s Melville entry in the Penguin Lives series came out, and a friend whose eyesight is going and who was listening to a recording of Moby Dick suggested that listening to the novel was actually the best way to experience it.
So I did, and I found that she was right: that because it takes longer to hear things than it does to read them, images and phrases linger on in the mind, making more of an impression, so that you’re in a better position to see connections and confluences. It has something to do with the physics of time, sound, memory, and imagination.

Finally, there was Rinde Eckert’s intriguing music-theater piece And God Created Great Whales, which swung through New York a couple of times. It concerned a brilliant but maimed and narcissistic piano tuner trying to write an opera based on Moby Dick, and it was striking for the way it heaped ridicule on such a foolish idea and at the same time succeeded in translating the intellectual impulse that Moby Dick is about into a piece of musical theater. It offered the same juxtaposition of human aspiration with human frailty. Eckert’s piano tuner (he sang the role himself) had immortal longings, but he also had a degenerative disease that entailed progressive memory loss. Consequently, he couldn’t remember from one moment to the next what he was writing, had written, or had been planning to write. He had a tendency to go off the deep end, so to speak, careening off into some never-never land of passionate philosophical musing or theoretical arcana. Fortunately, he had a muse, half imagined, half remembered—a retired diva he’d once known, who had perhaps loved him or whom he perhaps had loved—and she usually managed to put him back on track.

The whole thing was cyclical, or nonlinear anyway, going back and forth between tape-recorded messages the composer had left for himself and scenes from the opera that he was trying to write. And God Created Great Whales was part satire and part serious meditation on creation and creative failure. It was about a man who threatened his own muse, and it asked whether this was an act of heroism or folly. Eckert’s composer was destroying Moby Dick in trying to adapt it—not because he didn’t understand the novel but because of the nature of art. And watching Eckert go back and forth between imagining Ahab and impersonating him, one gradually understood the tragedy in the composer-hero’s predicament: his nagging fear that what makes something operatic is also what makes it trite.

This is actually the point that Melville makes time and time again in the digressive sections of the novel (and what comes across when you listen to a recording). The definitions and catalogs and histories and phylogenies that have led students and critics of Melville to wonder if he was quite in his right mind are ultimately all about the impossibility of telling the story, of painting an accurate picture of the truth. There’s an extraordinary illustration in Philbrick’s book: an 18th-century map of the Island of Nantucket that, Philbrick points out, against all accuracy makes the harbor into the shape of a whale. Actually, there are two whales in the picture: the island itself forms another. It’s an index of how far Nantucketers allowed the specter of the whale to obsess them—they literally recast their own world in its image. They also imitated it themselves, unconsciously. One of Philbrick’s most telling insights concerns the way in which the society that whaling created—with its cycles, its matriarchal structure, and its long, long stretches of male absenteeism—perfectly mirrored the natural movements of whales themselves.

moby_1

Is Moby Dick Ahab’s muse or his nemesis?
Does the blasphemy consist in making the whale human, or is it the other way around? And God Created Great Whales pokes fun at creative impulses; at the same time, it has great sympathy for its protagonist. It’s as though Eckert were saying, “Yeah, trying to turn Moby Dick into an opera is dumb, maybe. But it’s better than not having the urge to turn Moby Dick into an opera.” The attempt results in some foolish moments; it also produces moments of great beauty that may or may not owe anything to Melville.

I’ve always had a soft spot for the John Huston version of Moby Dick. Most of my favorite bits never occur in the book at all: the prophecy, the wonderful scene where another captain begs Ahab to suspend the hunt and help search for his lost son instead, the scene where Queequeg comes out of his trance because someone is threatening Ishmael. None of that stuff is in the book. I don’t care. I love it.

What I love best, though, is something that is in the book, only in a different form. It’s what was lacking in that silly George Clooney movie—so much so that one can understand Billy Tyne’s family wanting to sue the filmmakers. They’re absolutely right. They’re saying the movie version of The Perfect Storm turned Billy Tyne into Ahab, but without giving us any inkling of the forces driving him on. For that you have to go to the John Huston film. You see it on the face of an old sailor with a concertina, playing a few notes of a wistful air, and on the faces of the women standing on the dock as the Pequod pulls away—a sense of tragic inevitability.

New York Press, November 21, 2000

kent-whale1

Nassau Street

Nassau Street was named some time before 1696 in honor of William of Nassau, the Dutch prince who became King William III of England in a 1689 coup d’etat. Now largely a pedestrian mall, it winds south from its intersection with Park Row at Printing House Square to Wall Street. Much of it is lined with late-Victorian office buildings, their imposing masonry and cast-iron facades rising almost unnoticed above the frenetic retailing on their ground floors.

“Nassau Street—where stamp collecting began.” (Old advertising slogan of the Subway Stamp Co., formerly of 111 Nassau Street in lower Manhattan)

Nassau Street was named some time before 1696 in honor of William of Nassau, the Dutch prince who became King William III of England in a 1689 coup d’etat. Now largely a pedestrian mall, it winds south from its intersection with Park Row at Printing House Square to Wall Street. Much of it is lined with late-Victorian office buildings, their imposing masonry and cast-iron facades rising almost unnoticed above the frenetic retailing on their ground floors.

For roughly a century, from the 1860s through the 1970s, Nassau Street was the mecca of American philately—postage stamp collecting. Some called the neighborhood the Stamp District. Entire buildings, like the Morton Building at 116 Nassau, were filled with stamp dealers. Sanders Zuckerman, who has been selling stamps in the area for fifty-nine years—the Daily News proclaimed him “a legend in the stamp business”—says collectors came from all over the world to buy and sell stamps.

Stamp collecting was a new fad in the 1860s. The first postage stamp, Great Britain’s one-penny black, had been issued only in 1840; the first known American stamp collector, William H. Faber of Charleston, South Carolina, began collecting in 1855. New York’s first stamp dealers appeared in the early 1860s. They did business along the fences of New York’s City Hall Park, where stamps were pinned up on boards for the delectation of passersby.

Open-air merchants—whether street pharmacists dealing in controlled substances or vendors selling souvenirs from a cart—are marginal people, engaged in what the Marxists call the early stages of capital accumulation. The man who made stamp dealing a business and Nassau Street the center of American philately was John Walter Scott (1845-1919). Scott had dabbled in stamp dealing in his teens while working as a merchant’s clerk in London. He emigrated to New York in the summer of 1863. At first, this did not seem to be a good idea. There were no jobs: the draft riots in early July had devastated much of the city. Scott’s job search was so unsuccessful that he even considered enlisting in the Union army.

One day, according to Edwin P. Hoyt’s One Penny Black, Scott struck up a conversation with an outdoor stamp dealer in City Hall Park. The dealer advanced him about a hundred dollars’ worth of stamps, which Scott agreed to sell as his agent. He was amazingly successful: he was soon earning $30 a month, roughly the wages of a skilled workman, and quite enough for a single man to live on. Scott then wrote to his sister, who began buying and sending stamps to him from England, and he went into business for himself.

In 1868, he opened an office on Nassau Street. He had been issuing one-page monthly price lists since June 1867. In September 1868, Scott issued a paperbound booklet, A Descriptive Catalogue of American and Foreign Postage Stamps, Issued from 1840 to Date. With the knack for self-important publicity that marked or marred him throughout his career, Scott trumpeted the pamphlet as the “16th edition” of his catalog. This was because he was counting each of his one-page lists as a separate earlier edition.

In the same year, he published a stamp album, a book with blank pages on which collectors might affix their stamps. He also started the American Journal of Philately. He was not the first American philatelic journalist: S. Allan “Just-as-Good” Taylor had first published his Stamp Collector’s Record from Montreal in December 1864. (A brilliant counterfeiter, he openly insisted his stamps were “just as good” as the real things.) Scott finessed this fact, as he did most facts that inconvenienced him: his official biography says that he published the first “important” American stamp journal.

Truth presented no barrier to the vaulting imagination of J.W. Scott. He claimed sales of 15,000 albums. There were then probably not 15,000 stamp collectors in the world. His competitors claimed Scott had reduced lying to a science. No one cared.

Like most entrepreneurs, Scott was extraordinarily self-interested. A true child of the Gilded Age, he would turn a blind eye to others’ dishonesty if he could turn an outwardly licit dollar from it. Thus, he often dealt with “stamp finders,” men and women whose nose for rare stamps was often aided by a knack for larceny. Scott never asked where the stamps came from. One of his pet finders, known only as “Mr. McGinnity,” had “entered” the Philadelphia Customs House and raided its records for old stamps; another stamp finder raided the New York Institution for the Blind. He carried off numerous stamps clipped from its old correspondence, promising to return to pay for them. (The Institution is still waiting for the money.)

Scott also lobbied the United States government into cheating collectors by reprinting its old and valuable postage stamps. He even produced what were politely called “representations” of rare stamps, such as the so-called Postmaster stamps issued by individual American post offices before 1847, when the government began issuing its own. Such shenanigans put Scott, in some ways, on a par with “Just-as-Good” Taylor.

Taylor’s boast that his counterfeits were better than the originals was often true. (One scholar characterized Taylor’s forgeries as “fine engravings, totally different from the crude typographic printing” of the real stamps.) By the early 1870s, Taylor was part of the “Boston Gang” of crooked dealers and journalists, specializing in inventing South American issues. Years before El Salvador, Guatemala, Haiti, and Paraguay had released their first stamps, for example, the Boston Gang was printing and selling bogus stamps from these countries, backed by supposedly official documents, which were themselves forgeries. Taylor published equally fictitious articles about these stamps in his magazine, which helped create a market for his product. Only an age that combined slow communications with exploding collector demand for exotic stamps made this possible, and, at the end, only a federal counterfeiting rap brought him down.

Other hustlers were equally artistic, like Sam Singer, the repairman. Torn or mutilated stamps have no value to collectors. According to Hoyt, Singer could take a half-dozen mangled stamps and from them manufacture a composite that fooled most collectors and dealers. Like Taylor, he was proud of his work: he became so good that he sometimes bought stamps that he himself had repaired, not realizing until later that they had been damaged and mended. When the millionaire collector Colonel Edward H.R. Green found himself with one of Sam’s specials, he purchased a magnifier that could enlarge a stamp’s image from one inch to four feet square. It cost him $22,000; the movers had to remove the doorframe to bring it into the Colonel’s townhouse on West 90th Street.

In this century, Nassau Street’s most flamboyant dealer was actually an honest man. Herman “Pat” Herst Jr. (he was born on St. Patrick’s Day, March 17, 1909, which led his friends to nickname him Pat) graduated from Reed College and the University of Oregon in 1932. He then came east by jumping a slow freight and riding the rods. He landed a twelve-dollar-a-week job as a runner for Lebenthal & Co., the municipal bond brokers, that took him into the Stamp District, where he met several Lebenthal clients who collected stamps when not clipping coupons. They rekindled his childhood interest in philately: he began buying and selling stamps as a vest-pocket dealer. By 1936, Lebenthal was paying him $28 a week; his stamp dealings earned twice that, and he left Wall Street for Nassau. His business became so heavy that he welcomed an elevator operators’ strike: it let him catch up on his paperwork.

He published a newsletter, Herst’s Outbursts, from 1940 until 1968. It charmingly combined self-promotion, anecdotes about stamps, and a passion for trivia. (A friend once asked, “Pat, what’s the population of Cincinnati?” Herst replied, “Yesterday or today?”) He also published columns and articles in the philatelic press. Eventually, he recycled his journalism into a series of popular books. Nassau Street, his memoir of stamp dealing in the 1930s and 1940s, has sold more than 100,000 copies in seven editions since 1960.

Herst was among the first dealers to abandon the bustle of Nassau Street. In 1945, he moved his family and his business to Shrub Oak, N.Y., then a hamlet with a population of 674. As he received more than 100,000 pieces of mail a year, the local post office was immediately reclassified from third to second class. However, at that time even a second-class post office did not make household deliveries. From his love of trivia, Herst knew that an 1862 law permitted private posts under just these circumstances. With the help of his children and their German shepherd, Herst’s private post delivered mail door to door for two cents a letter. Naturally, he issued his own stamps, including one depicting the dog. Most went to collectors.

Though now headquartered in Ohio, Scott’s still publishes its annual catalog of the stamps of the world. From J.W. Scott’s one-page “first edition” it has grown to six massive paper-bound volumes. Scott’s also publishes numerous stamp albums, including the renowned Scott’s International. Volume 1, which is somewhat thicker than the Manhattan Yellow Pages, houses nearly every stamp issued by every nation in the world between 1840 and 1940. Volume 2 reaches only to 1949. Subsequent albums now appear roughly every year to accommodate the gushing flow of stamps from every nation in the world, most meant for sale to collectors rather than for postal use.

Nassau Street is no longer the mecca of American philately. Even Sanders Zuckerman characterizes himself as the last of the dinosaurs. Gentrification, soaring taxes, rising commercial rents, and increasing competition from mail-order dealers operating from low-tax, low-rent states forced most dealers to move or close during the late 1970s. Today, the Verizon Yellow Pages lists only three dealers in the Nassau Street area under “Stamps for Collectors.”

Zuckerman, who operates Harvey Dolin & Company from 111 Fulton Street, usually wearing a necktie with a pattern of postage stamps, also sells coins, baseball cards, and World’s Fair and World War II memorabilia to get by. He says young people don’t collect stamps. When recently asked why he was still in business, the old man shrugged. “I like the place and I like the people,” he said. “I’m not going to retire till they close the lid on me.”

New York Press, November 5, 2002

Education by Degrees

I first heard of John Bear in 1990, when a man from Michigan named Bob Adams told me about the Ethiopian ear-pickers. In 1966, Southern Methodist University gave Bob Hope an honorary doctorate after the entertainer gave it a substantial donation. Up at Michigan State University, John Bear, earning his doctorate the hard way, resented this. He founded the Millard Fillmore Institute to honor

Bears’ Guide to Earning Degrees by Distance Learning, Ten Speed Press, PO Box 7123, Berkeley, CA 94707, $29.95, www.tenspeed.com

I first heard of John Bear in 1990, when a man from Michigan named Bob Adams told me about the Ethiopian ear-pickers. In 1966, Southern Methodist University gave Bob Hope an honorary doctorate after the entertainer gave it a substantial donation. Up at Michigan State University, John Bear was earning his doctorate the hard way. Bear resented this. He knew that President Fillmore refused all honorary doctorates, even from Oxford. Bear then founded the Millard Fillmore Institute to honor the 13th president’s memory. The Institute awarded doctorates with ornately engraved diplomas on genuine imitation parchment that read, “By virtue of powers which we have invented…” granting “the honorary and meretricious” doctorate “magna cum grano salis”—with a big grain of salt.

Six years later, while studying in London, he tried the same thing on a larger scale. He and some friends created the London Institute for Applied Research and ran advertisements in American publications: “Phony honorary doctorates for sale, $25.” Several hundred were sold, presumably keeping the promoters in whiskey and cigars. As Bear wrote, half the world’s academic establishment thought L.I.A.R. was a great gag. The other half felt it threatened life as we knew it. After wearing out the joke, Bear traded the remaining diplomas to a Dutchman for 100 pounds of metal crosses and Ethiopian ear-pickers. (The Dutchman is still selling them—for $100 a piece.)

With this kind of experience, Bear first published Bear’s Guide, his profoundly serious and wildly funny guide to alternative higher education, more than a quarter-century ago. The latest edition, the 14th, crossed my desk last week. This is probably the best available practical guide to obtaining legitimate college degrees without full-time attendance in a conventional college setting, whether through correspondence, independent study, college credit through examination or life-experience learning, or the Internet. As Bear notes, in 1970, if one wanted to earn a degree without sitting in a classroom for three or four years and wanted to remain in North America, one had two choices: the Universities of London and of South Africa. Today, one has more than 1000 options.

I loved my completely traditional undergraduate experience, down to the last mug of beer. But that was a quarter-century ago, when one could pay a year’s tuition with the money one earned over the summer as a dishwasher. That isn’t the case anymore.

Also, American college education is more about obtaining a credential than inheriting the intellectual legacy of the West. I regret this; so, I sense, does Bear. This is part of a phenomenon that might be called “credentialism.” One might define it as the pursuit of false objectivity in personnel decisions: the substitution of credentials—particularly academic diplomas—for the analysis of character, intelligence, and ability, or even for the intelligent exercise of judgment in hiring, firing, and promoting.

Bear argues that an academic degree is more useful to one’s career than practical knowledge. Whether this is good for society is beside the point. He illustrates the argument with an anecdote about a telephone call from the man in charge of sawing off tree limbs for a Midwestern city. The city government had decreed that all agency heads must have baccalaureates. The head sawyer didn’t have one. If he didn’t earn a degree within two years, he would lose the job he had competently performed for two decades. The reality of his competence was immaterial to someone else’s need for false objectivity.

We in New York are not immune from this. The city government now requires applicants for the police examinations to have sixty college credits. Yet no one who has attended college would argue that accumulating credits raises barriers to brutality or provides a sure test of intelligence, industry, courage, and character.

To Bear, traditional education awards degrees for time served and credit earned, pursuant to a medieval formula combining generalized and specialized education in a classroom on a campus. The kind of nontraditional education emphasized by his book awards degrees on the basis of “competencies” and “performance skills,” using “methodologies” that cultivate self-direction and independence through planned independent study, generally off campus.

Granted, nontraditional routes are now radically less expensive. One can obtain a bachelor’s degree from New York’s Excelsior College (formerly Regents College) or New Jersey’s Thomas Edison State College without stepping into a classroom. For example, Excelsior awards degrees to persons who have accumulated sufficient credits through various means, including noncollege learning experience such as corporate training programs, military training and professional licenses; equivalency examinations such as the College-Level Examination Program (CLEP), the Defense Activity for Non-Traditional Education Support (DANTES), and the Graduate Record Examinations (GRE); its own nationally recognized examination program; and even educational portfolios evaluated through its partnerships with other institutions, such as Ohio University.

However, in a world that cheapens the humanities to a mere credential and refuses to evaluate intelligence, experience, and common sense, it’s a short step to advancing one’s career through exaggeration and even downright deceit. Remember that a diploma is merely a document evidencing the holder’s completion of a particular course of study.

Even the once-sacred transcript, the official record of the work one has done to earn a degree, is no longer written in stone. Creative use has been made of color copiers and laser printers to alter records; college computer systems have been hacked into–in some instances for fun and in others in order to alter records for profit.

Actually, it would seem that finagling has always been part of the American doctoral tradition. Bear reports that the first American doctorate came about in the following way.

In the beginning, only someone with a doctorate could bestow one on another person. At the end of the 17th century, however, Harvard’s faculty had no instructors with doctorates. Its president, Increase Mather, belonged to a religious sect that was anathema to the Church of England and hence legally ineligible to receive a doctoral degree from any English university. Harvard’s faculty, which then consisted of two people, solved this problem by unanimously agreeing to award Mather an honorary doctorate. Mather, in turn, conferred doctorates upon his instructors. And they began doctoring their students.

Yale awarded America’s first professional doctorate when Daniel Turner, a British physician, gave Yale some fifty medical textbooks. Yale awarded him an M.D. in absentia. (Turner never set foot in America). Some, according to Bear, suggested that the M.D. must stand for multum donavit (“he gave a lot”).

As one might expect, Bear also discusses  the anomaly of the honorary degree. In a country whose government is forbidden from granting titles of nobility, higher education fills the gap with honorary doctorates, which are simply titles bestowed for various reasons upon various individuals. Bear suggests an analogy to an army granting the honorary rank of general to a civilian who may then use it in everyday life.

Of course there are doctorates and there are doctorates. My alma mater grants honorary doctorates to a few distinguished men and women every year. Among them, invariably, is the chief executive of some corporation whose foundation has made a substantial contribution to the college’s endowment. The Rev. Kirby Hensley’s renowned Universal Life Church, which awards an honorary Doctor of Divinity degree to anyone who ponies up $30 (it used to be only $5), merely takes this to its logical extreme.

My favorite chapters in Bear’s book discuss phony degrees and diploma mills, some of which operate wildly beyond the law. In 1978, one diploma mill proprietor was arrested as Mike Wallace was interviewing him for 60 Minutes. Usually unaccredited, usually operating in one of the handful of states that barely regulate private higher education (currently Hawaii seems the happy hunting ground of the degree mill), such institutions flourish because people want to avoid the work involved in getting a real degree. After 60 Minutes aired its program, the network received thousands of telephone calls and letters from people who wanted the addresses and telephone numbers of the diploma mills exposed by the program.

And who can blame them? In some states, a doctorate from a one-room Bible school is sufficient to set up practice as a marriage counselor and psychotherapist. At least one major figure in the New York City Parking Violations Bureau scandals had been a marriage counselor on the strength of his advanced degrees from the College of St. Thomas in Montreal, Canada. This was a theological seminary sponsored by an Old Catholic church whose archbishop, a retired plumber (I met him once: his weakness for lace on his episcopal finery left me cold), operated the college from His Excellency’s apartment. Quebec did not regulate religious seminaries, and this allowed the archbishop to claim—accurately—that the degrees were lawful and valid. They were also worthless.

As Bear notes, in Hawaii and Louisiana the one-man church founded yesterday may sponsor a university today that will grant a doctorate in nuclear physics tomorrow. One Louisiana diploma mill successfully argued that as God created everything, all subjects were the study of God and therefore a religious degree. This may be theologically sound, but if I learned my physician held his M.D. from this school, I would get a referral.

As long as people value others more for whatever pieces of paper they can produce than for their qualities of mind and character, the diploma mill will flourish. But the intelligent careerist will use common sense and the guides of John Bear.

New York Press, September 24, 2002

Song Recycle

For Pete’s sake, why all the fuss about the Baz Luhrmann La Boheme? You’d think that no one had ever thought of updating classical opera before, or casting “realistically” trim and youthful romantic leads. The production, currently at the Broadway Theater, which brings the action forward to the 1950s, opened earlier this month to reviews that ranged from the rhapsodic to the studiously excited. Only Michael Feingold in the Village Voice had the taste and good sense

[From New York Press, January 7, 2003]

Baz Luhrmann’s La Boheme
at the Broadway Theater

Twyla Tharp’s Movin’ Out
at the Richard Rodgers

For Pete’s sake, why all the fuss about the Baz Luhrmann La Boheme? You’d think that no one had ever thought of updating classical opera before, or casting “realistically” trim and youthful romantic leads. The production, currently at the Broadway Theater, which brings the action forward to the 1950s, opened earlier this month to reviews that ranged from the rhapsodic to the studiously excited. Only Michael Feingold in the Village Voice had the taste and good sense to recognize it for what it is, “a well-meaning and conventional little event.”

Actually, it’s a reconstituted version of a well-meaning and conventional little event, a staging by Luhrmann and his wife, the designer Catherine Martin, from more than a decade ago. Created in 1990 for Opera Australia, it has been filmed and is broadcast from time to time on public television. I’ve happened on it myself two or three times. The first time I was transfixed. But that’s a long time ago now, and nothing stinks like old directorial “concepts.” In that production, moreover, Mimi and Rodolfo were sung, respectively, by the London soprano Cheryl Barker and an Australian tenor named David Hobson, two artists whose musical chops excelled even their striking good looks. For Broadway Luhrmann has assembled an “international” cast that includes three different Mimi–Rodolfo pairs, two Musettas and two Marcellos. Some of these, like the Marcello I saw (Eugene Brancoveanu), and like the singers who play Rodolfo’s other two flatmates (Daniel Webb and Daniel Okulitch), may be perfectly good actors with perfectly good instruments.

But the Mimi I saw, a Russian soprano named Ekaterina Solovyeva, looked (in her Act I trenchcoat and bad platinum wig) like a character in a Saturday Night Live sketch—and seemed about as fragile. She had a tendency to stagger forward unconvincingly every time she coughed, bending at the waist and clutching her stomach like one suffering from acute gastric distress rather than consumption. As for David Miller, her Rodolfo, it would be hard to envision a more wooden performer.

Ironically, though, the main drawback to this Boheme may arise from its chief asset. Ms. Martin’s scenery, of which there is plenty, seems to have created a rather serious acoustical problem; this, in turn, is exacerbated by a sound system so poorly designed (or manned) that voices appear separated not only from their point of origin but also, at crucial points in the score, from each other. I heard the opera from two rows in front of the balcony overhang; the night I attended, the children’s chorus seemed disembodied and off-key, and during all of those heart-rending melodic juxtapositions between the two pairs of lovers, what Mimi and Rodolfo were singing sounded miles away from what Musetta and Marcello were singing.

Elsewhere, the production is peppered with arresting if familiar images, chief among them that of the two leads embracing before Luhrmann’s signature “L’Amour” sign. All of which is to say that Luhrmann’s Puccini is to the music what his Shakespeare was to the acting (his Romeo + Juliet starred those giants of classical acting, Claire Danes and Leonardo DiCaprio). It’s full of clever visuals and fine as long as you don’t care at all about the music.

Luhrmann has stated over and over again that his purpose in mounting this production was to make the opera more “accessible” to young audiences. That seems like humbug. It would be hard to think of an opera more innately accessible than Boheme, whose plot was almost universally familiar even before Rent, and whose score would evince emotion in a fish—even one with no knowledge of Italian. The libretto becomes no more or less accessible for being set in the bohemia of 1950s Paris than that of 1830s Paris. It’s far more likely that Luhrmann resurrected this production so as to squeeze every last penny out of it, capitalizing on the unaccountable success of his Moulin Rouge, another period extravaganza set in artistic Paris.

High art and popular culture really meet in Movin’ Out, Twyla Tharp’s electrifying new full-length ballet at the Richard Rodgers. Built around a score made up entirely of songs by the pop balladeer Billy Joel, the show isn’t so much an attempt to popularize the form of ballet as it is a statement about “high” and “low” art and part of an ongoing dialogue Tharp has been having with her audience for decades.

It’s an example of what Tharp has termed “crossover” ballet, a form she helped to invent, and which she has been exploring since the early 70s, when she galvanized the dance world with a piece for the Joffrey Ballet called Deuce Coupe, choreographed to Beach Boys songs. Since then, Tharp has created ballets using ragtime, jazz, blues, Tin Pan Alley standards and new-wave rock, and juxtaposed classical and baroque music with contemporary dance idioms—all partly with a view to suggesting that demarcations between “serious” and popular forms are artificial and limiting in terms of what we allow ourselves to derive from art.

Joel seems at first glance an unlikely collaborator for Tharp. For one thing, he’s not a songwriter of the first water. He’s not a gifted lyricist or melody-maker. For another, he takes himself very seriously as a musician and a composer. Unlike Elton John, the pop singer-songwriter to whom he is most frequently likened (the two are actually touring together just now), Joel has pretensions. His last album, a selection of classical pieces, some of which are used in Movin’ Out, is on sale in the theater lobby: it uses the old Schirmer’s trademark—the narrow green-black lettering and laurel-leaf border stamped on bright matte gold—for its cover art. It doesn’t allude to the design or incorporate it into some piece of original artwork; it simply reproduces it.

This is essentially Joel’s approach to songwriting. The doo-wop stuff, the pseudo-classical pieces, the hard rock anthems, the soft rock ballads, the Springsteen imitations—24 of which are performed, in Movin’ Out, by a ten-man band (led by Michael Cavanaugh at evening performances and Wade Preston at matinees) from a hydraulic lift that spans the stage—it’s all pastiche but without the wit and knowingness that would make it artful. (This is a man who sets lyrics to the second movement of Beethoven’s “Pathétique.”) Joel is like a comic whose repertoire consists of doing impressions of other comedians. He can mimic. What’s missing is the impulse to comment and transform, to offer esthetic input. In this, he couldn’t be more unlike Tharp, who quotes others (and sometimes herself) for the purpose of saying something new.

In fact, one of the remarkable aspects of Movin’ Out is the way in which Tharp, consciously or not, manages to use Joel’s essential mediocrity—drawing on his banality and his propensity for cliché—so as to create places where the form of dance can supply a subtext. Essentially, what Tharp has done is to fashion out of an array of seemingly hackneyed tropes and images from American popular culture (many of them supplied or exemplified by Joel) a narrative that serves the purpose of dance the way certain age-old stories and popular legends served earlier choreographers. She weaves a generic narrative—a familiar, almost boilerplate Vietnam-era story of youthful idealism, disillusion, loss, and restoration—around a set of characters whose names are culled from Joel’s lyrics, who come from his hometown of Hicksville, Long Island, and who metaphorically live their lives against a backdrop of his songs.

Tharp doesn’t make her narrative conform to Joel’s lyrics, though. Her use of the songs is tremendously varied. Rarely do the lyrics actually reflect what a character might be feeling, and they almost never correspond directly to the action. More often, Tharp is content to have what’s happening onstage bounce glancingly off the subject or mood of a song, making contact at only one or two points. Sometimes a song will offer an ironic contrast with what is happening in the story. At one point Tharp uses a love song (“She’s Got a Way”) objectively, as music that’s actually playing while characters are literally dancing. We can’t help noting the contrast between the vacuousness of the lyric and the very real poignancy of what is happening onstage, in which characters thousands of miles apart, separated by war, are dancing with people they don’t know and don’t care about while thinking about each other.

The show is almost completely dialogue-free. The “libretto” resides in the conjunction between what we see and what we hear. What makes Movin’ Out Tharp’s show rather than Joel’s, however, is not this or the fact that of the two casts who dance the principal roles the first is made up entirely of dancers from her own company (Ashley Tuttle, Benjamin Bowman, Elizabeth Parkinson, Keith Roberts and John Selya) or the set by Tharp’s longtime collaborator, Santo Loquasto, but the degree to which, like most of Tharp’s work, it’s about dance itself.

Superficially, Movin’ Out is about Eddie, his girlfriend Brenda, his pals James and Tony, and James’ girlfriend Judy. Brenda dumps Eddie and goes off to find herself while James and Judy get engaged. Brenda reencounters Tony and takes a second look at him. But then the boys go off to war. James is killed in combat trying to look out for Eddie and Judy is left grieving. Eddie and Tony come back from overseas transformed. Tony, violent and uncommunicative, enters into an abusive relationship with Brenda while Eddie hits bottom, hanging out on skid row with drunks and junkies, shooting up a lot. None of this is remotely interesting, but none of it is remotely important either. What infuses the piece with meaning and drama are Tharp’s departures from the balletic structures and forms she has used to tell the story. What happens to the characters isn’t the point. What’s important is what happens to the way they dance.

This manifests itself in terms of the three things that Movin’ Out is more fundamentally about. They’re the three things that more than anything else absorbed the generation that came of age in the 60s: growing up, Vietnam, and “relationships”—not love but literally relationships. The politics of partnering—how to be free, who was more important, where would the power lie? There’s a three-way pun in the title. The title song is about a young man leaving home to start life on his own.

And it seems such a waste of time
If that’s what it’s all about
Mamma, if that’s movin’ up then I’m movin’ out.

The title phrase has a military sense, though, too. (One speaks of troops “moving out.”) It could also evoke the end of a relationship.

Reviews of the ballet, when it first opened, took gentle exception to the fact that Tharp doesn’t provide an explanation for the breakup that takes place during the first number. I thought she made it perfectly clear in the language of dance why the relationship founders. Both characters regard themselves as soloists—the man, Eddie, in particular. He’s thoroughly involved in his own heroic moves. As performed by John Selya, the extraordinary young dancer who plays the role in the evening cast, Eddie is epic, mesmerizing, Baryshnikov-like. But he never looks at Brenda when she dances. This is bad etiquette from a balletic standpoint. Courtesy demands that the traditional hero and heroine of romantic ballet take delight in one another’s virtuosity. That is, after all, what each is falling in love with in the other. If we didn’t know that going into the performance, we learn it from the couple who dance primarily in the language of the classical pas de deux. James never takes his eyes off Judy. Tharp seems to have given Eddie a vocabulary that precludes partnering. No wonder Brenda ditches him.

By the same token, Brenda seems drawn to Tony because his idea of partnering demands input from her. He wins her with a kind of challenge step. It’s a device you see a lot in backstage musicals. One dancer improvises a fancy step that the other picks up. The first dancer is impressed and this creates heat. Then the second dancer does a fancier step that the first has to pick up. More heat. My sense was that by not offering a motivation for the breakup that could be read on any other plane, Tharp was forcing us into reading the choreography in a way we might otherwise have escaped having to do. Tony’s approach to wooing is fun because it’s competitive—almost combative—though ultimately about the joy of the dance; and that’s significant because of the way Tony dances when he’s alone. Movements of Tony’s that start classically tend to turn martial halfway through. With the advent of the real war that Tony naively fantasizes about, the joy disappears from the idea of conflict, and a different style takes over, one that marries Eddie’s epic prowess to Tony’s aggression. And this remains the dominant mode until the idea of the agon is resocialized in a grand, ecstatic pas de deux between Brenda and Tony.

I asked a friend who had seen Movin’ Out to help me think of a word that would take into account the three notions of separation inherent in the show’s title—romantic, military, and parental. She asked if there wasn’t some word in one of the classical languages. I thought about it and said no, I didn’t think there was. Because separation anxiety isn’t an heroic concept. Maybe that’s the point. Movin’ Out, like the generation it chronicles, is really all about fear of growing up, fear of war, fear of relationships. The baby boomers were an unheroic generation—anti-heroic, if you choose to be kind.

If you want to be really kind, you could say that they were the generation that sought to make self-indulgence and navel-gazing themselves heroic, to ennoble alienation and self-involvement by experiencing them on a grand scale. There’s a sense in which Tharp’s choreography picks up on this, too. Not that there’s any navel-gazing in the ballet, heroic or otherwise. What’s heroic is the dancing, particularly Eddie’s. And it’s chiefly embodied in Selya’s performance. It reaches its pinnacle, though, where Eddie is at his most abject, when the joyous moves that defined his narcissism and ability to revel in self-love morph into drug-addled disconnectedness.

Make no mistake: Movin’ Out isn’t The Deer Hunter or The Things They Carried. But it’s for an audience that has internalized The Deer Hunter, The Things They Carried, and other keynotes of popular culture—novels like Robert Stone’s Dog Soldiers, movies like Who’ll Stop the Rain, Coming Home, and The Big Chill, songs like “White Rabbit” and “Sam Stone”—to such an extent that they have become a part of that society’s experience of itself.

And that, in the end, is the difference between Baz Luhrmann and Twyla Tharp. For all his self-righteous high moral tone, Luhrmann, who reminds us in his program notes that grand opera was the television of its day, and that one should only hear it in the original Italian, doesn’t really care about the audience for opera or Puccini. If he did, he would never condescend to us by cluttering up the production with supertitles in funky typefaces intended to indicate the tone of every utterance. He would not capitalize on our ignorance by directing the scene where Mimi and Rodolfo hunt for her key as though the verb “to search” (cercare) were synonymous with the verb “to gaze” (guardare). (“Are you looking?” Mimi asks. “I’m looking,” he sings, eyes fixed on her, so that the audience laughs knowingly.)

Luhrmann can patronize us because he thinks the populace needs him to interpret high art for us. Tharp is probably incapable of condescending to either her audience or her material (she may even like Billy Joel) because she doesn’t see “high” and “low” art as adversarial or distinct. In Tharp’s estimation—and this is the whole point of Movin’ Out, really, and what makes it the intellectual feel-good hit of the season—art and popular culture are interdependent. In the words of a sticker I once saw in the offices of this august publication, “Without pop culture, there can be no culture.”


Of Archbishops, Cardinals and the Order of the Holy Sepulchre

The red hats hang from the ceiling of St. Patrick’s Cathedral, high above the sanctuary. Each galero (a circular, broad-brimmed hat ornamented with thirty tassels of scarlet thread interwoven with gold, fifteen to a side) denotes that the cardinal who possessed it once held the cathedral and jurisdiction over the Catholic people of New York. In the old days, he would have received it from the hands of the pope. The new cardinal, vested with a scarlet, watered-silk cappa magna (the great cape worn by cardinals for more than a thousand years), was led to the Sistine Chapel. He reverenced the Divine Presence on the altar and the enthroned pope. At this point, as Peter C. Van Lierde wrote in his 1964 book, What is a Cardinal?:

After kissing the Pontiff’s hands and cheek…the Cardinals-elect prostrated themselves on the floor before the altar while the Pope read prayers over them. Then…the Pope recited in Latin: ‘To the praise of Almighty God and the honor of His Holy See, receive the red hat, the distinctive sign of the Cardinal’s dignity, by which is meant that even unto death and the shedding of blood you will show yourself courageous for the exaltation of our Holy Father, for the peace and quiet of Christian people, and for the augmentation of the Holy Roman Church. In the name of the Father and of the Son and of the Holy Ghost.’

The galero was then presented to the new cardinal. He never wore it again. Later, it would be carried before his coffin, placed at the foot of his bier and finally hung from the ceiling of his cathedral.

Six of the eight archbishops of New York have been cardinals: John McCloskey, John Farley, Patrick Hayes, Francis Spellman, Terence Cooke and John O’Connor. If the first archbishop, John Hughes (the charming, eloquent, self-educated, adamantine “Dagger John,” as obnoxious to the Establishment of his day as Al Sharpton to ours), had lived a little longer, one imagines Pius IX would have granted him the red hat. Michael Augustine Corrigan, the third archbishop, also did not receive the hat. He reigned from 1885 until 1902. An honest, hardworking, competent administrator, His Excellency perhaps lacked finesse in personal and political relations. One imagines Leo XIII was unimpressed when Corrigan excommunicated one of his own priests, Father Edward McGlynn, for reasons rooted in politics. McGlynn was a radical firebrand with a hob-lawyer’s genius for remaining just this side of canon law. During the mayoral campaign of 1886, Corrigan forbade him to speak in support of the United Labor candidate, Henry George.

Nonetheless, on at least one occasion McGlynn appeared on George’s platform without speaking, having first ensured that all present knew Corrigan had silenced him. It was merely one of McGlynn’s many provocations. When Corrigan rose to the bait and not only suspended McGlynn’s faculties as a priest but excommunicated him, bell, book and candle, McGlynn appealed to Rome. Then as now, Rome takes its time in these matters. Five years later, in 1891, Rome ordered McGlynn’s reinstatement. Corrigan was utterly humiliated. Nonetheless, he obeyed Rome as he had expected McGlynn to obey him, without delay or reservation. Peace to an honest man.

To describe a cardinal as a prince of the church merely states a fact. We sometimes forget that the popes wielded temporal power until 1870, when Vittorio Emanuele II completed the unification of Italy by the seizure of Rome. Both the Congress of Vienna and the Congress of Berlin confirmed, and the Treaty of Versailles in 1919 ratified, that membership in the Sacred College of Cardinals carried with it diplomatic status equal to that of princes of the blood royal. As such, cardinals took official precedence behind only emperors, kings, and their immediate heirs, or heads of state. No other person ever outranks a cardinal. Since at least the 17th century, all cardinals have enjoyed the title and dignity of “Eminence.” Thus, in conversation, a cardinal is addressed as “Your Eminence”; in formal communications, as Most Eminent Lord or Most Eminent Cardinal.

Today precedence is generally a question of protocol, the rules governing the political and social relationships of nations and the people who represent them. As with all codes of etiquette, protocol creates a sense of order in both the ceremonial and the mundane. It is considerably more than the question of the rank of the person receiving a cardinal at the airport, the number of troops in the honor guard or the number of salvoes in an artillery salute. Nonetheless, many cardinals, particularly the Americans, find their princely status uncomfortable. One might think that such discomfort was itself a kind of ostentatious false humility (another manifestation of the capital sin of pride, substituting one’s uninformed emotional response for the measured judgment of the centuries), but perhaps one is mistaken.

These thoughts were prompted by a garrulous Catholic lawyer acquaintance recounting his observations of one of the late cardinal-archbishop’s several funeral Masses. The Cardinal died on a Wednesday around 8 p.m. By Friday evening, dressed in full canonicals, he was lying in state at the cathedral. As he had promised, the coffin bore a union label. My friend was also in attendance, wearing a white mantle and black velvet beret, the ceremonial regalia of a Knight of the Equestrian Order of the Holy Sepulchre of Jerusalem. He was only one of several hundred colorfully dressed persons: the congregation included black-robed Knights of Malta; Franciscan monks in roped gray robes and sandals; bishops in violet; monsignors in black cassocks piped in red; Capuchins in brown; Dominicans in white; and nuns of several orders in their habits.

This was a pale echo of the exotic and flamboyant presences at Francis Cardinal Spellman’s funeral more than thirty years ago. Vatican II and the blanding of the church in America had not begun to take effect (Spellman had sworn to stop the reforms at the water’s edge, and unlike Canute had held back the tide during his lifetime). Then, the Knights of Malta wore red tunics with epaulettes and cocked hats with ostrich plumes. The Knights of the Holy Sepulchre wore white tailcoats “with collar, cuffs, and breast facings of black velvet with gold embroideries, epaulettes of twisted gold cord, white trousers with gold side stripes, a sword, and a plumed cocked hat,” according to Peter Bander van Duren, in Orders of Knighthood and of Merit.

My friend received this honor some years ago for reasons as mysterious to him as to me. This is the kind of distinction for which one does not apply. One must be invited. An acquaintance surprised him by quietly asking whether he would accept a papal knighthood. “I was restrained in my enthusiasm,” my friend said. “I waited three whole seconds before saying, ‘Yes.'” We celebrated his knighthood with a symposium in a local watering hole. After all, symposium comes from the Greek for “drinking party.”

The Order of the Holy Sepulchre is among the last of the Crusader orders of knighthood. It is probably at least 1,000 years old: some historians claimed with more enthusiasm than documentation that the Order might have existed in one form or another before the end of the first century of the Christian era. With the loss of the Latin Kingdom of Jerusalem, the Order gradually came under the protection of the Roman Church.

At one time, its knights enjoyed many privileges, “some of which were of a rather peculiar character,” notes James Vander Velde in The Pontifical Orders of Knighthood. “They had precedence over members of all orders of knighthood, except those of the Golden Fleece. They could create notaries public or change a name given in baptism; they were empowered to pardon prisoners whom they happened to meet while the prisoners were on their way to the scaffold…” They also had the privilege of legitimizing bastards. In New York, this notion may need reviving: there is so much work to do.

Unfortunately, there are no records of the first American knights of the Equestrian Order. One might reasonably speculate they were immigrants who had served in the papal armies. For example, Myles Keogh, a man who went in harm’s way, fought against Garibaldi’s forces at Ancona with the Pontifical Battalion of St. Patrick in 1860, then served in the Papal Zouaves and was knighted by Pius IX for valor. During the War Between the States, he rose to the rank of colonel in the Union army; he remained in the U.S. Cavalry and died with Custer at the Little Big Horn.

Popes granted not merely decorations but titles of nobility. Thus, Genevieve Brady, the wife of Nicholas Brady, a turn-of-the-century utility magnate, was ennobled by Pius XI as the Duchess Brady. The descendants of Edward Hearn, a successful contractor turned fundraiser, may still use the title of “Count.”

One who enters Fordham Law School sees a large bronze wall plaque listing the benefactors whose generosity enabled the construction of the Lincoln Center campus of the Jesuit University of New York. Among them is George, Marquis MacDonald. If memory serves, he began as a contractor, became a spectacularly successful financier and, finally, a benefactor of the church. He had married a daughter of the infamous police inspector Thomas Byrnes, whose personal fortune came from sources best left unexamined. MacDonald became a Knight of Malta, a Knight Grand Cross of the Holy Sepulchre, a Knight Grand Cross of the Order of St. Gregory the Great, the holder of numberless honorary doctorates and, much to his pleasure, a papal marquis. He loved to have his picture taken in the uniform of one or another of these orders.

Titles, decorations, uniforms: these things have a certain romantic charm. But even my old lawyer friend noticed the important thing. During the Order’s ceremonies, the reading that most moved him was from Ecclesiastes:

Remember your Creator in the days of your youth, before the evil days come
And the years approach of which you will say, I have no pleasure in them…
Before the silver cord is snapped and the golden bowl is broken,
And the pitcher is shattered at the spring, and the broken pulley falls into the well,
And the dust returns to the earth as it once was,
And the life breath returns to God who gave it.
Vanity of vanities, saith the prophet, All is vanity.

New York Press, May 30, 2000