Dear Dean, How to Start?

A murder investigation disguised as a teacher-pleasing ‘oral history project.’

Sam Munson’s “The November Criminals” is either a very short novel or a very long college application essay. “You’ve asked me to explain what my best and worst qualities are,” Addison Schacht, the novel’s narrator and main attraction, writes in its opening sentence. His answer requires telling the full story of how he, a pot-dealing, Virgil-loving, SAT-acing high-school senior, attempted to solve the murder of Kevin Broadus, a fellow student at his Washington, D.C., school.

It all sounds like “Encyclopedia Brown: The Burn-Out Years,” but the plot of “The November Criminals” is never the point. Instead, it’s the vehicle, an obstacle course where Mr. Munson can show off Addison’s sharp and agile voice. That voice entails tons of tangents, italics and words you can’t print in a newspaper, all of which are unified by Addison’s industrial-strength cynicism. He spends a good portion of the novel judging other people, and they spend a good portion of the novel deserving it.

By far the worst offenders—worse than the minor drug players, worse than Addison’s bumbling father—are the teachers and students at Addison’s school. At a student assembly to honor the murdered Kevin, a girl named Alex Faustner starts “complaining about how ‘all this’—she meant singing ‘Mary Don’t You Weep’— violated her First Amendment rights. . . . In addition to being a huge [worse than you think], Alex is—I’m sure you’re astonished to hear this—one hundred percent, rest-of-society-agrees beautiful.”

But Addison criticizes the assembly along these same lines. And when he makes fliers for his murder investigation—disguised to his teachers as “an oral history project,” he says, because “I knew the phrase would turn their gazes glassy with delight”—Addison finds himself staring at a sign hanging over the copying machine that says: “Be Unselfish.” The sign, he muses, is “part of the larger official message hawked by our administration: selfishness is the highest evil. Which means, if you think about it, that all private desires participate in evil.” He adds: “The urge to deface that sign is the strongest feeling I have about my school.”

So which is it? Are selfishness and solipsism generally permissible or should they be restricted to our own frontal lobes? Can we mock the pretension of “exceptionless equality” while still struggling toward some kind of personal evaluative standard? “The November Criminals” comes at these questions from about 50 different angles. Addison never quite formulates a final answer—much less tries to live by it—and that’s one reason he’s so captivating. In his Latin class, Addison rails against a fellow student (Alex, again) for prattling on about how “The Aeneid” “glorified violence”: “No, man, you’re missing the whole point. You can’t apply our virtues here.” But Addison also realizes that nothing in this statement obviates the need to sort out what exactly our virtues are.

As with most first novels, “The November Criminals” contains some repurposing of life experience. Mr. Munson grew up in just the sort of place that his narrator describes as home: “a tree-heavy upper-middle-class neighborhood in Washington, D.C.” They both applied to the University of Chicago in 1999. (Mr. Munson graduated in 2003.) But the most interesting bit of Mr. Munson’s background is that he worked as a researcher for CNBC host and devoted supply-sider Larry Kudlow and as an editor at Commentary magazine. Every so often, spurred by some kind of creative liberal guilt, someone will ask: Where are the conservative novelists? I can’t speak directly to Mr. Munson’s politics, but it’s pretty clear that “The November Criminals” takes aim at a lot of liberal pieties—everything from Diversity Outreach (“just as horrifyingly inept as its name suggests”) to a history teacher who worships Wilson, Kennedy and FDR (“that’s verbatim; she actually said holy trinity“).

More important than the author’s political leanings, though, is the fact that Mr. Munson does this as a novelist and not as a pundit—he dramatizes his debates, keeps them entertaining and leaves them unresolved.

“The November Criminals” tackles plenty of other important issues, including Jewishness (Addison loves Holocaust jokes) and race (Kevin is black). The book gets a little preachy toward the end—it does start to feel like an essay. But Mr. Munson packs in enough funny moments, enough surprises (notably a short and genuinely affecting section on Addison’s mother), and enough satisfying resolutions to more than compensate. And then there’s the humor. After a disquisition on the male teenager’s sexual stamina, Addison explains: “I’m just trying to get everything down, so that you can form a clear picture. My best and worst qualities.”

Mr. Fehrman, a writer in Milford, Conn., is working on a book about presidents and their books.



What George Washington Heard

As Americans prepare to celebrate the nation’s birth, it’s safe to say that the most familiar figure connected with that birth is George Washington. Although he didn’t actually sign the Declaration of Independence—he had been in New York since March, commanding the Continental troops there—his image is more completely bound up with the revolution and its success than any of America’s other patriarchs.

Whether known as the “Father of his Country,” the “Atlas of America,” the “Sage of Mount Vernon” or just the “Old Fox,” Washington has always been a figure of mythic proportions, and during his lifetime he was venerated practically as a living saint by many citizens of the new republic.
True, at 6-foot-2 he really did stand taller than most men of his time, and his repeated escapes from death in battle contributed to his somewhat divine aura. But despite the sentimental hagiography of Parson Weems, whose best-seller “The Life of Washington” (1800) fairly reduced his subject’s career to a series of Aesop-like fables, Washington was a very human being, however formally he conducted himself in public. As his biographer Joseph Ellis notes, Washington cultivated his exceptional self-control to compensate for his weaknesses, among them a fiery temper and a passionate love for Sally Fairfax, the wife of a friend.

Perhaps Washington’s most endearing and least familiar weakness, however, was for music, theater and dancing. Unlike his secretary of state, Thomas Jefferson, a talented amateur violinist, Washington wasn’t a musician himself. But he took great pleasure in musical and theatrical events—both of which were closely intertwined in 18th-century America—and from early adulthood eagerly attended performances at theaters in Fredericksburg and Williamsburg, Va.

Throughout his life, Washington attended concerts wherever he traveled. During the American Revolution, while spending a night in Bethlehem, Pa., he enjoyed a concert of chamber music, and on a subsequent visit there in 1782, we read of his being serenaded, to his great pleasure, by a Moravian trombone choir.

Meanwhile, Washington was always eager to pay ready money for good music. In 1757, Philadelphia enjoyed its first two public concerts. For the second of these, then-Lt. Col. Washington purchased a block of tickets costing 52 shillings and six pence, or £2 12s 6d, a considerable outlay at a time when a teacher might earn an annual salary of £60. On another occasion, in 1787, now-Gen. Washington gave a dinner in Philadelphia for which he engaged a nine-piece orchestra to perform, this time paying £7 10s for the pleasure—no mean sum either.

Among his friends was the Philadelphia lawyer Francis Hopkinson, a talented amateur poet and musician who also designed several issues of Continental currency and did sign the Declaration of Independence. Acknowledged as the first native-born American composer, Hopkinson dedicated a set of “Seven Songs for the Harpsichord or Forte Piano” to Washington in 1788.

Not surprisingly, musical performances in the newborn nation were often connected with major current events. Thus shortly after America’s victory, in 1781, we find Washington in the audience at the “hotel of the Minister of France” (i.e., the French Embassy) in Philadelphia to enjoy the celebratory premiere of “The Temple of Minerva, America Independent, an oratorical entertainment,” another Hopkinson opus. Similarly, in May 1787, four days after the opening of the Constitutional Convention in Philadelphia, Washington notes in his diary that he “accompanied Mrs. Morris to the benefit concert of a Mr. Juhan.” The program, divided into three “acts,” included overtures by Pierre-Alexandre Monsigny, William Shield and Padre Giovanni Battista Martini (who had given lessons to the young Mozart in Bologna). There was also a “Sonato Piano Forte” [sic] by the English-born Alexander Reinagle, one of America’s leading musicians of the day.

Reinagle, who had been a close friend of Carl Philipp Emanuel Bach in Europe, had settled in Philadelphia while it was the nation’s capital and was engaged by President Washington as music master to his step-granddaughter, Nelly Custis, providing a noteworthy, if indirect, connection between the Father of his Country and the son of Johann Sebastian Bach.

Apart from attending public performances, Washington also relished listening to music- making at home, and not only Nelly but Washington’s stepchildren and step-grandchildren were offered a musical education befitting the landed gentry. A small household collection of music books is still preserved at Washington’s beloved Mount Vernon, along with Nelly’s harpsichord, which Washington bought for her.

In addition, various eyewitness accounts of Washington on the dance floor reveal a side far removed from the starchy image on currency and canvas. Most notable is the account by his step-grandson George Washington Parke Custis of a ball held a few weeks after the conclusion of the American Revolution: “The minuet was much in vogue at that period,” writes Custis, “and was peculiarly calculated for the display of the splendid figure of the chief, and his natural grace and elegance of air and manners. . . . As the evening advanced, the commander-in-chief, yielding to the general gaiety of the scene, went down some dozen couples in the contre-dance with great spirit and satisfaction.”

Throughout his life, Washington enjoyed the balls and “assemblies” that were a popular entertainment in 18th-century America. One of his final letters in 1799 is a poignant response to an invitation to the managers of the Alexandria Assembly in Virginia: “Mrs. Washington and myself have been honored with your polite invitation to the assemblies of Alexandria this winter, and thank you for this mark of your attention. But, alas! our dancing days are no more.”

Though at age 67 Washington was still relatively vigorous, he caught a cold while riding five hours through a snowstorm on Dec. 12 of that year. Two days later the “American Cincinnatus” died, and to the raft of musical compositions already written saluting his military and presidential accomplishments was added another repertoire of dirges, elegies, odes and marches lamenting his passing. The first, Benjamin Carr’s dignified “Dead March and Monody for General Washington,” was ready for performance in Philadelphia a mere 12 days later, with musical offerings continuing to appear up and down the Eastern seaboard throughout the winter of 1800.

The music-loving “Sword of the Revolution” probably would have enjoyed listening to them.

Mr. Scherer writes about music and the fine arts for the Journal.



Obama and the Fiscal ‘Road to Hell’

G-20 leaders don’t agree with the president that more spending will revive the economy. Nor do most Americans.

At last week’s G-20 meeting, President Barack Obama achieved a two-fer. He suffered a significant international defeat, and he increased the chances his party will suffer a major domestic one this fall.

Mr. Obama’s international defeat was self-inflicted. He went to Toronto to press other major nations to do as he has done: Expand government spending, or suffer, in the president’s words, “renewed economic hardship and recession.”

Canada, Germany, Great Britain and most other countries declined Mr. Obama’s invitation. The German economic minister “urgently” prodded America to cut spending at a press conference on June 21, prior to the G-20 meeting. The president of the European Central Bank took direct aim at Mr. Obama’s argument, telling the Italian newspaper La Repubblica on June 16 that “the idea that austerity measures could trigger stagnation is incorrect.”

The European Union president, Czech Prime Minister Mirek Topolanek, tore into Mr. Obama’s stimulus and other spending policies in a stunning address to the European Parliament in March 2009, calling them “the road to hell” and saying “the United States did not take the right path.”

If it sounds strange to have European leaders lecturing the U.S. about fiscal restraint, it should. But that is where America finds itself after Mr. Obama’s 17-month fiscal orgy.

The other flaw in his G-20 appearance is domestic. The president’s statements that more deficit spending was “necessary to keep economic growth strong” and his cautioning against “the consequential mistakes of the past” when stimulus spending “was too quickly withdrawn” put his administration and party squarely in favor of policies unpopular with most Americans.

Since 2000, the Gallup organization has asked voters what they believe will be the most important problem for the U.S. in 25 years. This year Americans are saying the challenge will be the deficit. And last month, almost eight in 10 voters surveyed by the Associated Press called the federal budget deficit an “extremely” or “very important” issue.

There was more bad news Tuesday for Democrats from recent focus groups conducted in battleground congressional districts in Iowa, Ohio, New Jersey, Arkansas and Florida.

A report on these focus groups issued this week by Resurgent Republic (a group I helped found) showed that both political independents and tea party participants passionately denounced federal spending and deficits, using words like “reckless,” “out of control,” “unnecessary” and “unhelpful.” The evidence suggests that both groups remain deeply skeptical of Mr. Obama’s stimulus package and are unpersuaded by the administration’s arguments in its favor.

The authors of the Resurgent Republic study concluded that both independents and tea party voters believe “nearly unanimously” that reckless government spending, not lack of tax revenues, is responsible for the deficits. This goes to the very heart of the modern Democratic agenda with its guiding philosophy of bigger government and higher taxes.

All of this negative news is wearing on the president. At the G-20’s concluding news conference, Mr. Obama—brittle and petulant—attacked GOP critics “who are hollering about deficits,” saying he would be “calling their bluff” next year by “presenting some very difficult choices.” Then “we’ll see how much of . . . the political arguments they’re making right now are real, and how much of it was just politics.”

The president’s problem is largely a mess of his own making. Deficit spending did not begin when Mr. Obama took office. But he and his Democratic allies have supported, proposed, passed or signed and then spent every dime that’s gone out the door since Jan. 20, 2009.

Voters know it is Mr. Obama and Democratic leaders who approved a $410 billion supplemental (complete with 8,500 earmarks) in the middle of the last fiscal year, and then passed a record-spending budget for this one. Mr. Obama and Democrats approved an $862 billion stimulus and a $1 trillion health-care overhaul, and they now are trying to add $266 billion in “temporary” stimulus spending to permanently raise the budget baseline.

It is the president and his congressional allies who refuse to return the $447 billion in unspent stimulus dollars and want to use repayments of TARP loans for more spending rather than reducing the deficit. It is the president who gave Fannie and Freddie carte blanche to draw hundreds of billions from the Treasury. It is the Democrats’ profligacy that raised the share of GDP taken by the federal government to 24% this fiscal year.

This is indeed the road to fiscal hell, and it’s been paved by the president and his party. Voters will have their chance this November to render their verdict on the Obama years. No wonder Republicans feel confident these days.

Mr. Rove, the former senior adviser and deputy chief of staff to President George W. Bush, is the author of “Courage and Consequence” (Threshold Editions, 2010).



A Plague of Vagueness

How about a void-for-vagueness doctrine for the U.S. Congress?

Just when you’re thinking all hope is lost, along comes the “void-for-vagueness doctrine,” invoked this past week by the Supreme Court to restrict a hopelessly vague law. If our era needs a bumper sticker, this is it: Void for Vagueness. Paste it on the 2,000-plus pages of the new ObamaCare law, paste it on the 2,000 pages of the floundering financial regulation bill. Hand it out in front of Elena Kagan’s confirmation hearings. Heck, chisel it on the facade of the U.S. Capitol. But my enthusiasm is racing ahead of the story.

In 2006, the most hated man in America was probably Jeff Skilling, who once sat atop Enron, perhaps the most hated corporate name in all American history. This heap of unpopularity notwithstanding, the Supreme Court said last week the government wrongly prosecuted the abominated Jeff Skilling under something called the “honest services fraud” law. The Court ruled—unanimously—that the law was, in a word, too “vague.”

Here is the classic description of the void-for-vagueness doctrine from Justice George Sutherland in 1926: “a statute which either forbids or requires the doing of an act in terms so vague that men of common intelligence must necessarily guess at its meaning . . . violates the first essential of due process of law.”

That any such common-sense rule still exists in law, politics or life is a wonder.

Strictly, the vagueness test applies only to penal law, but in a better world would it not also apply to much else in public life? The world was simpler in 1926.

Our Congress, the one with a current approval rating of 22%, is attempting to enact its hapless answer to the financial crisis—now called, with no hint of irony, the Dodd-Frank bill. It spent the previous 12 months concocting the Obama health-care law. The actual language of the several-thousand-page financial regulation bill was made available to the American people for the first time one evening this week.

Texas Rep. Jeb Hensarling, in a comment on the legislation that should go straight into Bartlett’s Familiar Quotations, said, “There are probably three unintended consequences on every single page of this bill.”

Justice Antonin Scalia wrote a separate concurrence on the Skilling decision. Consider his remarks in the context of Congress’s modern legislative enactments, such as Sarbanes-Oxley, which technically are beyond reach of the void-for-vagueness doctrine.

He referred to the honest-services law, enacted by Congress in 1988, as “this indeterminacy.” He called the legal duties required of individuals under it “hopelessly undefined,” a “smorgasbord” written in “astoundingly broad language” and “to put it mildly, unclear.” His most stupendous example was a court ruling that one could be found in violation for a scheme “contrary to public policy.”

In too many areas where the daily life of commerce intersects with public policy, people feel they are flying blind, uncertain of what a law or regulation requires, uncertain of how the bureaucracies empowered to enforce this morass will interpret them.

One sensed it was heading this way when landowners were prosecuted under the Endangered Species Act for violating the “habitat” of odd creatures found on their property.

This derangement of the laws’ meaning is among the reasons the public is so out of sorts about politics and Congress. They think compliance with the rules is turning into a crap shoot. They are right. Here is Justice Scalia on what happened to the law’s meaning in the Skilling case: “The duty probably did not have to be rooted in state law, but maybe it did. It might have been more demanding in the case of public officials, but perhaps not.” Truly, we are in Wonder Land.

It is an irony, though, that the Supreme Court that can unanimously find in favor of a Jeff Skilling under the void-for-vagueness doctrine is the same court that shows nearly infinite deference to federal administrative bureaucracies that strain to interpret the sloppy legislative language Congress enacts into law.

We are there again. The Dodd-Frank bill, if enabled into law by several Republican Senators, lets the actual meaning of the “Volcker Rule” on banks’ trading practices and much else pass into the hands of the translators at the Federal Reserve, FDIC, other federal agencies and the lobbyists who swarm around them.

It is not an accident that American public policy and law have fallen so far into a condition of unfathomable murk. As with the prosecutors who abused the honest-services statute, opaqueness of the sort Messrs. Dodd, Frank and the president favor shifts the locus of power away from all citizens and toward an administrative minority that reduces the nation’s civil life to a costly game of Mother-may-I?

If only on principle, someone from the GOP in Congress should start demanding that all federal legislation pass through a void-for-vagueness test. This session, none would survive. If the Supreme Court can demand clarity on behalf of convicted felons, how about adopting it on behalf of everyone else who, until their luck runs out, remains innocent?

Daniel Henninger, Wall Street Journal



Espionage History and the ‘Russian 10’

The arrest of ‘sleeper agents’ on U.S. soil is the stuff of spy novels, not the Cold War.

The Justice Department’s arrest this week of 10 Russian spies posing as American citizens is not stranger than fiction; it mirrors fiction. Innumerable Cold War novels and films focused on “sleeper agents,” professional Soviet espionage officers superbly trained in language and culture who take on the identity of a native-born American to gain access to U.S. intelligence and policy making.

But in reality the most damaging Cold War spies were native-born Americans—Julius Rosenberg, Alger Hiss, Aldrich Ames, Robert Hanssen—who for reasons of ideology, money or psychological perversity chose to betray their country.

Most Soviet espionage was supervised by “legal” KGB officers operating under official cover as diplomats who, when arrested, faced only expulsion, protected by their diplomatic status. Great Britain famously expelled 105 Soviet personnel linked to KGB intelligence in 1971. But none of them had been posing as a British citizen. The KGB also had “illegal” officers who had no diplomatic status, often used false identities and who usually functioned as covert liaisons with native-born traitors. Long-term sleeper agents, as these 10 appear to have been, are rare.

In the late 1950s, the U.S. government arrested, tried and convicted five Soviet illegals in connection with the Soble-Soblen spy ring: Jack Soble, his wife Myra, his brother Robert Soblen (the two brothers had anglicized their Lithuanian name, Sobolevicius, slightly differently), Jacob Albam and Mark Zborowski. None had diplomatic cover, but neither were they “deep penetration” agents. All used their true identities, simply pretending to be innocent immigrants.

Moreover, their espionage work was confined largely to “agent handling,” i.e., acting as liaison with native-born Americans, mostly Communists, who had been recruited as Soviet spies years earlier. Their major accomplishment was to infiltrate the American Trotskyist movement and the Russian émigré community, targets with no direct connection to the U.S. government. Soble and associates had no plans or prospects of entering American think tanks or other institutions with access to high-level American policy makers.

There were two Soviet illegals exposed in the late 1950s whose activities came a bit closer to the recently arrested 10. An illegal officer, KGB Col. Rudolf Abel (real name Vilyam Fisher), entered the U.S. in 1948 and operated under a variety of false identities. He was finally exposed when his assistant and fellow illegal, KGB Lt. Col. Reino Hayhanen, defected in 1957. (Hayhanen, of Finnish background, had been sent to the U.S. using false papers identifying him as an American of Finnish ancestry.) Abel, who never admitted his real name, was convicted and sentenced to 30 years in prison.

After only five years he was freed in exchange for Francis Gary Powers, the U-2 pilot shot down over the Soviet Union on a CIA reconnaissance mission. While Hayhanen and Abel assumed false identities as Americans, their function was to maintain contact and pick up information from native-born Americans who spied for the Soviets. Abel’s initial task, for example, was to re-establish KGB contact with Theodore Hall, an American physicist and secret Communist who had provided U.S. atomic secrets to the USSR while working at Los Alamos. Hayhanen and Abel were illegals but not deep-penetration sleeper agents.

Thus, the FBI’s arrest of 10 Russian sleeper agents on U.S. soil has no precedent in Cold War history, even if fans of Walter Wager’s novel “Telefon” (later a movie starring Charles Bronson) find it familiar. Also unprecedented, and reassuringly so, is that FBI counterintelligence had identified these Russian sleepers early on, had been monitoring them for years, and finally decided that it had gained what it could from such surveillance and rolled up the Russian networks.

Deep-penetration agents are a very, very expensive investment. Not only the training of the professional officers themselves, but covertly supporting them, communicating with them, and supervising their activities is a major bureaucratic expense for any intelligence agency. The loss of 10 such agents and the resulting collateral damage makes this a catastrophe for Russian foreign intelligence. The FBI also identified a number of Russian “legal” officers who made surreptitious contact with the sleepers and, thus exposed, these Russian officers are now useless for intelligence fieldwork.

The SVR—Russian Foreign Intelligence, successor to the KGB—also cannot be sure that the FBI has disclosed all that it knows of the 10 agents’ activities (11 with the arrest of a confederate in Cyprus). Prudence dictates that the SVR must assume that any other Russian officers who had covert contact with the 11 may have been identified by American security. Use of these potentially compromised officers in future espionage field-work would be risky and foolish.

We don’t know what additional shoes will drop in this case. Will any of the 10 talk to avoid a long prison term? Rudolf Abel was defiant and refused any cooperation. Jack Soble, however, dodged the death penalty by fully confessing, telling all he knew of KGB operations in the U.S. and Western Europe, and even testified against his brother. These 10 (or 11, if we count the agent arrested in Cyprus) don’t face the death penalty but do face potentially long terms in prison, and there aren’t any Francis Gary Powers available for exchanges.

Messrs. Klehr and Haynes are co-authors, along with Alexander Vassiliev, of “Spies: The Rise and Fall of the KGB in America” (Yale University Press, 2010).



The Ugly Party vs. the Grown-Up Party

My political friendships and sympathies are increasingly determined not by ideology but by methodology. One of the most significant divisions in American public life is not between the Democrats and the Republicans; it is between the Ugly Party and the Grown-Up Party.

This distinction came to mind in the case of Washington Post blogger David Weigel, who resigned last week after the leak of messages he wrote disparaging figures he covered. Weigel is, by most accounts, a bright, hardworking young man whose private communications should have been kept private. But the tone of the e-mails he posted on a liberal e-mail list is instructive. When Rush Limbaugh went to the hospital with chest pain, Weigel wrote, “I hope he fails.” Matt Drudge is an “amoral shut-in” who should “set himself on fire.” Opponents are referred to as “ratf—ers” and “[expletive] moronic.”

This type of discourse is an odd combination of the snideness of the cool, mean kids in high school and the pettiness of Richard Nixon rambling on his tapes. Weigel did not intend his words to be public. But they display the defining characteristic of ugly politics — the dehumanization of political opponents.

Unlike Weigel, most members of the Ugly Party — liberal and conservative — have little interest in keeping their views private. “My only regret with Timothy McVeigh,” Ann Coulter once said, “is he did not go to the New York Times building.” Radio host Mike Malloy suggested that Glenn Beck “do the honorable thing and blow his brains out.” Conservatives carry signs at Obama rallies: “We Came Unarmed (This Time).” Liberals carried signs at Bush rallies: “Save Mother Earth, Kill Bush.” Says John Avlon, author of “Wingnuts: How the Lunatic Fringe Is Hijacking America,” “If you only take offense when the president of your party is compared to Hitler, then you’re part of the problem.”

The rhetoric of the Ugly Party shares some common themes: urging the death or sexual humiliation of opponents or comparing a political enemy to vermin or diseases. It is not merely an adolescent form of political discourse; it encourages a certain political philosophy — a belief that rivals are somehow less than human, which undermines the idea of equality and the possibility of common purposes.

Such sentiments have always existed. But the unfiltered media — particularly the Internet — have provided both stage and spotlight. Now everyone can be Richard Nixon, threatening opponents and composing enemies lists.

But the Internet is also a permanent record, as Weigel found. His reaction to exposure was honest and admirable. He admitted to being “cocky” and “needlessly mean” — the kind of introspection that promises future contribution. But when members of the Ugly Party are exposed, generally they respond differently. Obscenity? The real obscenity is an unjust war, or imposing socialism or devotion to Israel. It is an argument that makes any deep policy disagreement an excuse for verbal violence. Or an offense against taste and judgment is dismissed as humor and satire.

The alternative to the Ugly Party is the Grown-Up Party — less edgy and less hip. It is sometimes depicted on the left and on the right as an all-powerful media establishment, stifling creativity, freedom and dissent. The Grown-Up Party, in my experience, is more like a seminar at the Aspen Institute — presentation by David Broder, responses from E.J. Dionne Jr. and David Brooks — on the electoral implications of the energy debate. I am more comfortable in this party for a few reasons: because it is more responsible, more reliable and less likely to wish its opponents would die.

Many of the entrepreneurs of the new media, on left and right, are talented, vivid and entertaining. Many are also squandering important things they do not value. They are making politics an unpleasant chore, practiced mainly by the vicious and angry, and are feeding dangerous resentments in a volatile time.

Eventually, all edginess becomes old. Obscenity reaches the limits of language. People read yesterday’s hot blogger, watch yesterday’s cable star, roll their eyes and say, “Not again.” And maybe then the Grown-Up Party will prove more enduring and interesting after all.

Michael Gerson, Washington Post



To Tweet, Or Not to Tweet

How weary, stale, flat and unprofitable—at times—seem all the digital distractions of this world.

A catastrophic event unfolds. A seemingly healthy professional embarks on his daily commute, only to come to the frightening realization that his battered and beloved BlackBerry lies vulnerable and unused in a distant corner of his home. An unwholesome panic descends. No matter how far away from home he is, and no matter how needless the device may be in a practical sense, he is impelled to hightail it back to his house and reconnect with the world.

William Powers offers this beleaguered man (me), and everyone else who has faced a similar ordeal, a roadmap to contentment in “Hamlet’s BlackBerry,” a rewarding guide to finding a “quiet” and “spacious” place “where the mind can wander free.”

In this book, which grew out of his much-discussed 2006 National Journal essay, “Hamlet’s BlackBerry: Why Paper Is Eternal” (and how I wish that were true), the former Washington Post staff writer argues that the distractions of manic connectivity often lead to a lack of productivity and, if allowed to permeate too deeply, to an assault on the beauty and meaning of everyday life.

Obviously this is not a unique grievance, or a fresh one: As Mr. Powers acknowledges, concerns about the deleterious effects of a new world supplanting the old go back to Plato. But there has been an awful lot of grousing about digital distraction lately—Nicholas Carr’s “The Shallows: What the Internet Is Doing to Our Brains” came out just a few weeks ago—and it is easy to feel skeptical of worrywarts agonizing about Americans “wrestling” with too many choices and “coping” with the effects of too much Internet use.

There is simply too much good that comes of innovation for that sort of Luddite hand-wringing. The farmer a century ago who pulled himself off the straw mattress at 4 a.m. to till the earth so his family wouldn’t starve led a fairly straightforward, undistracted existence, but he was almost certainly miserable most of the time. And he probably regarded the arrival of radio as a sort of miracle. In discussions of this type I tend to rely on the wisdom of P.J. O’Rourke: “Civilization is an enormous improvement on the lack thereof.”

But even a jaded reader is likely to be won over by “Hamlet’s BlackBerry.” The book convincingly argues that we’ve ceded too much of our existence to what Mr. Powers calls Digital Maximalism. Less scold and more philosopher, Mr. Powers certainly bemoans the spread of technology in our lives, but he also offers a compelling discussion of our dependence on contraptions and of the ways in which we might free ourselves from them. I buy it. I need quiet time.

To accept “Hamlet’s BlackBerry” is to accept that we are super busy. “It’s staggering,” writes Mr. Powers, “how many balls we keep in the air each day and how few we drop. We’re so busy, sometimes it seems as though busyness itself is the point.” Though I don’t find all that ball-juggling as staggering as the author, and I don’t know anyone who acts as if chaos is the point of it all, it would be foolish not to concede that our lives have become far more complex than ever before.

What can be done? What should be done? Mr. Powers’s answer is, in essence: Just say no. Try to cultivate a quieter or at least more focused life. The most persuasive and entertaining parts of “Hamlet’s BlackBerry” are found in Mr. Powers’s efforts to practice what he preaches. (Most of us, it should be noted, do not have the option of moving from a dense Washington, D.C., suburb to an idyllic Cape Cod town to grapple with the demons of gadgetry addiction.) His skeptical wife and kids agree that if they’re allowed to use their laptops during the week, they will turn the computers off on the weekend. Mr. Powers discovers that friends and relatives quickly adapt to the family’s digital disconnect (they call it the “Internet Sabbath”). The family spends more time face-to-face instead of Facebooking.

Mr. Powers proposes that we take into account the “need to connect outward, as well as the opposite need for time and space apart.” It is a powerful desire, the balanced life. Most of us yearn for it. Neither technology nor connectivity is injurious unless we allow them to consume us. Mr. Powers argues that letting life turn into a blizzard of snapshots—that’s what all those screenviews amount to, after all—isn’t enough. We would be happier freeing ourselves for genuine, unfiltered experience and then reflecting on it, not tweeting about it. The busy person will pause here to nod in sympathy.

I’m not sure that many of us have found that spacious place where our minds can wander free of technological intrusions, of beeps and buttons and emails and tweets, but “Hamlet’s BlackBerry” makes the case that we can—or should—find it. Recently, while watching some hypnotically dreadful movie, I instinctively reached for my BlackBerry to fetch some worthless biographical information about a third-rate actress that would do no more than clog my brain still further.

Then I remembered something in Mr. Powers’s book—which takes its title from a scene in “Hamlet” when the prince refers to an Elizabethan technical advance: specially coated paper or parchment that could be wiped clean. A book that included heavy, blank, erasable pages made from such paper—an almanac, for example—was called a table. “Yea, from the table of my memory / I’ll wipe away all trivial fond records,” Hamlet says. Or, as Mr. Powers paraphrases: ” ‘Don’t worry,’ Hamlet’s nifty device whispered, ‘you don’t have to know everything. Just the few things that matter.’ ”

Mr. Harsanyi is a nationally syndicated columnist for the Denver Post.



Another Dead-End Summit

Going up the garden path, again. Barack Obama accompanied by fellow ramblers Jose Manuel Barroso, Silvio Berlusconi, Angela Merkel and Nicolas Sarkozy during the G-8 summit in Canada on June 25.

Marked by the EU-US divide over the best way out of the crisis, the world leaders at the G-20 summit in Toronto spurned Europe’s proposals to tax banks and regulate markets. The only consensus reached was on deficit reduction, an objective championed by all 27 EU member states. European papers are scathing in their editorials on the G-20 gathering.

“A summit that could just as well not have been held,” writes Poland’s Dziennik Gazeta Prawna, offering its final verdict on last weekend’s G-20 summit. “In Toronto, the G-20 leaders didn’t solve a single economic problem,” the daily adds. “The world’s most influential politicians were unable to agree on anything tangible,” particularly “on the principle of a global bank levy or on the instruments to bolster bank capital.”

Likewise, France’s Libération pronounces “the ‘Gs’ at a standstill.” “The Huntsville G-8 and Toronto G-20 displayed more differences of opinion than progress in getting out of the crisis. The idea of a bank levy or international financial tax has been shelved indefinitely, and everyone pledged to cut deficits, to be sure, but on their own terms.”

Germany’s Frankfurter Allgemeine Zeitung, which advocates liberalizing international trade as the crisis remedy, says the G-8 and last weekend’s G-20 once again proved how ineffectual summits are. “The fact that certain industrialized countries are incapable of listening to emerging countries’ wishes and opinions jeopardizes the future of the G-20,” writes the FAZ, particularly in view of the EU’s attempt to tax financial transactions in the face of opposition from emerging countries that were spared by the crisis.

“Their pique should induce Europeans to wonder whether such taxes make any sense and give up on them,” the German daily opines. “If the point of the G-20 is to sign off on European ideas, we might as well give it up. And if the G-20 is to become a serious international economic forum, it would be a pipedream to imagine that European notions are the measure of all things,” the FAZ editorial concludes.

Every Man for Himself

“The G-20 has ushered in the return of ‘every man for himself’,” bemoans France’s Le Figaro. “The attempt to define a consensus-based economic policy to get out of the crisis proved abortive. Between a Germany obsessed with cutting deficits … a United States that is fretful about hamstringing growth by excessive austerity and a France halfway between the two, a common guideline is nowhere in sight. The G-20, which was created at the peak of financial turmoil, has proved its utility in times of crisis. But the meeting in Toronto also bared its limitations. Global economic governance of one sort or another, which is already so hard to hammer out at the European level, is not about to be put in place overnight.”

In fact, explains Germany’s Die Tageszeitung, “The disagreements within the G-8 and G-20 now force (the conferees) to refocus on matters that can really be changed: For Europeans today, that means Europe.” So, concludes the TAZ, “The best news from this summit is that Merkel and Sarkozy seemed determined to tax financial transactions in Europe — or in the euro zone if London holds out.”

In spite of it all, notes the EUobserver, “the statement on halving public deficits by 2013 was hailed as a victory for European politicians.” “By setting that target, the G-20 came to a close under the banner of German-brand rigour,” remarks La Repubblica. “The match played out” in Canada “was not Germany vs. the US,” even if “Angela Merkel might give the impression she won the day,” explains the Italian daily. “After having foisted its doctrine on Europe, Germany is now exporting it worldwide. Barack Obama, the last of the Keynesian leaders, seems to be beating a retreat. He did not convince Berlin of the benefits of states’ spending their way to growth. But appearances are deceiving, and Merkel’s triumph will soon prove a Pyrrhic victory. It serves to assuage the anxiety of the German public,” which favors fiscal rigour, and to “accelerate the marginalization of Europe” by shifting even faster “the geometries of power towards the new dynamics between America, China, India, Brazil and Russia.”



Owning the news

Copyrighting facts as well as words

FACTS, ruled America’s Supreme Court in 1918 in the “hot news doctrine”, cannot be copyrighted. But a news agency can retain exclusive use of its product so long as it has a commercial value. Now newspapers, fed up with stories being “scraped” by other websites, want that ruling made into law.

The idea is floated in a discussion document published by the Federal Trade Commission, which is holding hearings on the news industry’s future. Media organisations would have the exclusive right, for a predetermined period, to publish their material online. The draft also considers curtailing fair use, the legal principle that allows search engines to reproduce headlines and links, so long as the use is selective and transformative (as with a list of search results). Jeff Jarvis, who teaches journalism students to become entrepreneurs at New York’s City University, says this sounds like an attempt to protect newspapers more than journalism.

Germany is mulling something similar. A recent paper by two publishers’ associations proposed changing copyright law to protect not only articles but also headlines, sentences and even fragments of text.

Critics say that would extend copyright to facts. It would also be hard to make either regime work in practice. In America, a regulator would presumably need to determine the period of commercial value: perhaps two hours for news of an earthquake, 30 minutes for sports results. In Germany, publishers want a fee on commercial computer use. Germany’s justice minister last week hinted at support for the news industry, but also said that a new law would not stir young people to buy newspapers. New products, she says, would be a better response to flagging demand.




Secret of AA: After 75 Years, We Don’t Know How It Works


The church will be closed tomorrow, and the drunks are freaking out. An elderly lady in a prim white blouse has just delivered the bad news, with deep apologies: A major blizzard is scheduled to wallop Manhattan tonight, and up to a foot of snow will cover the ground by dawn. The church, located on the Upper West Side, can’t ask its staff to risk a dangerous commute. Unfortunately, that means it must cancel the Alcoholics Anonymous meeting held daily in the basement.

A worried murmur ripples through the room. “Wha… what are we supposed to do?” asks a woman in her mid-twenties with smudged black eyeliner. She’s in rough shape, having emerged from a multiday alcohol-and-cocaine bender that morning. “The snow, it’s going to close everything,” she says, her cigarette-addled voice tinged with panic. “Everything!” She’s on the verge of tears.

A mustachioed man in skintight jeans stands and reads off the number for a hotline that provides up-to-the-minute meeting schedules. He assures his fellow alcoholics that some groups will still convene tomorrow despite the weather. Anyone who needs an AA fix will be able to get one, though it may require an icy trek across the city.

That won’t be a problem for a thickset man in a baggy beige sweat suit. “Doesn’t matter how much snow we get—a foot, 10 feet piled up in front of the door,” he says. “I will leave my apartment tomorrow and go find a meeting.”

He clasps his hands together and draws them to his heart: “You understand me? I need this.” Daily meetings, the man says, are all that prevent him from winding up dead in the gutter, shoes gone because he sold them for booze or crack. And he hasn’t had a drink in more than a decade.

The resolve is striking, though not entirely surprising. AA has been inspiring this sort of ardent devotion for 75 years. It was in June 1935, amid the gloom of the Great Depression, that a failed stockbroker and reformed lush named Bill Wilson founded the organization after meeting God in a hospital room. He codified his method in the 12 steps, the rules at the heart of AA. Entirely lacking in medical training, Wilson created the steps by cribbing ideas from religion and philosophy, then massaging them into a pithy list with a structure inspired by the Bible.

The 200-word instruction set has since become the cornerstone of addiction treatment in this country, where an estimated 23 million people grapple with severe alcohol or drug abuse—more than twice the number of Americans afflicted with cancer. Some 1.2 million people belong to one of AA’s 55,000 meeting groups in the US, while countless others embark on the steps at one of the nation’s 11,000 professional treatment centers. Anyone who seeks help in curbing a drug or alcohol problem is bound to encounter Wilson’s system on the road to recovery.

It’s all quite an achievement for a onetime broken-down drunk. And Wilson’s success is even more impressive when you consider that AA and its steps have become ubiquitous despite the fact that no one is quite sure how—or, for that matter, how well—they work. The organization is notoriously difficult to study, thanks to its insistence on anonymity and its fluid membership. And AA’s method, which requires “surrender” to a vaguely defined “higher power,” involves the kind of spiritual revelations that neuroscientists have only begun to explore.

What we do know, however, is that despite all we’ve learned over the past few decades about psychology, neurology, and human behavior, contemporary medicine has yet to devise anything that works markedly better. “In my 20 years of treating addicts, I’ve never seen anything else that comes close to the 12 steps,” says Drew Pinsky, the addiction-medicine specialist who hosts VH1’s Celebrity Rehab. “In my world, if someone says they don’t want to do the 12 steps, I know they aren’t going to get better.”

Wilson may have operated on intuition, but somehow he managed to tap into mechanisms that counter the complex psychological and neurological processes through which addiction wreaks havoc. And while AA’s ability to accomplish this remarkable feat is not yet understood, modern research into behavior dynamics and neuroscience is beginning to provide some tantalizing clues.

One thing is certain, though: AA doesn’t work for everybody. In fact, it doesn’t work for the vast majority of people who try it. And understanding more about who it does help, and why, is likely our best shot at finally developing a system that improves on Wilson’s amateur scheme for living without the bottle.

AA doesn’t work for everybody, but when it does, it can be transformative. Members receive tokens to mark periods of sobriety, from 24 hours to one month to 55 years.

AA originated on the worst night of Bill Wilson’s life. It was December 14, 1934, and Wilson was drying out at Towns Hospital, a ritzy Manhattan detox center. He’d been there three times before, but he’d always returned to drinking soon after he was released. The 39-year-old had spent his entire adult life chasing the ecstasy he had felt upon tasting his first cocktail some 17 years earlier. That quest destroyed his career, landed him deeply in debt, and convinced doctors that he was destined for institutionalization.

Wilson had been quite a mess when he checked in the day before, so the attending physician, William Silkworth, subjected him to a detox regimen known as the Belladonna Cure—hourly infusions of a hallucinogenic drug made from a poisonous plant. The drug was coursing through Wilson’s system when he received a visit from an old drinking buddy, Ebby Thacher, who had recently found religion and given up alcohol. Thacher pleaded with Wilson to do likewise. “Realize you are licked, admit it, and get willing to turn your life over to God,” Thacher counseled his desperate friend. Wilson, a confirmed agnostic, gagged at the thought of asking a supernatural being for help.

But later, as he writhed in his hospital bed, still heavily under the influence of belladonna, Wilson decided to give God a try. “If there is a God, let Him show Himself!” he cried out. “I am ready to do anything. Anything!”

What happened next is an essential piece of AA lore: A white light filled Wilson’s hospital room, and God revealed himself to the shattered stockbroker. “It seemed to me, in the mind’s eye, that I was on a mountain and that a wind not of air but of spirit was blowing,” he later said. “And then it burst upon me that I was a free man.” Wilson would never drink again.

At that time, the conventional wisdom was that alcoholics simply lacked moral fortitude. The best science could offer was detoxification with an array of purgatives, followed by earnest pleas for the drinker to think of his loved ones. When this approach failed, alcoholics were often consigned to bleak state hospitals. But having come back from the edge himself, Wilson refused to believe his fellow inebriates were hopeless. He resolved to save them by teaching them to surrender to God, exactly as Thacher had taught him.

Following Thacher’s lead, Wilson joined the Oxford Group, a Christian movement that was in vogue among wealthy mainstream Protestants. Headed by an ex-YMCA missionary named Frank Buchman, who stirred controversy with his lavish lifestyle and attempts to convert Adolf Hitler, the Oxford Group combined religion with pop psychology, stressing that all people can achieve happiness through moral improvement. To help reach this goal, the organization’s members were encouraged to meet in private homes so they could study devotional literature together and share their inmost thoughts.

In May 1935, while on an extended business trip to Akron, Ohio, Wilson began attending Oxford Group meetings at the home of a local industrialist. It was through the group that he met a surgeon and closet alcoholic named Robert Smith. For weeks, Wilson urged the oft-soused doctor to admit that only God could eliminate his compulsion to drink. Finally, on June 10, 1935, Smith (known to millions today as Dr. Bob) gave in. The date of Dr. Bob’s surrender became the official founding date of Alcoholics Anonymous.

In its earliest days, AA existed within the confines of the Oxford Group, offering special meetings for members who wished to end their dependence on alcohol. But Wilson and his followers quickly broke away, in large part because Wilson dreamed of creating a truly mass movement, not one confined to the elites Buchman targeted. To spread his message of salvation, Wilson started writing what would become AA’s sacred text: Alcoholics Anonymous, now better known as the Big Book.

The core of AA is found in chapter five, entitled “How It Works.” It is here that Wilson lists the 12 steps, which he first scrawled out in pencil in 1939. Wilson settled on the number 12 because there were 12 apostles.

In writing the steps, Wilson drew on the Oxford Group’s precepts and borrowed heavily from William James’ classic The Varieties of Religious Experience, which Wilson read shortly after his belladonna-fueled revelation at Towns Hospital. He was deeply affected by an observation that James made regarding alcoholism: that the only cure for the affliction is “religiomania.” The steps were thus designed to induce an intense commitment, because Wilson wanted his system to be every bit as habit-forming as booze.

The first steps famously ask members to admit their powerlessness over alcohol and to appeal to a higher power for help. Members are then required to enumerate their faults, share them with their meeting group, apologize to those they’ve wronged, and engage in regular prayer or meditation. Finally, the last step makes AA a lifelong duty: “Having had a spiritual awakening as the result of these steps, we tried to carry this message to alcoholics and to practice these principles in all our affairs.” This requirement guarantees not only that current members will find new recruits but that they can never truly “graduate” from the program.

Aside from the steps, AA has one other cardinal rule: anonymity. Wilson was adamant that the anonymous component of AA be taken seriously, not because of the social stigma associated with alcoholism, but rather to protect the nascent organization from ridicule. He explained the logic in a letter to a friend:

[In the past], alcoholics who talked too much on public platforms were likely to become inflated and get drunk again. Our principle of anonymity, so far as the general public is concerned, partly corrects this difficulty by preventing any individual receiving a lot of newspaper or magazine publicity, then collapsing and discrediting AA.

Bill Wilson’s Gospel

On Dec. 14, 1934, a failed stockbroker named Bill Wilson was struggling with alcoholism at a New York City detox center. It was his fourth stay at the center and nothing had worked. This time, he tried a remedy called the belladonna cure — infusions of a hallucinogenic drug made from a poisonous plant — and he consulted a friend named Ebby Thacher, who told him to give up drinking and give his life over to the service of God.

Wilson was not a believer, but, later that night, at the end of his rope, he called out in his hospital room: “If there is a God, let Him show Himself! I am ready to do anything. Anything!”

As Wilson described it, a white light suffused his room and the presence of God appeared. “It seemed to me, in the mind’s eye, that I was on a mountain and that a wind not of air but of spirit was blowing,” he testified later. “And then it burst upon me that I was a free man.”

Wilson never touched alcohol again. He went on to help found Alcoholics Anonymous, which, 75 years later, has 11,000 professional treatment centers, 55,000 meeting groups and some 1.2 million members.

The movement is the subject of a smart and comprehensive essay by Brendan I. Koerner in the July 2010 issue of Wired magazine. The article is noteworthy not only because of the light it sheds on what we’ve learned about addiction, but for what it says about changing behavior more generally. Much of what we do in public policy is to try to get people to behave in their own long-term interests — to finish school, get married, avoid gangs, lose weight, save money. Because the soul is so complicated, much of what we do fails.

The first implication of Koerner’s essay is that we should get used to the idea that we will fail most of the time. Alcoholics Anonymous has stood the test of time. There are millions of people who fervently believe that its 12-step process saved their lives. Yet the majority, even a vast majority, of the people who enroll in the program do not succeed in it. People are idiosyncratic. There is no single program that successfully transforms most people most of the time.

The second implication is that we should get over the notion that we will someday crack the behavior code — that we will someday find a scientific method that will allow us to predict behavior and design reliable social programs. As Koerner notes, A.A. has been the subject of thousands of studies. Yet “no one has yet satisfactorily explained why some succeed in A.A. while others don’t, or even what percentage of alcoholics who try the steps will eventually become sober as a result.”

Each member of an A.A. group is distinct. Each group is distinct. Each moment is distinct. There is simply no way for social scientists to reduce this kind of complexity into equations and formulas that can be replicated one place after another.

Nonetheless, we don’t have to be fatalistic about things. It is possible to design programs that will help some people some of the time. A.A. embodies some shrewd insights into human psychology.

In a culture that generally celebrates empowerment and self-esteem, A.A. begins with disempowerment. The goal is to get people to gain control over their lives, but it all begins with an act of surrender and an admission of weakness.

In a culture that thinks of itself as individualistic, A.A. relies on fellowship. The general idea is that people aren’t really captains of their own ship. Successful members become deeply intertwined with one another — learning, sharing, suffering and mentoring one another. Individual repair is a social effort.

In a world in which gurus try to carefully design and impose their ideas, Wilson surrendered control. He wrote down the famous steps and foundations, but A.A. allows each local group to form, adapt and innovate. There is less quality control. Some groups and leaders are great; some are terrible. But it also means that A.A. is decentralized, innovative and dynamic.

Alcoholics have a specific problem: they drink too much. But instead of addressing that problem with the psychic equivalent of a precision-guidance missile, Wilson set out to change people’s whole identities. He studied William James’s “The Varieties of Religious Experience.” He sought to arouse people’s spiritual aspirations rather than just appealing to rational cost-benefit analysis. His group would help people achieve broad spiritual awakenings, and abstinence from alcohol would be a byproduct of that larger salvation.

In the business of changing lives, the straight path is rarely the best one. A.A. illustrates that even in an age of scientific advance, it is still ancient insights into human nature that work best. Wilson built a remarkable organization on a nighttime spiritual epiphany.

David Brooks, New York Times



Published and Perished

Glossy magazines—and parties—for the shiny time before Wall Street’s fall.

If a hustling Candide had told the story of the Great Wall Street Meltdown, it might read something like this book—a not-so-innocent’s chronicle of crafty charlatans and vulpine finaglers who left the hero dazed and diminished in the bankruptcy of his dreams.

Randall Lane’s notion with “The Zeroes” is to use the sad tale of his slick-magazine enterprise, Doubledown Media, as a proxy for the now-familiar story of the financial collapse. Doubledown published giveaway titles aimed at the nouveaux riches spawned by the big bubble. The publications had names like Trader Monthly, Dealmaker, Private Air, Corporate Leader and Cigar Report. The idea was to use other people’s money to leverage them into an international, multi-media publishing colossus that would make the founders as rich as their target readers.

“The Zeroes” was embargoed—until today—by the publisher in a marketing ploy meant to suggest that the sizzling content had to be safeguarded against leaks. But the book’s hot stuff turns out to be mostly lukewarm. The big news: The juiced ex-Mets baseball player-turned-stock-picker Lenny Dykstra supposedly traded access to Jim Cramer, CNBC’s screaming “Mad Money” man, for $250,000 in penny stock; the psychedelic-art hack Peter Max is a wily operator; and Wall Street sharks made even more money and behaved even worse than you imagined as the markets careened toward disaster in the first decade—the Zeroes—of the new century.

Mr. Lane, who started out in journalism helping compile Forbes magazine’s billionaire scorecards, is an ingenuous narrator who likes to remind the reader that he was essentially a schlump living in a fourth-floor walkup while his glossies burnished the egos and tweaked the appetites of Wall Street’s new wildcatters.

As the boom inflates, Mr. Lane teams up with a London-based publishing wizard in 2004 to launch Trader Monthly. Its pitch-perfect slogan: “See it. Make it. Spend it.” Over the next few years, they add editions in London and Dubai and acquire or start other titles. Soon the pair comes under the spell of a business-mag vet named Jim Dunning, who pumps his own millions into the enterprise and spins a vision of tiny Doubledown quadrupling down in a bid to become an international marketing machine stalking the new “working wealthy.”

The hunt for money to grow on puts Lane & Co. on a treadmill to oblivion. Mr. Lane meets with a grotesque assortment of bankers, venture capitalists, merger partners, potential acquirers and other scalawags. Black books and deal sheets are exchanged. Credit lines are dangled and jerked away. At one point his venture is valued at $25 million; at another, $17 million.

Such healthy valuations were strange because as “The Zeroes” goes along it becomes obvious that, while Mr. Lane’s company is churning out a half-million free copies a month, it is really in the business of staging parties. Advertisers and potential advertisers pay Doubledown for the privilege of pouring the latest designer vodka down the gullets of Wall Street’s new aristocracy, peddling $10,000 watches on the wrists of arm-candy models and enticing rich marks into $300,000 Maybach luxury sedans and time-share condos in Las Vegas.

Mr. Lane’s commercial bacchanals are tame compared with the blasts staged by others. One trader tells him of a golf outing where each twosome was assigned its own stripper. “The women would dangle on the back of the cart from hole to hole,” Mr. Lane writes, “and then prostrate themselves, legs open, on the putting greens, providing the traders a target.”

It isn’t until page 171 that the reader learns that all of Mr. Lane’s frenetic activity produced $3 million in annual losses for Doubledown in 2005, 2006 and 2007. The party addiction was so strong that on Sept. 16, 2008—the day the feds took over AIG, 24 hours after Lehman Brothers cratered—Dealmaker magazine gave a party for a thousand shell-shocked Wall Streeters.

Many sketchy types cross our hero’s path, but none can match Lenny Dykstra. The ballplayer nicknamed “Nails” had somehow morphed into Jim Cramer’s stock-handicapping protégé. Mr. Dykstra had just sold his West Coast car-wash business for $25 million and bought Wayne Gretzky’s L.A. mansion; he drove a Maybach, flew in his own jets and had an investment scheme for rich pro athletes called “The Players Club.” As portrayed in “The Zeroes,” Mr. Dykstra (who piggybacked on a pro’s stock tips) seems seriously demented. Among other tics, he likes to stay up for four or five days at a stretch before crashing. He freely admits to Mr. Lane that he used steroids while playing ball. Despite everything, Mr. Lane goes into business with him; it all ends in tears and surreal litigation.

Mr. Lane gets involved with a host of other characters: Henry Hill, the real-life “Goodfellas” turncoat booted from the federal witness-protection program; Jacob the Jeweler, the money launderer for Detroit’s Black Mafia drug gang; a thieving Caribbean prime minister; and Peter Max. The artist signs on to peddle paintings of the Wall Street icons celebrated in Mr. Lane’s magazines. Mr. Max’s ingenious “One-Plus-Three” gimmick is to do his supposedly single portraits as inseparable tetraptychs—and charge the subjects four times his usual rate.

All of this was as doomed as the credit-default-swap lunacy of those days. Despite his familiarity with money culture, Mr. Lane breaks the sacred code: He dumps $283,000 of his own—and $115,000 of his mother’s!—into the sinking ship and spends another $130,000 redeeming his personal pledge for the office lease. Doubledown winds up in Chapter 7 bankruptcy, its titles, contents and lists fetching $50,000 from a newsletter publisher.

He muses that, as in the old days of Wall Street partnerships, he had risked his own stake in his business. He’d lost half a million dollars but bought peace of mind. “Maybe, in that example,” he suggests, “there was a lesson for Wall Street.” Nope. After the bailout, bankers’ and traders’ bonuses for 2009 set a record. “The game played on, timeless and unabated,” Mr. Lane concedes, sadder and inescapably wiser.

Mr. Kosner is the author of “It’s News to Me,” a memoir of his career as editor of Newsweek, New York magazine, Esquire and the New York Daily News.



Afghanistan: Eyes Wide Shut

President Obama’s ambivalence toward the war is energizing our enemies and undermining our allies.

With a wink of its left eye, the Obama administration tells its liberal base that a year from now the U.S. will be heading for a quick Afghan exit. “Everyone knows there’s a firm date,” insists White House chief of staff Rahm Emanuel.

With a wink of its right, the administration tells Afghanistan, Pakistan, NATO allies and its own military leadership that the July 2011 date is effectively meaningless. The notion that a major drawdown will begin next year “absolutely has not been decided,” says Defense Secretary Robert Gates.

The winks are simultaneous. When it comes to Barack Obama’s “war of necessity,” pretty much everyone thinks he’s blinked.

Not the least of the ironies of the president’s decision to sack Stanley McChrystal in favor of David Petraeus is that, in the name of asserting civilian control over the military, the president has a commander in Afghanistan whom he cannot realistically fire. It isn’t just that St. Dave has, for the GOP, the potential political potency of Dwight Eisenhower. It’s that the president needs the general’s credibility in Afghanistan because he has so little of his own.

Wars are contests of wills. If our efforts in Afghanistan have an increasingly ghostly quality—visible to the naked eye but incapable of achieving effects in the physical world—it has more to do with a widespread perception that we just aren’t prepared to do what it takes to win than it does with the particulars of counterinsurgency strategy or its execution. Gen. Petraeus won in Iraq because George W. Bush had his back and the people of Iraq, friend as well as foe, knew it.

By contrast, the fact that we have been unable to secure the small city of Marja, much less take on the larger job of Kandahar, is because nobody—right down to the village folk whom we are so sedulously courting with good deeds and restrictive rules of engagement—believes that Barack Obama believes in his own war. The vacuum in credibility begets the vacuum in power.

On Friday, the New York Times reported that Pakistan is seeking to expand its influence in Afghanistan. “Coupled with their strategic interests,” noted the Times, “the Pakistanis say they have chosen this juncture to open talks with [Afghan President Hamid] Karzai because, even before the controversy with Gen. McChrystal, they sensed uncertainty—‘a lack of fire in the belly,’ said one Pakistani—within the Obama administration over the Afghan fight.”

The Times followed up the next day with a story about the effects of the Af-Pak rapprochement on Afghanistan’s minorities: “‘Karzai is giving Afghanistan back to the Taliban, and he is opening up the old schisms,’ said Rehman Oghly, an Uzbek member of Parliament and once a member of an anti-Taliban militia. ‘If he wants to bring in the Taliban, and they begin to use force, then we will go back to civil war and Afghanistan will be split.’”

Well, that would be bad, just as it would be bad if Pakistan reasserted itself in Afghanistan via its sometime “asset” in the so-called Haqqani network, which more recently has been an ally of al Qaeda but may yet want a seat in a future Afghan cabinet. But this is what inevitably flows when the U.S. can set no more ambitious a military goal for itself than the promise, as the president put it last week, “to break the Taliban’s momentum.” How about breaking the Taliban itself?

Perhaps the job-secure Gen. Petraeus could press the administration to stop talking about withdrawal schedules and start using the word “victory” with frequency and conviction. Or perhaps the general could, in his usual politic way, speak that way himself. Doing so would reassure our remaining Afghan friends and deter importuning outsiders. It might steady the unsteady Mr. Karzai. Above all, it would persuade the Afghans whose support we need that they won’t soon find themselves on the wrong end of a Taliban firing squad for having once sided with us.

But against these arguments must be weighed the president’s personal determination to end this war sooner rather than later. As Newsweek’s Jonathan Alter describes the president’s mind when he decided on an Afghan surge last fall, “this would not be a five- to seven-year nation-building commitment,” and July 2011 would mark the “beginning of a real—not a token—withdrawal.” The president, Mr. Alter reports, told his war council that “I don’t want to be going to Walter Reed for another eight years.”

No president would. Then again, few presidents would wage a war they weren’t fully committed to winning. This is where Mr. Obama finds himself now: seeking to calibrate some notional measure of “success”—how much Afghan “capacity” built; how much political “reconciliation” achieved, and so on—even as the rest of the world, the Taliban included, calls his bluff.

Gen. Petraeus will do what he can to turn things around, though he must know that every appearance of success will whet the administration’s appetite for a precipitous withdrawal. Maybe he can persuade the White House that this is a war without shortcuts, one that the U.S. has no choice but to win. Failing that, a president’s ambivalence will soon become a general’s nightmare. And that will be a tragedy for two countries.

Bret Stephens, Wall Street Journal



Philosophy App

In his “Hitchhiker’s Guide to the Galaxy,” the science fiction writer Douglas Adams introduces Deep Thought — a computer the size of a small city, designed millions of years ago by a race of hyperintelligent pan-dimensional beings searching for the meaning of life. The super computer is described as “so amazingly intelligent that even before the data banks had been connected up it had started from ‘I think therefore I am’ and got as far as the existence of rice pudding and income tax before anyone managed to turn it off …”

We’re a little way off from a handheld Deep Thought, but since life and meaning continue to perplex, a new philosophy application for smart phones might be the next best thing. AskPhilosophers.org — a popular online resource for questions philosophical — has launched an app — AskPhil — for iPhones, iPods and Android phones.

Alexander George, a professor of philosophy at Amherst College, launched AskPhilosophers.org in 2005 (he discusses the site in his post for The Stone, “The Difficulty of Philosophy”). He describes the AskPhil app in an Amherst press release: “When philosophical questions occur to people away from their desks or computer screens they’ll now have the opportunity through their mobile devices to see quickly whether other people have already asked that question and whether it’s received interesting responses.” The site deploys a panel of over 30 professional philosophers to tackle the questions that have vexed mankind for generations, including problems of logic, love and ethics.

Unlike Deep Thought, AskPhil does not deliver, or purport to deliver, definitive answers. Rather the panelists respond with thoughtful clarifications; they introduce concepts and sometimes suggest useful further reading. They address the questions posed as opposed to answering them.

And they do so relatively quickly. Adams’ hyperintelligent beings asked Deep Thought for the Ultimate Answer to Life the Universe and Everything. Deep Thought took but a brief seven and a half million years to respond. Its definitive answer: 42.

As the super computer kindly pointed out, the Ultimate Answer is baffling, because no one actually knew the Ultimate Question of Life the Universe and Everything that it was a response to. And at least there’s now an iPhone app to help with that.

But is this a good thing? Does this sort of merging of handy technology with deep thought (the lowercase, human kind, that is) enrich philosophical activity or does it fragment and devalue it?

Natasha Lennard, New York Times



Lost in the Clouds?

Those in the ivory tower might think themselves enlightened; those on the ground find them irrelevant.
— Mahmood, Richmond

Philosophers are little men in little offices who write unreadable papers about symbolic logic or metaethics. That’s all.
— Ace-K

These sentiments — posted by readers in response to “What Is a Philosopher?” by Simon Critchley — touch on a common complaint: that the concerns of philosophers are far removed from the daily lives of most people. Here we offer two more views on the matter: one from Alexander George, a professor of philosophy at Amherst College who runs AskPhilosophers.org; and another from Frieda Klotz, an editor of a forthcoming book on Plutarch.

The Difficulty of Philosophy

By Alexander George

One often hears the lament: Why has philosophy become so remote, why has it lost contact with people?

The complaint must be as old as philosophy itself.  In Aristophanes’ “Clouds,” we meet Socrates as he is being lowered to the stage in a basket.  His first words are impatient and distant: “Why do you summon me, o creature of a day?”  He goes on to explain pompously what he was doing before he was interrupted: “I tread the air and scrutinize the sun.”  Already in Ancient Greece, philosophy had a reputation for being troublesomely distant from the concerns that launch it.

Is the complaint justified, however?  On the face of it, it would seem not to be.  I run AskPhilosophers.org, a Web site that features questions from the general public and responses by a panel of professional philosophers.  The questions are sent by people at all stages of life: from the elderly wondering when to forgo medical intervention to successful professionals asking why they should care about life at all, from teenagers inquiring whether it is irrational to fear aging to 10-year-olds wanting to know what the opposite of a lion is. The responses from philosophers have been humorous, kind, clear, and at the same time sophisticated, penetrating, and informed by the riches of the philosophical traditions in which they were trained.  The site has evidently struck a chord: we have by now posted thousands of entries, and the questions continue to arrive daily from around the world.  Clearly, philosophers can — and do — respond to philosophical questions in intelligible and helpful ways.

But admittedly, this is casual stuff.  And at the source of the lament is the perception that philosophers, when left to their own devices, produce writings and teach classes that are either unhappily narrow or impenetrably abstruse.  Full-throttle philosophical thought often appears far removed from, and so much more difficult than, the questions that provoke it.

It certainly doesn’t help that philosophy is rarely taught or read in schools.  Despite the fact that children have an intense interest in philosophical issues, and that a training in philosophy sharpens one’s analytical abilities, with few exceptions our schools are de-philosophized zones.  One knock-on effect is that students entering college shy away from philosophy courses.  Bookstores — those that remain — boast philosophy sections cluttered with self-help guides.  It is no wonder that the educated public shows no interest in, or perhaps even finds alien, the fully ripened fruits of philosophy.

While all this surely contributes to the felt remoteness of philosophy, it is also a product of it: for one reason why philosophy is not taught in schools is that it is judged irrelevant.  And so we return to the questions of why philosophy appears so removed and whether this is something to lament.

This situation seems particular to philosophy.  We do not find physicists reproached in the same fashion.  People are not typically frustrated when their questions about the trajectory of soccer balls get answered by appeal to Newton’s Laws and differential calculus.

The difference persists in part because to wonder about philosophical issues is an occupational hazard of being human in a way in which wondering about falling balls is not.  Philosophical questions can present themselves to us with an immediacy, even an urgency, that can seem to demand a correspondingly accessible answer.  High philosophy usually fails to deliver such accessibility — and so the dismay that borders on a sense of betrayal.

Must it be so?  To some degree, yes.  Philosophy may begin in wonder, as Plato suggested in the “Theaetetus,” but it doesn’t end there.  Philosophers will never be content merely to catalog wonders, but will want to illuminate them — and whatever kind of work that involves will surely strike some as air treading.

But how high into the air must one travel?  How theoretical, or difficult, need philosophy be?  Philosophers disagree about this and the history of philosophy has thrown up many competing conceptions of what philosophy should be.  The dominant conception today, at least in the United States, looks to the sciences for a model of rigor and explanation.  Many philosophers now conceive of themselves as more like discovery-seeking scientists than anything else, and they view the great figures in the history of philosophy as likewise “scientists in search of an organized conception of reality,” as W.V. Quine, the leading American philosopher of the 20th Century, once put it.  For many, science not only provides us with information that might be pertinent to answering philosophical questions, but also with exemplars of what successful answers look like.

Because philosophers today are often trained to think of philosophy as continuous with science, they are inclined to be impatient with expectations of greater accessibility.  Yes, philosophy does begin in wonder, such philosophers will agree.  But if one is not content to be a wonder-monger, if one seeks illumination, then one must uncover abstract, general principles through the development of a theoretical framework.

This search for underlying, unifying principles may lead into unfamiliar, even alien, landscapes.  But such philosophers will be undaunted, convinced that the correct philosophical account will often depend on an unobvious discovery visible only from a certain level of abstraction.  This view is actually akin to the conception advanced by Aristophanes’ Socrates when he defends his airborne inquiries: “If I had been on the ground and from down there contemplated what’s up here, I would have made no discoveries at all.”  The resounding success of modern science has strengthened the attraction of an approach to explanation that has always had a deep hold on philosophers.

But the history of philosophy offers other conceptions of illumination.  Some philosophers will not accept that insight demands the discovery of unsuspected general principles.  They are instead sympathetic to David Hume’s dismissal, over 250 years ago, of remote speculations in ethics: “New discoveries are not to be expected in these matters,” he said.  Ludwig Wittgenstein took this approach across the board when he urged that “The problems [in philosophy] are solved, not by giving new information, but by arranging what we have always known.”  He was interested in philosophy as an inquiry into “what is possible before all new discoveries and inventions,” and insisted that “If one tried to advance theses in philosophy, it would never be possible to debate them, because everyone would agree to them.”  Insight is to be achieved not by digging below the surface, but rather by organizing what is before us in an illuminatingly perspicuous manner.

The approach that involves the search for “new discoveries” of a theoretical nature is now ascendant.  Since the fruits of this kind of work, even when conveyed in the clearest of terms, can well be remote and difficult, we have here another ingredient of the sense that philosophy spends too much time scrutinizing the sun.

Which is the correct conception of philosophical inquiry?  Philosophy is the only activity such that to pursue questions about the nature of that activity is to engage in it.  We can certainly ask what we are about when doing mathematics or biology or history — but to ask those questions is no longer to do mathematics or biology or history.  One cannot, however, reflect on the nature of philosophy without doing philosophy.  Indeed, the question of what we ought to be doing when engaged in this strange activity is one that has been wrestled with by many great philosophers throughout philosophy’s long history.

Questions, therefore, about philosophy’s remove cannot really be addressed without doing philosophy.  In particular, the question of how difficult philosophy ought to be, or the kind of difficulty it ought to have, is itself a philosophical question.  In order to answer it, we need to philosophize — even though the nature of that activity is precisely what puzzles us.

And that, of course, is another way in which philosophy can be difficult.

Alexander George is professor of philosophy at Amherst College. A new book drawn from AskPhilosophers.org, “What Should I Do?: Philosophers on the Good, the Bad, and the Puzzling,” is forthcoming from Oxford University Press.

The Philosophical Dinner Party

By Frieda Klotz

What is the meaning of life? Is there a god? Does the human race have a future? The standard perception of philosophy is that it poses questions that are often esoteric and almost always daunting. So another pertinent question, and one implicitly raised by Mr. George’s discussion, is this: can philosophy ever be fun?

Philosophy was as much a way of life as a theoretical study for ancient philosophers — from Diogenes the Cynic, masturbating in public (“I wish I could cure my hunger as easily,” he replied, when challenged) to Marcus Aurelius, obsessively transcribing and annotating his thoughts — and its practitioners didn’t mind amusing people or causing public outrage to bring attention to their message. Divisions between academic and practical philosophy have long existed, for sure, but even Plato, who was prolific on theoretical matters, may have tried to translate philosophy into action: ancient rumor has it that he traveled to Sicily to tutor first Dionysios I, king of Syracuse, and later his son (each ruler fell out with Plato and unceremoniously sent him home).

For at least one ancient philosopher, the love of wisdom was not only meant to be practical, but also to combine “fun with serious effort.” This is the definition of Plutarch, a Greek who lived in the post-Classical age of the second century A.D., a time when philosophy tended to focus on ethics and morals. Plutarch is better known as a biographer than a philosopher. A priest, politician and Middle Platonist who lived in Greece under Roman rule, he wrote parallel lives of Greeks and Romans, from which Shakespeare borrowed liberally and which Emerson rapturously described as “a bible for heroes.” At the start and end of each “life” he composed a brief moral essay, comparing the faults and virtues of his subjects. Although they are artfully written, the “Lives” are really little more than brilliant realizations of Plutarch’s own very practical take on philosophy, aimed at teaching readers how to live.

Plutarch thought philosophy should be taught at dinner parties. It should be taught through literature, or written in letters giving advice to friends. Good philosophy does not occur in isolation; it is about friendship, inherently social and shared. The philosopher should engage in politics, and he should be busy, for he knows, as Plutarch sternly puts it, that idleness is no remedy for distress.

Many of Plutarch’s works are concerned with showing readers how to deal better with their day-to-day circumstances. In Plutarch’s eyes, the philosopher is a man who sprinkles seriousness into a silly conversation; he gives advice and offers counsel, but prefers a discussion to a conversation-hogging monologue. He likes to exchange ideas but does not enjoy aggressive arguments. And if someone at his dinner-table seems timid or reserved, he’s more than happy to add some extra wine to the shy guest’s cup.

He outlined this benign doctrine over the course of more than 80 moral essays (far less often read than the “Lives”). Several of his texts offer two interpretive tiers — advice on philosophical behavior for less educated readers, and a call to further learning, for those who would want more. It’s intriguing to see that the guidance he came up with has much in common with what we now call cognitive behavioral therapy. Writing on the subject of contentment, he tells his public: Change your attitudes! Think positive non-gloomy thoughts! If you don’t get a raise or a promotion, remember that means you’ll have less work to do. He points out that “There are storm winds that vex both the rich and the poor, both married and single.”

In one treatise, aptly called “Discussions Over Drinks,” Plutarch gives an account of the dinner-parties he attended with his friends during his lifetime. Over innumerable jugs of wine they grapple with 95 topics, covering science, medicine, social etiquette, women, alcohol, food and literature: When is the best time to have sex? Did Alexander the Great really drink too much? Should a host seat his guests or allow them to seat themselves? Why are old men very fond of strong wine? And, rather obscurely: Why do women not eat the heart of lettuce? (This last, sadly, is fragmentary and thus unanswered). Some of the questions point to broader issues, but there is plenty of gossip and philosophical loose talk.

Plutarch begins “Discussions” by asking his own philosophical question — is philosophy a suitable topic of conversation at a dinner party? The answer is yes, not just because Plato’s “Symposium” is a central philosophic text (symposium being Greek for “drinking party”); it’s because philosophy is about conducting oneself in a certain way — the philosopher knows that men “practice philosophy when they are silent, when they jest, even, by Zeus! when they are the butt of jokes and when they make fun of others.”

Precisely because of its eclecticism and the practical nature of his treatises, Plutarch’s work is often looked down on in the academic world, and even Emerson said he was “without any supreme intellectual gifts,” adding, “He is not a profound mind … not a metaphysician like Parmenides, Plato or Aristotle.” When we think of the lives of ancient philosophers, we’re far more likely to think of Socrates, condemned to death by the Athenians and drinking hemlock, than of Plutarch, a Greek living happily with Roman rule, quaffing wine with his friends.

Yet in our own time-poor age, with anxieties shifting from economic meltdowns to oil spills to daily stress, we need philosophy of the everyday sort now more than ever. In the Plutarchan sense, friendship, parties and even wine are not trivial; and while philosophy may indeed be difficult, we shouldn’t forget that it should be fun.

Frieda Klotz is a freelance journalist living in Brooklyn. She is co-editing a book on Plutarch’s “Discussions over Drinks” for Oxford University Press.



The Psychology of Bliss

In 2003, a German computer expert named Armin Meiwes advertised online for someone to kill and then eat. Incredibly, 200 people replied, and Meiwes chose a man named Bernd Brandes. One night, in Meiwes’s farmhouse, Brandes took some sleeping pills and drank some schnapps and was still awake when Meiwes cut off his penis, fried it in olive oil and offered him some to eat. Brandes then retreated to the bathtub, bleeding profusely. Meiwes stabbed him in the neck, chopped him up and stored him in the freezer. Over the next several weeks, he defrosted and sautéed 44 pounds of Brandes, eating him by candlelight with his best cutlery.

Hold on a minute. Why does this story appear in “How Pleasure Works,” a book whose jacket copy promises a “new understanding of pleasure, desire and value”? If this is a “new understanding,” maybe we’ll just stick with the old one, thank you very much. For heaven’s sake, we’re only on Chapter 2, and already we’re deep into cannibalism, compounded by a suicidal-masochistic impulse. Still to come are such topics as rubber vomit, human grimacing contests and monkey pornography.

But stick with it and trust the author, Paul Bloom, to use these weird digressions to get us someplace interesting. Bloom, a professor of psychology at Yale, has written a book that is different from the slew already out there on the general subject of happiness. No advice here about how to become happier by organizing your closets; Bloom is after something deeper than the mere stuff of feeling good. He analyzes how our minds have evolved certain cognitive tricks that help us negotiate the physical and social world — and how those tricks lead us to derive pleasure in some rather unexpected places.

“Many significant human pleasures are universal,” Bloom writes. “But they are not biological adaptations. They are byproducts of mental systems that have evolved for other purposes.” Evolutionary psychologists like Bloom are fond of explaining perplexing psychological attributes this way. These traits emerged, the argument goes, as accidental accompaniments to other traits that help us survive and reproduce.

Our most puzzling sources of pleasure, according to this view, are side effects of our inborn “essentialism,” the idea that “things have an underlying reality or true nature . . . and it is this hidden nature that really matters.” It was to our ancestors’ advantage to be essentialists, so they could categorize the plants and animals in their environment into “dangerous” and “harmless” and thereby know which ones to avoid. Today, our ability to recognize the essence of things explains, for instance, why someone would be willing to pay $48,875 for a tape measure once owned by John F. Kennedy.

Pornography is another example of pleasure via essentialism. Why do some men spend more time looking at Internet porn than interacting with flesh-and-blood lovers? There may be “no reproductive advantage” to liking pornography, Bloom writes, but there is an advantage to its source: an urge to look at real-world “attractive naked people,” which makes us want sex, which in turn is good for continuation of the species. Pornography uses the same pleasure mechanism as actual sex, which is handy since “there aren’t always attractive naked people around when you need them.”

Then there are the (sometimes) more G-rated pleasures of the imagination: the joys of fiction, movies, television, daydreaming. “Surely we would be better off pursuing more adaptive activities — eating and drinking and fornicating, establishing relationships, building shelter and teaching our children,” Bloom writes. But when we retreat into an imagined world, it’s almost like experiencing the pleasure for real. Bloom calls it “Reality Lite — a useful substitute when the real pleasure is inaccessible, too risky or too much work.”

Bloom’s ideas go against the traditional view of pleasure as purely sensory: that is, that we get pleasure from food because of how it tastes, from music because of how it sounds, from art because of how it looks. The sensory explanation is only partially true, he writes. “Pleasure is affected by deeper factors, including what the person thinks about the true essence of what he or she is getting pleasure from.” When we pay good money for tape measures that famous people have touched, or treasure our children’s clumsy kindergarten art, it is because we believe that something about the person’s essence exists in the object itself. How else to explain Jonathan Safran Foer’s collection of blank sheets of paper? These are not just any blank sheets: they are the sheets that were about to be written on next by Paul Auster, Susan Sontag and David Foster Wallace, who sent them to Foer at his request.

And what about the seamier sides of pleasure, like the performance artist who sold 90 cans of his own feces, half of which were improperly autoclaved and would eventually explode, for as much as $61,000 a can? Bloom writes about them without judgment; they just help prove his point, albeit in some grotesque ways. Remember the cannibal Armin Meiwes and his victim, Bernd Brandes? Meiwes believed he was eating Brandes’s essence. “With every bite, my memory of him grew stronger,” Meiwes told the authorities. He also noted that Brandes had been fluent in English — and that since eating him, his own English had much improved.

Robin Marantz Henig is the author, most recently, of “Pandora’s Baby: How the First Test Tube Babies Sparked the Reproductive Revolution.”



The Muslim Past

In the United States, a country saturated with instant punditry, serious scholars rarely attain celebrity as public intellectuals. Yet Bernard Lewis, a professor emeritus of Near Eastern studies at Princeton, has long radiated influence far beyond his specialization in Ottoman studies. A friend of Henry Kissinger and a mentor to subsequent cohorts of conservative policy makers, Lewis arguably has done more than any Mideast expert to mold American attitudes to the region.

His latest book, “Faith and Power,” a collection of essays, lectures and speeches from the past two decades loosely linked to the theme of relations between Islam and the state, reminds us why. Lewis is a fine writer, with a commanding authorial voice that sweeps magisterially across the ages. His linkage of diverting historical anecdotes to pressing current issues and his skill at contracting complex ideas into clever apothegms do much to explain his appeal to politicians in search of a punchy quote.

Here, for instance, is Lewis contrasting political structures at home and abroad: “In America one uses money to buy power, while in the Middle East one uses power to acquire money.” Even a subject as vexed as the search for Arab-Israeli peace boils down to this satisfyingly pithy formula: “If the conflict is about the size of Israel, then long and difficult negotiations can eventually resolve the problem. But if the conflict is about the existence of Israel, then serious negotiation is impossible.”

Such distillations can be salutary, but may also prove dangerously reductionist. Take Lewis’s remark that democracies do not make war, and dictatorships do not make peace. This glib elaboration of a neoconservative mantra is easily challenged. The strongmen who ran Grenada, Panama and Iraq may have been bad guys, but there is no disputing that it was the United States that attacked them, not the other way around. The Egyptian dictator Anwar Sadat made peace with Israel, not the democratically elected Hamas party.

Yet Lewis blithely hoists his rhetoric to even more contentious heights. Like Nazi Germany and Communist Russia, he asserts, Middle Eastern dictators need war to justify their tyranny. This means peace will come only with their collapse or their defeat. In other words, democracies must clobber every dictator. And that’s not all. Giving a more specific nudge to policy makers, Lewis pretends to have discerned “deep roots” of democratic traditions in Iraq and Iran, of all places. Democracy might easily prevail there, he opines, and inspire others in the region, “given the chance.”

It might be argued that it is hardly Lewis’s fault if some in the Bush administration took such expert advice a little too literally. Yet Lewis himself makes his intentions pretty clear in another essay: “Either we bring them freedom, or they destroy us.”

The quaintly missionary idea of “bringing freedom” to benighted peoples may simply betray Lewis’s age: he was born in England in 1916, in the already waning glory of the British Empire. But the shrill alarmism jars with his repute as a historian whose most notable contribution has been to chronicle the relative decline of Islam in the past three centuries. It is fair to say that the four-fifths of the world’s people who are not Muslim appear in no immediate or even distant danger of extinction at the point of a scimitar.

If it were only the present that Lewis perceived through a gently distorted mirror, this might not detract from his distinction as a historian. But he gets the past subtly wrong, too, often by omitting vital context. He says that when the Arabs rejected the partition of Palestine in 1947, it was simply because they refused to accept having a Jewish state next door. Yet Arabs were not alone in questioning the United Nations plan to allocate 56 percent of Palestine’s territory to a minority consisting mostly of recent immigrants, which made up barely a third of the population and owned just 7 percent of the land. Greece, India and Cuba, among others, also voted no, while China, Ethiopia, Colombia, Chile and Mexico abstained. The overriding motive of all these doubters was presumably not bigotry, as Lewis implies, but concern about Palestinians’ rights.

Modern history may not be Lewis’s forte. Yet even regarding older eras, his views sometimes seem at odds with those of another distinguished historian. Fred M. Donner is a professor of Near Eastern history at the University of Chicago. His new book, “Muhammad and the Believers,” is a learned and brilliantly original, yet concise and accessible study of Islam’s formative first century.

Western historians have tended to ascribe the astonishing success of the new faith to external factors, like economic and political conditions in seventh-century Arabia. Donner persuasively returns the faith itself to centrality. Equally convincing is his well-documented assertion that Islam, at its origins, was rather different from the religion later understood by either its practitioners or by non-Muslims.

This more sophisticated reading of history explains Islam not as a static doctrine, but as one that evolved from an ecumenical, syncretic, pietist and millenarian cult into a more dogmatic and exclusivist faith. In contrast to Lewis, who depicts Islam as aggressive from the start, Donner shows that contemporary followers of other religions initially, and perhaps even for several generations, regarded Islam as an open-minded and not specially threatening movement with universalist aspirations. A Nestorian Christian patriarch writing to a bishop in A.D. 647 testified not only that his new Muslim rulers were peaceable, but also that they honored priests and bestowed gifts on monasteries. An Armenian bishop recorded around A.D. 660 that the first governor of Muslim Jerusalem was Jewish.

The documentary evidence suggests that the term “Muslim” came into common use only in the eighth century. The earlier word, “Believers,” described a community that embraced many faiths.

Again in contrast to Lewis, Donner shows that while the theocratic leanings of Islam make it seem different from other monotheistic faiths today, at the beginning they merely perpetuated the models of the contemporary great powers, Christian Byzantium and Sasanian Persia. Over time, as Donner shows, doctrinal and dynastic divisions among the Muslims created a need to enforce orthodoxy, rendering Islam more distinct from other faiths and hardening its boundaries.

Indeed, it was Muslim historians themselves, writing only after this process was well under way, who began to portray Islam as having been doctrinally rigid from the start. The Muslim triumphalism that Lewis discerns, it seems, was largely introduced in retrospect, to explain the seemingly miraculous spread of the faith as a result of heavenly favor. Donner’s explanation of the process by which Muslims came to define themselves is both fascinating and enlightening. Surely, this kind of subtle understanding of how history works comes closer to the truth than Lewis’s lapidary pronouncements from on high.

Max Rodenbeck is the Middle East correspondent for The Economist.



An empire gives way

Blogs are growing a lot more slowly. But specialists still thrive

ONLINE archaeology can yield surprising results. When John Kelly of Morningside Analytics, a market-research firm, recently pored over data from websites in Indonesia he discovered a “vast field of dead blogs”. Numbering several thousand, they had not been updated since May 2009. Like hastily abandoned cities, they mark the arrival of the Indonesian version of Facebook, the online social network.

Such swathes of digital desert are still rare in the blogosphere. And they should certainly not be taken as evidence that it has started to die. But signs are multiplying that the rate of growth of blogs has slowed in many parts of the world. In some countries growth has even stalled.

Blogs are a confection of several things that do not necessarily have to go together: easy-to-use publishing tools, reverse-chronological ordering, a breezy writing style and the ability to comment. But for maintaining an online journal or sharing links and photos with friends, services such as Facebook and Twitter (which broadcasts short messages) are quicker and simpler.

Charting the impact of these newcomers is difficult. Solid data about the blogosphere are hard to come by. Such signs as there are, however, all point in the same direction. Earlier in the decade, rates of growth for both the numbers of blogs and those visiting them approached the vertical. Now traffic to two of the most popular blog-hosting sites, Blogger and WordPress, is stagnating, according to Nielsen, a media-research firm. By contrast, Facebook’s traffic grew by 66% last year and Twitter’s by 47%. Growth in advertisements is slowing, too. Blogads, which sells them, says media buyers’ inquiries increased nearly tenfold between 2004 and 2008, but have grown by only 17% since then. Search engines show declining interest, too.

People are not tiring of the chance to publish and communicate on the internet easily and at almost no cost. Experimentation has brought innovations, such as comment threads, and the ability to mix thoughts, pictures and links in a stream, with the most recent on top. Yet Facebook, Twitter and the like have broken the blogs’ monopoly. Even newer entrants such as Tumblr have offered sharp new competition, in particular for handling personal observations and quick exchanges. Facebook, despite its recent privacy missteps, offers better controls to keep the personal private. Twitter limits all communication to 140 characters and works nicely on a mobile phone.

A good example of the shift is Iran. Thanks to the early translation into Persian of a popular blogging tool (and crowds of journalists who lacked an outlet after their papers were shut down), Iran had tens of thousands of blogs by 2009. Many were shut down, and their authors jailed, after the crackdown that followed the election in June of that year. But another reason for the dwindling number of blogs written by dissidents is that the opposition Green Movement is now on Facebook, says Hamid Tehrani, the Brussels-based Iran editor for Global Voices, a blog news site. Mir Hossein Mousavi, one of the movement’s leaders, has 128,000 Facebook followers. Facebook, explains Mr Tehrani, is a more efficient way to reach people.

The future for blogs may be special-interest publishing. Mr Kelly’s research shows that blogs tend to be linked within languages and countries, with each language-group in turn containing smaller pockets of densely linked sites. These pockets form around public subjects: politics, law, economics and knowledge professions. Even narrower specialisations emerge around more personal topics that benefit from public advice. Germany has a cluster for children’s crafts; France, for food; Sweden, for painting your house.

Such specialist cybersilos may work for now, but are bound to evolve further. Deutsche Blogcharts says the number of links between German blogs dropped last year, with posts becoming longer. Where will that end? Perhaps in a single, hugely long blog posting about the death of blogs.



Why Friedrich Hayek Is Making a Comeback

With the failure of Keynesian stimulus, the late Austrian economist’s ideas on state power and crony capitalism are getting a new hearing.

He was born in the 19th century, wrote his most influential book more than 65 years ago, and he’s not quite as well known or beloved as the sexy Mexican actress who shares his last name. Yet somehow, Friedrich Hayek is on the rise.

When Glenn Beck recently explored Hayek’s classic, “The Road to Serfdom,” on his TV show, the book went to No. 1 on Amazon and remains in the top 10. Hayek’s persona co-starred with his old sparring partner John Maynard Keynes in a rap video “Fear the Boom and Bust” that has been viewed over 1.4 million times on YouTube and subtitled in 10 languages.

Why the sudden interest in the ideas of a Vienna-born, Nobel Prize-winning economist largely forgotten by mainstream economists?

Friedrich August von Hayek, ca. 1940.

Hayek is not the only dead economist to have garnered new attention. Most of the living ones lost credibility when the Great Recession ended the much-hyped Great Moderation. And fears of another Great Depression prompted a natural turn to the past. When Federal Reserve Chairman Ben Bernanke zealously expanded the Fed’s balance sheet, he was surely remembering Milton Friedman’s indictment of the Fed’s inaction in the 1930s. On the fiscal side, Keynes was also suddenly in vogue again. The stimulus package was passed with much talk of Keynesian multipliers and boosting aggregate demand.

But now that the stimulus has barely dented the unemployment rate, and with government spending and deficits soaring, it’s natural to turn to Hayek. He championed four important ideas worth thinking about in these troubled times.

First, he and fellow Austrian School economists such as Ludwig von Mises argued that the economy is more complicated than the simple Keynesian story. Boosting aggregate demand by keeping school teachers employed will do little to help the construction workers and manufacturing workers who have borne the brunt of the current downturn. If those school teachers aren’t buying more houses, construction workers are still going to take a while to find work. Keynesians like to claim that even digging holes and filling them is better than doing nothing because it gets money into the economy. But the main effect can be to raise the wages of ditch-diggers with limited effects outside that sector.

Second, Hayek highlighted the Fed’s role in the business cycle. Former Fed Chairman Alan Greenspan’s artificially low rates of 2002-2004 played a crucial role in inflating the housing bubble and distorting other investment decisions. Current monetary policy postpones the adjustments needed to heal the housing market.

Third, as Hayek contended in “The Road to Serfdom,” political freedom and economic freedom are inextricably intertwined. In a centrally planned economy, the state inevitably infringes on what we do, what we enjoy, and where we live. When the state has the final say on the economy, the political opposition needs the permission of the state to act, speak and write. Economic control becomes political control.

Even when the state tries to steer only part of the economy in the name of the “public good,” the power of the state corrupts those who wield that power. Hayek pointed out that powerful bureaucracies don’t attract angels—they attract people who enjoy running the lives of others. They tend to take care of their friends before taking care of others. And they find increasing that power attractive. Crony capitalism shouldn’t be confused with the real thing.

The fourth timely idea of Hayek’s is that order can emerge not just from the top down but from the bottom up. The American people are suffering from top-down fatigue. President Obama has expanded federal control of health care. He’d like to do the same with the energy market. Through Fannie and Freddie, the government is running the mortgage market. It now also owns shares in flagship American companies. The president flouts the rule of law by extracting promises from BP rather than letting the courts do their job. By increasing the size of government, he has left fewer resources for the rest of us to direct through our own decisions.

Hayek understood that the opposite of top-down collectivism was not selfishness and egotism. A free modern society is all about cooperation. We join with others to produce the goods and services we enjoy, all without top-down direction. The same is true in every sphere of activity that makes life meaningful—when we sing and when we dance, when we play and when we pray. Leaving us free to join with others as we see fit—in our work and in our play—is the road to true and lasting prosperity. Hayek gave us that map.

Despite the caricatures of his critics, Hayek never said that totalitarianism was the inevitable result of expanding government’s role in the economy. He simply warned us of the possibility and the costs of heading in that direction. We should heed his warning. I don’t know if we’re on the road to serfdom, but wherever we’re headed, Hayek would certainly counsel us to turn around.

Mr. Roberts teaches economics at George Mason University and co-created the “Fear the Boom and Bust” rap video with filmmaker John Papola. His latest book is “The Price of Everything” (Princeton, 2009).



When Words Go Lightly to Screen

How a ‘rather bitter’ novella became the film ‘Breakfast at Tiffany’s.’

The metamorphosis from paper to celluloid is never smooth, and the film “Breakfast at Tiffany’s” (1961) presented Paramount studios with an array of difficulties. Sam Wasson’s account of the making of the movie covers them all. En route, “Fifth Avenue, 5 A.M.”—as appropriately slender as the 1958 Truman Capote novella from which the film was made—offers lots of savory tidbits. Capote, for example, wanted Marilyn Monroe to play the lead character, Holly Golightly. Monroe’s acting coach, Paula Strasberg, refused to let America’s reigning sex symbol impersonate “a lady of the evening,” and the part went to Audrey Hepburn, who didn’t really want it.

“To think,” Mr. Wasson begins, that the movie “almost didn’t come off . . . that the censors were railing against the script, that the studio wanted to cut ‘Moon River,’ that [director] Blake Edwards didn’t know how to end it.” That Capote’s work “was considered unadaptable,” Mr. Wasson writes, “seems almost funny today.”

No, it doesn’t. The novella was unadaptable. Although the movie is fondly remembered by those who were very young in 1961, Capote’s acute character studies of a blithe, air-headed “socialite” who escorted wealthy men around Manhattan after dark, and of her colorful in-group, were hammered into a cinematic gallery of grotesques.

Caparisoned in smashing Givenchy ensembles and wielding a cigarette holder the size of a javelin, Audrey Hepburn did some elegant posing in lieu of acting; George Peppard was rigid and humorless as a romantic leading man—a hint of what was to come when he starred in the TV series “The A-Team.” To bottom it all off, Mickey Rooney, badly miscast as Mr. Yunioshi, Holly’s Japanese neighbor, delivered a racist caricature.

There were a few alleviating moments—Buddy Ebsen as the yokel whom Holly Golightly married when she was 14; Patricia Neal as the woman who regards Peppard’s character as her boy-toy. Alas, they couldn’t hide an absence of plot, theme or wit. But because Capote is one of the central literary and social figures of 20th-century New York, because Audrey Hepburn became the most memorable Hollywood gamine since Lillian Gish, and because Blake Edwards went on to direct the Inspector Clouseau comedies, “Fifth Avenue, 5 A.M.” (the title refers to the moment when Hepburn was filmed arriving at Tiffany’s in a cab) is as compelling as it is trivial.

A producer tries to convince Audrey that she’d be an ideal Holly. “You have a wonderful script,” the star demurs, “but I can’t play a hooker.” He purrs: “We don’t want to make a movie about a hooker. We want to make a movie about a dreamer of dreams.” And she buys the line. In an effort to sanitize Paramount’s portrait of a demimondaine, the studio publicity department churned out reams of flapdoodle, defining Holly as a “kook” rather than a B-girl. After all, as one of the publicists observes, “the star is Audrey Hepburn, not Tawdry Hepburn.”

A talent rep at Creative Management Associates pushes Mickey Rooney for the part of Mr. Yunioshi. A few years later, the rep finds himself meeting with another client, the director Akira Kurosawa. Chill time. “When he realized that I had been involved with the decision to cast Mickey Rooney as a Japanese man, he almost couldn’t talk to me. I felt awful. I was so embarrassed.”

Letty Cottin Pogrebin, a co-founder of Ms. Magazine, thanks “Breakfast at Tiffany’s” for her feminist liberation. To her, as to many other co-eds of the time, the movie represented the dawn of the modern woman. In their eyes, as she recounts to Mr. Wasson, Holly Golightly “was a single girl living a life of her own, and she could have an active sex life that wasn’t morally questionable. I had never seen that before.” Inspired to adopt some of Holly’s “kookiness” for herself, Letty went out and bought a scooter, a dog, a rabbit and a duck.

The reviews were mixed, but Capote did not waver in his appraisal. The book, he said, was “rather bitter” and “real,” but the film was “a mawkish valentine to New York City and Holly and, as a result, was thin and pretty, whereas it should have been rich and ugly. It bore as much resemblance to my work as the Rockettes do to [the Russian ballerina Galina] Ulanova.”

Mr. Wasson brings a lively and impudent approach to his subject—he offers sub-headings like “Mr. Audrey Hepburn” (referring to Mel Ferrer, Hepburn’s unhappy husband) and “What Truman Capote Does in Bed” (he writes). Most of the anecdotes have a ring of authenticity, justifying the price of admission. Still, those of us old enough to have consumed the under-nourishing film the first time around should have the right to demand a senior discount.

Mr. Kanfer, a contributing editor of City Journal, is the author of “Somebody: The Reckless Life and Remarkable Career of Marlon Brando” (2008).



Petraeus and Obama’s Uncertain Trumpet

There is a mismatch between the general’s Afghan mission and the president’s summons to his countrymen.

The chroniclers tell us that Lyndon Johnson never took to the Vietnam War. He prosecuted it, it became his war, but it was, in LBJ’s language, a “bitch of a war.” He fought it with a premonition that it could wreck his Great Society programs.

He had a feel for the popular mood. “I don’t think the people of the country know much about Vietnam, and I think they care a hell of a lot less.” We know how that war ended, and the choreography of President Obama relieving Gen. Stanley McChrystal of his command notwithstanding, there is to this Afghan campaign a sense of eerie historical repetition. There is no need to overdo the analogy, but there is a good measure of similarity to that earlier ill-fated campaign. There is the same ambivalence at the top, a disjunction between the military battlefield and the political world at home.

So a beleaguered president has replaced a talented but indiscreet military commander with a talented, discreet successor. The large questions about the war persist, and there persists as well that unsettling sense that the president is prosecuting a war he can neither abandon nor fight to a convincing victory.

For Mr. Obama, this Afghan campaign doubtless bears the crippling impact of its beginnings. It was out of Mr. Obama’s desire to demonstrate that he was no pacifist that his commitment to the Afghan war had begun. It was in the midst of his run for the presidency that he was to draw a distinction between “stupid wars” (Iraq as the primary exhibit) and wars worth fighting.

Afghanistan became the good war of necessity. He was to sharpen the distinction between these two wars in the course of his first year in office. On the face of it, this was a president claiming a distant war, making it his own. But there was a lack of fit between this call on Afghanistan and the president’s overall summons to his country.

Mr. Obama’s is an uncertain trumpet. He had vowed to fight in Afghanistan while belittling the challenge that radical Islamism posed to American security. He had told his devotees that the anti-Americanism in the Islamic world was certain to blow over in the aftermath of his election. He had attributed much of that anti-Americanism to the Iraq war and to the ideological zeal of his predecessors. His foreign policy was to explicitly rest on a rupture with the foreign policy of the past. Like Jimmy Carter’s in the 1970s, this was to be a foreign policy of contrition for America’s presumed sins.

A big battle loomed at home, and this was where Mr. Obama’s heart and preferences lay—a struggle between economic freedom and the marketplace on one side and an intrusive, redistributionist state on the other. In this new climate of national introversion, Afghanistan was at best a sideshow. The war was going badly, and Mr. Obama feared that this war would overwhelm his presidency.

So last December, after a period of drawn-out assessment, Mr. Obama opted to split the difference. He who had opposed the Iraq surge when in the Senate launched a surge of his own. He would give his commanders additional forces, but this was a surge with an Obama twist. The announcement of a new commitment was at once the announcement of an exit strategy. The troops would be sent but American withdrawal would begin in the summer of 2011.

Mullah Omar in Quetta may not be schooled in the arcane details of American politics, but he had all the knowledge he needed: The Americans were not in this fight for long. He would wait them out and then make a run at the regime in Kabul.

Our “ally” in Kabul, Hamid Karzai, also made his own calculations. The faithlessness he has showed in recent months was the nervousness of a man who feared that his American patrons and protectors were on their way out, and that he, like so many Afghan leaders before him, would be left to the wrath of the mob. In Mr. Karzai’s ideal world, the Americans, with their guns and machines, and their vast treasure and contracts, would never leave.

In the phase to come, the deadline for the start of the American withdrawal from Afghanistan will stalk this military campaign. It will be fought in the inner councils of the Obama administration, and will, in time, become a matter of public disputation.

For the president and his vice president—and no doubt for Democrats in the House and the Senate—the July 2011 deadline will be what it is. For the U.S. military, and for the secretary of defense and the national security hawks, that deadline is, by necessity, flexible, meant to convey, as Gen. David Petraeus put it before the Senate Armed Services Committee in mid-June, a “message of urgency.” Secretary of Defense Robert Gates has put it this way: The withdrawal will be determined by “the conditions on the ground.”

The “conditions on the ground” are a euphemism for the ability of the Afghan forces to assume the burden of security for their own homeland. After all, counterinsurgency requires a native regime that would hold its own against insurgents and defend its own homeland. No serious assessment holds out the promise of a capable Afghan regime and a devoted national army that would fight for the incumbent government. Afghanistan is what it is, a land riven by corruption and sectarianism, a population weighed down by illiteracy and hardened by years of betrayal and abdication. The “Afghanization” of the war is a utopian idea.

The history of the Vietnam War offers a cautionary precedent. Deadlines of withdrawal, once announced, take on a life of their own. In his incomparable recollection of the American “extrication” from Vietnam—his word—former Secretary of State Henry Kissinger writes that the promise of “Vietnamization” served to confirm Hanoi “in its course of waiting us out.” Withdrawal of American troops, Mr. Kissinger memorably observed, became like “salted peanuts” to the American public, “the more U.S. troops come home, the more will be demanded.” If this pattern holds, the war at home over Afghanistan has only just begun.

We have a peerless commander on his way to the Afghan theater of war. He knows the ways of the East, and he has mastered them the hard way. In his time in Iraq he was fond of a maxim of T.E. Lawrence: “Do not try to do too much with your own hands. Better the Arabs do it tolerably than you do it perfectly. It is their war, and you are there to help them, not win it for them.”

Gen. Petraeus takes that maxim with him to the land of the Afghans; he and his soldiers can do their best for them. But he can’t rid them of their historic afflictions. Nor can his mission end in success if our country isn’t in this fight for real. The East is merciless this way. It has an unsentimental feel for the intentions and the staying power of strangers.

Mr. Ajami is professor at The Johns Hopkins University School of Advanced International Studies and a senior fellow at the Hoover Institution.



Henry James Walked Here


The 13th-century basilica dedicated to St. Francis as seen from the fortress above Assisi.

It was love at first sight. Henry James was 26 when he crossed the border from Switzerland and made his way, on foot, down into Italy — “warm & living & palpable,” as he wrote ecstatically to his sister on Aug. 31, 1869. The romance kindled that day lasted nearly 40 years, and played a significant part in his career; he set some of his greatest works in Italy, including “Daisy Miller,” “The Aspern Papers” and “The Wings of the Dove.”

All three are excellent traveling companions, particularly if you’re en route to Rome and Venice — but a more direct (though of course inescapably Jamesian, and therefore at times convoluted) expression of his contagious passion for what he declared to be the “most beautiful country in the world” can be found in his travel writing. 

Henry James as tour guide? He won’t lead you step by step, waving a pennant so you don’t get lost, but he does show the way. His fine, reverberating consciousness sets off a corresponding reverberation in the sympathetic reader, who can’t help but admire the way Italy liberates an appetite for sensual experience in this most cerebral of authors. 

If you’re thinking of visiting Umbria and Tuscany, James has even thoughtfully planned out your route: in 1874, when his Italian romance was in its infancy (and the Kingdom of Italy was a newborn nation, having achieved unification only in 1861), James wrote for The Atlantic Monthly a travel essay called “A Chain of Cities,” in which he describes his springtime wanderings in Assisi, Perugia, Cortona and Arezzo, ancient hill towns well stocked with artistic treasures and expansive views — all neatly arranged within easy distance of one another. James, traveling by train, lounges and loafs along the way, examining and judging an artist’s work, or sitting on a sunny bench beneath the ramparts of a ruined fortress, or strolling aimlessly, merely savoring the flavor of “adorable Italy.” A 21st-century traveler whose schedule is fixed by the tyranny of airline reservations may be tempted to pick up the pace (certainly a possibility if you’ve rented a car), but accident and adventure, the kind of chance encounter that loitering invites, are just as important, in the search for the essence of a place, as methodical contemplation. 

James’s principal interests are scenery and art, though he occasionally casts his eye — while holding his nose — on the unwashed populace (the Puritan in him was shocked by the Italian peasant’s indifference to soap). All four towns are perched high and blessed with stunning views, but of course the views were even more gorgeous in the 19th century, before the valleys were streaked with highways, dotted with factories and warehouses and veiled by smog. 

In Assisi, James looks out over “the teeming softness of the great vale of Umbria,” and watches “the beautiful plain mellow into the tones of twilight.” Today the plain is still “teeming” (though with human activity rather than nature’s bounty), and the mellow haze in the distance looks suspiciously chemical. But if the views are less pristine, the art and the architectural monuments are far more accessible, preserved and curated with care and intelligence. Each of these towns is home to more masterpieces than you can comfortably absorb in one visit; this is an itinerary overflowing with artistic riches. 

If James insists on a measured tempo (in Perugia he warns that a visitor’s “first care must be to ignore the very dream of haste, walking everywhere very slowly and very much at random”), at least part of the reason is that in these towns there’s little choice. Most of the streets, especially in Assisi, Perugia and Cortona, are steep, narrow and crooked; haste would soon leave you panting. Arezzo is gentler, but there, too, James is right: even if you’re fit enough to race along, a leisurely stroll is infinitely more rewarding when nearly every building has half a millennium of history attached to it. 

In Assisi, James counsels, the visitor’s “first errand” is with the 13th-century basilica dedicated to St. Francis. The church, which houses the saint’s tomb — “one of the very sacred places of Italy” — is a magnet for religious pilgrims. James hits on a suggestive metaphor for the basilica’s astonishing structure: it consists of two churches, one piled on top of the other, and he imagines that they were perhaps intended as “an architectural image of the relation between heart and head.” The lower church, built in the Romanesque style, is somber, cave-like and complex, whereas the upper church, a fine example of Italian Gothic, is bright, spacious, rational. (Though he often favored head over heart, reason over emotion, James was a master at turning the tables.) Both churches are famously decorated with frescoes hugely important to the history of art, most of them traditionally ascribed to Giotto (c. 1267-1337). Studying them closely, James pays tribute to the artist’s expressive power: “Meager, primitive, undeveloped, he is yet immeasurably strong” — a judgment still valid today. 

Having trained his eager eye on these masterpieces, James saunters off to explore a palpably ancient town (“very resignedly but very persistently old”) that no longer exists. Assisi in the 21st century is pleasant and pretty, but fixed up and licked clean — after the damage from a 1997 earthquake — and wholly focused on the business of accommodating tourists and pilgrims (which seems mostly to involve selling them ice cream and religious knickknacks). James especially likes the ruined castle perched above the town, which is happily unrenovated. But he came along too soon to have glimpsed the curious monument erected, so to speak, on the steep road up to the fortress: a wire fence lovingly decorated with the discarded chewing gum of countless bored kids on their school trip to Assisi. It resembles a modernist sculpture, an abstract, folk-art Giacometti stretched along the path. I like to think of James pondering the meaning of this bizarre masticated tribute to modern adolescence. 

IN Perugia, James admires the extravagant view (“a wondrous mixture of blooming plain and gleaming river and wavily-multitudinous mountain”), and picks a fight with the city’s leading artist, Perugino (1446-1524), whose paintings are graced with serene figures that seem to James just a little too serene and neat — too “mechanically” produced. 

Although his report on this “accomplished little city” is lively and evocative, it’s possible that his preoccupation with the artist and the creative process distracted him from his travel writer’s duty to give the reader a distinct taste of a particular spot, and somewhat distorted his judgment. He may have overplayed his delight with the view, saying that he preferred it to “any other visible fruit of position or claimed empire of the eye” (strained phrases that smack of hyperbole). He pits the painter against the prospect, then pronounces his verdict: “I spent a week in the place, and when it was gone, I had had enough of Perugino, but hadn’t had enough of the View.” 

The trick, of course, is not to spend an entire week in Perugia. It’s a fascinating place, defined by the contrast between the broad, elegant Corso that runs through the center of the town like a super-wide catwalk, purpose-built for people-watching, and the tortuously cramped streets that roller-coaster around it in an exhausting topographical tangle. 

A map is essential here, but you’ll get lost anyway, defeated by the twisting, the turning, the dipping and the climbing. James’s description is peppered with adjectives that paint a grimmer picture than one sees today (he writes of “black houses … the color of buried things”), but even in this tidied-up era the medieval sections of Perugia retain their “antique queerness.” 

Two days in Perugia is plenty, unless you disagree with James about Perugino, in which case a third day might be necessary, if only to visit the church of San Pietro, an oasis of tranquillity just below the city walls where hidden away in the sacristy there are five tiny Peruginos well worth the detour — and to revisit both the Galleria Nazionale dell’Umbria, where Perugino plays a starring role, and the Collegio del Cambio, which he decorated. The latter is the moneychangers’ guildhall, and it can safely be said that no financial institution has ever bequeathed a more pleasing monument to posterity than the room Perugino created with his wonderfully calm and graceful frescoes. 

Cortona, which James calls “the most sturdily ancient of Italian towns,” is even more narrowly up-and-down than Perugia. A small town with a comically higgledy-piggledy central piazza, it’s like Assisi these days: in danger of seeming quaint or cute instead of beautiful or picturesque. But it’s calm, quiet and dignified (at least during the off-season), and if you set off for a ramble in any direction, you’ll pass several charming churches before you’ve reached the town’s well-preserved ramparts and registered the welcome shock of yet another panoramic view. 

Arriving on a festival day, James saw neither the interior of Cortona’s churches nor its museums. He expresses mild, passing regret (“the smaller and obscurer the town the more I like the museum”), before turning to the serious business of loafing. Had he known what he was missing, he might have extended his stay. The town’s artistic treasures, now stored in the Museo Diocesano, include a handful of muscular and disturbingly odd paintings by a brilliant native son, Luca Signorelli (c. 1445-1523), and a glorious Annunciation by Fra Angelico (c. 1395-1455), one of the most delicately enchanting paintings of the early Italian Renaissance. In the valley below, you’ll see the dome of a perfectly proportioned 15th-century church, Santa Maria delle Grazie al Calcinaio. 

By the time he reaches Arezzo, James has surrendered entirely to the charm of Tuscany. He mentions the museum, the “stately” duomo, and the “quaint” colonnades on the facade of Santa Maria della Pieve, but only in passing, in an apologetic aside, as if he knew that in the neighborhood there were monuments and artworks of importance to be studied, but, really, he’d rather just lounge around near the ruined castle that sits at the top of the town, just as he did in Assisi and Cortona, and sop up the “cheerful Tuscan mildness.” 

No one who has visited Arezzo on a warm day in late spring can blame him — the settled, unforced, somehow inevitable beauty of the place demands unhurried, disinterested appreciation — though some would prefer to while away the hours in the lovely Piazza Grande, a sloping, comfortably enclosed space not unlike Siena’s famous Piazza del Campo, only more intimate. 

The spectacle of Henry James morphing into a lazy, contented, “uninvestigating” tourist — especially after his strenuous intellectual engagement with Giotto’s frescoes in Assisi and Perugino’s in Perugia — gives “A Chain of Cities” a very satisfactory narrative arc. But as is so often the case, a pretty shape comes at a price: James leaves out any mention of Arezzo’s most famous work of art, “The Legend of the True Cross,” a cycle of frescoes in the Basilica di San Francesco by Piero della Francesca (c. 1415-1492), which today’s guidebooks insist is the town’s principal attraction. 

There’s no evidence that James ever saw “The Legend of the True Cross” or formed an opinion of the artist’s work (Piero’s name doesn’t crop up in James’s oeuvre, or in his correspondence), but having come this far in his company, it seems only appropriate to fill in the blanks — to add another link to the chain — and imagine James’s reaction to this rich pageant. 

In Assisi, the result of his communion with Giotto was “a great and even amused charity,” a gentle mood of indiscriminate benevolence. Here, in the hushed choir of San Francesco, he would have recognized a great artist’s bold technical advances (Piero was pioneering in his use of light and perspective), and marveled at the sleepwalker’s trance that gives Piero’s figures an ethereal spirituality even in the heat of battle. He would have envied the scope of the achievement, the variety of the scenes and the harmony of the overall composition. And he would have stumbled out into the handsome streets of placid Arezzo with his own artistic ambitions inflamed. 

ADAM BEGLEY, the former books editor of The New York Observer, is at work on a biography of John Updike. 

Out of line

An insubordinate general. A soccer mutiny. Why hierarchy matters, even in an egalitarian world.

It’s been a bad week for the chain of command. First, international soccer fans witnessed the petulant meltdown of the French World Cup team: Star player Nicolas Anelka was kicked off the team for profanely insulting the head coach in the locker room midgame, and his teammates protested his dismissal by staging a mutiny — refusing to practice last Sunday, taking the team bus back to their hotel, and leaving the abandoned coaching staff to find their own ride. The fractiously underperforming team, full of top-flight talent, didn’t make it out of the tournament’s first round.

Then, on Tuesday, General Stanley McChrystal, commander of the US-led NATO security mission in Afghanistan, was summoned to Washington to answer for derisive and arguably insubordinate comments he and his aides made to a Rolling Stone reporter about several of the senior members of the White House national security team — and about President Obama himself, the man who, the Constitution specifies, was McChrystal’s ultimate boss. Upon his arrival in Washington, McChrystal was relieved of command.

The two events were not, of course, equal in global import. One was a drama on a sports team, the other may alter the course of a war. But both caught the attention of the world as they unfolded. And for all the distinctive political and cultural strands that each separately touched on, they both triggered an immediate and visceral sense that certain widely understood rules of appropriate behavior had been violated. Notably, in all of the commentary that swirled up around the two scandals, it was virtually impossible to find voices rooting for the rebellious underdogs, for the “runaway general” or the soccer players who turned on their coach.

What was at stake in each was a very basic idea: deference to the social hierarchy. Where people stand on the social ladder is a fact that governs all sorts of daily interactions, as well as how we build organizations, police one another’s behavior, and understand our own identity. It’s also something that social scientists are taking an increasing interest in. Talk of hierarchy or social rank may sound antiquated, especially in countries like America and France that each had its own revolution two centuries ago to overthrow an aristocratic political and social order. If all men are created equal, then thinking and talking about rank seems pernicious, a recipe for inflated egos on the one hand or crippled self-esteem on the other.

But psychologists who study status and power in social settings — and a growing number are — have found that human beings, in surprising ways, actually seem to thrive on a sense of social hierarchy, and rely on it. In certain settings, having a clear hierarchy makes us more comfortable, more productive, and happier, even when our own place in it is an inferior one. In one intriguing finding, NBA basketball teams on which large salary differentials separate the stars from the utility players actually play better and more selflessly than their more egalitarian rivals.

“Status is such an important regulating force on people’s behavior, hierarchy solves so many problems of conflict and coordination in groups,” says Adam Galinsky, a psychologist at Northwestern University’s Kellogg School of Management who did the research on social hierarchies on basketball teams. “In order to perform effectively, you often need to have some pattern of deference.”

None of this means that unquestioned obedience and institutionally mandated inequality are the building blocks of the ideal society. But research into social hierarchy does suggest that a taste for rank is a key part of the bundle of traits that make human beings such a successfully social species. Even the most equable among us have this inborn human understanding, psychologists say, and a sense of when its codes have been broken. That applies not only in situations with strictly delineated chains of command like the military or a pro sports team, but in any social situation. Knowing what’s right and wrong is often just a matter of knowing who’s the boss.

The French soccer rebellion and the loose words of McChrystal have both been harshly judged in the court of public opinion. In France, a nation where everyone from firefighters to doctors routinely goes on strike, the World Cup walkout was roundly condemned, with the nation’s newspapers, its former soccer stars, its minister of sport, its finance minister, and even President Nicolas Sarkozy expressing outrage. Here in the United States, McChrystal, a hugely accomplished soldier in a country ferociously proud of its military, was criticized across the political spectrum for his words and the way he allowed them to become public.

In announcing that he had accepted McChrystal’s resignation, Obama said his decision had been a necessary one, brought on by the fact that McChrystal’s conduct “undermines the civilian control of the military that is at the core of our democratic system.” Civilian control of the military is spelled out in the country’s Constitution to prevent the military from taking over — or even unduly influencing — the elected government. But in reasserting his authority, Obama was also addressing a more basic human need to know who is in charge.

Human beings are social animals, a fact that is central to how we as a species see the world. And like other social animals, whether wolves or chickens or chimpanzees, we sort ourselves into rankings. These rankings aren’t static, they can change over time, but they impose order on social interaction: In the wild, they create a framework for dividing up vital tasks among a group, and because they clearly codify differences in power or strength or ability, they prevent every interaction from disintegrating into an outright fight over mates or resources — someone’s rank tells you how likely she is to beat you in a fight, and you’re less likely to bother her if you already know.

To the extent that explicit social hierarchies are still with us, in the popularity pecking order of high school or the restrictive membership policies of certain country clubs, they’re seen as the unfortunate vestiges of an earlier era, or the ugly outgrowth of social insecurity. Yet psychologists are finding that our tendency toward social hierarchy is at once a more deep-seated and complex impulse than we thought.

For one thing, it turns out that people are ruthlessly clear-eyed judges of their own place in the social hierarchy. This is notable because they tend to be poor judges of just about everything else about themselves. Study after study has shown that people are incorrigible self-inflaters, wildly overestimating their own intelligence, sexual attractiveness, driving skills, income rank, and the like. But not social status; about that, they turn out to be coldly impartial.

For example, a team of social psychologists led by Cameron Anderson of the University of California, Berkeley, ran a study in which strangers were put into groups that met once a week, and were tasked with solving various collaborative problems. After each meeting, the participants rated their own status in the group and that of their teammates. By and large, people’s self-evaluations matched up with how their peers rated them.

The explanation for this, the researchers argue, is that the costs of error are so high: Those few people who thought they ranked higher than they actually did were strongly disliked by their teammates. Overestimating one’s own intelligence or sex appeal may be simply annoying, but overestimating one’s social position can be a ticket to ostracism, and up until relatively recently in the timescale of human evolution, ostracism could have serious consequences, even death.

Other research has shown the unexpected dividends that having a clearly delineated hierarchy can pay even if it enshrines great status disparities. Studies show a host of physiological benefits to having high status, whether you’re a senior partner at a bank or the alpha male in a baboon troop. But while that may come as no surprise, there are also findings that suggest people derive psychic benefits from being low-status, as long as there’s no question about where they stand.

In a 2003 study by Larissa Tiedens and Alison Fragale, both then at Stanford University, subjects who displayed submissive body language were found to feel more comfortable around others who displayed dominant body language than around those who also displayed submissive body language — and to like those with more dominant posture better, as well. People, it seems, prefer having their evaluation of social hierarchy confirmed, even when they see themselves at the bottom of it.

These two linked findings — that people derive comfort from an established hierarchy and that they react particularly strongly to those who buck it — may help explain why McChrystal’s insubordinate comments and the French soccer mutiny were so compelling as public dramas: They were conflicts over who is in charge, and over what punishment the loser would suffer.

Perhaps the strongest, if the most surprising, evidence for the importance of clearly delineated social hierarchies is work that suggests that more inequality can make for better teams. While Celtics fans in particular have grown used to extolling the virtues of teams without superstars, where any player can be the hero on any given night, there’s some evidence that more rigid talent caste systems can actually create more teamwork. Galinsky and his fellow researchers found that NBA teams with greater pay disparities not only won more, but ranked higher in categories like assists and rebounding, suggesting a higher degree of cooperation. The clearer the status imbalance, the researchers argued, the less question there is about where one stands.

Good teamwork, in other words, requires a general acceptance of disparity. Everyone knows his job and does it, even if he’d rather have someone else’s. This is what the military, and successful teams of all kinds, are built on. And that seems to be what General McChrystal and Les Bleus forgot.

Drake Bennett is the staff writer for Ideas.



Language police

A failure I’d love to watch

You may have missed this news, but The Queen’s English Society, self-appointed defenders of proper speech and writing since 1972, recently announced plans to set up an Academy of English.

The goal is to guard against “impurities” and “bastardizations” by ruling on what in English is correct, and what is simply unacceptable. The academy would be modeled after the Académie Française, which for nearly 400 years has rigorously policed which words are allowed into official French, as well as similar bodies in Spain and Italy.

The idea of an Academy of English isn’t a new one — Jonathan Swift suggested one in 1712, with one of his goals being to prevent people from pronouncing words like rebuked with two syllables instead of three (he preferred re-buke-ed). But it’s not one that has ever made much progress towards reality.

As a lexicographer, I used to be strongly against the idea of an Academy of English. English is too widespread and dynamic, and English speakers too creative, to be reined in by some stodgy committee debating whether or not toughicult (tough + difficult) or oneitis (the condition of concentrating romantic attention on one person) can be considered “standard English.”

But this recent attempt by the Queen’s English Society has me thinking, cynically, that perhaps this time an Academy of English is a good idea. Not because English needs a standards body — or could ever possibly obey one — but because I think that, by showing just how ludicrous and unworkable a standards-setting body would be, we can get people to think more kindly of English as it is, and stop lamenting that everyone else’s language isn’t up to snuff.

The founder of this current incarnation of the “Save English” movement is Martin Estinel, a 71-year-old retired translator and interpreter who lives in Switzerland. Part of his motivation for founding the academy lies in his discomfort with people who use the word gay to mean anything other than “happy,” and his desire to keep any other words from going down the same path.

It’s hard to find anyone under the age of 71 who feels as strongly about gay as Mr. Estinel, and the other bugbears of the Queen’s English Society seem just as wrongheaded. The society has taken a stand against gender-neutral language (such as chairperson) and the use of the title Ms.; it is strongly opposed to txtspeak (though the overwhelming evidence shows that txtspeak is not overtaking standard English), and deplores Americanisms. Its battle plan, in other words, is one long rear-guard action against natural language change.

A new academy would (in its own words) “have a body to sit in judgment,” but hasn’t yet nominated anyone to do the actual judging. This is where things could get interesting. From my point of view, anything that focused our attention on the validity of the rules themselves, rather than on someone supposedly breaking the rules, would be a great thing for English.

We would want, of course, a process that unfolded as publicly as possible, starting with written statements of what the nominees believe to be standard and nonstandard English. There could be Oxford Union-style debates between candidates. Which writers, linguists, and general-purpose pundits would qualify? Who would be comfortable even allowing their work to be scrutinized to the extent necessary for their confirmation? Since it’s impossible for even the most devout prescriptivist to follow all the rules that he or she espouses (as the linguist Geoffrey Pullum has pointed out, even E.B. White, coauthor of “The Elements of Style,” broke his own rules), we could imagine the nominees having to defend first their rules, and then their infractions — much to the edification of those watching.

And, of course, we would want the potential academicians to take a public stand on real words — which they thought were useful additions to English, and which were pointless fads. They’d have to explain why some verbings of nouns were okay (campaign) while others were unacceptable (impact, gift), and exactly who is insulted by gender-neutral use of the word dude. There would be hours of discussion about what distinguishes a useful new word from vulgar slang or unacceptable jargon. (Bling and top kill alone could occupy entire news cycles.)

The UK-based academy is seeking a royal charter, which would imply some degree, however small, of governmental authority — and create other delicious questions, like whether it could blackball words from government publications, or sell licenses or seals of approval. Considering that they don’t just hand out royal charters, however, it’s fair to call that something of a long shot.

There’s obviously something appealing in the idea of a set of rules for English: Just follow these few precepts and no one can criticize you, or so the thinking goes. It works for playing board games. But English is (thankfully) messier, wider-ranging, and much too alive to be hemmed in by a set of checklists and “don’ts.” So bring on the academy, I say: Let the arguments begin!

Erin McKean is a lexicographer and founder of



Uncommon knowledge

How to get Johnny to study

How do we motivate kids — especially kids in rough situations — to want education? Researchers at the University of Michigan studied middle school students in Detroit and found that, while almost 90 percent expected to go to college, only half wanted a career that actually required education. And this difference was critical. Students whose career goals did not require education (e.g., sports star, movie star) spent less time on homework and got lower grades. The good news is that the researchers found it was easy to make education more salient, and thereby motivate kids. When students were shown a graph depicting the link between education and earnings, they were much more likely to hand in an extra-credit homework assignment the next day than if they were shown a graph depicting the earnings of superstars.

Destin, M. & Oyserman, D., “Incentivizing Education: Seeing Schoolwork as an Investment, Not a Chore,” Journal of Experimental Social Psychology (forthcoming)

The spirit of capitalism

One of the classic works of sociology is “The Protestant Ethic and the Spirit of Capitalism” by Max Weber, who argues that the former facilitates the latter. Scholars have been trying to test this theory ever since, typically by analyzing economic patterns at the international level. An ideal scientific test of the theory, however, would require randomly indoctrinating one group of people with one religion and another group of people with another religion. This is obviously easier said than done, but economists at Cornell and Yale universities have figured out a very rough approximation. They recruited over 800 students — including Protestants, Catholics, Jews, and atheists — and asked them to take a sentence-unscrambling test. Half of the students were given some sentences that contained religious references, as a way to subconsciously activate each student’s religious values. The students were then asked to make various economically relevant decisions. A few of the findings: Protestants became more willing, but Catholics less willing, to contribute to the public good. Catholics also expected others to contribute less and were more willing to take risks. Jews were willing to work more for a given wage.

Benjamin, D. et al., “Religious Identity and Economic Behavior,” National Bureau of Economic Research (April 2010).

The adult effects of teen consent laws

Although the rate of abortion climbed in the decade after the US Supreme Court’s Roe v. Wade decision, it has since fallen off, and there’s no consensus on why. A new analysis gives some credit to parental notice and consent laws, which require minors to involve at least one parent in the abortion decision. But the surprise is that the laws appear to affect the abortion rates among adults as well. The first states to pass parental involvement laws have the lowest abortion rates for adult women and experienced the earliest declines, even among conservative states. The authors suggest that parental involvement laws have a long-term effect on behavior, changing the choices people make even after they become adults. Enacting a parental involvement law in 1994, for example, reduced abortion rates among adult women in 2000 by an estimated 9.6 percent.

Medoff, M., “State Abortion Policy and the Long-Term Impact of Parental Involvement Laws,” Politics & Policy (April 2010).

Increasing consumption by mistake

In a recent book, titled “Nudge: Improving Decisions about Health, Wealth, and Happiness,” Richard Thaler and Cass Sunstein — both esteemed professors and friends of President Obama — advocate for subtly manipulating the way choices are framed so as to nudge people towards the socially preferred outcome. Yet it seems that not everyone is on board. Economists at UCLA analyzed the energy consumption of customers in California who were issued special utility bills comparing their own usage to that of their neighbors, with the goal of nudging people to conserve more. Liberals, and those surrounded by liberals, cut back their consumption. Conservatives tended to increase their consumption.

Costa, D. & Kahn, M., “Energy Conservation ‘Nudges’ and Environmentalist Ideology: Evidence from a Randomized Residential Electricity Field Experiment,” National Bureau of Economic Research (April 2010).

Productivity through hanging out

The ideal employee is supposed to be singularly focused on his or her job. Taking breaks or socializing at work is generally considered a sign of inefficiency. Nevertheless, researchers at the Massachusetts Institute of Technology are finding evidence that, to some extent, the opposite may be true. Workers in a large call center at a major bank were asked to wear special badges designed by the researchers to track social interaction. Two teams of workers were allowed to take breaks together as a group, while two other teams had to take staggered breaks (the status quo for workers in the call center). Teams with a simultaneous break developed a stronger social bond, and this social bond was associated with higher productivity.

Waber, B. et al., “Productivity through Coffee Breaks: Changing Social Networks by Changing Break Structure,” Massachusetts Institute of Technology (January 2010).

Kevin Lewis is an Ideas columnist.



Too Complicated for Words

Are our brains big enough to untangle modern art?

Literary types recently celebrated Bloomsday, a “holiday” not generally recognized by those who haven’t read James Joyce’s “Ulysses,” a novel whose principal character is named Leopold Bloom and that takes place in Dublin on June 16, 1904. As always, the celebrations included a marathon bash at New York’s Symphony Space during which excerpts from “Ulysses” were read. One participant was Stephen Colbert, who admitted to a reporter: “Performing ‘Ulysses’ on Bloomsday at Symphony Space is the only way I’ll ever finish the damn book.”

James Joyce (1931)

Very funny—but also very much to the point. The novels of Joyce and Gertrude Stein, the poetry of Ezra Pound and John Ashbery, the music of Pierre Boulez and Elliott Carter, the paintings of Willem de Kooning and Jackson Pollock: All have at one time or another been dismissed as complicated to the point of unintelligibility.

Modern art comes in many varieties, and countless works once thought to be unintelligible now strike most of us as clear. But I have yet to notice a collective change of heart when it comes to such exercises in hermetic modernism as Joyce’s “Finnegans Wake,” which contains thousands of sentences like this: “It is the circumconversioning of antelithual paganelles by a huggerknut cramwell energuman, or the caecodedition of an absquelitteris puttagonnianne to the herreraism of a cabotinesque exploser?”

Are certain kinds of modern art too complex for anybody to understand? Fred Lerdahl thinks so, at least as far as his chosen art form is concerned. In 1988 Mr. Lerdahl, who teaches musical composition at Columbia University, published a paper called “Cognitive Constraints on Compositional Systems,” in which he argued that the hypercomplex music of atonal composers like Messrs. Boulez and Carter betrays “a huge gap between compositional system and cognized result.” He distinguishes between pieces of modern music that are “complex” but intelligible and others that are excessively “complicated”—containing too many “non-redundant events per unit [of] time” for the brain to process. “Much contemporary music,” he says, “pursues complicatedness as compensation for a lack of complexity.”

Mr. Lerdahl’s paper isn’t widely known outside the field of music theory. I’d never heard of it until a musician friend told me about it the other day. But it stirred up a huge stink when it was published, and it continues to make certain of his colleagues understandably angry. For if he’s right, then a fair amount of classical music written in the past century is too complicated for ordinary listeners to grasp—meaning it is never going to find an audience.

Mr. Lerdahl is on to something, and it is applicable to the other arts, too. Can there be any doubt that “Finnegans Wake” is “complicated” in precisely the same way that Mr. Lerdahl has in mind when he says that a piece of hypercomplex music like Mr. Boulez’s “Le marteau sans maître” suffers from a “lack of redundancy” that “overwhelms the listener’s processing capacities”?

The word “time” is central to Mr. Lerdahl’s argument, for it explains why an equally complicated painting like Pollock’s “Autumn Rhythm” appeals to viewers who find the music of Mr. Boulez or the prose of Joyce hopelessly off-putting. Unlike “Finnegans Wake,” which consists of 628 closely packed pages that take weeks to read, the splattery tangles and swirls of “Autumn Rhythm” (which hangs in New York’s Metropolitan Museum of Art) can be experienced in a single glance. Is that enough time to see everything Pollock put into “Autumn Rhythm”? No, but it’s long enough for the painting to make a strong and meaningful impression on the viewer.

That is why hypercomplex modern visual art is accessible in a way that hypercomplex literature and music are not. You can’t get through a complicated novel faster by turning the pages more quickly. Reading demands a greater investment of time than looking at a complicated painting, and the average reader is not prepared to invest that much time in a book, no matter what critics say about it. I feel the same way. I suppose I could get to the bottom of “Finnegans Wake” if I worked at it—but would it be worth the trouble? Or would I be better served by spending the same amount of time rereading the seven volumes of Marcel Proust’s “Remembrance of Things Past,” a modern masterpiece that is not gratuitously complicated but rewardingly complex?

“You have turned your back on common men, on their elementary needs and their restricted time and intelligence,” H.G. Wells complained to Joyce after reading “Finnegans Wake.” That didn’t faze Joyce. “The demand that I make of my reader,” he said, “is that he should devote his whole life to reading my works.” To which the obvious retort is: Life’s too short.

Mr. Teachout, the Journal’s drama critic, writes “Sightings” every other Saturday. He is the author of “Pops: A Life of Louis Armstrong.”



Ponte Vecchio, a Bridge That Spans Centuries

This marvel of medieval construction provides a lens on Florence’s layered history

In the 1850s, the city architect for Florence, Giuseppe Martelli, proposed a redesign for Ponte Vecchio with Art Nouveau facades and, inspired by London’s Crystal Palace, a glass roof overhead. The project was never built. Expense was the cited reason, but one senses that its sheer size would have ruined the bridge’s picturesque character.

It’s not that Ponte Vecchio has never changed. It has been altered continuously since 1345 when this structure, the city’s “Old Bridge,” went up over the Arno. But design-by-accretion (and -subtraction) works slowly and carefully in Florence, providing a lens on the city’s layered history.

Ponte Vecchio’s inimitable architecture begins with three graceful masonry spans—long, low and a marvel of medieval construction. Above these sit the famous gold and jewelry shops, crowded along the carriageway and cantilevered asymmetrically—chaotically—over the river. Above one side runs the Vasari Corridor, built in the late Renaissance with all the artistic and political bravado of the Medici rulers who commissioned it.

You could call Ponte Vecchio a pasticcio, a mish-mash, or at least an arrangement of contradictory architectural notes. But dissonant they are not. When the morning sun reaches across the Arno and hits the varied ochre tones, the whole bridge glows as one in the hazeless Florentine light. Visually, distinct elements weave together like immutable strands of the Florentine character—genius, commerce and tyranny, to name three big ones.

No one can say absolutely why Ponte Vecchio touches the spirit the way it does. Why, for instance, did the panorama that the structure dominates liberate Lucy Honeychurch, the protagonist of “A Room With a View,” E.M. Forster’s novel and later a feature film?

Perhaps it’s because Ponte Vecchio’s architecture of countless fragments reflects this city’s incalculable memory or, more simply, its lovely imperfections touch deep emotions. In fact, the construction we see today is far from the first to grace the site. For some 2,000 years the central bridge of Florence has crossed this narrow point in the Arno—at least since 59 B.C. when Romans settled the untamed floodplain that became a colony called Florentia. Engineers, Rome’s true conquerors, laid out the city, drained the marshes and built a bridge with stone piers. As intended, Florence became the crucial link between the north and south of Italy.

The bridge was destroyed, often by vengeful flood, and rebuilt several times. The version preceding the present one was completed around 1200. Remembered for its five high arches and “camel back” profile, it became the old bridge as several new ones went up shortly afterward to serve a population of 100,000.

Florence grew, and Ponte Vecchio figured into many key episodes of local history. Among the most fateful was the murder of Buondelmonte de’ Buondelmonti as he crossed the bridge in 1216. The young nobleman had abandoned an arranged marriage for his true love, triggering a feud and a rift between Guelphs and Ghibellines, factions that fought bloodily for centuries. Dante recorded this milestone a century later in his “Divine Comedy,” and a plaque with his passage about it marks the spot.

The present bridge’s 1345 completion date is definitive, though its authorship is not. At least three designers have been proposed as its architect. The most frequently cited may also be the least likely: Taddeo Gaddi, a painter who was Giotto’s godson. (It was said that Giotto, who designed the Duomo’s campanile, was too busy to take it on.)

No matter whose work, Ponte Vecchio represented an astounding technical feat for its time. The real mystery of the bridge lies in how its builders had the knowledge to design three segmental arches—each less than a semicircle in depth—long enough to allow maximum water to pass underneath and low enough to give the paved surface its comfortable gradient. They were almost certainly the longest such spans in Europe at the time and perhaps the first segmental arches on the continent since Roman times. The precise proportions of this technology would not even be written down for another hundred years. At that point the only other bridge known to use a segmental arch was the An Ji Bridge in Hebei, China, which raises the intriguing question of whether Chinese-Florentine trade brought this touch of engineering knowledge to Italy along with silk and spices.

When built, the bridge had stone walls and battlements along the top, a fortress in symbol if not in fact. Florentines were bellicose but even more ferocious as traders, so shops were also built along the sides to benefit from foot traffic. In the 1400s, the city sold the stores to private owners to raise money for war. They were quickly enlarged, vertically at first, and traces of the old crenellation are still visible in the second-story masonry.

Ponte Vecchio’s most radical addition came in 1565 with the Vasari Corridor, built by Grand Duke Cosimo de’ Medici to connect Palazzo Vecchio, the seat of city government, to the Medici residence at Palazzo Pitti across the river. A private passage required a measure of arrogance, but architect Giorgio Vasari mitigated dissent by respecting private property when he could. Notably, his design detours around the Mannelli Tower at the end of the bridge on a broad shelf and strapping stone brackets, which constitute Ponte Vecchio’s most curious architectural gesture.

Florence remains a city of large buildings like the Duomo and the Uffizi, which fill guidebooks and dissertations. But visitors, not to mention poets and artists, also love the delicate sensations linking the city’s past to the present. In “Mornings in Florence,” John Ruskin counsels to look deeper than the lively shops that populate Ponte Vecchio and inspect its masonry. “The old stones of it,” he marveled in 1877, “are unshaken to this day.”

For most of its history and certainly in recent centuries, the meaning and importance of Ponte Vecchio has been left to the muses, but in 1944 it was subject to hideous bargaining between the Nazis, who were preparing to level most of Florence before retreating, and local art lovers—including German diplomats—who pleaded to save the city’s artistic treasures. It was said that the generals wanted to blow up all the bridges to impede the Allied advance, but Hitler ordered Ponte Vecchio saved. The surreal finale came when some desperate connoisseurs were willing to trade Ponte Vecchio for the exquisite Ponte Santa Trinita.

In fact, Ponte Santa Trinita was dynamited and later rebuilt block by block, with many stones being retrieved from the river. Ponte Vecchio was damaged but not destroyed.

Mr. Pridmore is the author of books on the architecture of Chicago and Shanghai and of a forthcoming book on Ponte Vecchio.



On the Surly Bonds of Marriage

A debut novel ponders marital life—and ‘Rear Window,’ Sam Sheppard and a salty snack

After 13 years of marriage, David Pepin, a videogame entrepreneur, finds that a perverse daydream has come true: His wife, Alice, is dead—not from any of the violent ends he imagined for her but from anaphylactic shock after eating peanuts. And he is the prime suspect. Two detectives assigned to the case, Ward Hasteroll (who thinks that Pepin is guilty) and Sam Sheppard (who thinks that he is innocent), have their own marital miseries. Hasteroll’s wife is depressed and in bed; Sheppard’s wife winds up dead, in bed.

If this all sounds a bit macabre, it is, but it is more than that. “Mr. Peanut,” Adam Ross’s over-written debut novel, is in part a murder mystery and in part—in dominant part, one might say—an examination of the need for freedom and the many ways that, in marriage, it is punitively denied. “Only married men, Sheppard thought, should be detectives. They’d been to places in their hearts that single men hadn’t.”

In Mr. Ross’s narrative there is a great deal of “couple telepathy”— unspoken thoughts, many of them menacing, that one spouse sends out to the other, neither member of the couple acknowledging them outright but both aware of the messages being given and received. The novel’s most dire brainwaves come from Pepin himself, a dark-bearded, barrel-chested entrepreneur who “looked like a Jewish Henry VIII.” The detectives in the novel may not be sure who killed Pepin’s wife—and readers may not be sure either—but Pepin’s murderous thoughts make him seem guilty of a thought crime at least.

In the scenes before Alice dies, it is clear that her life with such a husband is not an easy one. “I don’t know about you,” Alice says to Pepin, “but I feel like we walk around all the time with this other self who wants to say things and do things but can’t.” Her death naturally gives a special poignancy and force to this comment.

In case we miss the point, though, Mr. Ross hammers away at his message throughout “Mr. Peanut,” as he follows the main whodunit plotline and several tributary narratives: We are trapped; we dream of starting over. Couples yearn for a clean slate, seek to begin again—separately. Wives want to suddenly disappear, husbands to make a new life for themselves in a strange city. “The heart is half criminal,” says one of the detectives.

The novel’s repetitions and circlings feel purposeful. The narrator tells us more than once: “We orbit, we retreat.” The mind-bending work of the Dutch artist M.C. Escher inspires the computer games that Pepin designs for a living. It somehow doesn’t come as a surprise that Pepin is also writing a novel very like the one we’re reading. “It was odd,” Mr. Ross writes, “how marriage flattened time, compressed it, hid its passing, time past and time present looping on each other, foreground gone background and back, until the new was the same as the old and the past impossibly novel and strange.” As Mr. Ross surveys marriage, he examines the effects of habit, of boredom, of sameness, of repetition itself.

Still, marriage persists and people persist in wanting it. Mr. Ross is well aware of this odd fact, which saves “Mr. Peanut” from being some sort of demented polemic. The tone is less angry than ironical and investigatory, like the voice of someone trying to solve a puzzle. The central question raised in the novel is posed by one of the characters: “Can marriage save your life, or is it just the beginning of a long double homicide?” Alas, Mr. Ross takes far too long to answer the question, if he ever really does. He wanders instead between multiple plotlines, long stretches of dialogue and essay-like ruminations—with marriage and its “problem” always serving as his guiding obsession.



Take Pepin’s disquisition on the 1954 Alfred Hitchcock film “Rear Window.” You might think that the movie was about the Jimmy Stewart character, confined to a wheelchair by a broken leg, spying on his neighbors, realizing that in a nearby apartment a husband has murdered his wife and enlisting the help of his girlfriend, played by Grace Kelly, in bringing the killer to justice. Pepin thinks the movie is about Stewart’s character feeling so beleaguered by his marriage-minded girlfriend and so desperate for freedom that he “projects his fantasies of killing her on to a jewelry salesman who might actually have killed his invalid wife.”

Hitchcock figures prominently in “Mr. Peanut”—as does the filmmaker’s own marriage. Pepin and Alice in fact first met in a college film course called “Marriage and Hitchcock.” The name of Pepin’s videogame company is Spellbound, after the Hitchcock movie. A walk-on policeman is named Thorwald—the name of the salesman in “Rear Window.” One detective once had a mistress named Margot Wendice (Grace Kelly’s character in “Dial M for Murder”). And those are just some of the more sly references. Less sly is a 12-page discourse by Mr. Ross on the director’s movies and private life. Did you know a chicken appears in every one of the director’s films? That he never had sex with his wife after she gave birth?

“Mr. Peanut” is positively overlarded with excesses, creating a tedium that might rival that of the worst marriage. The capstone of Mr. Ross’s self-indulgence is his 127-page recapitulation of the notorious Sam Sheppard murder case, which has been the subject of many books and is thought to have inspired the television series “The Fugitive” (its creators denied any connection). Sam Sheppard, you will remember, is the name that Mr. Ross gives to one of his detectives.

On the night of July 4, 1954, Sheppard’s pregnant wife was brutally slain in the bedroom of their house in Bay Village, Ohio. Sheppard told police that he was asleep on the couch that night and woke up only when he heard her calling his name. He ran to the bedroom, where he saw someone whom he described later as a “bushy haired man” bludgeoning his wife. The attack knocked him unconscious, Sheppard told the police. Or did the womanizing doctor actually kill her himself?

It is peculiar enough that Mr. Ross recruits Sheppard, a long-dead, real-life figure doomed by history, to label a fictional character who is certainly not in need of a doppelgänger to be interesting. Odder still is the author’s willingness to rehearse the facts of the case at such length, including Sheppard’s conviction in 1954, his retrial in 1966 and his son’s effort to clear his father’s name a few years ago with a civil suit for wrongful imprisonment.

Mr. Ross’s exhaustive recitation of the Sheppard story—“Mr. Peanut” is a work of fiction, remember—is just one of the eye-popping features of a quirky novel in which there are no chapters or chapter headings. The friendly little fellow on the Planters Peanuts can, tipping his top hat, pops up as a kind of fantasized character, looking down on the institution of marriage through his monocle. We also meet a dwarf named Mobius, a gun for hire who forces the plot to an improbable conclusion that, it seems, both Mr. Ross and David Pepin are helpless to avoid. The character’s name, of course, recalls the Möbius strip, the twisted loop that never reaches an end.

It is all rather heady stuff, but uneven too. The book’s set-pieces may seem ingenious, but they often feel like random add-ons. The prose ranges from the self-consciously artful to the undistinguished. And the novel’s marriage Angst, while yielding insights, goes on much too long. Mr. Ross, who does not always grasp, can however be commended for his reach.

Mr. Theroux’s latest novel is “Laura Warholic: Or, The Sexual Intellectual” (Fantagraphics).



The Feuding Fathers

Americans lament the partisan venom of today’s politics, but for sheer verbal savagery, the country’s founders were in a league of their own. Ron Chernow on the Revolutionary origins of divisive discourse.

In the American imagination, the founding era shimmers as the golden age of political discourse, a time when philosopher-kings strode the public stage, dispensing wisdom with gentle civility. We prefer to believe that these courtly figures, with their powdered hair and buckled shoes, showed impeccable manners in their political dealings. The appeal of this image seems obvious at a time when many Americans lament the partisan venom and character assassination that have permeated the political process.


Unfortunately, this anodyne image of the early republic can be quite misleading. However hard it may be to picture the founders resorting to rough-and-tumble tactics, there was nothing genteel about politics at the nation’s outset. For sheer verbal savagery, the founding era may have surpassed anything seen today. Despite their erudition, integrity, and philosophical genius, the founders were fiery men who expressed their beliefs with unusual vehemence. They inhabited a combative world in which the rabble-rousing Thomas Paine, an early admirer of George Washington, could denounce the first president in an open letter as “treacherous in private friendship…and a hypocrite in public life.” Paine even wondered aloud whether Washington was “an apostate or an imposter; whether you have abandoned good principles, or whether you ever had any.”

Such highly charged language shouldn’t surprise us. People who spearhead revolutions tend to be outspoken and courageous, spurred on by a keen taste for combat. After sharpening their verbal skills hurling polemics against the British Crown, the founding generation then directed those energies against each other during the tumultuous first decade of the federal government. The passions of a revolution cannot simply be turned off like a spigot.

By nature a decorous man, President Washington longed for respectful public discourse and was taken aback by the vitriolic rhetoric that accompanied his two terms in office. For various reasons, the political cleavages of the 1790s were particularly deep. Focused on winning the war for independence, Americans had postponed fundamental questions about the shape of their future society. When those questions were belatedly addressed, the resulting controversies threatened to spill out of control.

The Constitutional Convention of 1787 had defined a sturdy framework for future debate, but it didn’t try to dictate outcomes. The brevity and generality of the new charter guaranteed pitched battles when it was translated into action in 1789. While the Constitution established an independent judiciary, for instance, it didn’t specify the structure of the federal court system below the Supreme Court. It made no reference to a presidential cabinet aside from a glancing allusion that the president could solicit opinions from department heads. The huge blanks left on the political canvas provoked heated battles during Washington’s time in office. When he first appeared in the Senate to receive its advice and consent about a treaty with the Creek Indians, he was so irked by the opposition expressed that he left in a huff. “This defeats every purpose of my coming here,” he protested.


Like other founders, Washington prayed that the country would be spared the bane of political parties, which were then styled “factions.” “If I could not go to heaven but with a party,” Thomas Jefferson once stated, “I would not go there at all.” Washington knew that republics, no less than monarchies, were susceptible to party strife. Indeed, he believed that in popularly elected governments, parties would display their “greatest rankness” and emerge as the “worst enemy” to the political system. By expressing narrow interests, parties often thwarted the popular will. In Washington’s view, enlightened politicians tried to transcend those interests and uphold the commonweal. He was so opposed to anything that might savor of partisanship that he refused to endorse congressional candidates, lest he seem to be meddling.

In choosing his stellar first cabinet, President Washington applied no political litmus test and was guided purely by the candidates’ merits. With implicit faith that honorable gentlemen could debate in good faith, he named Alexander Hamilton as treasury secretary and Jefferson as secretary of state, little suspecting that they would soon become fierce political adversaries. Reviving his Revolutionary War practice, Washington canvassed the opinions of his cabinet members, mulled them over at length, then arrived at firm conclusions. As Hamilton characterized this consultative style, the president “consulted much, pondered much; resolved slowly, resolved surely.” Far from fearing dissent within his cabinet, Washington welcomed the vigorous interplay of ideas and was masterful, at least initially, at orchestrating his prima donnas. As Gouverneur Morris phrased it, Washington knew “how best to use the rays” of intellect emitted by the personalities at his command.

During eight strenuous years of war, Washington had embodied national unity and labored mightily to hold the fractious states together; hence, all his instincts as president leaned toward harmony. Unfortunately, the political conflicts that soon arose often seemed intractable: states’ rights versus federal power; an agrarian economy versus one intermixed with finance and manufacturing; partiality for France versus England when they waged war against each other. Anything even vaguely reminiscent of British precedent aroused deep anxieties in the electorate.

As two parties took shape, they coalesced around the outsize personalities of Hamilton and Jefferson, despite their joint membership in Washington’s cabinet. Extroverted and pugnacious, Hamilton embraced this role far more openly than Jefferson, who preferred to operate in the shadows. Although not parties in the modern sense, these embryonic factions—Hamiltonian Federalists and Jeffersonian Republicans—generated intense loyalty among adherents. Both sides trafficked in a conspiratorial view of politics, with Federalists accusing the Republicans of trying to import the French Revolution into America, while Republicans tarred the Federalists as plotting to restore the British monarchy. Each side saw the other as perverting the true spirit of the American Revolution.

As Jefferson recoiled from Hamilton’s ambitious financial schemes, which included a funded debt, a central bank, and an excise tax on distilled spirits, he teamed up with James Madison to mount a full-scale assault on these programs. As a result, a major critique of administration policy originated partly within the administration itself. Relations between Hamilton and Jefferson deteriorated to the point that Jefferson recalled that at cabinet meetings he descended “daily into the arena like a gladiator to suffer martyrdom in every conflict.”

The two men also traded blows in the press, with Jefferson drafting surrogates to attack Hamilton, while the latter responded with his own anonymous essays. When Hamilton published a vigorous defense of Washington’s neutrality proclamation in 1793, Jefferson urged Madison to thrash the treasury secretary in the press. “For God’s sake, my dear Sir, take up your pen, select the most striking heresies, and cut him to pieces in the face of the public.” When Madison rose to the challenge, he sneered in print that the only people who could read Hamilton’s essays with pleasure were “foreigners and degenerate citizens among us.”

Slow to grasp the deep-seated divisions within the country, Washington also found it hard to comprehend the bitterness festering between Hamilton and Jefferson. Siding more frequently with Hamilton, the president was branded a Federalist by detractors, but he tried to rise above petty dogma and clung to the ideal of nonpartisan governance.

Afraid that sparring between his two brilliant cabinet members might sink the republican experiment, Washington conferred with Jefferson at Mount Vernon in October 1792 and expressed amazement at the hostility between him and Hamilton. As the beleaguered president confided, “he had never suspected [the conflict] had gone so far in producing a personal difference, and he wished he could be the mediator to put an end to it,” as Jefferson recorded in a subsequent memo. To Hamilton, Washington likewise issued pleas for an end to “wounding suspicions and irritating charges.” Both Hamilton and Jefferson found it hard to back down from this bruising rivalry. To his credit, Washington never sought to oust Jefferson from his cabinet, despite their policy differences, and urged him to remain in the administration to avoid a monolithic uniformity of opinion.

Feeding the venom of party strife was the unrestrained press. When the new government was formed in 1789, most newspapers still functioned as neutral publications, but they soon evolved into blatant party organs. Printing little spot news, with no pretense of journalistic objectivity, they specialized in strident essays. Authors often wrote behind the mask of Roman pseudonyms, enabling them to engage in undisguised savagery without fear of retribution. With few topics deemed taboo, the press lambasted the public positions as well as the private morality of leading political figures. The ubiquitous James T. Callender typified the scandalmongers. From his poison-tipped pen flowed the exposé of Hamilton’s dalliance with the young Maria Reynolds, which had prompted Hamilton, while treasury secretary, to pay hush money to her husband. Those Jeffersonians who applauded Callender’s tirades against Hamilton regretted their sponsorship several years later when he unmasked President Jefferson’s carnal relations with his slave Sally Hemings.

At the start of his presidency, Americans still viewed Washington as sacrosanct and exempt from press criticism. By the end of his first term, he had shed this immunity and reeled from vicious attacks. Opposition journalists didn’t simply denigrate Washington’s presidential record but accused him of aping royal ways to prepare for a new monarchy. The most merciless critic was Philip Freneau, editor of the National Gazette, the main voice of the Jeffersonians. Even something as innocuous as Washington’s birthday celebration Freneau mocked as a “monarchical farce” that exhibited “every species of royal pomp and parade.”

Other journalists dredged up moldy tales of his supposed missteps in the French and Indian War and derided him as an inept general during the Revolutionary War. In his later, anti-Washington incarnation, Thomas Paine gave the laurels for wartime victory against the British to Gen. Horatio Gates. “You slept away your time in the field till the finances of the country were completely exhausted,” Paine taunted Washington, “and you had but little share in the glory of the event.” Had America relied on Washington’s “cold and unmilitary conduct,” Paine insisted, the commander-in-chief “would in all probability have lost America.”


George Washington pleaded with Alexander Hamilton to end his feud with Thomas Jefferson, saying he hoped that “liberal allowances will be made for the political opinions of one another.” He continued, “Without these I do not see how the reins of government are to be managed, or how the union of the states can be much longer preserved.”

Another persistent Washington nemesis was Benjamin Franklin Bache, grandson of Benjamin Franklin, and nicknamed “Lightning Rod, Jr.” for his scurrilous pen. In his opposition newspaper, the Aurora, Bache questioned Washington’s loyalty to the country. “I ask you, sir, to point out one single act which unequivocally proves you a FRIEND TO THE INDEPENDENCE OF AMERICA.” Resurrecting wartime forgeries fabricated by the British, he raised the question of whether Washington had been bribed by the Crown or even served as a double agent.

So stung was Washington by these diatribes that Jefferson claimed he had never known anyone so hypersensitive to criticism. For all his granite self-control, the president succumbed to private outrage. At one cabinet session, Secretary of War Henry Knox showed Washington a satirical cartoon in which the latter was being guillotined in the manner of the late Louis XVI. As Jefferson recalled Washington’s titanic outburst, “The President was much inflamed; got into one of those passions when he cannot command himself,” and only regained control of his emotions with difficulty. A few years later, in a strongly worded rebuke to Jefferson, Washington reflected on the vicious partisanship that had seized the country, saying that he previously had “no conception that parties” could go to such lengths. He hotly complained of being slandered in “indecent terms as could scarcely be applied to a Nero, a notorious defaulter, or even to a common pick-pocket.” To Washington’s credit, he tolerated the press attacks and never resorted to censorship or reprisals.

As it turned out, the rabid partisanship exhibited by Hamilton and Jefferson previewed America’s future far more accurately than Washington’s noble but failed dream of nonpartisan civility. In the end, Washington seems to have realized as much. By his second term, having fathomed the full extent of Jefferson’s disloyalty, he insisted upon appointing cabinet members who stood in basic sympathy with his policies. After he left office, he opted to join in the partisan frenzy, at least in his private correspondence. He no longer shrank from identifying with Federalists or scorning Republicans, nor did he feel obliged to muzzle his blazing opinions. To nephew Bushrod Washington, he warned against “any relaxation on the part of the Federalists. We are sure there will be none on that of the Republicans, as they have very erroneously called themselves.” He even urged Bushrod and John Marshall to run as Federalists for congressional seats in Virginia.

Only a generation after Washington’s death in 1799, during the age of Andrew Jackson, presidents were to emerge as unabashed chieftains of their political parties, showing no qualms about rallying their followers. The subsequent partisan rancor has reverberated right down to the present day—with no relief in sight.

Ron Chernow is the author of “Alexander Hamilton” and “Titan: The Life of John D. Rockefeller, Sr.” His next book, “Washington: A Life,” is due out in October.




The spelling of English is a bizarre mishmash, no doubt about it. Why do we spell acclimation with an “i” in the middle but acclamation with an “a”? Why do we distinguish between carat, caret, carrot and karat? For those who feel strongly that something needs to be done, there’s no better place to vent some orthographic rage than the Scripps National Spelling Bee. The 2010 bee, held earlier this month, was no exception, as a handful of protesters from the American Literacy Council and the British-based Spelling Society picketed the Grand Hyatt in Washington, while inside young spellers braved such obscurities as paravane (an underwater mine remover) and ochidore (a shore crab).

When talk turns to the irrationality of English spelling conventions, a five-letter emblem of our language’s foolishness inevitably surfaces: ghoti. The Christian Science Monitor, reporting on the spelling-bee protesters, laid out the familiar story (while casting some doubt on its veracity): “The Irish playwright George Bernard Shaw is said to have joked that the word ‘fish’ could legitimately be spelled ‘ghoti,’ by using the ‘gh’ sound from ‘enough,’ the ‘o’ sound from ‘women’ and the ‘ti’ sound from ‘action.’ ”

Just one problem with the well-worn anecdote: there’s not a shred of evidence that Shaw, though a noted advocate for spelling reform, ever brought up ghoti. Scholars have searched high and low through Shaw’s writings and have never found him suggesting ghoti as a comical respelling of fish.

The true origins of ghoti go back to 1855, before Shaw was even born. In December of that year, the publisher Charles Ollier sent a letter to his good friend Leigh Hunt, a noted poet and literary critic. “My son William has hit upon a new method of spelling ‘Fish,’ ” Ollier wrote. You guessed it: good old ghoti. Little is known about William Ollier, who was 31 at the time his father wrote the letter. According to Charles E. Robinson, a professor of English at the University of Delaware who came across the ghoti letter during research on the Ollier family about 30 years ago, William was a journalist whose correspondence reveals a fascination with English etymology.

As a language fancier in mid-19th-century England, William Ollier would surely have come into contact with the strong current of spelling reform — championed by the likes of Isaac Pitman, now remembered for inventing a popular system of phonetic shorthand: what Pitman called “phonography.” In 1845, Pitman’s Phonographic Institution published “A Plea for Phonotypy and Phonography,” by Alexander J. Ellis, a call to arms that laid the groundwork for ghoti and other mockeries of English spelling. To make the case for reform, Ellis presented a number of absurd respellings, like turning scissors into schiesourrhce by combining parts of SCHism, sIEve, aS, honOUr, myRRH and sacrifiCE. (If you’re wondering about the last part, the word sacrifice has historically had a variant pronunciation ending in the “z” sound.)

Ellis thought scissors was a downright preposterous spelling of sizerz, and he went about calculating how many other ways the word could be rendered. At first he worked out 1,745,226 spellings for scissors, then adjusted the number upward to 58,366,440, before finally settling on a whopping 81,997,920 possibilities. Isaac Pitman and his brothers liked to use the scissors example when proselytizing for phonetic spelling, and the 58 million number even worked its way into “Ripley’s Believe It or Not!”
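The arithmetic behind those eye-popping totals is just multiplication: if you tally how many attested English spellings exist for each sound in sizerz and treat every combination as legal, the counts multiply together. A minimal sketch with made-up per-sound counts (Ellis's actual tallies are not reproduced here) shows the principle:

```python
from math import prod

# Hypothetical counts of attested spellings for each of the six
# sounds in "scissors" (s-i-z-er-z); the figures below are purely
# illustrative, not Ellis's real data.
spellings_per_sound = [
    10,  # "s" as in s, ss, c, sc, ps, ...
    8,   # short "i" as in i, y, ie, ...
    12,  # "z" as in z, s, ss, x, ...
    9,   # unstressed vowel as in o, e, a, ou, ...
    5,   # "r" as in r, rr, wr, rh, ...
    12,  # final "z" sound again
]

# Treating every combination as a valid spelling (the very
# assumption Ellis conceded was false) gives a simple product.
total = prod(spellings_per_sound)
print(total)  # 10 * 8 * 12 * 9 * 5 * 12 = 518400
```

Even these modest invented counts yield over half a million "spellings," which is why Ellis's real tallies ballooned into the tens of millions.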

Don’t believe it. Ellis admitted that “the real number would not be quite so large,” since English spelling does not actually work by stitching together parts of words in Frankensteinian fashion. Ghoti falls down for the same reason, if you stop to think about it. Do we ever represent the “f” sound as gh at the beginning of a word or the “sh” sound as ti at the end of a word? And for that matter, is the vowel of fish ever spelled with an “o” in any word other than women? English spelling might be messy, but it does follow some rules.

Robinson suggested to me that William Ollier could have come up with ghoti in a parlor game of Ellis-inspired silly spellings. Victorians often amused themselves with genteel language games, so why not one involving the rejiggering of common words? Into the 20th century, other jokey respellings made the rounds, such as ghoughphtheightteeau for potato (that’s gh as in hiccough, ough as in though, phth as in phthisis, eigh as in neigh, tte as in gazette and eau as in beau).

Ghoti was elevated above these other spelling gags when it became attached to the illustrious name of Shaw — who, like Churchill and Twain, seems to attract free-floating anecdotes. If Shaw never said it, who was responsible for the attribution? I blame the philologist Mario Pei, who spread the tale in The Los Angeles Times in 1946 and then again in his widely read 1949 book, “The Story of Language.” Pei could have been confusing Shaw with another prominent British spelling reformer, the phonetician Daniel Jones (said to be one of the models for Shaw’s Henry Higgins in “Pygmalion”), since Jones really did make use of the ghoti joke in a 1943 speech.

With Shaw’s supposed imprimatur, ghoti lingers with us. Jack Bovill, chairman of the Spelling Society, told me that, jocular as it is, ghoti remains “useful as an example of how illogical English spelling can be.” I beg to differ: if presented with ghoti, most people would simply pronounce it as goaty. You don’t have to be a spelling-bee champ to know that written English isn’t entirely a free-for-all.

Ben Zimmer will answer one reader question every other week.

