Humans to Asteroids: Watch Out!

A FEW weeks ago, an asteroid almost 30 feet across and zipping along at 38,000 miles per hour flew 28,000 miles above Singapore. Why, you might reasonably ask, should non-astronomy buffs care about a near miss from such a tiny rock? Well, I can give you one very good reason: asteroids don’t always miss. If even a relatively little object were to strike a city, millions of people could be wiped out.

Thanks to telescopes that can see ever smaller objects at ever greater distances, we can now predict dangerous asteroid impacts decades ahead of time. We can even use current space technology and fairly simple spacecraft to alter an asteroid’s orbit enough to avoid a collision. We simply need to get this detection-and-deflection program up and running.

President Obama has already announced a goal of landing astronauts on an asteroid by 2025 as a precursor to a human mission to Mars. Asteroids are deep-space bodies, orbiting the Sun, not the Earth, and traveling to one would mean sending humans into solar orbit for the very first time. Facing those challenges of radiation, navigation and life support on a months-long trip millions of miles from home would be a perfect learning journey before a Mars trip.

Near-Earth objects like asteroids and comets — mineral-rich bodies bathed in a continuous flood of sunlight — may also be the ultimate resource depots for the long-term exploration of space. It is fantastic to think that one day we may be able to access fuel, materials and even water in space instead of digging deeper and deeper into our planet for what we need and then dragging it all up into orbit, against Earth’s gravity.

Most important, our asteroid efforts may be the key to the survival of millions, if not our species. That’s why planetary defense has occupied my work with two nonprofits over the past decade.

To be fair, no one has ever seen the sort of impact that would destroy a city. The most instructive incident took place in 1908 in the remote Tunguska region of Siberia, when a 120-foot-diameter asteroid exploded early one morning. It probably killed nothing except reindeer, but it flattened 800 square miles of forest. Statistically, that kind of event occurs every 200 to 300 years.

Luckily, larger asteroids are even fewer and farther between — but they are much, much more destructive. Just think of the asteroid seven to eight miles across that annihilated the dinosaurs (and 75 percent of all species) 65 million years ago.

With a readily achievable detection and deflection system, we can avoid the same fate. Professional (and a few amateur) telescopes and radar already function as a nascent early warning system, working every night to discover and track those planet-killers. Happily, none of the 903 we’ve found so far seriously threatens an impact in the next 100 years.

Although catastrophic hits are rare, enough of these objects appear to be heading our way that we will need to make deflection decisions every decade or so. Certainly, when it comes to the far more numerous Tunguska-sized objects, to date we think we’ve discovered less than half of 1 percent of the million or so that cross Earth’s orbit every year. We need to pinpoint many more of these objects and predict whether they will hit us before it’s too late to do anything other than evacuate ground zero and try to save as many lives as we can.

So, how do we turn a hit into a miss? While there are technical details galore, the most sensible approach involves rear-ending the asteroid. A decade or so ahead of an expected impact, we would need to ram a hunk of copper or lead into an asteroid in order to slightly change its velocity. In July 2005, we crashed the Deep Impact spacecraft into comet Tempel 1 to learn more about comets’ chemical composition, and this proved to be a crude but effective method.

It may be necessary to make a further refinement to the object’s course. In that case, we could use a gravity tractor — an ordinary spacecraft that simply hovers in front of the asteroid and employs the ship’s weak gravitational attraction as a tow-rope. But we don’t want to wait to test this scheme when potentially millions of lives are at stake. Let’s rehearse, at least once, before performing at the Met!

The White House Office of Science and Technology Policy has just recommended to Congress that NASA begin preparing a deflection capacity. In parallel, my fellow astronaut Tom Jones and I led the Task Force on Planetary Defense of the NASA Advisory Council. We released our report a couple of weeks ago, strongly urging that the financing required for this public safety issue be added to NASA’s budget.

This is, surprisingly, not an expensive undertaking. Adding just $250 million to $300 million to NASA’s budget would, over the next 10 years, allow for a full inventory of the near-Earth asteroids that could do us harm, and the development and testing of a deflection capacity. Then all we’d need would be an annual maintenance budget of $50 million to $75 million.

By preventing dangerous asteroid strikes, we can save millions of people, or even our entire species. And, as human beings, we can take responsibility for preserving this amazing evolutionary experiment of which we and all life on Earth are a part.

Russell Schweickart, a former astronaut, was the co-chairman of the Task Force on Planetary Defense of the NASA Advisory Council.



Unpopular Science

Whether we like it or not, human life is subject to the universal laws of physics.

My day, for example, starts with a demonstration of Newton’s First Law of Motion.

Christoph Niemann - Physics

It states, “Every body continues in its state of rest, or of uniform motion in a straight line…”


“…unless it is compelled to change that state by forces impressed upon it.”


Based on supercomplicated physical observations, Einstein concluded that two objects may perceive time differently.

Based on simple life experience, I have concluded that this is true.


Newtonʼs Cradle shows how energy travels through a series of objects.

In our particular arrangement, kinetic energy is ultimately converted into a compression of the forehead.


The forehead can be uncrumpled by a downward movement of the jaw.


Excessive mechanical strain will compromise the elasticity of most materials, though.


The human body functions like a combustion engine. To produce energy, we need two things:
– Oxygen, supplied through the nostrils (once the toy car is removed, that is).
– Carbohydrates, which come in various forms (vanilla, chocolate, dulce de leche).


By the by: I had an idea for a carb-neutral ice cream.
All you need is to freeze a pint of ice cream to -3706 F.
The energy it will take your system to bring the ice cream up to a digestible temperature is roughly 1,000 calories, neatly burning away all those carbohydrates from the fat and sugar.
The only snag is the Third Law of Thermodynamics, which says it’s impossible to go below -459 F.


But back to Newton: he discovered that any two objects in the universe attract each other, and that this force is proportional to their mass.

The Earth is heavier than the Moon, and therefore attracts our bodies with a much greater force.


This explains why an empty refrigerator exerts a much smaller gravitational pull than, say, one thatʼs stacked with 50 pounds of delicious leftovers. Great: that means we can blame the leftovers.


(Fig. A): Letʼs examine the behavior of particles in a closed container.

(Fig. B): The more particles we squeeze into the container, the testier they will become, especially if the container happens to be a rush-hour downtown local at 86th and Lex.

(Fig. C): Usually the particles will distribute evenly, unless there is a weird-looking puddle on the floor.


The probability of finding a seat on the subway is inversely proportional to the number of people on the platform.

Even worse, the utter absence of people is 100 percent proportional to just having missed the train.


To describe different phenomena, physicists use various units.

PASCALS, for example, measure the pressure applied to a certain area.

COULOMBS measure electric charge (which can occur if said area is a synthetic carpet).

DECIBELS measure the intensity of the trouble the physicist gets into because he didnʼt take off his shoes first.


Often those units are named after people to recognize historic contributions to their field of expertise. One NEWTON, for example, describes the force necessary to accelerate one kilogram of mass at one meter per second squared.

This is not to be confused with one NIEMANN, which describes the force necessary to make a three-year-old put on his shoes and jacket when weʼre already late for kindergarten.


Once the child is ready to go, I search for my keys. I start spinning around to scan my surroundings. This rotation exposes my head and all its contents to centrifugal forces, resulting in loss of hair and elongated eyeballs. That’s why I need to wear prescription glasses, which are yet another thing I constantly misplace.


Obviously, the hair loss theory I just presented is bogus. Hair canʼt be “lost.” Since Antoine Lavoisier, we all know that “matter can be neither created nor destroyed, though it can be rearranged,” which, sadly, it eventually will.


Not everything can be explained through physics, though. Iʼve spent years searching for a rational explanation for the weight of my wifeʼs luggage. There is none. It is just a cruel joke of nature.

Christoph Niemann, New York Times



No Second Thoughts

When times get tough, it’s really important to believe in yourself. This is something the Democrats have done splendidly this year. The polls have been terrible, and the party may be heading for a historic defeat, but Democrats have done a magnificent job of maintaining their own self-esteem. This is vital, because even if the public doesn’t approve of you, it is important to approve of yourself.

In fact, I would go so far as to say that Democrats have become role models. They have offered us lessons on how we, too, may continue to love ourselves, even in trying circumstances.

Lesson one. Think happy thoughts. Never allow yourself to dwell on downer, depressing ones.

Over the past year, many Democrats have resolutely paid attention to those things that make them feel good, and they have carefully filtered out those negative things that make them feel sad.

For example, Democrats and their media enablers have paid lavish attention to Christine O’Donnell and Carl Paladino, even though these two Republican candidates have almost no chance of winning. That’s because it feels so delicious to feel superior to opponents you consider to be feeble-minded wackos.

On the other hand, Democrats and their enablers have paid no attention to Republicans like Rob Portman, Dan Coats, John Boozman and Roy Blunt, who are likely to actually get elected. It doesn’t feel good when your opponents are experienced people who simply have different points of view. The existence of these impressive opponents introduces tension into the chi of your self-esteem.

Similarly, the Democrats and their enablers have paid lavish attention to the Tea Party this year. It’s nice to feel more sophisticated than those hordes of Middle Americans, who say silly things like “Get government off my Medicare.”

On the other hand, Democrats have paid little attention to the crucial group in this election — the independent moderates who supported President Obama in 2008 but flocked away during the health care summer of 2009 and now support the GOP by landslide proportions.

Losing friends makes you sad. It is better to not think about why these things happen.

Lesson two. Always remember, many great geniuses were unappreciated in their lifetimes.

Democrats are lagging this year because the country appears incapable of appreciating the grandeur of their accomplishments. That’s because, as several commentators have argued over the past few weeks, many Americans are nearsighted and ill-informed. Or, as President Obama himself noted last week, they get scared, and when Americans get scared they stop listening to facts and reason. They get all these crazy ideas in their heads, like not wanting to re-elect Blanche Lincoln.

The Democrats’ problem, as some senior officials have mentioned, is that they are so darn captivated by substance, it never occurs to them to look out for their own political self-interest. By the way, here’s a fun party game: Get a bottle of vodka and read Peter Baker’s article “The Education of President Obama” from The New York Times Magazine a few weeks ago. Take a shot every time a White House official is quoted blaming Republicans for the Democrats’ political plight. You’ll be unconscious by page three.

Lesson three. Always remember: You are the hero of your own children’s adventure story.

Some low-minded people could look at events this year and tell a dull, prosaic story. They would say that parties that promote unpopular policies tend to get punished at election time. These grubby-minded people would point out that Democratic House members who voted against health care are doing well in their re-election bids, while those who voted for it are getting clobbered.

But many Democrats have a loftier sensibility. They see this campaign as a poetic confrontation between good (themselves) and pure evil (Karl Rove and his group, American Crossroads).

As Nancy Pelosi put it at a $50,000-a-couple fund-raiser, “Everything was going great and all of a sudden secret money from God knows where — because they won’t disclose it — is pouring in.”

Even allowing for the menace of secret money, embracing this Paradise Lost epic means obscuring a few inconvenient facts: that Democrats were happy to benefit from millions of anonymous dollars in 2006, 2008 and today; that the spending by Rove’s group amounts to less than 1 percent of the total money spent on campaigns this year; and that Democrats retain an overall spending advantage.

But legend rises above mere facticity, and this Lancelots-of-the-Left tale underlines a self-affirming message — that Democrats are engaged in a righteous crusade against the dark villain who tricked Americans into voting against John Kerry.

In short, it’s hard not to be impressed by the spirit of self-approval that Democrats have managed to maintain this election. I say that knowing it may end as soon as next Wednesday, when, as is their wont, Democrats will flip from complete self-worship to complete self-laceration in the blink of an eye.

David Brooks, New York Times



Serving Two Masters: Shariah Law and the Secular State

A few weeks ago, the Cardozo School of Law mounted a conference marking the 20th anniversary of Employment Division v. Smith (1990), a case in which the Supreme Court asked what happens when a form of behavior demanded by one’s religion runs up against a generally applicable law — a law not targeted at any particular agenda or point of view — that makes the behavior illegal. (The behavior at issue was the ingestion of peyote at a Native American religious ceremony.) The answer the court gave, with Justice Antonin Scalia writing for the majority, was that the religious believer must yield to the law of the state so long as that law was not passed with the intention of curtailing or regulating his or anyone else’s religious practice. (This is exactly John Locke’s view in his “Letter Concerning Toleration.”)

“To make the individual’s obligation to obey . . . a law contingent upon the law’s coincidence with his religious beliefs” would have the effect, Scalia explains, of “permitting him, by virtue of his beliefs, ‘to become a law unto himself.’” And if that were allowed, there would no longer be a single law — universally conceived and applied — but multiple laws each of which was tailored to the doctrines and commands of a particular faith. In order to have law in the strong sense, Scalia is saying, you can have only one. (“No man can serve two masters.”)

The conflict between religious imperatives and the legal obligations one has as a citizen of a secular state — a state that does not take into account the religious affiliations of its citizens when crafting laws — is an old one (Scalia is quoting Reynolds v. United States, 1878); but in recent years it has been felt with increased force as Muslim immigrants to Western secular states evidence a desire to order their affairs, especially domestic affairs, by Shariah law rather than by the supposedly neutral law of a godless liberalism. I say “supposedly” because of the obvious contradiction: how can a law that refuses, on principle, to recognize religious claims be said to be neutral with respect to those claims? Must a devout Muslim (or orthodox Jew or fundamentalist Christian) choose between his or her faith and the letter of the law of the land?

In February 2008, the Right Reverend Rowan Williams, Archbishop of Canterbury, tried in a now-famous lecture to give a nuanced answer to these questions by making what he considered a modest proposal. After asking “what degree of accommodation the laws of the land can and should give to minority communities with their strongly entrenched legal and moral codes,” Williams suggested (and it is a suggestion others had made before him) that in some areas of the law a “supplementary jurisdiction,” deriving from religious law, be recognized by the liberal state, which, rather than either giving up its sovereignty or invoking it peremptorily to still all other voices, agrees to share it in limited areas where “more latitude [would be] given in law to rights and scruples rooted in religious identities.”

Williams proceeded immediately to surround his proposal with cautionary safeguards — “no ‘supplementary’ jurisdiction could have the power to deny access to the rights granted to other citizens or to punish its members for claiming those rights” — but no safeguards would have satisfied his many critics, including Prime Minister Gordon Brown, who declared roundly that there is only one common law for all of Britain and it is based squarely on “British values.”

Prompted by Williams’s lecture and the responses it provoked, law professors Rex Ahdar and Nicholas Aroney have now put together a volume, to be published in 2011, under the title “Shari’a in the West,” a collection of learned and thoughtful essays by some of the world’s leading scholars of religion and the law. The volume’s central question is stated concisely by Erich Kolig, an anthropologist from New Zealand: “How far can liberal democracy go, both in accommodating minority groups in public policy, and, more profoundly, in granting official legal recognition to their beliefs, customs, practices and worldviews, especially when minority religious conduct and values are not congenial to the majority,” that is, to liberal democracy itself?

This is exactly the question posed by John Rawls in a preface to the second edition of “Political Liberalism,” his magisterial account and defense of liberal political principles: “How is it possible for those affirming a religious doctrine that is based on religious authority . . . also to hold a reasonable political conception that supports a just democratic regime?” The words to stumble on are “reasonable” and “just,” which at once introduce the requirement and indicate how hard, if not impossible, it will be to meet it: “reasonable” means conforming to rational, not religious, principles; “just” means respecting the equality of all, not just male or faithful, individuals.

With these concepts as the baseline of “accommodation,” accommodation is going to fall far short of anything that will satisfy the adherents of a religion that “encompasses all aspects of public and private law, hygiene, and even courtesy and good manners” (A. A. An-Na’im). In liberal thought these areas are the ones in which the individual reigns supreme and the value of individual choice is presupposed; but, as Ann Black explains, “Muslims do not conceptualize Islam in terms of the Westernized sociological categorization of religion which places the individual at the centre of all analyses.”

And so, perhaps predictably, the essays in “Shari’a in the West” tack back and forth between the uneasy alternatives Williams names in his lecture — “an assumption on the religious side that membership of the community . . . is the only significant category,” and on the other side secular government’s assumption of a “monopoly in terms of defining public and political identity.” These assumptions seem to be standing obstacles to the ability of secular Western states to think through the problem presented by growing Muslim populations that are sometimes militant in their demand to be ruled by their own faiths and traditions.

On the one hand, there is the liberal desire to accord one’s fellow human beings the dignity of respecting their deepest beliefs. On the other hand, there is the fear that if those beliefs are allowed their full scope, individual rights and the rule of law may be eroded beyond repair. It would seem, at least on the evidence of most of these essays, that there is simply no way of “finding a viable path that accommodates diversity with equality” (Ayelet Shachar), that is, accommodates tolerance of diverse religious views with an insistence that, in the last analysis, the rights of individuals cannot be trumped by a theological imperative. No one in this volume quite finds the path.

Except perhaps theologian and religious philosopher John Milbank, who puts forward, the editors tell us, “the striking argument that only a distinctly Christian polity — not a secular postmodern one — can actually accord Islam the respect it seeks as a religion.” The italicized phrase is key: the respect liberalism can accord Islam (or any other strong religion) is the respect one extends to curiosities, eccentrics, the backward, the unenlightened and the unfortunately deluded. Liberal respect stops short — and this is not a failing of liberalism, but its very essence — of taking religious claims seriously, of considering them as possible alternative ways of ordering not only private but public life.

Christianity, says Milbank, will be more capable of deeply respecting Islam because the two faiths share a commitment to the sacred and to a teleological view of history notably lacking in liberalism (again, this is not a criticism but a definition of liberalism): A “Christian polity can go further in acknowledging the integral worth of a religious group as a group than a secular polity can.” Christianity can acknowledge the worth of Islam not merely in an act of tolerance but in an act of solidarity in the same way that Christian sects can acknowledge each other. If you are a Catholic, Milbank explains, “and you do not agree with the Baptists you can nevertheless acknowledge that, relatively speaking, they are pursuing social goals that are comparable with, and promote a shared sense of human dignity” as defined by a corporate religious identity. Liberalism can acknowledge individual Muslims or individual Baptists or individual Catholics, but the liberal acknowledgment detaches these religious believers from their community of belief and turns them into citizens who are in the things that count (to liberalism) just like everyone else.

“Liberal principles,” declares Milbank, “will always ensure that the rights of the individual override those of the group.” For this reason, he concludes, “liberalism cannot defend corporate religious freedom.” The neutrality liberalism proclaims “is itself entirely secular” (it brackets belief; that’s what it means by neutrality) and is therefore “unable to accord the religious perspective [the] equal protection” it rhetorically promises. Religious rights “can only be effectively defended pursuant to a specific and distinctly religious framework.” Liberal universalism, with its superficial respect for everyone (as long as everyone is superficial) and its deep respect for no one, can’t do it.

If that is so, then the other contributors to this volume are whistling “Dixie,” at least with respect to the hope declared by Rawls that liberalism in some political form might be able to do justice to the strongly religious citizens of a liberal state. Milbank’s fellow essayists cannot negotiate or remove the impasse he delineates, but what they can do, and do do with considerable ingenuity and admirable tact, is find ways of blunting and perhaps muffling the conflict between secular and religious imperatives, a conflict that cannot (if Milbank is right, and I think he is) be resolved on the level of theory, but which can perhaps be kept at bay by the ad-hoc, opportunistic, local and stop-gap strategies that are at the heart of politics.

Stanley Fish, New York Times



Testosterone Put to the Test

Men today—wimpy or exploited or both?

Do today’s men need to man up? Yes, absolutely, Peter McAllister says in “Manthropology,” viewing contemporary males as faint shadows of their shaggy forebears.

Modern man, Mr. McAllister declares, is “the worst man in history,” though not every reader will be convinced by the evidence presented. Certainly the guys of 2010 are not as physically tough as the men of other times and other places. Mr. McAllister, who is especially entertaining when he writes about male-centric mayhem, scoffs at what passes for grit these days. He dismisses, for instance, modern-day “blood pinning,” in which military insignia are jabbed into soldiers’ chests, as minor-league at best. Sambian boys in New Guinea have traditionally been initiated into manhood with cane splints jammed up their nostrils and vines shoved down their throats. He also roughs up modern soldiers, noting that Army recruits are asked to run only 12 miles in four hours; in China, Wu Dynasty soldiers in the sixth century B.C. were reputed to go on 80-mile runs without a break.

You might think, given all the moaning lately, that helmet-to-helmet hits in football are a sign of a violent sports culture. Don’t tell Mr. McAllister. Even the no-holds-barred brawling of Ultimate Fighting Championship, he says, is “a ridiculously safe form of combat” when compared with Olympic boxing back in the good old days—say, the fifth century B.C. That’s when a boxer named Cleomedes killed his opponent Iccus “by driving his hand into his stomach and disemboweling him.”

For Mr. McAllister one measure of manhood is the willingness to face an enemy and mete out punishment without flinching. Today our conduct in war is governed by a handbook of careful rules. Mr. McAllister, for contrast, points to the 17th-century Native American practice of not only scalping victims alive but also “heaping hot coals onto their scalped heads.” Which is nothing compared with the attentions lavished by the Romans on a Christian named Apphianus, who was racked for 24 hours and scourged so hard that “his ribs and spine showed.”

Even today’s bloodthirsty maniacs are pikers by comparison with the rampagers of yore. In the 13th century, Genghis Khan’s son Tolui killed nearly every inhabitant of Merv in Turkmenistan, then the world’s largest city. All told, Mr. McAllister writes, the Mongols killed as many as 60 million people during nearly a century of slaughter. “Al Qaeda and its affiliates,” he adds with something of a sneer, “succeeded in killing 14,602 people worldwide in 2005.” True enough, although by some readings Mr. McAllister is describing a positive development.

Male readers who slink away from “Manthropology” feeling that Mr. McAllister has driven a Cleomedesian fist into their guts may find some solace in Roy F. Baumeister’s “Is There Anything Good About Men?” Mr. Baumeister is less concerned about the wimpification of modern man than about the degree to which men have been historically “exploited.” The very cultures that men have built, he says, have considered males more “expendable” than women.

The expendability is reflected in wartime casualty rates, of course, but men also die more often in work-related accidents and die earlier, on average. Their energies are the motor for some bad things but also for a great deal of good, including the economic bustle and technological advance that we associate with progress. But men, Mr. Baumeister says, are often taken for granted and denigrated as the bane of female existence, with some gender activists insisting that women would be better off without them. In a feisty rejoinder, Mr. Baumeister says that “if women really would have been happier without men,” they would have “set up shop” on their own long ago. “The historical record is overwhelming,” he adds. “Women stick around men.”

In a passage that may strike a chord in some male readers, Mr. Baumeister says that men are disadvantaged when it comes to sex. Women don’t pay for sex because “they don’t have to. Women can get sex for nothing.” When women offer themselves to male celebrities, he notes, men jump at the opportunity. When men do the same to women celebrities, they can expect a visit from the security detail. But Mr. Baumeister, a psychology professor, writes with a hopeful air, insisting that, while men and women are different, they can create partnerships based on complementary skills. No hard feelings, apparently, about those centuries of exploitation.

Both “Manthropology” and “Is There Anything Good About Men?” leave the reader wondering: Aren’t men better off these days? What’s a decline in pillaging-proficiency and a history of being a tad taken-advantage-of when, on the whole, modern man has it so good?

Mr. McAllister inadvertently answers the question at book’s end by envisioning a male Homo erectus from a million years ago, plucked off the African plain and plunked down at a Nascar event. The visitor, we’re told, gazing at the soft-bellied male race enthusiasts in the stands, would be horrified and bellow (if he could indeed speak): “My sons, my sons, why have you forsaken me?”

But there is another view. If ancient erectus were told that his “sons” had driven to the event at 70 m.p.h. in cars outfitted with satellite radios; that they lived in climate-controlled houses equipped with refrigerators full of parasite-free steaks from Argentina and beer from Holland; that they and their womenfolk took showers and were familiar with shampoo, he might shout: “My sons, you have found the Kingdom of Heaven!” And, comparatively speaking, he’d be right.

Mr. Shiflett posts his journalism and original music online.



Study Highlights German Foreign Ministry’s Role in Holocaust

Historians Deliver Damning Verdict

A cameraman films the Foreign Ministry building in Berlin. A panel of historians is due to present a study of the ministry’s history during and after the Nazi era.

Historians have found that the German Foreign Ministry was far more deeply involved in the Holocaust than had been thought. A new study commissioned by former minister Joschka Fischer in 2005 is due to be presented this week; it concludes that diplomats went on covering up the past for decades.

As far as book launches go, this will be an unusual one. Three German foreign ministers, past and present, will mark the publication on Thursday of a history of the ministry’s role during the Nazi era.

The 880-page work compiled by a panel of historians was commissioned in 2005 by Joschka Fischer shortly before the end of his tenure as Germany’s top diplomat. It will be formally handed over to the present incumbent, Guido Westerwelle, on Thursday afternoon.

That evening, Fischer and Frank-Walter Steinmeier, who was foreign minister from the end of 2005 until last year, will be attending an event hosted by the publishing company Blessing Verlag.

All three ministers will have to talk about the Holocaust, about war crimes, about diplomatic failure, about perfidious behavior and about rare incidents of heroism, all in the context of the German Foreign Ministry during the Third Reich.

The book will be presented by a commission that includes the historians Eckart Conze and Norbert Frei of Germany, Peter Hayes of the United States and Moshe Zimmermann of Israel. Their book deals with the history of this most distinguished of German ministries during this dark chapter, and about how it dealt with its past after the war.

Diplomats ‘Actively Involved’ in Holocaust

The experts’ verdict is damning. “The diplomats were aware of the Jewish policy throughout,” they write, “and actively involved in it.” Cooperating in mass murder was “an area of activity” of ministry staff “everywhere in Europe.”

Fischer had commissioned the study in 2005 to settle a heated dispute in his ministry about the extent of its historical guilt. The results are unlikely to calm the controversy. Fischer was shocked by the findings. “It makes me feel sick,” he said.

The head of the commission, Eckart Conze, even described the Foreign Ministry as a “criminal organization” in an interview with SPIEGEL (to be published in English later this week). That was the term used at the Nuremberg Trials to describe the SS. Conze’s assessment amounts to a condemnation of Germany’s upper classes during the Nazi era. No other institution had so many members from illustrious families on its staff — the Weizsäckers, the Bismarcks, the Mackensens.

The historians’ findings about the ministry in the post-war West German era are also explosive. Chancellor Konrad Adenauer, who had the job of foreign minister from 1951 until 1955 during his tenure as West German leader, allowed former Nazis to remain on the ministry’s staff even though he was well aware of the roles they had played under Hitler. Diplomats with Nazi pasts were posted in Arab countries and Latin America where they were unlikely to encounter public criticism.

Former Nazis in West German Foreign Service

The situation didn’t improve much when the center-left Social Democratic Party came to power in 1966. Willy Brandt, who resisted the Nazis and emigrated during the 1930s, became foreign minister and then chancellor. But he continued to work with Ernst Achenbach, a foreign policy expert for the Free Democratic Party in the 1960s and 1970s, who — according to the commission — was involved in the deportation of Jews from occupied France during the war when he was a high-ranking member of the German embassy in Paris. Right up until 1974, Achenbach blocked an agreement between West Germany and France to permit the prosecution of Nazis who had committed crimes in France.

Well into the 1980s, during the tenure of Foreign Minister Hans-Dietrich Genscher, historians ran into a wall of silence when they wanted to dig for incriminating documents in the ministry’s archives in order to refute the official version of events — that it had been a “haven of resistance.”

Former Foreign Minister Steinmeier says the study’s findings about the post-war years were among the most depressing passages. He said it was “incredible” that it had taken 60 years to conduct systematic research into the history of the ministry. The study was only launched because Fischer got into an argument with his staff.

Fischer says the trigger was a “ridiculous obituary” circulated among staff in 2003 about Franz Nüsslein, who had been a diplomat in the West German Foreign Ministry. The text declined to mention that Nüsslein had been senior prosecutor in Prague during the war and had been partly responsible for hundreds of executions there. Fischer, who was foreign minister at the time, ordered that the ministry should refrain in future from honoring former Nazi party members.

This ban was applied for the first time a year later after the death of Franz Krapf, West Germany’s ambassador to NATO under Genscher. He had been a member of the Nazi party and the SS.

Former diplomats rebelled against the ban and many active members of the diplomatic service joined the protest. They argued that it was unfair to condemn staff who had been members of the Nazi party, and 128 former diplomats put a large death notice in the respected Frankfurter Allgemeine Zeitung newspaper in defense of Krapf’s honor.

Surprised by the reaction, Fischer responded by hiring the commission. He feels that the findings have confirmed his stance. “That’s the obituary these gentlemen deserve,” he said.

Study Lacks Balance

But Fischer’s victory isn’t that clear-cut. The study shows that membership in the Nazi party in itself says nothing about the extent of involvement in crimes. But above all, it isn’t as balanced as the studies that usually put debates such as this to rest. 

It contains repeated references to “the” diplomats even though they didn’t all commit crimes, as the book itself emphasizes in another passage. In addition, it assumes that diplomats had demonstrated their support for the “Final Solution” — the term the Nazis used for the Holocaust — just by reading the reports filed by the murderous death squads and signing them as read.

The study also creates the impression that several diplomats were involved in murders, but then fails to provide proof.

For example, Krapf was stationed at the German embassy in Tokyo during the war. The historians write: “Little is known about Krapf’s activities (editor’s note — in Japan), but it’s clear that German diplomats dealt with the ‘Final Solution’ of the Jewish question even in the Far East.” That is supposed to mean: Krapf took part in the genocide somehow.

Former diplomats won’t be the only ones to scrutinize such passages. The historians are also likely to face criticism from younger diplomats because the study accuses staff members of having failed to question the official line right up to the 1990s. One high-ranking ministry official said that wasn’t true. He pointed to research conducted long ago by the historian Hans-Jürgen Döscher about the crimes committed by diplomats. Staff members had read that research, the official said.

Contrary to the commission’s claims, the ministry has already adopted a differentiated view of its own past, the official added. An official brochure published in 1995 says the ministry had contained “several fanatical supporters” and a “considerable number” of people who went along with the Nazis and were indifferent about their crimes.

In a sign of how sensitive the study’s findings are, Westerwelle cancelled a joint book presentation with Steinmeier and Fischer after the publishing firm said it planned a panel discussion between the three ministers and the historians.

Westerwelle seems to have had a feeling that he couldn’t win in a clash with the eloquent Fischer, for whom confronting Germany’s Nazi past has been a lifelong theme and who always relishes taking a swipe at Westerwelle.

New Approach to Dealing With Past

But Westerwelle too has praised the book as “a weighty piece of work” which would help reaffirm the ministry’s sense of self. He wants to incorporate the book in the training course for young diplomats and to change the way the ministry observes its traditions.

The ministry also plans to revise any brochures that fail to mention the roles former staff members played during the Nazi era. In addition, it will take a closer look at the portraits of diplomats hanging on the walls of the ministries and of embassies.

It may well be that embassies follow the example of the London embassy, which mentions the Nazi past of Konstantin von Neurath, the foreign minister from 1932 to 1938, beneath a portrait of him. It may be that in future, only portraits of post-war ambassadors will be shown.

The study in itself represents a break with the past in one important respect: The foreign ministry has put itself at the forefront of historical research into its past. The other ministries largely ignore their Nazi history to this day.



Seeking Proof in Near-Death Claims

At 18 hospitals in the U.S. and U.K., researchers have suspended pictures, face up, from the ceilings in emergency-care areas. The reason: to test whether patients brought back to life after cardiac arrest can recall seeing the images during an out-of-body experience.

People who have these near-death experiences often describe leaving their bodies and watching themselves being resuscitated from above, but verifying such accounts is difficult. The ceiling-mounted images would be visible only to someone actually looking down from above.

“We’ve added these images as objective markers,” says Sam Parnia, a critical-care physician and lead investigator of the study, which hopes to include 1,500 resuscitated patients. Dr. Parnia declined to say whether any have accurately described the images so far, but says he hopes to report preliminary results next year.

The study, coordinated by Southampton University’s School of Medicine in England, is one of the latest and largest scientific efforts to understand the mystery of near-death experiences.

At least 15 million American adults say they have had a near-death experience, according to a 1997 survey—and the number is thought to be rising with increasingly sophisticated resuscitation techniques.

People often describe moving down a dark tunnel toward a bright light during a near-death experience.

Dead or Alive?

An analysis of 613 near-death experiences gathered by the Near Death Research Foundation found:

  • About 75% included an out-of-body experience
  • 76% reported intense positive emotions
  • 34% described passing through a tunnel
  • 65% described encountering a bright light
  • 22% had a life review
  • 57% encountered deceased relatives or other beings

Note: Patients could report more than one sensation.


In addition to floating above their bodies, people often describe moving down a dark tunnel toward a bright light, feeling intense peace and joy, reviewing life events and seeing long-deceased relatives—only to be told that it’s not time yet and land abruptly back in an ailing body.

The once-taboo topic is getting a lot of talk these days. In the new movie “Hereafter,” directed by Clint Eastwood, a French journalist is haunted by what she experienced while nearly drowning in a tsunami. A spate of new books details other cases and variations on the theme.

Yet the fundamental debate rages on: Are these glimpses of an afterlife, are they hallucinations or are they the random firings of an oxygen-starved brain?

“There are always skeptics, but there are millions of ‘experiencers’ who know what happened to them, and they don’t care what anybody else says,” says Diane Corcoran, president of the International Association for Near-Death Studies, a nonprofit group in Durham, N.C. The organization publishes the Journal of Near-Death Studies and maintains support groups in 47 states.

Dr. Corcoran, a retired Army colonel who heard wounded soldiers talk of such experiences as a nurse in Vietnam, says many military veterans have had near-death experiences but are particularly hesitant to talk about them for fear of being branded psychologically disturbed.

Some investigators say the most remarkable thing about near-death reports is that the core elements are the same, among people of all cultures, races, religions and age groups, including children as young as 3 years old.

In his new book, “Evidence of the Afterlife,” Jeffrey Long, a radiation oncologist in Louisiana, analyzes 613 cases reported on the website of his Near Death Research Foundation and concludes there is only one plausible explanation: “that people have survived death and traveled to another dimension.”

Skeptics say there is no way to verify such anecdotal reports—and that many of the experiences can be explained by neurobiological changes in the brain as people die.

In the 1980s, British neuroscientist Susan Blackmore theorized that oxygen deprivation was to blame and noted that fighter pilots also encountered tunnel vision and hallucinations at high altitudes and speeds.

This year, a study of 52 cardiac-arrest patients in Slovenia, published in the Journal of Critical Care, found that the 21% who had near-death experiences also had high blood levels of carbon dioxide, which has been associated with visions, bright lights and out-of-body experiences.

A study of seven dying patients at George Washington University Medical Center, published in the Journal of Palliative Medicine, noted that their brainwaves showed a spurt of electrical activity just before they were pronounced dead. Lead investigator Lakhmir Chawla, an intensive-care physician, notes that the activity started in one part of the brain and spread in a cascade and theorized that it could give patients vivid mental sensations.

Matt Damon, left, plays a psychic in the movie ‘Hereafter,’ which explores themes of the afterlife.

Some scientists have speculated that the life review some patients experience could be due to random activation of the dying brain’s memory circuits. The sensation of moving down a tunnel could be due to long-buried birth memories suddenly retrieved. The feeling of peace could be endorphins released during extreme stress.

Other researchers say they have produced similar experiences by stimulating neurons in parts of the brain—or by giving patients ketamine, a tranquilizer and sometime party drug.

Yet researchers who have studied near-death experiences note that such experiments tend to produce only fragmentary visions and hallucinations, not the consistent, lucid and detailed accounts of events that many resuscitated patients report. One study found that people who had near-death experiences had higher blood oxygen levels than those who didn’t.

Several follow-up studies have found that people undergo profound personality changes after near-death experiences—becoming more altruistic, less materialistic, more intuitive and no longer fearing death. But some do suffer alienation from spouses or friends who don’t understand their transformation.

Other relatives understand all too well.

Raymond Moody, who coined the term near-death experience in his 1975 book “Life After Life,” explores the even stranger phenomenon of “shared death experiences” in a new book, “Glimpses of Eternity.” He recounts stories of friends, family and even medical personnel who say they also saw the light, the tunnel and accompanied the dying person partway on his or her journey. “It’s fairly common among physicians who are called to resuscitate someone they don’t know—they say they’ve seen a spirit or apparition leave the body,” says Dr. Moody.

Meanwhile, in his book, “Visions, Trips and Crowds,” David Kessler, a veteran writer on grief and dying, reports that hospice patients frequently describe being visited by a deceased relative or having an out-of-body experience weeks before they actually die, a phenomenon called “near-death awareness.”  While some skeptics dismiss such reports as hallucinations or wishful thinking, hospice workers generally report that the patients are otherwise perfectly lucid—and invariably less afraid of death afterward.

Mr. Kessler says his own father was hopeless and very sad as he was dying. “One day, he had an amazing shift and said, ‘Your mother was here—she told me I’d be dying soon and it will be fine—everyone will be there.’”

Dr. Parnia, currently an assistant professor of critical care at State University of New York, Stony Brook, says verifying out-of-body experiences with pictures on the ceiling is only a small part of his study. He is also hoping to better understand whether consciousness exists apart from the brain and what happens to it when the brain shuts down. In near-death experiences, people report vivid memories, feelings and thought processes even when there is no measurable brain activity.

“The self, the soul, the psyche—throughout history, we’ve never managed to figure out what it is and how it relates to the body,” he says. “This is very important for science and fascinating for humankind.”

Melinda Beck, Wall Street Journal



Chilean President Wrote ‘Deutschland Über Alles’ in German Guest Book

Diplomatic Gaffe

“Deutschland Über Alles:” Chilean President Sebastian Pinera wrote his controversial dedication into the official guest book of German President Christian Wulff (left).

In a gesture of thanks for Germany’s help in rescuing the 33 Chilean miners, President Sebastián Piñera wrote the historically charged slogan ‘Deutschland Über Alles’ into the guest book of German President Christian Wulff last week. Now Wulff’s office is pondering how to remove the words.

Chilean President Sebastián Piñera has apologized for writing the words “Deutschland Über Alles,” a phrase frowned on in Germany because of its association with the Nazi era, into the official guest book of German President Christian Wulff during a visit to Berlin last week.

Media reports claimed Piñera had said on Monday that he had learned the slogan in school in the 1950s and 1960s and understood it to be a celebration of German unification in the 19th century under Chancellor Otto von Bismarck. He said he was unaware that it was “linked to that country’s dark past.”

The first verse was dropped from the anthem after World War II because it was deemed too nationalistic. Piñera had been on a European trip to thank countries for their help in freeing the 33 Chilean miners. A spokesman for Wulff’s office played down the gaffe on Monday, saying the president had no doubt intended to express something positive about Germany.

Bild’s Loser of the Day

Piñera isn’t the only one to have unwittingly broken the taboo. Even experienced Europeans have done so. Last year, the French presidential office was so excited at the prospect that Chancellor Angela Merkel would attend the official celebrations to mark the French victory in World War I, the first German leader ever to do so, that its press department announced that the choir of the French army would sing “Deutschland Über Alles” at the event, the Frankfurter Allgemeine Zeitung newspaper reported at the time.

The mistake was spotted in time and the choir confined itself to singing the third verse, which has been officially used since the end of World War II, starting with the inoffensive words: “Unity and justice and freedom for the German fatherland!”

Bild, Germany’s best-selling tabloid newspaper, responded to the faux pas by declaring Piñera its loser of the day, a regular item on its front page, on Tuesday. “He’s better at rescuing miners,” the paper declared.

Meanwhile, “Deutschland Über Alles” continues to sully the pages of Wulff’s guest book. Wulff’s office now plans to discuss the matter with the Chilean embassy in Berlin. Piñera may get a chance to revise his entry.



The Proto-Surrealist

Arcimboldo’s ‘Vertumnus’ (c. 1590).

The late, legendary S. Lane Faison Jr., professor emeritus of art history at Williams College, responded to over-the-top works of art with a vigorous “Hoo boy! Whoops a daisy!” He tended to reserve this evocative phrase for High Baroque extravaganzas and the apses of 18th-century Austrian churches, but I suspect he might have applied it to “Arcimboldo, 1526-1593: Nature and Fantasy,” the small, engaging exhibition dedicated to one of the most peculiar artists of the 16th century, on view at Washington’s National Gallery of Art. At once an exploration of a side-road of Mannerist painting, a brief survey of natural history in the late Renaissance, and an inquiry into perception itself, the show brings together paintings, prints, illustrated books, ceramics and bronzes united by their devotion to the apparently mutually exclusive worlds of nature and the fantastic.

Even those unsure about the pronunciation of “Arcimboldo” (are-cheem-BOLD-oh) will probably recognize his extraordinary “composite heads”—a genre that he apparently invented—in which sometimes comical, sometimes sinister likenesses are conjured up with clusters of fruits, vegetables and gourds, with flowers, twigs and sea creatures, and even, memorably, with books. The exhibition brings together 16 of these puzzling pictures, ranging from allegorical personifications of the elements and the seasons to portraits and witty images in which seemingly straightforward, if tightly packed, still lifes turn into heads when inverted. (Strategically placed mirrors at the National Gallery allow us to participate in the joke.) The selection includes many of Arcimboldo’s most characteristic, best-known heads, painted between 1563 and 1590—from about the time he left his native Milan for Vienna, seat of the Holy Roman Empire, to become court painter to Maximilian II, until a few years after the homesick Italian was allowed to return to Milan while remaining in the service of Maximilian’s successor, Rudolph II.

Little is known about how Arcimboldo attracted the attention of the Hapsburg court. He was, like his artist father, associated with the workshop of Milan’s vast cathedral, designing frescoes, banners, stained glass and the like. Of this early work, only a few unexceptional windows have survived, nothing that suggests extraordinary talent. He may have been known for illustrations of the natural world—a few have emerged—or else, then as now, connections helped in obtaining prestigious appointments.

Certainly the paintings Arcimboldo made for his Hapsburg patrons announce his mastery of the high realism for which Lombardy became known, a tradition based on close observation of nature, thought to be influenced by Leonardo da Vinci during his 17 years in the service of Ludovico Sforza, Duke of Milan. The flora and fauna that make up Arcimboldo’s weird profiles are exquisitely and accurately rendered, their details and textures meticulously accounted for. It is believed, too, that Arcimboldo had first-hand acquaintance with Leonardo’s drawings of grotesque heads, many of which belonged to a family friend; the irregular profiles of the composite heads often have remarkable cognates in Leonardo’s distorted profiles.

Scholars find allegorical allusions to Hapsburg power in Arcimboldo’s “portraits” of the elements and the seasons, deciphering coded references to dominion over the world. Most of us concentrate on the obsessive virtuosity of the depictions of individual elements—a diagram identifies more than 60 sea creatures and a seal in the personification of water—on the shifting scale among these elements, and on the sheer strangeness of the images. (Not surprisingly, it was the Surrealists, with their taste for dislocation, who rescued Arcimboldo from centuries of obscurity.)

We struggle to see these playful, slightly disturbing images; our interpretation constantly changes. Drawn to the particulars, we try to amalgamate them into an illusionistic head, then get seduced by details again, unable to reconcile the two readings. We recognize the wonderfully painted peaches and pear suggesting the fleshy cheeks and nose of “Vertumnus” (c. 1590), note his peapod eyelids and cardoon moustache, then fleetingly manage to see this paean to abundance as a portrait of the robust Rudolph II, before losing ourselves in cabbage leaves, olives, a blackberry eye, and the glistening cherries of his protruding Hapsburg lip. Least appetizing? “The Jurist” (1566), thought to represent a famously ugly legal scholar’s scarred face by means of plucked chickens and a fish. Most improbable? “The Librarian” (c. 1566), a superb three-quarter portrait constructed with stacked and tipped books; only the picture’s impeccable provenance convinces us that it isn’t a Cubist effort.

At the National Gallery, Arcimboldo’s extravagant composites are illuminated by the presence of some of Leonardo’s bestial grotesque heads, along with drawings and illustrated books documenting the cinquecento’s burgeoning interest in the natural history of both the New and Old World, recorded with scientific accuracy. An enchanting marmot by Jacopo Ligozzi competes with Albrecht Dürer’s cowslips and the charming red squirrel of a Dürer contemporary, Hans Hofmann. Polychrome ceramic plates with high-relief amphibians and bronzes of real and invented creatures remind us that Arcimboldo’s composite heads were once displayed in kunstkammers, along with miscellanies of man-made and natural curiosities. Suddenly, the chicken/fish portrait of “The Jurist” doesn’t seem so odd.

Ms. Wilkin writes about art for the Journal.



Goodbye Basil, Hello Pumpkin Seeds

Ten—no, 11!—delicious, beyond-the-obvious pestos to add to your arsenal

Clockwise from left: Lardo and rosemary, cherry tomato and almond, walnut, arugula and pistachio pestos

Pesto is a gift from summer—a nutty, herby distillation of a sweet-smelling, sunshine-loving herb. But fall doesn’t have to mean giving it up altogether. The classic basil version is just one interpretation of an open-ended technique: The word “pesto” has its roots in the Italian word for “pestle,” and it means the technique of using a mortar and pestle (or more often nowadays, a food processor) to make a flavorful paste combining garlic, nuts and oil with vegetables or herbs. In pesto’s birthplace, ingredients like parsley, mint and olives commonly end up in the mix. Fall, especially now when spinach and broccoli are approaching their peak, is the perfect time to experiment—and to try one of these more seasonable pesto recipes from top chefs around the country. Make extra: It’ll keep in an airtight container in the fridge for a few days. Better yet, freeze it in a Ziploc bag and you can stay sauced through the winter.

Arugula + Basil + Almonds


Blanch two cups arugula and three-quarters cup basil leaves separately. Shock, squeeze dry and puree in a food processor with a garlic clove, a little parsley, slivered almonds, olive oil, salt and a lot of ground pepper. —chef Matthew Accarrino, SPQR in San Francisco

Use it: Tossed with fusilli and ricotta salata

Walnuts + Grapeseed Oil


In a food processor, blend a half-cup each of olive and grapeseed oils with a half clove of garlic until garlic is finely chopped. On medium speed, incorporate a cup of walnuts. Process on high until mixture is smooth. Season with sherry vinegar, salt and pepper. —chef Marc Vetri, Vetri in Philadelphia

 Use it: Tossed with fresh pappardelle or farro penne

Cherry Tomatoes + Almonds


Blend 2½ cups cherry tomatoes, a garlic clove, a half-cup slivered almonds, 12 basil leaves, a pinch crushed red pepper and a big pinch of salt to a fine purée. While blending, pour in a half-cup olive oil in a steady stream until pesto emulsifies into a thick purée. Season. —chef Lidia Bastianich, “Lidia’s Italy” (PBS)

Use it: Tossed with hot spaghetti

Pistachios + Breadcrumbs + Mint


Blanch a quarter-cup raw pistachios in boiling water for two minutes. Remove, cool and process with a quarter-cup breadcrumbs, three tablespoons of olive oil, two tablespoons of chopped mint, a pinch of Aleppo pepper (available in Middle Eastern markets) and a garlic clove, pulsing until well mixed and smooth. Season with salt and pepper to taste. —chef Chris Cosentino, Incanto in San Francisco

 Use it: Tossed with roasted potatoes or Brussels sprouts

Lardo + Rosemary


In a mortar and pestle, mash a quarter clove of garlic and a pinch of salt until a paste begins to form. Add a teaspoon each of chopped rosemary and black pepper and continue to crush. Add a quarter pound of lardo, and mash ingredients together until pesto is smooth. Season to taste. —chef Cesare Casella, dean of The Italian Culinary Academy in New York

 Use it: Spread on toasted slices of crusty bread

Marjoram + Parsley + Walnuts


In a mortar and pestle, pound three garlic cloves and a pinch of salt into a mash. Pound six sprigs worth of marjoram leaves into mix. Do same with parsley leaves until you have rough paste. Cover paste with three-quarters cup olive oil. Add half-cup chopped walnuts. Taste for salt. —chef Russell Moore, Camino in Oakland, Calif.

 Use it: Spooned over sautéed mushrooms or grilled sea bass

Rapini + Parmesan + Porcini


Blanch one bunch of rapini for about four minutes, shock in a bowl of ice water, squeeze dry and chop finely. Purée the rapini, two garlic cloves, one cup olive oil, and a pinch of salt in a food processor until very smooth. Transfer to bowl. Stir in a half-cup grated Parmesan. Sauté a third of a pound of porcini mushrooms in butter until they are colorless and soft. Cool, purée and fold into the rapini mix. —chef Ethan Stowell, Staple & Fancy Mercantile in Seattle

 Use it: Tossed with a short twisted pasta like gemelli

Parsley + Garlic + Olive Oil

Purée a half-cup of flat leaf parsley, two garlic cloves, a cup of olive oil, a large pinch of salt and up to eight turns of the pepper mill in a blender until mixture is smooth. Taste and adjust seasoning. —chefs Frank Falcinelli and Frank Castronovo, Frankies Spuntino, Brooklyn, N.Y.

 Use it: Brushed on sliced crusty bread before toasting

Pumpkin Seeds + Basil + Parmesan


Blend five tablespoons of pumpkin seeds, two cups of basil, a clove of garlic and salt until pureed. Pour into a large mixing bowl. Add two-thirds cup grated Parmesan and a quarter-cup olive oil, stirring until the pesto is smooth and creamy. —chefs Tony Mantuano and Sarah Grueneberg, Spiaggia in Chicago

 Use it: Spooned over cheese ravioli

Pecans + Parsley + Dates


Pulse a half-cup pecans, a half-cup parsley leaves, a quarter-cup Parmesan, a half-cup pecan oil and a teaspoon of kosher salt in a food processor until combined, but not totally puréed. Transfer to bowl. Fold in four chopped dates and two teaspoons balsamic vinegar. —chef Alon Shaya, Domenica in New Orleans

Use it: Spooned over duck, pork or ricotta spread on grilled bread

Pumpkin Seeds + Spinach 


Blend four cups spinach and one cup parsley in a food processor with just enough olive oil to make a semithick paste (about a half cup). Add two tablespoons toasted pumpkin seeds and blend well. Transfer to bowl. Add one crushed amaretto cookie and three tablespoons grated Parmesan. Add salt and pepper, and adjust oil to desired consistency. —chef Marc Bianchini, Osteria del Mondo, Milwaukee, Wis.

 Use it: Spooned over scallops or stuffed inside an omelet

Pervaiz Shallwani, Wall Street Journal


Poem of the week

Poem by John Cornford

‘The heartless world’ … Madrid during the Spanish civil war.
John Cornford was one of the first British volunteers for the Spanish civil war. Born in 1915, he was the son of the classicist Francis Cornford and the poet Frances Cornford. They christened him Rupert John in memory of their great friend, the poet Rupert Brooke, but the first name was later dropped, as his father explained, because it seemed too romantic. John Cornford joined the Young Communist League at the age of 18, and became a full Party member at 20. Newly graduated from Cambridge, with a “starred” first and a brightly promising future, he left for Spain to fight for the Republican cause in August, 1936, and joined the anti-Stalinist POUM (The Workers’ Party of Marxist Unification). He fought in the battles for Madrid and Boadilla, and was killed on the Cordoba front in December, either on or just after his 21st birthday.

“Poem,” this week’s choice, addresses the poet’s girlfriend and fellow political activist, Margot Heinemann. It owes nothing to Rupert Brooke, nor, surprisingly, to WH Auden. Cornford begins dramatically, as if to invoke some great, abstract power. His innovative stroke, the repetition of “heart” three times, is wonderfully successful. A surge of emotion is created with each repetition, and, every time, the word earns its place by acquiring a faintly different meaning, and tracing a movement from impersonal register to intimate. The “heart of the world” is certainly a romantic notion, with a Yeatsian echo, but the depiction of the world as “heartless” is closer to realism than romantic exaggeration, given the immediate context of war, and the wider background of the rise of fascism. Cornford then shifts attention finally from the general to the personal and particular. “Dear heart” tenderly singles out the addressee, and it defines the poem. This is not to be a poem centred on war and politics, like his other great literary achievement, “Full Moon at Tierz,” but a love poem.

The newly intimate tone suggests, also, a love letter. From now on the poem will be concerned with confiding immediate experience, especially inner experience. The voice is calm, candid and direct, brave but without bravado. This bravery is not wholly connected to war: it is about confronting emotion. “The pain at my side” reminds us that war’s injuries are not only physical, not only in the body. Yet the absence of a loved one is felt so acutely it’s like an accompanying physical presence.

This idea recurs in the third stanza, where the speaker suggests a childlike device by which to transcend the absence. He uses the same rhyme-word, “side”, and the sad, high-pitched sound of stanza one is repeated, but now there is “pride”, and the hope of an intense, visionary comfort. The idea that love can be communicated telepathically, and the beloved’s presence conjured by her sufficiently “kindly” thinking, is so simply and touchingly put that it seems neither arch nor fanciful. Once more, Cornford brings the addressee into the poem with an endearment – this time, simply the familiar, informal “dear.”

The second stanza expands the sense of chill introduced by the “shadow”. Those first two lines, with the fluttering rhythm and the favourite “i” sounds of “rises” and “reminds” convey premonition and sighing loneliness. That the main verb, “reminds,” is used intransitively compounds the feeling of dislocation.

With its strong, often trochaic, rhythm, the poem invites us to hear the footsteps of marching troops. Even love is like a ghostly soldier who trudges beside the poet on that “last mile.” The death that he fears is embodied almost alliteratively by the name of the town, “Huesca”. Constant little rhythmic adjustments ensure there is not a trace of monotony, but the ebb and flow of complicated feeling – fear, and the fear of fear, conviction, courage, longing for comfort – like a landscape flowing past.

The passionate apostrophe at the poem’s beginning is what moves us, and draws us in, but something else keeps us reading, something less dramatic and more truthful, almost matter-of-fact. This quieter tone is sustained to the end, where the last wishes are simple, declared with exemplary plainness.

In fact, after its first romantic flourish, the poem demonstrates many of the classical virtues: proportion, self-discipline, the integration of mind and body. You feel as if you have been presented with a photograph of a young soldier’s inner life. He is a passionate lover and a passionate warrior: these qualities are held in perfect psychic balance. And they are timeless. The speaker could be one of Homer’s heroes. He could be a Spartan at Thermopylae.

It is impressive that such a stately and achieved lyric should have been written under such pressure, and by a writer still only 20. As a “last letter” it is neither raw nor prosaic, and, with or without the reader’s knowledge of Cornford’s sacrifice, it stands as one of the most moving and memorable 20th-century love poems.


Heart of the heartless world,
Dear heart, the thought of you
Is the pain at my side,
The shadow that chills my view.

The wind rises in the evening,
Reminds that autumn is near.
I am afraid to lose you,
I am afraid of my fear.

On the last mile to Huesca,
The last fence for our pride,
Think so kindly, dear, that I
Sense you at my side.

And if bad luck should lay my strength
Into the shallow grave,
Remember all the good you can;
Don’t forget my love.


Stories vs. Statistics

Half a century ago the British scientist and novelist C. P. Snow bemoaned the estrangement of what he termed the “two cultures” in modern society — the literary and the scientific. These days, there is some reason to celebrate better communication between these domains, if only because of the increasingly visible salience of scientific ideas. Still, a gap remains, and so I’d like here to take an oblique look at a few lesser-known contrasts and divisions between subdomains of the two cultures, specifically those between stories and statistics.

I’ll begin by noting that the notions of probability and statistics are not alien to storytelling. From the earliest of recorded histories there were glimmerings of these concepts, which were reflected in everyday words and stories. Consider the notions of central tendency — average, median, mode, to name a few. They most certainly grew out of workaday activities and led to words such as (in English) “usual,” “typical,” “customary,” “most,” “standard,” “expected,” “normal,” “ordinary,” “medium,” “commonplace,” “so-so,” and so on. The same is true about the notions of statistical variation — standard deviation, variance, and the like. Words such as “unusual,” “peculiar,” “strange,” “original,” “extreme,” “special,” “unlike,” “deviant,” “dissimilar” and “different” come to mind. It is hard to imagine even prehistoric humans not possessing some sort of rudimentary idea of the typical or of the unusual. Any situation or entity — storms, animals, rocks — that recurred again and again would, it seems, lead naturally to these notions. These and other fundamentally scientific concepts have in one way or another been embedded in the very idea of what a story is — an event distinctive enough to merit retelling — from cave paintings to “Gilgamesh” to “The Canterbury Tales,” onward.

The idea of probability itself is present in such words as “chance,” “likelihood,” “fate,” “odds,” “gods,” “fortune,” “luck,” “happenstance,” “random,” and many others. A mere acceptance of the idea of alternative possibilities almost entails some notion of probability, since some alternatives will come to be judged more likely than others. Likewise, the idea of sampling is implicit in words like “instance,” “case,” “example,” “cross-section,” “specimen” and “swatch,” and that of correlation is reflected in “connection,” “relation,” “linkage,” “conjunction,” “dependence” and the ever too ready “cause.” Even hypothesis testing and Bayesian analysis possess linguistic echoes in common phrases and ideas that are an integral part of human cognition and storytelling. With regard to informal statistics we’re a bit like Molière’s character who was shocked to find that he’d been speaking prose his whole life.

Despite the naturalness of these notions, however, there is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled. A drily named distinction from formal statistics is relevant: we’re said to commit a Type I error when we observe something that is not really there and a Type II error when we fail to observe something that is there. There is no way to always avoid both types, and we have different error thresholds in different endeavors, but the type of error people feel more comfortable with may be telling. It gives some indication of their intellectual personality type, and of which side of the two cultures (or maybe two coutures) divide they’re most comfortable on.

People who love to be entertained and beguiled or who particularly wish to avoid making a Type II error might be more apt to prefer stories to statistics. Those who don’t particularly like being entertained or beguiled or who fear the prospect of making a Type I error might be more apt to prefer statistics to stories. The distinction is not unrelated to that between those (61.389% of us) who view numbers in a story as providing rhetorical decoration and those who view them as providing clarifying information.

The so-called “conjunction fallacy” suggests another difference between stories and statistics. After reading a novel, it can sometimes seem odd to say that the characters in it don’t exist. The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true. Congressman Smith is known to be cash-strapped and lecherous. Which is more likely? Smith took a bribe from a lobbyist or Smith took a bribe from a lobbyist, has taken money before, and spends it on luxurious “fact-finding” trips with various pretty young interns. Despite the coherent story the second alternative begins to flesh out, the first alternative is more likely. For any statements, A, B, and C, the probability of A is always greater than the probability of A, B, and C together since whenever A, B, and C all occur, A occurs, but not vice versa.

This is one of many cognitive foibles that reside in the nebulous area bordering mathematics, psychology and storytelling. In the classic illustration of the fallacy put forward by Amos Tversky and Daniel Kahneman, a woman named Linda is described. She is single, in her early 30s, outspoken, and exceedingly smart. A philosophy major in college, she has devoted herself to issues such as nuclear non-proliferation. So which of the following is more likely?

a.) Linda is a bank teller.

b.) Linda is a bank teller and is active in the feminist movement.

Although most people choose b.), this option is less likely since two conditions must be met in order for it to be satisfied, whereas only one of them is required for option a.) to be satisfied.
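The arithmetic behind the fallacy can be checked in a few lines of code. This is my own illustrative sketch, not anything from the article; the 5 percent and 60 percent marginal probabilities are invented, and the point is that the inequality holds no matter what values are chosen:

```python
import random

random.seed(1)
TRIALS = 100_000

teller = 0                # trials where "Linda is a bank teller"
teller_and_feminist = 0   # trials where the conjunction also holds

for _ in range(TRIALS):
    # Invented, purely illustrative marginal probabilities.
    is_teller = random.random() < 0.05
    is_feminist = random.random() < 0.60
    if is_teller:
        teller += 1
        if is_feminist:
            teller_and_feminist += 1

# The conjunction can only be counted when the single event occurs,
# so its count can never exceed the single-event count.
print(teller >= teller_and_feminist)  # True
```

The code makes the logical structure visible: the conjunction counter sits inside the single-event branch, which is exactly why option b.) can never be more probable than option a.).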

(Incidentally, the conjunction fallacy is especially relevant to religious texts. Imbedding the God character in a holy book’s very detailed narrative and building an entire culture around this narrative seems by itself to confer a kind of existence on Him.)

Yet another contrast between informal stories and formal statistics stems from the extensional/intensional distinction. Standard scientific and mathematical logic is termed extensional since objects and sets are determined by their extensions, which is to say by their member(s). Mathematical entities having the same members are the same even if they are referred to differently. Thus, in formal mathematical contexts, the number 3 can always be substituted for, or interchanged with, the square root of 9 or the largest whole number smaller than pi without affecting the truth of the statement in which it appears.

In everyday intensional (with an s) logic, things aren’t so simple since such substitution isn’t always possible. Lois Lane knows that Superman can fly, but even though Superman and Clark Kent are the same person, she doesn’t know that Clark Kent can fly. Likewise, someone may believe that Oslo is in Sweden, but even though Oslo is the capital of Norway, that person will likely not believe that the capital of Norway is in Sweden. Locutions such as “believes that” or “thinks that” are generally intensional and do not allow substitution of equals for equals.

The relevance of this to probability and statistics? Since they’re disciplines of pure mathematics, their appropriate logic is the standard extensional logic of proof and computation. But for applications of probability and statistics, which are what most people mean when they refer to them, the appropriate logic is informal and intensional. The reason is that an event’s probability, or rather our judgment of its probability, is almost always affected by its intensional context. 

Consider the two boys problem in probability. Given that a family has two children and that at least one of them is a boy, what is the probability that both children are boys? The most common solution notes that there are four equally likely possibilities — BB, BG, GB, GG, the order of the letters indicating birth order. Since we’re told that the family has at least one boy, the GG possibility is eliminated and only one of the remaining three equally likely possibilities is a family with two boys. Thus the probability of two boys in the family is 1/3. But how do we come to think that, learn that, believe that the family has at least one boy? What if instead of being told that the family has at least one boy, we meet the parents who introduce us to their son? Then there are only two equally likely possibilities — the other child is a girl or the other child is a boy, and so the probability of two boys is 1/2.
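A short Monte Carlo sketch (my own, not from the article) makes the difference between the two ways of learning “at least one boy” concrete:

```python
import random

random.seed(42)
TRIALS = 200_000

# Case 1: we are told only that "at least one child is a boy".
both = told_boy = 0
for _ in range(TRIALS):
    kids = [random.choice("BG"), random.choice("BG")]
    if "B" in kids:
        told_boy += 1
        if kids == ["B", "B"]:
            both += 1
p_told = both / told_boy   # approaches 1/3

# Case 2: the parents introduce one child at random, who turns out to be a boy.
both = met_boy = 0
for _ in range(TRIALS):
    kids = [random.choice("BG"), random.choice("BG")]
    if random.choice(kids) == "B":
        met_boy += 1
        if kids == ["B", "B"]:
            both += 1
p_met = both / met_boy     # approaches 1/2

print(round(p_told, 2), round(p_met, 2))  # roughly 0.33 and 0.5
```

The two cases differ only in how the conditioning event is generated, which is precisely the intensional point: the same fact, learned differently, supports different probabilities.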

Many probability problems and statistical surveys are sensitive to their intensional contexts (the phrasing and ordering of questions, for example). Consider this relatively new variant of the two boys problem. A couple has two children and we’re told that at least one of them is a boy born on a Tuesday. What is the probability the couple has two boys? Believe it or not, the Tuesday is important, and the answer is 13/27. If we discover the Tuesday birth in slightly different intensional contexts, however, the answer could be 1/3 or 1/2.
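The surprising 13/27 can likewise be verified by simulation. This sketch is mine, with Tuesday arbitrarily coded as weekday 1; it conditions on the event “at least one boy born on a Tuesday”:

```python
import random

random.seed(7)
TRIALS = 500_000

both = tuesday_boy = 0
for _ in range(TRIALS):
    # Each child gets a sex and a birth weekday (0..6; call 1 "Tuesday").
    kids = [(random.choice("BG"), random.randrange(7)) for _ in range(2)]
    if any(sex == "B" and day == 1 for sex, day in kids):
        tuesday_boy += 1
        if all(sex == "B" for sex, _ in kids):
            both += 1

p = both / tuesday_boy
print(round(p, 3))  # close to 13/27, i.e. about 0.481
```

The seemingly irrelevant weekday matters because two-boy families have two chances to contain a Tuesday boy, so they are slightly over-represented among the families that satisfy the condition.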

Of course, the contrasts between stories and statistics don’t end here. Another example is the role of coincidences, which loom large in narratives, where they too frequently are invested with a significance that they don’t warrant probabilistically. The birthday paradox, small world links between people, psychics’ vaguely correct pronouncements, the sports pundit Paul the Octopus, and the various bible codes are all examples. In fact, if one considers any sufficiently large data set, such meaningless coincidences will naturally arise: the best predictor of the value of the S&P 500 stock index in the early 1990s was butter production in Bangladesh. Or examine the first letters of the months or of the planets: JFMAMJ-JASON-D or MVEMJ-SUN-P. Are JASON and SUN significant? Of course not. As I’ve written often, the most amazing coincidence of all would be the complete absence of all coincidences.
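The birthday paradox mentioned above needs no simulation at all; a few lines of exact arithmetic (my own sketch, using the standard 365-equally-likely-days model) show how quickly a shared birthday becomes likely:

```python
def shared_birthday(n: int) -> float:
    """Probability that among n people at least two share a birthday
    (365 equally likely days, leap years ignored)."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1.0 - p_all_distinct

# The probability crosses 1/2 at just 23 people.
print(round(shared_birthday(23), 3))  # 0.507
```

With only 23 people the odds of a match are already better than even, which is why coincidences that feel story-worthy are, statistically, almost guaranteed to turn up somewhere.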

I’ll close with perhaps the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.

In the end, whether we resonate viscerally to King Lear’s predicament in dividing his realm among his three daughters or can’t help thinking of various mathematical apportionment ideas that may have helped him clarify his situation is probably beyond calculation. At different times and places most of us can, should, and do respond in both ways.

John Allen Paulos is Professor of Mathematics at Temple University and the author of several books, including “Innumeracy,” “Once Upon a Number,” and, most recently, “Irreligion.”



Ten of the best balls in literature

Roxana by Daniel Defoe

The high point for Defoe’s high-class courtesan is her “little ball” in her swanky London apartments. Even the king turns up, and she makes her grand entrance in Turkish dress, prompting all the Restoration beaux to chant “Roxana! Roxana!” (an exotic beauty popular from the Restoration stage). “My dress was the chat of the town for that week; and so the name of Roxana was the toast at and about the court”.

Mansfield Park by Jane Austen

Fanny Price cannot bloom unseen for ever. Sir Thomas Bertram stages a ball at which she will come out into society. She gets to dance with Edmund, which is nice, but has Henry Crawford at her too, with all his sexy compliments. By three o’clock in the morning Fanny is all “knocked up”, as her brother delicately puts it.

Childe Harold’s Pilgrimage by Lord Byron

“There was a sound of revelry by night”. Byron’s narrative poem re-enacts the famous Duchess of Richmond’s ball in Brussels before the battle of Waterloo. But the party has to end. “Ah! then and there was hurrying to and fro, / And gathering tears, and tremblings of distress, / And cheeks all pale, which but an hour ago / Blushed at the praise of their own loveliness”.

Vanity Fair by William Makepeace Thackeray

Thackeray’s Napoleonic magnum opus stages the very same ball. Dance, flirt and be merry, for tomorrow you may well die. Soppy Amelia’s husband is beguiled by her sexy, manipulative friend Becky and invites her to elope with him. Amelia slinks away, heartbroken.

Madame Bovary by Gustave Flaubert

Emma Bovary loves a ball but it always makes her discontented. After she and her dull husband attend a glamorous ball given by the Marquis d’Andervilliers she begins to chafe at the restrictions of provincial married life. Her ambition to consort with toffs has been awakened.

Anna Karenina by Leo Tolstoy

Kitty goes to a ball prepared to perform the first quadrille with Vronsky. Tolstoy seems to know not only about her feelings of excitement, but also about the arrangement of her tulle dress over her pink slip and elaborate coiffure “surmounted by a rose and two small leaves”. Everyone wants to dance with her, naturellement.

The Age of Innocence by Edith Wharton

The first great set-piece of Wharton’s society novel is Mrs Julius Beaufort’s annual ball. It is a magnificent affair, for the Beauforts have a ballroom, “used for no other purpose, and left for three-hundred-and-sixty-four days of the year to shuttered darkness”. At this ball, Newland Archer feels the pull of the fascinating Countess Olenska.

Beware of Pity by Stefan Zweig

Toni, a young cavalry officer, makes a mortifying blunder at a ball by inviting Edith to dance. He has not realised that she is lame and cannot walk. In the days that follow he calls on her to assuage his guilt and finds that she has fallen in love with him. But he does not love her; he only pities her.

Frederica by Georgette Heyer

There are more balls in Heyer’s oeuvre than in that of any other novelist. In this Regency romance, impecunious Frederica Merriville hopes to launch her beautiful younger sister into society and enlists the help of their louche relation the Marquis of Alverstoke. At a ball for his rich, stodgy niece, the Merriville girls shine and passions begin to boil. 

Carrie by Stephen King

Carrie, the sexually repressed girl with telekinetic powers, is taken to the prom ball by nice, handsome Tommy. He and she are voted king and queen of the ball, but, at the crowning moment, Carrie is doused in pig’s blood by a nasty rival. She lives to regret it . . .



Uncommon knowledge

Do you swear to tell the truth?

Getting kids to tell the truth can be challenging. Most parents likely think that talking with their kids about the morality of lying is the best approach, but new work suggests another way. Researchers asked kids between the ages of 8 and 16 to take a trivia test and told them that they would win $10 if they answered all the questions correctly. The kids were also told that the answers were inside the testing booklet but that they were not to cheat, even though they’d be left in a room alone. What the kids didn’t know was that a couple of the questions had no real answers, and that the experiment was being recorded by hidden cameras. After finishing the test, the kids were asked whether they had peeked at the answers. The majority had indeed cheated, and the overwhelming majority of those who peeked lied about it when first asked. Asking the kids to think about the morality of lying made little difference in getting them to recant. However, if the kids were asked to promise to tell the truth — the same approach used in the legal system — a significant number of the liars recanted.

Evans, A. & Lee, K., “Promising to Tell the Truth Makes 8- to 16-year-olds More Honest,” Behavioral Sciences & the Law (forthcoming).

If you send me to my room, the terrorists have won

Terrorism is bad enough as a security threat, but a team of researchers in Europe has found that thinking about terrorism can affect how we treat our own children. After being shown pictures of terrorism or reading or writing about terrorism, both parents and nonparents endorsed stricter parenting practices. Moreover, this pattern was confirmed with an experiment on actual behavior inside homes. After seeing pictures of terrorism, parents were more impatient, and showed more negative facial expressions, toward their children.

Fischer, P. et al., “Causal Evidence that Terrorism Salience Increases Authoritarian Parenting Practices,” Social Psychology (Fall 2010).

The case for making homework a choice

Motivating kids to learn is at the heart of education. According to a new study, there is a simple but effective way to encourage kids to want to learn on their own: give them a choice. In an experiment, high school students who were allowed to choose their homework assignments (covering the same material) reported more interest, enjoyment, and competence regarding their homework, and they scored higher on a subsequent test of the material.

Patall, E. et al., “The Effectiveness and Relative Importance of Choice in the Classroom,” Journal of Educational Psychology (forthcoming).

The ‘freeze’ response

In the wild, animals are known to freeze if they sense danger lurking nearby. This behavior — including bradycardia (slowed heart rate) — has been demonstrated in humans, too. But, of course, we aren’t usually being hunted in our neighborhoods and workplaces, so researchers wondered if the same effect also occurs for social threats. Women were fitted with biometric sensors and asked to stand on a motion-measuring platform while viewing different facial expressions. When they saw angry faces, the women “froze” — their bodies swayed less, and their heart rates dropped.

Roelofs, K. et al., “Facing Freeze: Social Threat Induces Bodily Freeze in Humans,” Psychological Science (forthcoming).

Kevin Lewis is an Ideas columnist.




Attention passengers: It’s perfectly safe to use your cellphones

With more than 28,000 commercial flights in the skies over the United States every day, there are probably few sentences in the English language that are spoken more often and insistently than this: “Please turn off all electronic devices.”

Asking why passengers must turn off their mobile phones on airplanes seems like an odd question. Because! A sentence said so often must surely have a reason behind it. Or does it?

Flight attendants are required by the Federal Communications Commission to make their preflight safety announcement because of “potential interference to the aircraft’s navigation and communication systems.” Perhaps this seems like a no-brainer: of course you should turn off your cellphone inside a piece of technology as sensitive as an airplane. In our civilized times, few imaginable acts are more likely to lead to direct physical conflict with the person in the seat next to you than turning on your cellphone during takeoff and nonchalantly calling your hairdresser to reschedule that appointment next Wednesday. In Great Britain, a 28-year-old oil worker was sentenced to 12 months in prison in 1999 for refusing to switch off his cellphone on a flight from Madrid to Manchester. He was convicted of “recklessly and negligently endangering” an aircraft.

Yet with people losing their freedom over the rule, it may come as a bit of a surprise that scientific studies have never actually proven a serious risk associated with the use of mobile phones on airplanes. In the late 1990s, when cellphones and mobile computers became mainstream, Boeing received reports from concerned pilots who had experienced system failures and suggested the problems may have been caused by laptops and phones the cabin crew had seen passengers using in-flight. Boeing actually bought the equipment from the passengers but was unable to reproduce any of the problems, concluding it had “not been able to find a definite correlation between passenger-carried portable electronic devices and the associated reported airplane anomalies.”

The National Aeronautics and Space Administration released a study in 2003, stating that of eight tested cellphone models, none would be likely to interfere with navigation or radio systems of the aircraft — systems which are, of course, carefully shielded against all sources of natural or artificial radiation by design. Another study by the IEEE Electromagnetic Compatibility Society concluded in 2006 that “there is no definitive instance of an air accident known to have been caused by a passenger’s use of an electronic device.”

The same study also found that, on average, one to four calls are illegally made during every flight, meaning that there are tens of thousands of phone calls from American airplanes every day — and still no definitive evidence of a problem.

What makes the ban of mobile phones in the United States look even more odd is that it doesn’t exist in other parts of the world. The European Aviation Safety Agency lifted the ban in 2007. “EASA does not ban the use of mobile phones on board as they are not considered to be a threat to safety,” says EASA spokesman Dominique Fouda. Several airlines like Ryanair and Emirates have since allowed passengers to use their phones during flights. According to EASA, some American airlines will soon allow the use of cellphones outside of US airspace.

While the safety argument sounds like a neat story every passenger would understand, there seems to be a second, more important reason for the ban. According to the Federal Aviation Administration, the current ban by the Federal Communications Commission has not been issued for security concerns, but “because of potential interference with ground networks,” says FAA spokeswoman Arlene Salac. An airplane with activated mobile phones flying over a city could cause these several hundred phones to simultaneously log into a base station on the ground, perhaps overloading it and threatening the network.

Europeans seem to not worry about this problem, since European airlines allowing cellphones install base stations inside each aircraft, forwarding all calls through the plane’s satellite system, charging passengers by the minute. If all phones are logged into the base station on the airplane, they will not cause trouble on the ground.

But even if the FCC were to revoke the ban, the FAA’s current regulations for the certification of electronic equipment would apply. This would mean air carriers would have to show that every particular cellphone model is compatible with every particular airplane type. With hundreds of cellphone models released every year, this would mean a continuing source of cost for airlines, while the only benefit would be the convenience of passengers.

In the end, the ban of mobile phones on airplanes might not be a story about safety concerns, but about the psychology of governmental agencies. Bureaucracy, in theory, is designed to eliminate irrationality by replacing the biased judgment of individuals with a system of fixed requirements. Bureaucracies are machines to make judgments according to the best objective knowledge available. Given that, and the suspicion that the threat by mobile phones is indeed minor, how is it possible that two bureaucratic agencies, the FAA and the FCC, act with disproportionate caution? Is the apparatus not so rational after all?

“The point of bureaucracy is to have a less emotive discussion. But that doesn’t mean you get rid of that factor,” says Daniel Carpenter, professor of government at Harvard University.

When it comes to the question of allowing people to use their mobile phones, the bureaucratic incentive to do so could not be weaker. For any agency involved in this, two errors are possible. The first is what Carpenter calls an error of commission: The agency allows mobile phones and something bad happens, either an airplane crash or a network failure on the ground. The other possible error is one of omission: The agency fails to allow the use of mobile phones, though they are safe, and people subsequently cannot make phone calls while on the airplane.

“One of these errors is much more vivid and evocative. The error of not letting people talk on cellphones when they should — it’s hard to see people dying from that,” says Carpenter.

This suggests the most important reason mobile phones are still banned on airplanes might be the absence of anger — the fact that passengers are not organizing and demanding the right to make calls.

Still, there might be yet another way of thinking about the issue. Despite the current ban, Congress debated the “Halting Airplane Noise to Give Us Peace Act” (also known as the “Hang Up Act”) in 2008, which would have prohibited all voice communications on commercial flights. The bill was never voted on, but the reasoning behind it was simple: No calls in airplanes, not because the calls are dangerous — but because they are so annoying.

Justus Bender is a reporter with Die Zeit, a weekly newspaper based in Hamburg, Germany.



I could care less

A loathed phrase turns 50

It was 50 years ago this month — Oct. 20, 1960 — that one of America’s favorite language disputes showed up in print, in the form of a letter to Ann Landers. A reader wanted Ann to settle a dispute with his girlfriend: “You know that common expression: ‘I couldn’t care less,’ ” he wrote. “Well, she says it’s ‘I COULD care less.’ ”

Ann voted with her reader — “the expression as I understand it is ‘I couldn’t care less’ ” — but she thought the question was trivial. “To be honest,” she concluded, “this is a waste of valuable newspaper space and I couldn’t care less.”

She couldn’t have known it at the time, but her reader’s trivial question would be wasting newspaper space (and bandwidth, too) for decades, as it blossomed into one of the great language peeves of our time. In 1972, Ann’s sister and fellow advice-peddler, Dear Abby, used “could care less” in print herself, and got an earful from readers. In 1975, the Harper’s usage dictionary declared that “could care less” was “an ignorant debasement of the language.” (Said panelist Isaac Asimov: “I don’t know people stupid enough to say this.”) In 1979, William Safire declared in his New York Times column that “could care less” had finally run its course: “Like most vogue phrases, it wore out its welcome.”

But three decades on, “could care less” is flourishing. Ben Zimmer, examining its career last year in a column at the language website Visual Thesaurus, reported that “could care less” had steadily gained ground in edited prose. In American speech, according to research by linguist Mark Liberman, “could care less” is far ahead of the “couldn’t” version. And “could care less” is no recent corruption, Zimmer found; it shows up in print by 1955, only 11 years after the first sighting of “couldn’t care less.”

As Liberman observed in a 2004 post at Language Log, “could care less” is not uniquely odd. Its pattern is familiar in other phrases like “I could give a damn” (and its ruder variants), and in the lyrics of Sammy Cahn’s 1940s classic, “I Should Care.” But whatever its sources — sarcasm, irony, Yiddish, or (as its detractors say) ignorance — “could care less” is snugly embedded in the American idiom. Yet the complaints keep rolling in.

Half a century, it’s true, is not excessively long in the world of usage disputes. This is one of the mysteries of peevology: Why do certain innovations annoy people, year after year, while other changes pass unnoticed? Why are some terms “skunked,” in the coinage of usage maven Bryan Garner — trapped awkwardly between the traditional usage and the emerging sense — for decades? Why do others shift and adapt, almost unremarked, right under our noses?

Among the peeves of 100 years ago, there are plenty of short-lived scandals, nits nobody has picked since the Treaty of Versailles. Usagists once scorned ovation (for “applause”) because the word “really” meant a minor Roman triumph. Dirt was supposed to mean “filth,” not good clean soil. Reliable was called a “monstrous” coinage, practitioner “a vulgar intruder.” But none of these rulings had much effect.

In our time, bemused has quietly shifted its sense from “befuddled” to something like “wryly or quizzically amused.” Apparently everyone finds it more useful in its new role, because objections (though they have been recorded) are relatively rare. The transition from “was graduated from college” (once the proper form) to “graduated from,” in the 19th century, met little resistance, and the 20th-century move to the simpler “graduated college” is well underway.

Other peeves just won’t die. Aggravate was aggravating Latin-minded usage writers in the 1860s, and you still hear from people who think it should mean only “make worse,” not “annoy.” Other issues nearing the 150-year mark are the propriety of “there’s two more,” the use of decimate to mean “destroy,” and the debate between “taller than I” and “taller than me.” Compared to these hardy perennials, “could care less” is a mere sprout.

But these days, we can circulate a lot more opinion in any given week. In its contentious half-century, “could care less” has probably generated as much usage comment as aggravate has in 150 years. And the volume isn’t slacking off: Last month in Reader’s Digest, this month in the Simmons College Voice, all over the Web, sober professionals and spelling-impaired amateurs continue to insist that “I could care less” really must mean “I care to some extent.” But it doesn’t; it never has; it never will.

Around the Internet, a popular saying (variously attributed) defines insanity as “doing the same thing over and over and expecting a different result.” After 50 years, it’s not likely that the next iteration of the argument against “could care less” will change American usage. So let’s stash the phrase in the “idioms” bin, along with “head over heels” and “have your cake and eat it too,” and forget about it. Truly, there is nothing more to say.

Jan Freeman, Boston Globe



Johnny has two mommies – and four dads

As complex families proliferate, the law considers: Can a child have more than two parents?

“To an unconventional family.” That’s what Paul, the roguish restaurateur and sperm donor, raises his glass to in this summer’s movie “The Kids Are All Right.” Paul is, he has recently discovered, the biological father of two teenage children, one by each partner in a long-term lesbian couple. Contacted by the kids, he has come into their lives and begun to compete for the affections of various members of the family he unknowingly helped create. Complications — funny, then sad — ensue.

The film’s family is indeed unconventional, but it is not unique. In the age of assisted reproductive technology, the increasing acceptance of same-sex partnerships, and a steady growth in “blended” families, more parents and more children are finding that traditional notions of the nuclear family don’t accurately reflect their lives and relationships.

Still, even in a time of changing attitudes about who can be a parent, the legal and social definition of a family still has certain rules — a family can be run by a single mom or a single dad and, increasingly, by two moms or two dads, but it can’t have three parents, or four. For a long, long time — going back to when the English common law first started codifying such things — the law has set the maximum number of parents a child can have as two. Only two people, in other words, can enjoy the unique set of rights to determine a child’s life — and the unique set of responsibilities for the child’s welfare — that legal parenthood entails. That matches how most people think about parenthood: Two people, after all, are how many it usually takes to make a baby in the first place.

Now a few family-law scholars have begun to argue that there is nothing special about the number two — if three or four or five adults have a parental relationship with a child, the law should recognize them all as parents. Going beyond two, these scholars argue, would better reflect the dynamics of the modern family, and also protect the children in such families. It would ensure that, even in the event of a split or major disagreement between the adults in question, the children would not be deprived of the affection, care, and financial resources of any of the people they have grown up regarding as their mothers and fathers.

“The law needs to adapt to the reality of children’s lives, and if children are being raised by three parents, the law should not arbitrarily select two of them and say these are the legal parents, this other person is a stranger,” says Nancy Polikoff, a family-law professor at American University’s Washington College of Law.

In a few recent cases, courts seem to have agreed with the calls for multiple parents. But critics argue that tinkering with the definition of parenthood in this way threatens to dilute the sense of obligation that being a parent has always carried, and that increasing the number of legal parents only raises the likelihood that family disputes will arise and get messy and find their way into court. Not to mention that having judges routinely declare that Heather has two mommies and three daddies would represent a radical cultural shift, and one that, like gay marriage, many will find threatening.

Ultimately, the legal definition of parenthood is part of a broader philosophical question: What is a family? And what is it for? While some scholars have focused on expanding the number of parents, others argue that the law needs to do more to recognize the social context in which families exist, and the extent to which child care is actually performed by people who aren’t part of the nuclear family at all.

And as supporters of revising the definition of parenthood point out, there’s nothing tidy or biologically preordained about today’s prevailing notion of parentage, one that often has to shoehorn families jumbled and reassembled by divorce, adoption, and reproductive technology into one standard model, in ways that can prove disruptive to the families in question.

“The law determines what makes someone a legal parent, not marriage, not biology. Those things don’t determine who is a parent, the law does,” says Polikoff.

When Sharon Tanenbaum and Matty Person, a married lesbian couple in San Francisco, decided to have a child together, it wasn’t hard to figure out who they wanted the sperm donor to be. Bill Hirsh was one of Sharon’s oldest friends, they had known each other, Sharon says, “since we were born, more or less.” Their fathers had been best friends in college, and Sharon and Bill had grown up spending summers together and calling each other’s parents aunts and uncles.

Sharon, Matty, and Bill agreed that Bill would be more than just a source of genetic material — they wanted him to be a father. When Sharon had a son, Jesse, in 1994, the boy lived with Sharon and Matty, but growing up he spent one day a week with Bill and Bill’s same-sex partner, Thompson. In addition, the whole family would gather once a week for dinner.

Legally, however, Sharon and Bill were Jesse’s parents, and that put Matty in a potentially precarious position. “Let’s say I died in some terrible car crash or whatever and Matty had no legal rights, and let’s say she and Billy had a falling out or one of my parents or brother wanted to take care of Jesse,” Sharon says. In that case, Matty could have had Jesse taken away from her altogether.

At the same time, no one in the family wanted to force Bill to give up his parental status. So, when Jesse was 4, their lawyer persuaded the San Francisco Superior Court to allow Matty to do a third-parent adoption. The move, which had little precedent, gave Jesse three parents, three people who, in the event of a split, could demand custody or visitation rights and would be responsible for paying child support.

Asked why it was so important to recognize all three of them in the eyes of the law, Sharon responds, “When you look back on your life, there’s a big difference between your father and your uncle and your parents’ best friends. There are certain rights and responsibilities that also come with being a parent, and those rights and responsibilities only come with being a parent.”

Third-parent adoptions remain extremely rare, and only a handful have been done, mostly in Massachusetts and California. But some legal scholars see in them the seeds of a larger shift in how the law defines parenthood. These advocates point to a few recent court decisions that suggest a willingness to recognize more than two parents.

It would not be the first time that American law has changed the rules of parenthood. According to Polikoff, in the English common law from which American law is derived, children born out of wedlock before the 19th century had, legally speaking, no parents at all. They were filius nullius. By the 1800s, however, their status had changed — legal parentage was automatically assigned to the mother. If she was unmarried, she was the sole parent; if she was married, her husband was the father, regardless of whether he was biologically related.

In the 20th century, the most significant change in parenting law was erasing the distinction between legitimate and illegitimate children. Until the 1960s, the law regularly denied rights to children born out of wedlock: the right to collect worker’s compensation benefits or Social Security survivor benefits for a dead parent, for example, or sue for a parent’s wrongful death or inherit in the absence of a will (so-called intestate succession). With the sexual revolution, of course, popular attitudes about marriage changed, and the law changed with them. In decisions in 1968 and 1972, the Supreme Court struck down state statutes penalizing children born to unmarried mothers. The states claimed the laws encouraged marriage, but the justices focused on the fact that the penalties were largely aimed at the children.

Today’s proponents of expanding the definition of parenthood argue that restricting the number of parents to two people also disadvantages children, at least those in certain nontraditional households. If a child grows up thinking of more than two people as parents, these lawyers and legal scholars argue, then the law should protect those relationships and the emotional connection and material support that come with them. Doing so may not be necessary as long as all of the parents get along and remain equally committed to the child — or children — but if the parents have a falling-out or if the custodial parents split up, then the people the law officially recognizes as parents hold all the cards, and can shut the others out of the child’s life.

In addition, in the eyes of the law, a child doesn’t have any claim on the financial resources of parental figures beyond the legally recognized two. The relationship is not unlike those of illegitimate children and their parents before 1968. With very few exceptions, it is today impossible for children to sue for child support, collect Social Security survivor benefits, or inherit by intestate succession from self-identified third or fourth parents, since the law doesn’t recognize the relationship.

To critics of the legal status quo, all of this means that, just as with illegitimacy laws, the courts are punishing children in the interest of preserving a traditional family structure, making their lives more uncertain by depriving them of emotional and financial support.

“I’m not saying all kids should have three [parents], or that two is good so why not three,” says Melanie Jacobs, a law professor at Michigan State University and author of a 2007 law review article entitled “Why Just Two?” “The law says someone is either a parent or a legal stranger, and in some cases that’s threatening to just take this person who has been a part of the child’s life out of the child’s life.”

Jacobs points to two recent decisions in particular that suggest how she would like courts to define parenthood in such families. In January 2007, the Ontario Court of Appeals granted full parental status to both members of a lesbian couple as well as their sperm donor, ruling that it was contrary to the child’s best interests to not recognize all three. In April of 2007, the Pennsylvania Superior Court was faced with a custody decision involving a child’s biological mother and her same-sex partner, who had split up, and a donor who had been a significant presence in the child’s life. The court ruled that all three should have custodial rights and that all three were responsible for child support. Additionally, in July of this year, the attorney general’s office in British Columbia proposed allowing for more than two parents in cases of sperm and egg donation.

Recognizing multiple fathers or multiple mothers, however, doesn’t necessarily mean that they all have the same rights. In the Pennsylvania case, the court did not decide that all three parents had equal custody or were responsible for the same amount of child support. Jacobs in particular has argued that expanding the number of legal parents a child has requires that courts begin to allow for degrees of legal parenthood, what she calls a scheme of “relative rights.” Whereas today the law tends to see someone as either a parent or a nonparent, she argues that it should instead recognize gradations. For example, she argues, a known sperm donor should perhaps have certain parental rights and responsibilities — visitation and the obligation to pay some child support — but not the right to demand custody.

For critics, “disaggregating” the rights and responsibilities of parenthood, as Jacobs suggests, exposes a larger problem with the idea of expanding beyond two in the first place. Traditional legal definitions of parenthood, though they may not exactly correspond with every family’s day-to-day reality, do lay out a set of hard and fast, inescapable obligations. If courts begin to experiment and innovate with what being a parent means, that may create uncertainty, and even a sense that parental obligations to children may be more negotiable than they once were.

June Carbone, a law professor at the University of Missouri-Kansas City, points to research Deirdre Bowen at Seattle University has done that suggests that in same-sex couples with a child, there’s a great deal of ignorance and miscommunication about what the legal rights and responsibilities of each parent are.

“I think it is very important that there be a shaping of expectations at the outset,” Carbone says.

Opponents of the change also worry that increasing the number of parents increases the odds of disagreements — over everything from where the child goes to school and what religion to raise him to how much time he spends with which parent — and the odds that those disagreements get litigated.

“Expanding the number of parents that would have rights to a child could, on the upside, expand the number of people who have responsibilities to that child, but it also expands the number of people who have a claim on that child, and who could come into conflict with the other parents,” says Elizabeth Marquardt of the Institute for American Values, a nonprofit dedicated to encouraging traditional two-parent households.

Whether or not multiple parentage gains wider legal and social acceptance, the fact that it’s being debated — and, in a few cases, allowed — suggests the flexibility that the concept of parenthood has taken on today, not only among scholars, but among adults doing the work of actually raising children in sometimes unorthodox situations. It’s part of a broader reexamination of what it means to have a family, a conversation that is itself only a chapter in a story that has unfolded over hundreds of years. That constant push and pull has been shaped by religion and law, custom and economics, and its inflection points are not only changes like the abolition of illegitimacy, but the revision of adoption laws, the relaxation of divorce requirements, the movement in some states to legalize same-sex marriage, and even the debate, in places as different as late 19th-century Mormon Utah and the contemporary Netherlands, over the permissibility of polygamy.

Some of those changes remain deeply controversial, of course. And yet there are other aspects of the contemporary family that, while they would strike people of an earlier era as deeply unnatural, today go all but unremarked: the fact, for example, that it’s common for grandparents to live not with their children and grandchildren but instead hundreds of miles away. The family of the future may look similarly unfamiliar to us, and in ways we’re only beginning to discern.

Drake Bennett is the former staff writer for Ideas.




Should the word be used for things we can actually count?

McKay Stangler e-mails: ‘‘I was curious about your thoughts on the modern usage of ‘countless.’ The Oxford English Dictionary defines it as ‘that cannot be counted’; in other words, too many of something to count. I’ve noticed, however, that it has very nearly become a synonym for ‘many’ or ‘numerous.’ Do you have a sense of how and when the word started adopting this newly evolved meaning?’’

Countless falls into a family of adjectives that, when taken literally, imply an infinitude but in practice refer more loosely to a vast number. Others in this family include incalculable, immeasurable, inestimable, limitless and measureless. Even infinite gets used in this hyperbolic fashion, and has since the age of Chaucer. (Think of Hamlet’s line: ‘‘What a piece of work is a man! How noble in reason? How infinite in faculty?’’)

As for countless, its traditional use has been for quantities that are, if not strictly uncountable, at least too immense to allow for easy enumeration. A 1916 dictionary of similes supplies some typical literary examples to fill the slot in the phrase ‘‘as countless as ___’’: stars in the sky, grains of sand in the desert, motes of dust in a sunbeam, drops of water in the ocean. Thus, canonically countless items are natural objects that are so profuse that they defy human attempts to number them.

Has the sense of countless been weakening in recent years? Anecdotal evidence might suggest so. Stangler provides a few examples of usage from a week of New York Times coverage that he finds questionable. On Cuba: ‘‘Workers were being laid off in countless industries, from hospitals to hotels.’’ In an obituary of a voiceover actor: ‘‘As the narrator of countless movie trailers (his wife estimated he did 3,000), Mr. Gilmore was an especially effective pitchman.’’ In an article about community farming: ‘‘Even without the community agriculture program, there is tree pruning all winter and countless other tasks.’’

In all three cases, the nitpickiest among the Times readership could find grounds for complaint. If we set our minds to it, we probably could count all the industries in Cuba, the trailers narrated by Mr. Gilmore or the tasks on a community farm. But these things would not be easy to count, and that is generally what countless now implies. (The first sentence is guilty of a graver journalistic transgression: ‘‘from hospitals to hotels’’ is a false range generally scorned by copy editors.)

Though I haven’t found any complaints about the exaggerated use of countless in any of the standard usage guides, the writer David Foster Wallace, a well-known stickler on grammatical matters, seems to have been attuned to the word’s overextension. In a short story called ‘‘My Appearance,’’ he tells of an actress going on David Letterman’s late-night talk show. Letterman mentions her ‘‘three quality television series’’ and ‘‘countless guest-appearances on other programs.’’ The actress replies matter-of-factly, ‘‘A hundred and eight.’’ Letterman corrects himself with ‘‘virtually countless guest-credits.’’ A hedging word like virtually, nearly or almost can help to tone down the hype of countless. Or why not mix it up with another adjective like myriad or multitudinous? The possibilities are limitless (well, not quite).

Ben Zimmer, New York Times




In the mid-1980s, Meredith Maran, a thirtysomething wife and mother of two young boys, came to believe that, when she was a little girl, her father had molested her. She wasn’t absolutely certain. She didn’t remember any such heinous act, nor did she have any evidence, outside of vague nightmares, strange “flashbacks” and intensely complicated feelings about her father. But she did have a Greek chorus of women thinking similar thoughts, including feminist psychologists, activists and therapy patients, a number of whom she knew personally in the San Francisco Bay-area lesbian community that is her home.

Ms. Maran, who recanted her accusation a decade later, does not go easy on herself in “My Lie,” a memoir of her journey through what she calls “Incest Nation.” As a journalist for the San Jose Mercury News and the editor of a book on the subject, she admits that she “helped to spread the panic” as incest accusations raged in the 1980s and early 1990s. Ms. Maran gullibly embraced all the gothic charges that occupied that hysterical time: Satanic rituals at day-care centers, multiple personalities caused by long-forgotten traumas, the claim that one in three girls was a victim of sexual abuse.

She also doesn’t shy from describing the poison she injected into her own family. Her growing obsessions helped to break up her marriage as she began to think of her sympathetic husband as another “predatory male.” She deprived her boys of their beloved grandfather. Terrified by paternal perfidy so close to home, her young niece began to fear her own father. That father, Ms. Maran’s brother, fretted that he, too, had been victim of the abuse.

The most aggrieved victim of the story, the self-involved but harmless patriarch, Stan Maran, spent his last decade before descending into Alzheimer’s knowing that his daughter believed the worst thing a daughter can believe about a father and knowing that his once happy, now traumatized, third wife was considering divorce.

“I’d found the perpetrator and it was me,” Ms. Maran concedes. Still, for all the soul-searching, “My Lie” is as much a defense as a mea culpa. She interviews neuroscientists about the chemical roots of groupthink but fails to ask why, before she had any inkling of her putative abuse, when she was married to the likable father of her two sons, she was drawn to a group of radical feminists, “wommin” whose lives were defined by therapy sessions, self-defense classes and the incest-survivor’s bible, “The Courage to Heal.”

After her marriage ended, Ms. Maran had a long, live-in relationship with a clearly disturbed woman who was convinced that she had been molested by her father (who had died when she was 5) and who was haunted by fantasies of dark-robed people chanting at forest campfires when she was a child. Neuroscience can’t explain Ms. Maran’s decision to pick this woman, belatedly, as the stepmother of her sons.

Ms. Maran, who wrote about her involvement with leftist politics in a previous memoir, concludes here that her lie about her own personal experience was no different from the belief of some people that President Barack Obama is a Muslim or that Saddam Hussein possessed weapons of mass destruction. History is “rife with examples of the damage done when millions of people become convinced of the same lie at the same time.” This is political posturing substituting for self-knowledge, a distinction you’d hope the author of a book called “My Lie” would have learned.

Kay Hymowitz, Wall Street Journal




Block That Adjective!

I am not at all sure—convinced, certain, persuaded—that creative-writing courses are a good idea unless they prevent people from writing sentences like this one, where adjectives—useful, helpful, intensely descriptive words—are stacked upon one another as Pelion used to be piled upon Ossa. Phew! That sentence took some writing and ended, you will have noticed, with a rather useful classical allusion. Thank you.

My bête noire—and there is nothing wrong with using the occasional French expression, although one does not want to sound too much like a menu—is overwriting. Something is overwritten when there is just too much of it. This may be because the writer has labored the point and made a mountain out of a molehill, or because too many words are used. As a result, descriptions are cluttered and the prose quickly becomes unreadable. There is a lot of it about.

The problem is that we speak English. Some languages, such as English or Spanish, have immensely rich vocabularies: If we want to describe something in English, we have a wide choice of words at our disposal and can say what we want to say in many different ways. The problem does not occur if one is writing in, say, Melanesian Pidgin, where rather few words are at your disposal and most of them are pithy in the extreme.

For some people, being able to use all these words is rather like being faced with a chocolate box with multiple layers; the temptation to overindulge is just too great. The result is the use of too many adjectives, adverbs and subsidiary clauses. Such writing then begins to sound contrived. Nobody uses large numbers of adjectives when they think, and I believe that writing which one cannot actually think can very easily look wrong on the page.

The real aim, of course, is conciseness. Concise prose knows what it wants to say, and says it. It does not embellish, except occasionally, and then for dramatic effect. It is sparing in its use of metaphor. And it is certainly careful in its use of adjectives. Look at the King James Bible, that magnificent repository of English at the height of its beauty. The language used to describe the creation of the world is so simple, so direct. “Let there be light, and there was light.” That sentence has immense power precisely because there are no adjectives. If we fiddle about with it, we lose that. “Let there be light, and there was a sort of matutinal,* glowing phenomenon that slowly transfused, etc.” No, that doesn’t work.

There is a place for the adjective and for the descriptive passage, but these must be carefully handled. A piece of prose that had no adjectives would very quickly become sterile; so it really is a question of restraint. There is a psychological reason for this: If somebody sets out in great detail what is before us, we very quickly become bored. That is not the way we see the world; we look for salience, we look for the feature that will engage our interest. Think about how we describe a cityscape. We do not list and describe every building, we refer to one or two. Manhattan, for instance, can be conjured up with a description of the spire of the Chrysler building; the reader’s imagination can do the rest.

And therein lies the problem. The trouble with overwritten prose is that it takes away from the reader the opportunity to imagine a scene. We do not want to be told everything; we want a few brushstrokes, a few carefully chosen adjectives, and then we can do the rest ourselves. It’s Roget’s fault, of course. I blame him and his wretched thesaurus. Put it away.

* of or pertaining to morning; don’t use this word.

Alexander McCall Smith is the author of more than 60 books, including the “No.1 Ladies’ Detective Agency” series.



Inspiration Revised

Mining the unconscious can be dull. Get me rewrite

When I was a 14-year-old aspiring writer, I wished more than anything for a book explaining the alchemy that transformed words to gold. How did poets cast such a spell? How did novelists spin their silk?

My biology text diagramed the Krebs cycle. My social studies teacher spelled out the principle of supply and demand. I wanted a comparable explanation for literature. I understood that art requires inspiration, not formulas. All the same, I wondered where I might find a road map to Dickens’s brilliance, or a Lonely Planet Guide to Poetry.

One day I found just such a book in my school library. In Aileen Ward’s biography, “John Keats: The Making of a Poet,” I learned the tragic story of Keats’s poverty, his vocation, his remarkable friendships, his love for Fanny Brawne and his death at age 26. I learned something else too: the story of Keats’s development as a writer.


Analyzing manuscripts, Ms. Ward showed that the Odes did not spring fully formed from their author’s imagination. Keats improved every line, crossing out conventional phrases and replacing them with stronger, rarer choices. Studying Keats’s revisions, Ward concluded that a poet is made, not born. What a startling, unromantic reading of a Romantic poet. What a remarkable assertion about writers: Even the great ones work for greatness.

Now here was a challenge, and an opportunity as well. Starting with inspiration and some talent, you could work to be a writer. You could keep revising, and improve.

Why was this idea so surprising and liberating for me? Like many literary teenagers, I believed that art was a matter of instinct—that the artist’s first impulse is the most authentic, that revision is something you do to essays but hardly applies to poetry or fiction. I pictured revision as drudge work, spoiling all that was fresh and original. But what if revision actually improved ideas?

I struggled with revision. As a young writer, I hated cutting paragraphs or pages I had labored over, and struggled to rearrange scenes or rethink characters. I’d revise when friends or editors pointed out problems, but I had trouble starting a revision on my own. Gradually, I learned to set my work aside for days and weeks and return to it with new distance and objectivity. Slowly, I began to identify my own awkward phrases and bad habits.

My writing improved when I unpacked sentences, searched for stronger verbs and cut meandering description. My ideas improved as well. Strange but true: What we write instinctively—the story that seems most immediate and personal—is often most conventional.

We grow up hearing that we should just be ourselves, and listen to our inner voices. But what if your authentic self won’t shut up? What if your inner voice is boring? In revision you cut excess verbiage. Revising, you can experiment with other voices.

It’s great to tap into your unconscious, but remember how impressionable the unconscious can be, how quick to absorb the tropes of television and romance and life-affirming or cautionary memoir. Revision means testing and questioning conventions, forging a path through the cultural clutter that we mistake for our own creativity.

As a teenager I put off revision for as long as possible. Now, I make revision part of my routine. I begin by rewriting the pages I wrote the day before. Art no longer seems like alchemy to me. Like a scientist, I test my ideas and hone the words I use as instruments. Revision is a form of experimentation, art a method for discovery.

Allegra Goodman’s latest novel is “The Cookbook Collector.” She teaches a course on revision in the Master of Fine Arts program at Boston University.



What He Saw at the Revolution

A firebrand as opposed to a strong national government as he was to British tyranny.

‘I know not what course others may take, but as for me,” Patrick Henry famously declared at a revolutionary convention of his fellow Virginians on March 23, 1775, “give me liberty, or give me death!” The war for independence was inevitable, he said—and in fact Lexington and “the shot heard round the world” were less than a month away. Even after Lexington, there were moderates, like John Dickinson of Pennsylvania, who still hoped for reconciliation with the mother country. But Henry, who had begun publicly flirting with treason a dozen years earlier, was definitely not among them.

Henry’s radical advocacy of independence is not his only legacy, as Harlow Giles Unger observes in “Lion of Liberty,” his vivid biography of the Virginia firebrand. A foe of a strong national government who fought against ratification of the federal Constitution, Henry helped bring about the addition of the Bill of Rights. And his championing of states’ rights had less fortunate reverberations down through the decades.

Our knowledge of Henry’s words and deeds at some crucial points is less than certain. The known text of his “liberty or death” speech, for instance, is a reconstruction made 40 years after the event. Mr. Unger at times brings his subject into a sharper focus than a strict adherence to what is surely known would permit. But it is illuminating to see Patrick Henry thus, part legend though the “lion” may be.

A self-taught back-country lawyer and spellbinding orator, Henry in 1763 at his first major trial denounced Britain’s putative tyranny in annulling an act by Virginia’s House of Burgesses. The measure let landowning parishioners pay their taxes to the Anglican Church in cash rather than the usual tobacco (which drought had made unusually precious). The act was needed for the people’s economic survival, Henry said; a king who overruled such an act “had degenerated into a tyrant and forfeited all right to his subjects’ obedience to his order of annulment.” Despite the cries of “treason,” the 27-year-old Henry effectively won the damages case brought by a clergyman and was carried from the courthouse in triumph.

Two years later, newly elected to the House of Burgesses, Henry raged against the supposed tyranny of Britain’s Stamp Act, which required the purchase of revenue stamps on legal documents and other items. The anti-Act resolutions that Henry put forward “represented the first colonial opposition to British law,” Mr. Unger writes.

The Stamp Act was the first direct tax imposed by Parliament on the colonies, the author notes, but the tax would have had only a trivial impact on the average American. Heavily in debt after the French and Indian War, and with its empire suddenly enlarged by the acquisition of Canada from the French, Britain not unreasonably thought that Americans should pay for imperial protection against Indian attacks. The stamp tax had been in effect in England for decades. And because the franchise was so restricted, most taxpayers there—despite Henry’s claim to the contrary—had no more representation in Parliament than the Americans did. Even so, Parliament’s extension of the tax was ill-timed, Mr. Unger says: “Increased duties were already strangling the American economy.”

By asserting that only Virginia’s General Assembly had the right to impose taxes on Virginians, and by warning of what might happen if George III persisted in tyranny, Henry once again provoked cries of “treason”—but Virginia adopted his resolutions against the Stamp Act, and other colonies soon followed suit. “Mr. Henry gave the first impulse to the ball of the revolution,” Thomas Jefferson said. Over the next 10 years, as that ball received more such impulses, Henry seems never to have factored into his “liberty or death” calculations the risk that an armed struggle for independence would also turn into a civil war, as of course it did. In Henry’s apparent view, averting civil war was not up to intransigent radicals like himself.

Elected governor in 1776 and then twice re-elected, Henry became an effectual wartime executive (unlike his successor, Jefferson), even taking on what Mr. Unger calls “dictatorial powers” in a “political turnabout [that] was nothing more than a statesman’s adaptation to changing realities.” After the war, the champion of small farmers in Virginia’s Piedmont hills served two more terms as governor before retiring to private life.

Out of office in 1787, Henry refused to attend the Constitutional Convention, later saying that he had “smelt a rat.” As opposed to a strong national government as he had been to British “tyranny,” Henry claimed that a coup d’état was in progress and said that the proposed constitution “squints towards monarchy.” He objected not only to the absence of a bill of rights but also to the federal government’s powers to tax the people without their state legislature’s consent and to send troops into any state to enforce federal laws.

Without Henry’s and others’ active opposition, there would have been no Bill of Rights. But he was not satisfied with the outcome, declaring: “This Constitution cannot last.” And just as Henry had predicted, Mr. Unger notes, “tyranny” ensued. “Congress imposed a national whiskey tax without the consent of state legislatures—much as the British had done with the stamp tax—and President Washington sent troops to crush tax protests in western Pennsylvania, much as the British had in Boston.” With the Alien and Sedition Acts of 1798, President John Adams and Congress suppressed free speech and freedom of the press. Despite such infringements of liberty, Henry did not urge taking up arms against the government.

A few months before his death in June 1799—he was the father of 18 children by then and a wealthy man thanks to his law practice and land speculation—Henry advised: “We should use all peaceable remedies first before we resort to the last argument of the oppressed—revolution—and avoid as long as we can the unspeakable horrors of civil war.” But tragically, Mr. Unger writes, Henry’s passionate struggle for states’ rights had “sowed the seeds of secession in the South” for the Civil War to come.

Mr. Landers is the author of “An Honest Writer: The Life and Times of James T. Farrell.”



The Mastery of Georges Simenon

He created a world that you can smell and taste, that you enter in riveted fascination

“I was born in the dark and in the rain and I got away. The crimes I write about are the crimes I would have committed if I had not got away.” In this celebrated statement—from an interview with the New Yorker—the novelist Georges Simenon, creator of Inspector Maigret, dramatized his own life with a characteristic mixture of self-congratulation and false modesty. But Simenon (1903-89) was not just shooting a line, and the evidence is to be found in “Pedigree,” his lengthy but little-read autobiographical novel. Written in the dark days of Nazi-occupied France, “Pedigree” (1948) stands alone among the author’s mature novels because it took him more than two years to write, rather than the usual three weeks.

“Pedigree” is an unforgettable picture of the Belgian city of Liège and its people as observed by the innocent but pitiless eye of a very unusual little boy. It is a Dickensian portrait, with poverty, crime, lunacy, wealth, corruption, and mockery, but a complete absence of Dickensian sentimentality. The story opens with the birth of Roger Mamelin in 1903 and ends with the liberation of the city from German occupation in November 1918.

PARIS NOIR: Brassaï’s 1934 photograph of headlights on Paris’s Avenue de l’Observatoire.

The author objected to the book being called an “autobiographical novel,” but the details of Roger’s life are too close to those of Simenon’s for argument. Roger’s parents, the houses the family inhabited in the working-class district of Outremeuse, the schools Roger attends, the aunts and uncles and cousins of his extended Flemish-Walloon family, the Russian and Jewish lodgers his mother takes in, are all just like those in Simenon’s life. In many cases, the novelist did not even bother to alter the names.

This Liège is a place where the crowded streets are dominated by lethal electric trams and the market is made lively by battling, foul-mouthed fishwives. As a child, Simenon noticed and remembered the “fat, pink arms of the dairymaid,” the smell of eggs and bacon in the kitchen before a summer’s day picnic in the wooded heights outside the city, and the rituals of Catholic life and, more particularly, death. Then there were the horrors of war and occupation—no fuel, no food, the terror of collective punishments and all the prettiest girls on the arms of German soldiers.

The only hero in Roger’s life is his father, Désiré, an honorable failure: a tall, trustworthy insurance clerk whom the little boy adores—all equally true of Simenon’s father, Désiré. In “Pedigree,” Désiré is married to the monstrous Élise. The battle between Roger and his mother dominates the novel, with the child struggling to understand the volcanic, unloving personality that fate had given him for a mother.


A Reader’s Guide to Simenon

Simenon became world-famous for Inspector Maigret, the good police detective who solved crimes through intuition and a shrewd understanding of human frailty. There are 76 Maigret books, most of which evoke a pungent world of 1950s Paris and provincial France: street markets, warm bars, cold beer and a policeman with the patience of the hound of heaven. The best include “The Madman of Bergerac” (1932), “Maigret’s Dead Man” (1948), “Maigret on Holiday” (1948), “Maigret and the Calame Report” (1955), “The Patience of Maigret” (1965) and “Inspector Maigret and the Burglar’s Wife” (1951). You couldn’t go wrong starting with any of these mysteries.

But Simenon also wrote 117 literary novels, which he called romans durs: psychological stories that examine the behavior of apparently nondescript characters at a time of extreme personal crisis. The greatest of these have been ranked among the finest French-language fiction of the 20th century. Between 1946 and 1955, Simenon lived in America, mainly in Arizona and Connecticut, and during this period he produced many of his best novels. “Three Beds in Manhattan” (1946) is a study of sexual jealousy and fear of loss. “Act of Passion” (1947) takes the form of a letter written by a convicted murderer, a doctor in a small French town, to the judge who condemned him. “The Hitchhiker” (1955) takes place on Labor Day on the road between New York and Maine and is the story of an alcoholic whose wife is kidnapped by a killer on the run. “Dirty Snow” (1948), considered by many to be his finest novel, takes place in an unidentified country under German occupation during World War II. Its anti-hero is an 18-year-old youth who commits abject crimes but refuses to break under torture.

Simenon’s is a world that you can smell and taste and that you enter in riveted fascination. His characters stay with you for life. There is the drunken lawyer in “Strangers in the House” (1939), for instance; or the elderly sisters in “Poisoned Relations” (1938), trapped in mutual hatred inside the family home. There is also the building contractor in “The Accomplices” (1955), watching as the police close in on the hit-and-run driver who has killed a bus full of school children. Although you may be appalled by the imaginary world that the novelist inhabited, you are not repelled. On the contrary, you are drawn back to it again and again. So fecund was Simenon’s imagination that there can be no short list of his finest novels. Any such summary must include, along with the titles mentioned above, “The Engagement” (1933), “The House by the Canal” (1933), “The Man Who Watched the Trains Go By” (1938), “Monsieur Monde Vanishes” (1945), “The Heart of a Man” (1950), “The Door” (1962) and “The Little Saint” (1965).

Patrick Marnham

This drama comes straight from the author’s childhood. “The Simenons,” he once said, “took life as a straight line, the Brülls [his mother’s family] came from a tormented race.” From the start of the story, Simenon emphasizes the contrast between Roger’s father’s French-speaking Walloon family and his mother’s Flemish relations. At the time of his birth in 1903, sophisticated or ambitious Belgians spoke French, the language of the country’s dominant group, and Flemish speakers were patronized or treated with contempt. The division grew worse during the 20th century when Belgium suffered two brutal German occupations and Flemish-Belgians were accused of being less anti-German.

Shortly after World War I ended, Désiré Simenon died, and one year later Georges, aged 19, left Liège and never lived in Belgium again. He moved to Paris, started to write pulp fiction and eventually created Inspector Maigret. One of the models for the inspector was undoubtedly Désiré, the merciful father, now brought back to life as the just policeman who exemplifies Georges Simenon’s motto: “Understand, don’t condemn.” But there are also some touches of the autobiographical: Maigret knows the criminal world and studies human nature; he operates on intuition, like a novelist. “Pedigree” shows where the creator of Maigret gained some of this knowledge.

At the age of 15, Simenon (like the novel’s Roger Mamelin) was living in a city made desperate by four years of military occupation. He abandoned his schooling and hesitated on the verge of a life of crime. He was tempted by the black market. He joined his mother on food-smuggling ventures. He had friends who procured girls for prostitution, and together they discussed opportunities for blackmail. He was saved by chance; his father became gravely ill, and Georges was told to leave school and find a job.

By 1939, when war broke out again, Simenon was a highly successful popular novelist who had decided to terminate the Maigret series and work to win the Nobel Prize with his romans durs (“hard novels”), as he called his literary fiction. His working methods were notorious. He did not just write his stories; he lived them. He immersed himself in the personality of his leading character, went into “a sort of trance” and, possessed by the world he was creating, worked in short bursts at tremendous speed.

He would type a page every 20 minutes, 1,500 words an hour, 4,500 words a day for 20 days. In this way he could produce three or four books a year and take nine or more months off. While he was writing he could drink two liters of red wine a day and still lose weight. His children would watch him from the window, notice how his walk changed and try to guess what sort of character would emerge in the next book. But “Pedigree” was different. He did little else in 1942 except write this book. He worked on it in 1941 and in 1943 as well.

The period when “Pedigree” was written explains much. Living again under German occupation, Simenon returned in imagination to his own childhood. War had traumatized him as a boy, and his relationship with his mother, Henriette, was a lifelong wound. For the purposes of the novel, the author conflated the anguish, making Roger’s mother, Élise, half-German, whereas in real life Henriette Simenon was entirely Flemish.

The other clear departure from biography was that Roger Mamelin is an only child, whereas Georges had a younger brother, Christian. In 1944, as German forces retreated from Belgium, Christian Simenon went on the run, accused of collaboration. On the advice of Georges, he joined the French Foreign Legion and was killed fighting in Indochina in October 1947. Henriette never forgave Georges for helping his younger brother to join the Foreign Legion.

Simenon insisted that “Pedigree” was a book in which “everything is true while nothing is accurate.” But the story was close enough to real life for three people to sue him successfully for libel. (He had to pay damages and cut several passages from the French-language editions of the novel.) His version of the truth was a novelist’s psychological truth, and the most important truth he revealed in “Pedigree” was the identity of his lifelong muse.

Simenon married twice and enjoyed long-standing affairs with two domestic servants; he died in the arms of a maid originally hired by his second wife. But the woman who drove his work was none of these; nor was it any of the 10,000 women he famously claimed to have conquered. It was his mother, the over-apologetic, proud little lodging-house proprietor whose standards he never managed to reach and who never loved him as she loved his younger brother.

Shortly before she died in 1970, Henriette visited Georges in Switzerland, where he was living the life of a millionaire, and returned every penny of the money he had sent her over the years. When she died, Simenon’s inspiration died too. The man who had published 76 Maigrets and 117 dark novels battled on for 12 months and then gave up writing fiction.

Mr. Marnham is the author of “The Man Who Wasn’t Maigret.”



Still Under Cleopatra’s Spell

The Romans were the first, but hardly the last, to be unnerved by female ambition, authority and allure

How is it possible that Cleopatra continues to enchant, 2,000 years after her sensational death? It helps that, with her suicide in 30 B.C., she brought down two worlds; with her went both the 400-year-old Roman Republic and the Hellenistic age. Egypt would not recover its autonomy until the 20th century.

Shakespeare and G.B. Shaw lent a hand in her immortality, of course, as did Cleopatra’s eloquent Roman critics. She endures for reasons beyond the fame and talent of her chroniclers, however; the issues that she raised continue to fluster and fascinate. Nothing enthralls us so much as excessive good fortune and devastating catastrophe. As ever, we lurch uneasily between indulgence and restraint. Sex and power still combust in spectacular ways.

And we remain unnerved by female ambition, accomplishment and authority. The wise woman mutes her voice in order to maintain her political or corporate constituency. She is often cast all the same as a scheming harridan or a threatening seductress. Her clothing budget attracts uncommon scrutiny, by definition either too large or too small. If she is not overly sexual, she is suspiciously sexless.

For reasons that remain murky, Julius Caesar invited Cleopatra to Rome in 46 B.C. Though her fortune had dwindled from that of her forebears—she was the last of the Ptolemies, the Greek dynasty that ruled in Egypt after the death of Alexander the Great—she remained the richest person in the Mediterranean world. A decade earlier, her father had traveled about Rome on the shoulders of eight men and with an escort of 100 swordsmen. He distributed lavish gifts left and right. There is little reason to believe that Cleopatra did things differently. The pageantry unsettled, as would a convoy of Maybachs in Paris today.

ELIZABETH TAYLOR retouches her makeup on the set of ‘Cleopatra’ in 1962.

In the late republic, that outsized wealth impugned her morals. To wax eloquent about someone’s embossed silver, sumptuous carpets or marble statuary was to indict him. In the Roman view, Cleopatra quite literally possessed an embarrassment of riches. This meant that every evil in the profligacy family attached itself to her. Well before she became the sorceress of legend—a reckless, careless destroyer of men—Cleopatra was suspect as a reckless, careless destroyer of wealth. Even if she never melted a pearl in vinegar, as legend has it, she could well afford to do so.

Cleopatra’s fortune derived from Egypt’s inexhaustible natural resources. Her kingdom was miraculously, effortlessly fecund, the most productive agricultural land in the Mediterranean. Its crops appeared to plant and water themselves. Those harvests—and Egypt’s absolutist government—accounted for the Ptolemaic fortune. Very little grew in or left Egypt without in some way enriching the royal coffers. And Cleopatra controlled the greatest grain supply in the ancient world. Rome stood at her mercy. She could single-handedly feed that city. She could equally well starve it if she cared to.

Wealth and culture also happened to share an address in Cleopatra’s lifetime. Compared to Alexandria, Rome qualified as a provincial backwater. It was still the kind of place where a stray dog might deposit a human hand under the breakfast table, where an ox could burst into the dining room. Alexandria remained the fashion capital, the center of learning, the seat of culture. If you wanted a secretary, a tutor or a doctor, you wanted one trained in Egypt. And if you wanted a bookstore, you dearly hoped to find yourself in Alexandria.

By contrast, it was difficult to get a decent copy of anything in Rome, which nursed a healthy inferiority complex as a result. Gulping down his envy with a chaser of contempt, a Roman found himself less awed than offended by Egypt. He wrote off extravagance as detrimental to body and mind, sounding like no one so much as Mark Twain, resisting the siren call of Europe many centuries later. Staring an advanced civilization straight in the face, the Roman dismissed it as either barbarism or decadence.

Egypt confounded as well for its exoticism. Nothing so much proved the point as the perceived femininity of the East, that beguiling, voluptuous realm of languor and luxury. There was something subversive about a land that exported a female goddess—the Isis temples in Rome were notorious spots for assignations—and a female pharaoh.

In Egypt, on the other hand, competence regularly trumped gender. Cleopatra followed to the throne a sister who had briefly succeeded in deposing their father. She could look to any number of female forebears who had built temples, raised fleets, waged military campaigns. And she came of age in a country that entertained a singular definition of women’s roles. They inherited equally and held property independently. They enjoyed the right to divorce and to be supported after a divorce. Romans marveled that in Egypt female children were not left to die. A Roman was obligated to raise only his first-born daughter. Egyptian women loaned money and operated barges, initiated lawsuits and hired flute players. They enjoyed rights women would not again enjoy for another 2,000 years.

Not only was a Roman woman without political or legal rights, she was often without a personal name. Caesar had two sisters, both named Julia. A good woman was an inconspicuous woman, something that rather defied Cleopatra’s training. As ever, what kept a woman pure was the drudge’s life, of which Juvenal supplied the traditional formula: “Hard work, short sleep, hands chafed and hardened” from housework. For the Romans, a world ruled by a woman was a world turned upside down; like the north-flowing Nile itself, it reversed the course of nature. Female authority was in Rome a meaningless concept. This posed a problem for an Egyptian sovereign.

Cleopatra spoke many languages, flattery perhaps most fluently. Though famed for her charm and her powers of persuasion, she did not always temper her style. She was an autocrat who very much sounded the part. Few resented her tone as deeply as her Judaean neighbor, Herod the Great; the relationship between the two sovereigns proceeded by mutual betrayals. Complicating their dealings was each ruler’s friendship with Rome, the western superpower intent on maintaining peace between them. (Herod owed his crown in part to Roman fears of Cleopatra; he balanced power in a volatile corner of the world.) Cleopatra conspired to separate Herod from their mutual Roman friends. In turn, he proposed her assassination. All would be so much simpler, argued the Judaean king, if his henchmen simply eliminated the pesky Egyptian queen.

What else to do with a clever woman who could not be subjugated by the usual means? Cleopatra’s relationship with Mark Antony was the longest of her life, but that with Octavian, the future Caesar Augustus, would prove the more enduring. She allowed him to recycle the oldest trope: The allergy to the powerful woman was even sturdier than that to monarchy or to the impure, inferior East. Octavian delivered up the tabloid version of an Egyptian queen, insatiable, treacherous and decadent. To prepare the ground for Actium, the battle that would decide the future of Rome and at which Octavian would defeat Antony and Cleopatra, he needed a worthy opponent. He wisely oversold the enemy.

In Octavian’s version, Cleopatra assumed the role of the “wild queen,” lusting after Rome and plotting its destruction. For his one-time ally Antony to have succumbed to something other than a fellow Roman, she had to be a disarming seductress. Her powers had to be exaggerated because—for one man’s political purposes—she needed to have reduced another to abject slavery. And as ever, the easiest way to disarm a capable woman was to sexualize her. Herod did the same, expounding in the course of Cleopatra’s Jerusalem visit on her shameless behavior. Blushingly, he swore that she had forced herself upon him. As everyone knew, such was her wont. (She was at the time hugely pregnant with Mark Antony’s child.)

The divide between the civilized, virtuous West and the tyrannical, dissolute East began in part with Rome and its Egyptian problem. Cleopatra emerged as stand-in for her occult, alchemical land, the intoxicating address of sex and excess. She wielded power shrewdly and easily, making her that rarest of things: a woman who—working from an original script—discomfited the very male precincts of traditional authority. Two thousand years later, those tensions and anxieties have not relaxed their hold.

Stacy Schiff is the author of “Cleopatra: A Life,” which will be published next month. She won the Pulitzer Prize in 2000 for her biography of Véra Nabokov.



The Seafarer

Rescue ship: Joshua Slocum (at left), his wife and sons Victor and Garfield aboard the Liberdade, the 35-foot ‘sailing canoe’ he built to get them home after they were shipwrecked on the coast of Brazil in 1888.

Joshua Slocum is remembered for two things—being the first person to sail single-handedly around the world and writing a marvelous account of the journey. In his biography of Slocum, “The Hard Way Around,” Geoffrey Wolff focuses less on the nautical and literary achievements than on what Slocum did before them.

It is, for the most part, not a pretty picture. The New York Times called Slocum a barbarian after he was imprisoned for allegedly mistreating a sailor. On one of the vessels he commanded, in the 1880s, several crewmen contracted smallpox, and Slocum was arrested again, this time for killing a mutinous member of the crew. Although he eventually resumed command of that ship, it then went aground and was lost in Brazil. By age 45, two of the ships Slocum commanded had been wrecked, his first wife and three of his children had died, and he was unemployed and broke.

I confess that, halfway into this tale of woe, I found myself thinking about bailing out. The early chapters seemed slow-moving, especially for anyone expecting an adventure story. There are also some odd change-ups in style, from carefully considered, grown-up prose to informal sentences such as this one: “It was a miracle the hulk didn’t sink, though if you wait a bit, she will.”

But Mr. Woolf’s writing was not my problem. I was troubled by his overall approach to his subject. Slocum’s solo circumnavigation—he set out from Boston in April 1895 and arrived back in Newport, R.I., in June 1898—was an extraordinary feat, and Slocum’s book about it all, “Sailing Alone Around the World” (1899), is an intoxicating masterpiece. I saw no purpose in exposing the great man’s failings more than a century after his death.

But I kept reading, propelled by Mr. Wolff’s engaging description of the life of a young seaman during the great age of sail. Slocum was 16 when he went to sea in 1860. He wanted to command one of the tall-masted clipper ships, and once he achieved his objective, 10 years later, he didn’t just chart the ship’s course and direct its crew. He also functioned as the resident entrepreneur, identifying cargos to carry and negotiating the terms. He called at exotic ports throughout the world, with his wife and children on board most of the time.

But Slocum was born too late. The clipper-ship era is probably the most celebrated period of marine history—the inspiration for the paintings and prints that seem to hang everywhere, from stodgy clubs to fast-food restaurants. But it didn’t last long. In 1860, wood-hulled sailing vessels were already being displaced by steel ships powered by steam. By the time Slocum took over his most impressive ship, the 233-foot-long Northern Lights, in 1881, the tide was flowing swiftly against him.

It is in the attempt to connect Slocum’s circumstances and choices to his failures and his immortalizing achievements that Mr. Wolff finds book-worthy purpose. After Slocum lost his ship in Brazil in 1887, he built a 35-foot “sailing canoe” and set out on a 5,000-mile journey back to the U.S., this time with his second wife, Hettie (his first wife, Virginia, had died three years before), and two of his children. This is how Slocum, in his book, explained the switch to small-boat sailing: “The old boating trick came back fresh to me, the love of the thing itself gaining on me as the little ship stood out; and my crew with one voice said, ‘Go on.’ ”

Not far into the journey, the little boat ran into a squall and the sails, which had been sewn by Hettie, shredded. Seeking to answer the question of what Slocum was thinking at such times, Mr. Wolff bores into Slocum’s prose like a literary detective. Of Slocum’s lifetime sailing obsession and his arresting phrase “the love of the thing itself” he writes that it came from “irreducible, hard-nut recognition and radiant sentiment.”

Mr. Wolff doesn’t get around to describing Slocum’s 46,000-mile lap around the planet until his book’s penultimate chapter. By then many readers will be so fascinated by the man and the why-did-he-do-it question that they may be eager to read Slocum’s own book, which has never gone out of print.

What is it that drives some people to undertake the audacious? We live at a time when many of the most important firsts have already been claimed, but people seem more obsessed than ever with establishing records, some of them of dubious distinction. Businessmen-climbers search for mountain peaks that have never been surmounted, marathoners go to Antarctica to run, and a procession of teenagers seeks to replicate Slocum’s circumnavigation (with the benefit of high-tech boats, push-button navigational equipment and satellite telephones).

Was Slocum like these people? Before I read Mr. Wolff’s book, I would have said no, that his motives and achievement were more pure and singular. Now I am unsure. Many modern-day adventurers are driven by ego. And ego probably played a role with Slocum, who was no doubt eager to demonstrate that he was, in spite of his many setbacks, exceptionally skilled at what he did, to the point, as he put it, of “neglecting all else.” And aren’t some contemporary adventurers individuals who, like Slocum, feel as if they have run out of other options?

Then again, perhaps Slocum was different. Maybe it was all about “the thing itself.” In November 1909, Slocum sailed from his home on Martha’s Vineyard to undertake a solo exploration of the Venezuelan coast and the Amazon. Somewhere along the way he disappeared. No one knows exactly what happened.

Mr. Knecht is the author of “The Proving Ground: The Inside Story of the 1998 Sydney to Hobart Race.”



Eisenhower’s Pit Bull

There have been countless biographies of the generals of World War II, and many are excellent. This biography of Walter Bedell Smith, Eisenhower’s chief of staff, is one of the best. Smith has never received the attention and the credit that he deserves. A chief of staff is perhaps bound to be an unsung hero, but “Beetle” Smith was far more than just a tough and able administrator. In the words of a fellow officer, he possessed “all the charm of a rattlesnake.” Yet the bad-cop routine—one he used almost entirely with fellow Americans and not with Allies—was forced upon him because Eisenhower, his supreme commander, desperately wanted to be liked by everybody.

A GENERALS’ GATHERING: Bernard Montgomery explains his plan for taking the Sicilian city of Messina to Walter Bedell Smith, George Patton and Harold Alexander, July 25, 1943.
Like almost every key U.S. Army officer in World War II, Smith (1895–1961) was spotted by George C. Marshall. After serving as Marshall’s right-hand man in Washington, Smith moved to Europe as Eisenhower’s chief of staff in 1942. His first operational task, while based in London, was to coordinate the North African landings codenamed Operation Torch. Although the invasion was a success, problems mounted rapidly. The most serious was the supply chain, which Smith tried to reorganize radically, but Eisenhower was reluctant to take hard decisions.

In addition to all his operational duties, Smith was also left to handle the press, and political and diplomatic relations, acting as Eisenhower’s “primary shock-absorber.” The politically naïve Eisenhower had suddenly discovered the pitfalls of supreme command, especially when it involved the latent civil war of French politics. His decision to make use of Admiral François Darlan, the head of the Vichy French navy, to defuse opposition to the Allied landings in North Africa produced a storm of condemnation in the U.S. and Britain, especially as Vichy’s anti-Jewish laws were left in place. Eisenhower complained to an old friend of his role as supreme commander: “I am a cross between a one-time soldier, a pseudo-statesman, a jack-legged politician and a crooked diplomat.” These first trials, and especially the failures in the advance on Tunisia, did not constitute Eisenhower’s finest hour. He was close to a breakdown by January 1943, and his weak performance briefing the Combined Chiefs of Staff at the Casablanca conference—Roosevelt thought him “jittery”—nearly led to his resignation. He confided to Patton that he thought “his thread [was] about to be cut.” But the British did not insist on his removal, and with Smith’s steady advice Eisenhower weathered the storm.

Eisenhower and Smith were both caught up in the great strategic debate within the Allied camp. Marshall wanted the invasion of France to have every priority and remained deeply suspicious of British attempts to postpone it by diverting efforts to the Mediterranean theater because of their material and manpower shortages. As in the Napoleonic wars, British strategy was to avoid a major continental engagement until, making use of the Royal Navy, the enemy had been worn down at the periphery. American doctrine was the very opposite: using industrial supremacy to fight a battle of equipment (Materialschlacht) and confronting the enemy in a head-on land engagement. Mr. Crosswell quotes the boast of one U.S. general: “The American Army does not solve its problems, it overwhelms them.”

But Marshall’s plans for an early invasion of Northwest Europe were thwarted by Churchill, who went directly to Roosevelt. As things turned out, Churchill proved to be right to postpone D-Day, albeit for the wrong reasons. He longed to attack the “soft under-belly of Europe” through Italy and into central Europe to forestall a Soviet occupation after the war. (Roosevelt, Marshall and Eisenhower all failed to foresee Stalin’s ambitions.) Marshall, on the other hand, was wrong because any attempt to mount a cross-Channel invasion in 1942 or even 1943 would have ended in disaster. The U.S. Army was simply not ready, the shipping and landing craft were not available and the Allies lacked air supremacy.

The stress of Smith’s job, especially dealing with the rival egos of Eisenhower’s army group and army commanders—to say nothing of the constant political interference from Churchill—contributed to his irascibility and ulcers. His infrequent escapes from his desk revolved around needlepoint, fishing and collecting objets d’art. Smith was, in Mr. Crosswell’s words, both “a loner and an inveterate collector all his life.”

Eisenhower has always received the credit for the close Allied cooperation, but in “Beetle” we find that Smith achieved much of it working behind the scenes. Eisenhower knew this and wrote to Marshall about the necessity of promoting him. “Smith seems to have a better understanding of the British and is more successful in producing smooth teamwork among the various elements of the staff than any other subordinate I have.” Yet Eisenhower’s feelings about Beetle seem to have been ambivalent, even though he depended on his abilities to an extraordinary degree. They were never close friends, and Eisenhower failed to give Smith the credit he deserved. Smith’s ability to get on well with the British also often led to accusations that he was prejudiced in their favor. Yet he was brilliant in containing inter-Allied explosions, especially those provoked by the prima donna Bernard Montgomery. Major turf wars were avoided by Smith’s skilled handling of the insufferable British general. When Montgomery came to Eisenhower’s headquarters in Algiers in 1943, he said to Smith: “I expect I am a bit unpopular up here.” Smith replied: “General, to serve under you would be a great privilege for anyone, to serve alongside you wouldn’t be too bad. But, say, General, to serve over you is hell.”

Montgomery, however, was not the only senior commander to exploit Eisenhower’s failure to establish firm control and his attempts to compromise. American generals like Omar Bradley and George Patton also played games and threw tantrums, which Smith had to resolve. “The trouble with Ike,” Smith observed, “is that instead of giving direct and clear orders, [he] dresses them up in polite language; and that is why our senior American commanders take advantage.” Eisenhower’s reliance on charm and manipulation all too often failed to work. Patton likened him to a politician running for office rather than a real commander.

This book, which manages to be both brutally honest and fair, does little to bolster the Ike myth, but clearly shows his moment of glory during the Ardennes offensive in December 1944, when he really did at last take a grip. But Eisenhower quickly lost it again during the rest of that terrible winter. And perhaps predictably, it was Smith who had to fire a semi-deranged Patton in September 1945 after his outrageous remarks attacking denazification.

Smith was disappointed not to get Eisenhower’s job after the end of the war. But his talents for tough negotiation were not ignored. He was appointed to Moscow as ambassador, and Eisenhower said that it would “serve those bastards right.” Although in bad health, Smith was called upon again, in 1950, to reorganize the fledgling CIA. He was appalled by the gifted amateurs in covert operations, who clearly were out of their league up against the ruthless KGB. On becoming president, Eisenhower again called on Smith—to serve under John Foster Dulles at the State Department—and Smith dutifully obeyed. His main role was dealing with the collapse of French Indochina and the Geneva conference in 1954. Struggling against ill health, partly due to a diet of cigarettes, “bourbon and Dexedrine,” Smith died in 1961.

Mr. Crosswell’s account both of Smith’s life and of supreme command in Europe is expert and written in good clean prose. Almost a third of it is devoted to logistic problems, which have never received the attention they deserve, especially for the war in Northwest Europe. Although strangely structured, with Smith’s postwar career at the beginning, the book provides a vital addition to our understanding of the politics and problems of Allied warfare.

Mr. Beevor is the author of “D-Day: The Battle for Normandy” (Penguin).



We the People

It seems that they are on the news programs every night: Americans dressed as 18th-century Founders, waving placards saying “Don’t Tread On Me” and complaining that members of Congress pass legislation without regard for the Constitution. Perhaps never before have so many citizens invested so much of their political energy in the proposition that we should return to the first principles of the Founding.

Critics of the tea-party movement have been quick to question its members’ constitutional bona fides. Washington Post columnist E.J. Dionne, for instance, sniffed that tea-party supporters more closely resemble Anti-Federalists—opponents of the Constitution in 1788—than they do the Founders.

In a sense the critics are right. To a remarkable extent, the tea-party movement is raising the same questions of constitutional governance that Anti-Federalists (and not a few Federalists) raised in the debates over whether to adopt a new Plan of Union in 1788. Just a few days ago, a poll by Rasmussen Reports showed that fully 61% of American adults believe that the federal government has too much power; 66% think Americans are overtaxed; and 70% believe the government does not spend taxpayers’ money wisely or fairly.

Too bad Rasmussen wasn’t around in the 1780s—the results might have been strikingly similar. Even while ratifying the Constitution, at least seven of the state conventions—representing the vast majority of Americans—expressed the view that the new government had been given too much power. The conventions demanded amendments to curb the government’s potential for oppression. And the most popular of the amendments—the only one agreed on by all the states proposing the changes—limited the federal government’s broad power of taxation.

Yet it’s doubtful that many Americans today, even tea-party enthusiasts, are aware that those debates took place.

The arrival of Pauline Maier’s “Ratification,” then, could not be more timely. It is the first comprehensive account of the debates in the 13 states over adoption of the Constitution. Others have written about specific aspects of the ratification struggle—about the arguments of one side or the other, or about the debate in a particular state—but remarkably, until now, no historian had written a full-length account of the politics, personalities, arguments, and outcomes between Sept. 17, 1787, when the Constitutional Convention completed its work, and May 29, 1790, when the last of the original states, Rhode Island, ratified the document.

“Ratification,” for all its scope and technical detail, is a gripping and eye-opening read. Ms. Maier is a member of that rare breed of historians who write vividly and with a flair for depicting dramatic events. She has benefited from an ongoing project led by John Kaminski and Gaspare Saladino called The Documentary History of the Ratification of the Constitution, an effort to collect and publish all extant records, newspaper articles, letters and notes bearing on the subject of ratification. Much of this material, Ms. Maier writes, “I suspect no historian has ever used before.” She mined the papers to produce a description of the ratification process that is rich in detail, bringing to light episodes and arguments previously unknown even to constitutional historians.

For example, the supporters of the Constitution in Pennsylvania were so determined to make it appear that the state overwhelmingly supported ratification that they suppressed publication of the proceedings. Despite weeks of spirited debate, in which opponents raised numerous issues of substance, only two speeches, both by supporters, made it into the official reports. The Federalist majority even voted to expunge any mention of defeated motions for amendments from the journal of the proceedings. Prior accounts of the Pennsylvania events thus missed most of this fight.

Drawing on freshly uncovered archival sources, Ms. Maier tells the story of a Pennsylvania backwoods opponent of the Constitution, William Findley, who denounced the absence of a provision for civil jury trials in the Constitution—an omission later rectified by the Seventh Amendment. He commented that when Sweden had abandoned jury trials, “the commons of that nation lost their freedom.” Immediately, two lions of the Pennsylvania legal establishment pounced. James Wilson (later associate justice of the United States Supreme Court) and Thomas McKean (who had served 10 years as chief justice of Pennsylvania) declared that trial by jury never existed anywhere but in England and mocked Findley’s supposed ignorance.

The next day, however, Findley produced the third volume of William Blackstone’s “Commentaries on the Laws of England,” which attributed the invention of the jury to Scandinavia and recounted that when the jury ceased to be used in Sweden, that nation “degenerated into a mere aristocracy.” Wilson, who should have been more embarrassed than he was, conceded that Findley was correct—but added, superciliously, that he had forgotten more law than Findley had ever learned. No wonder Wilson was burned in effigy by Pennsylvanians who thought he was high-handed, and no wonder the opponents of the Constitution felt abused by the arrogance of the Federalists.

In Pennsylvania and elsewhere, as Ms. Maier reports, debates sometimes broke into violence. The Pennsylvania legislature gained the quorum necessary to call a ratifying convention only when a mob broke into the homes of two recalcitrant legislators and dragged them forcibly to the statehouse. When the New York convention, dominated by delegates from upstate counties, appeared adamantly opposed to ratification, metropolitan New Yorkers threatened to secede from the state, even at the risk of possible civil war. Later, Rhode Island was coerced into ratification by an act of Congress cutting off all trade. Any merchant caught trading with Rhode Islanders would face confiscation of his ship, a substantial fine and up to six months’ imprisonment.

A particularly notorious incident occurred in Albany, N.Y., on the Fourth of July, 1788. After hearing news of Virginia’s ratification, supporters of the Constitution staged a noisy celebration. Infuriated opponents counter-marched, publicly burned a copy of the Constitution, and later assaulted a group of supporters with clubs, stones and bricks. Federalists then trashed the tavern where the Anti contingent met and took several prisoners.

Later that month, in the middle of the night, 500 supporters of the Constitution in Manhattan attacked the premises of the New-York Journal, the one newspaper in the city that had regularly published essays critical of the Constitution. According to Ms. Maier, they smashed the windows and threw printing equipment into the street. The publisher, Thomas Greenleaf, escaped through a back door. The publisher of a rival paper commented: “God save us, if these be the dawnings of the new federal government.”

Religion, too, reared its head in unexpected ways. The Constitutional Convention famously conducted its proceedings without a chaplain or daily prayer, but the Virginia and New York ratifying conventions began each day with a prayer, without controversy or objection. Two of the ratifying conventions met in church buildings. Delegates in several states worried that the Constitution would allow “Jews, deists, and infidels” to hold office. Yet when the New York City supporters of the Constitution scheduled a procession to celebrate ratification by the nine states necessary to form the new government, they postponed it out of respect for a Jewish holiday. When the procession did take place, clergy of various denominations walked hand-in-hand. Among them was a bearded rabbi.

History is written by the winners. Opponents of the Constitution have long been dismissed as being motivated by fear of outsiders, narrow self-interest, and localized concerns. It used to be thought that most of the critics fought against the Constitution because its superior court system and prohibitions on paper money would force them to pay their lawful debts. And of course there was some of that. But Ms. Maier emphasizes that the overriding concern of the Constitution’s opponents was with the defense of liberty against federal overreach and the lack of proper representation of the people.

Still more interesting: Federalists shared these concerns. The vast majority on both sides of the issue wanted a decentralized federal system of limited government, responsive to the people and protective of their rights. The difference was over how to achieve this. As Ms. Maier tells the story, the Constitution’s critics sought more to improve the plan through amendments than to scuttle it, and to a great extent they succeeded. Not only did they obtain amendments, which we call the Bill of Rights, but the critics also won a host of other assurances: states would retain their autonomy; the federal government would be allowed to impose few taxes other than tariffs; and the nation would rely mostly on state militias rather than a large standing army. All of these concessions addressed Anti-Federalist demands or concerns.

Far more than the Constitutional Convention, the ratification debates touched on fundamental questions of liberty and order, and their relation to centralization and practical democracy. The immediate concerns of the young nation were resolved—the Constitution was ratified, with amendments. But those fundamental questions would recur, as fundamental questions always do, at key junctures of history when citizens feel the need for guidance about how to carry forward what Washington called “the experiment entrusted to the hands of the American people.” We seem to live in such a time.

Mr. McConnell, a former federal judge, is the Richard & Frances Mallery Professor and Director of the Constitutional Law Center at Stanford Law School, and a Senior Fellow at the Hoover Institution.




The Other ‘G’ Spot

At the beginning of the 20th century the British psychologist Charles Spearman “discovered” the idea of general intelligence. Spearman observed that students’ grades in different subjects, and their scores on various tests, were all positively correlated. He then showed that this pattern could be explained mathematically by assuming that people vary in special abilities for the different tests as well as a single general ability—or “g”—that is used for all of them.

John Duncan, one of the world’s leading cognitive neuroscientists, explains Spearman’s work early in “How Intelligence Happens,” before moving on to his own attempts to locate the source of Spearman’s “g” in the brain. To get us grounded, Mr. Duncan also provides a wonderfully compact summary of brain architecture and function. Throughout the book, he makes it clear that his fascination with intelligent behavior has to do with how the brain brings it about—he leaves it to others to ponder things like the economic import of intelligence and how it is influenced by genes, upbringing, and education.

He also doesn’t waste time dilating on the question of what, precisely, we mean by “intelligence.” Defining terms is not scientists’ forte, but their attempts can be thought-provoking. Two decades ago, the cognitive science and artificial-intelligence pioneer Allen Newell proposed that an entity should be considered intelligent to the extent that it uses all the information it has when making decisions. But according to that definition, a device as simple as a thermostat would have perfect intelligence—not terribly helpful when trying to understand human differences.

I have been doing research on intelligence for more than a decade, and I have to confess that I do not know of a perfect definition. But most psychologists consider intelligence a general ability to perform well on a wide variety of mental tasks and challenges. In everyday speech, it sometimes means roughly the same thing: We call someone “intelligent” if we believe that their mental abilities are generally high—not if they are skilled in just one narrow field.

Mr. Duncan’s early work on intelligence and the brain resolved an old paradox. Before imaging technologies like MRI were invented, neuropsychologists used IQ tests to determine what parts of the brain were damaged in patients suffering from strokes and other brain injuries. If the patient had trouble with the verbal parts of the test, the damage was probably in the left hemisphere; if the trouble was in the visual parts, the damage was probably in the back of the brain; and so on. But oddly, damage to the frontal lobes seemed to have very little effect on IQ—despite the frontal lobes’ constituting nearly 40% of the cerebral cortex.

Mr. Duncan found that patients with frontal-lobe damage were impaired on tests of “fluid intelligence” that, until recently, were not part of standard IQ tests. These tests measure the ability to solve abstract nonverbal problems in which prior knowledge of language or facts is of no help. For example, a “matrix reasoning” problem presents a grid of complex shapes with one empty space that the test-taker must fill by choosing the correct option from a set of up to eight alternatives. Such tests seem to reveal a raw ability to make optimal use of the information contained within a problem or situation.

Later, Mr. Duncan used PET scanning to measure the brain activity of people without brain damage as they solved problems that varied in difficulty. Regardless of content, as the tests got harder, the subjects made more use of areas in their frontal lobes, as well as in their parietal lobes, which are farther toward the back of the brain.

Mr. Duncan makes a convincing case that these brain areas constitute a special circuit that is crucial for both Spearman’s “g” and for intelligent behavior more generally. But his book elides the question of whether this circuit is also the source of IQ differences. That is, do people who score high on IQ tests use the frontal and parietal areas of their brains differently from people who score lower? The answer, discovered by other researchers, turns out to be yes.

There are other properties of the brain that contribute to “g,” including the speed of basic information-processing (measured by how fast people can press buttons in response to flashing lights) and even the total size of the brain (larger is better). One of the next steps in understanding “g” is to figure out how all these factors interact and combine to produce the wide range of differences we see in human intelligence. Mr. Duncan no doubt will be a key player in this effort, frontal and parietal lobes firing away.

Mr. Chabris is a psychology professor at Union College and the co-author, with Daniel Simons, of “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us” (Crown).



Five Best Books on Animal Survival

To Know a Fly

By Vincent G. Dethier (1962)

Vincent Dethier spent a lifetime researching the senses, in particular those of insects. His “To Know a Fly” (not an easy task—there are more than 50,000 species) is an exuberant investigation of such matters as taste, hunger and satiation and their role in the survival of the humble housefly. He observes that a pregnant female fly will stop consuming sugar—an “adequate food for her, but useless for her eggs”—preferring instead protein that is good for the eggs but won’t nourish her. “In some quarters it would be hailed as maternal instinct,” he writes, “and by so naming it we would be no nearer an understanding of what it is.” Dethier’s learning from countless fly experiments is vast, but he is bracing in his acknowledgment of what remains unknown. “To espouse ultimate understanding of even so simple a brain,” he says, “reflects an optimism outside the natural order.” But he is entirely convincing when he says that a properly conducted experiment is “an adventure, an expedition, a conquest” and that to know a fly “is to share a bit in the sublimity of Knowledge.”

Nerve Cells and Insect Behavior

By Kenneth D. Roeder (1963)

This book presents Kenneth Roeder’s most famous discovery—that some moths are able to detect the calls of echo-locating bats and employ defensive measures to evade the predators. The revelation was made all the more remarkable by its timing: only a few years after Donald Griffin astonished the scientific community in 1958 with his revelation that bats “see” the world with their ears. Roeder examines the senses and behavior of insects at the level of neural mechanisms, and along the way we learn not only about the tactics of escape-artist moths but also about the evasive maneuvers of cockroaches and other insects. The study is the product of neuron monitoring via electrical eavesdropping, which means that there is a lot of technical writing in “Nerve Cells and Insect Behavior”—a fascinating work if you stay with it.

Desert Animals

By Knut Schmidt-Nielsen (1964)

Despite extremes of heat and lack of water, the desert is home to “a richer animal life than we can imagine,” Knut Schmidt-Nielsen says in this pioneering study. The animals that survive in such extreme conditions are aided by a variety of adaptations. For instance, the camel’s body temperature fluctuates wildly—camels start out “cold” in the morning so that they overheat less easily later in the day. The kangaroo rat’s kidney produces only small amounts of highly concentrated urine, enabling the animal to forgo water for long periods and live on air-dried food. After reading Schmidt-Nielsen’s evocation of a world where countless hardy animals thrive, you’ll never again look at a desert expanse and think it barren.

Honeybee Democracy

By Thomas D. Seeley (2010)

In “Honeybee Democracy,” Thomas Seeley explains how a honeybee colony divides and reproduces: A contingent of 10,000 bees or more communicate among themselves and arrive unanimously at a decision about the best available new home. Building on a lifetime of observation and experimentation, Seeley relates the story with admirable clarity as we see his beloved honeybees—which have been in the consensus-building business for perhaps 200 million years—embark on the establishment of a new outpost. The process begins with a few scout bees and involves a vigorous debate before an agreement is reached. Then, on a signal, the group leaves en masse for the chosen place, likely a hollow tree some kilometers distant that the majority of the bees have never seen before. This spirit of cooperation, Seeley says, has much to tell us about solving complex human problems.

The Beak of the Finch

By Jonathan Weiner (1994)

Darwin made the Galápagos finches famous, but biologists Peter and Rosemary Grant and their graduate students deepened our understanding of how these small birds have survived and adapted across the centuries. Darwin supposed that the various kinds of finches, with their varying beaks and body sizes, came from diverse genetic backgrounds. But he later concluded that the finches were closely related and had thus likely evolved from a common stock. The Grants—working for three decades on the islands—bolstered Darwin’s insight that species are not immutable, as had been thought. One potential problem with Darwin’s theory had been that species appeared to be largely static, but the Grants succeeded in showing that evolution can be very rapid—beak shapes could change from year to year in response to, say, heightened mortality rates caused by food scarcity. Evidence of speedy adaptation has added meaning today as we witness insects becoming resistant to insecticides and bacteria surviving despite the most potent antibiotics.

Mr. Heinrich is the author of “The Nesting Season: Cuckoos, Cuckolds, and the Invention of Monogamy.”



In Praise of the Mediocre Mother

Elisabeth Badinter’s bestselling book champions France’s so-so moms as the secret to high Gallic birth rates.

For all their hand-wringing over Gallic cultural decline, the French are the European champions of childbirth. With a consistently solid birth rate of two babies per woman, France is both a puzzle and a model for demographers and policy makers alarmed by aging populations in the rest of the developed world.

Feminist philosopher Elisabeth Badinter, the Left Bank’s modern-day answer to Simone de Beauvoir, thinks she can explain this paradox: French women have always allowed themselves to be “mediocre mothers.”

As she details in her bestselling book, whose title translates as “The Conflict: The Woman and the Mother,” France has a long tradition of entrusting babies to nannies and daycare staff (the daycare center, or crèche, and the pre-school, or école maternelle, both being French inventions). Helicopter parenting, and the constant demands it places on women’s bodies, identities, intellects, and careers, never really made it to France, where for centuries the children of the upper classes were handed over to wet nurses. “Maman does not owe everything—her milk, her time, her energy—to her child,” Ms. Badinter says.

But times are changing, even in France, where maternal instinct and hormones are venerated ever more strongly, a trend that’s on the rise in the rest of the developed world as well. Becoming pregnant is, in Ms. Badinter’s words, becoming akin to “entering a religious order.” This global mentality shift is now threatening to strip French mothers of what has, ironically, made them among the most fertile women in the developed world: Their willingness to be so-so moms.

The tension involves more than simply what kind of mother one should be, or how much time with one’s children is too much. In her book Ms. Badinter describes a subterranean culture war that is being waged on mothers by the new forces of “eco-political” correctness. Only a few decades ago, disposable diapers, packaged baby food, infant formula and bottles were seen as key tools of women’s emancipation. Today, mothers in the rich world are under increasing pressure to not only give themselves entirely to their children, but to do so by going back to the “natural,” and all the tedium that entails.

Ms. Badinter identifies this concept as a regressive one, even adorned as it is with the new-age tinge of environmentalism. In the ascendant mentality, the “perfect” 24/7 mother is one who stays home to prepare only organic purees for her treasures, while endlessly washing cloth nappies and breastfeeding until the child is almost ready for school.

What Ms. Badinter terms a “holy alliance of reactionaries” comprises environmentalists, pediatricians, politicians, “the ayatollahs of breastfeeding” and elements of the media. As the book’s dense cross-national research shows, the consequences of the Total Motherhood credo are becoming dire for birth rates. Panicked by the all-or-nothing definition of good motherhood, women are opting out in droves, leading to the demographic crises we see in Italy and Germany, each with a fertility rate of 1.4 children per woman in 2008 according to the World Bank.

As Ms. Badinter tells it, the thinking that has led to this baby-bust is not so new, and finds its strongest ideological roots in the 18th-century anti-progress arguments of Jean-Jacques Rousseau. And while there are important culturally specific notions of motherhood, such as the German mutter, Italian mama, and the Japanese kenbo, Ms. Badinter identifies the global trend now hitting France as part of a post-Baby Boomer backlash.

“Because of successive economic crises since the 1970s, and the feeling that our parents made a mistake with their excessive materialism, we have to turn our backs on extreme individualism and unreasonable consumption and give to our children only what is ‘natural,'” Ms. Badinter told me in a recent interview.

The reader might assume that all this means that “The Conflict” boils down to a blunt attack on the green movement, or the latest battle in the intergenerational feminist wars. It’s neither, though if we must, Ms. Badinter is best labeled as a libertarian of the left. The clearest difference between mother-of-three and grandmother Badinter, and her childless forebear de Beauvoir, is the former’s enthusiastic embrace of the will to procreate. But that embrace, she tells us, is only possible if motherhood isn’t supposed to take everything from the mother.

Ms. Badinter scoffs, for instance, at the interminable lists of banned foods and drinks for expectant mothers, noting that “thirty years ago we lived our pregnancies with insouciance and lightness” without bad consequences. While other mothers pore over the plethora of yummy-mummy websites and instruction manuals, Ms. Badinter jeers at the invention and multiplication of children’s needs in a world where the kid is king.

“The Conflict” hit German bookshelves last month after spending much of the year on French bestseller lists. Its tone can be brutal at times but it provides a fresh and apparently necessary wake-up call to advanced societies about how to stop the “womb strike” menacing graying nations from Japan to Germany.

Next year “The Conflict” will be published in English. As in France and certainly Germany, most English-speaking parents will recognize the ideology Ms. Badinter says is attacking procreation: That to attain moral elevation, mothers must throw out powdered milk and plastic bottles, disposable diapers, strollers, feeding spoons, and even submit to “natural births” sans epidural.

For women who have wilted under these crushing prohibitions and admonitions, Elisabeth Badinter is a savior. Her acerbic dose of skepticism, even if overdrawn at times, is a welcome antidote to the fetishization of parenting.

And who knows, it might even convince some would-be mothers that, experts be damned, they can “afford” to bear children after all. At least if they do so à la française.

Ms. Symons is a writer based in Bangkok and Paris.

