Cloud control

Electrified sand. Exploding balloons. The long and colorful history of weather manipulation.

This brutal winter has made sure that no one forgets who’s in charge. The snow doesn’t fall so much as fly. Cars stay buried, and feet stay wet. Ice is invisible, and every puddle is deeper than it looks. On the eve of each new storm, the citizenry engages in diligent preparations, rearranging travel plans, lining up baby sitters in case the schools are closed, and packing comfortable shoes for work so they’re not forced to spend all day wearing their awful snow boots.

One can’t help but feel a little embarrassed on behalf of the species, to have been involved in all this fuss over something as trivial as the weather. Is the human race not mighty? How are we still allowing ourselves, in the year 2011, to be reduced to such indignities by a bunch of soggy clouds?

It is not for lack of trying. It’s just that over the last 200 years, the clouds have proven an improbably resilient adversary, and the weather in general has resisted numerous well-funded — and often quite imaginative — attempts at manipulation by meteorologists, physicists, and assorted hobbyists. Some have tried to make it rain, while others have tried to make it stop. Balloons full of explosives have been sent into the sky, and large quantities of electrically charged sand have been dropped from airplanes. One enduring scheme is to disrupt and weaken hurricanes by spreading oil on the surface of the ocean. Another is to drive away rain by shooting clouds with silver iodide or dry ice, a practice that was famously implemented at the 2008 Olympics in Beijing and is frequently employed by farmers throughout the United States.

There’s something deeply and perennially appealing about the idea of controlling the weather, about deciding where rain should fall and when the sun should shine. But failing at it has been just as persistent a thread in the human experience. In a new book called “Fixing the Sky: The Checkered History of Weather and Climate Control,” Colby College historian of science James Rodger Fleming catalogs all the dreamers, fools, and pseudo-scientists who have devoted their lives to weather modification, tracing the delusions they shared and their remarkable range of motivations. Some wanted to create technology that would be of use to farmers, so that they would no longer have to operate at the mercy of unpredictable droughts. Others imagined scenarios in which the weather could be weaponized and used against foreign enemies. Still others had visions of utopia in which the world’s deserts were made fertile and every child was fed.

“Even some of the charlatans had meteorology books on their desks,” Fleming said last week. “Most had simple ideas: for instance, that hot air rises. These guys’ll have some sense of things, but they won’t have a complete theory of the weather system. They have a principle they fix on and then they try to build their scheme from there.”

What they underestimated, in Fleming’s view — what continues to stymie us all, whether we’re seeding clouds or just trying to plan for the next commute — is weather’s unfathomable complexity. And yet, the dream has stayed alive. Lately, the drive to fix the climate has taken the form of large-scale geoengineering projects designed to reverse the effects of global warming. Such projects — launching mirrors into space to reflect solar radiation away from the earth, for instance — are vastly more ambitious than anything a 19th-century rainmaker could have cooked up, and would employ much more sophisticated technology. What’s unclear, as one looks back at the history of weather modification research, is whether all that technology makes it any more likely that our ambitions will be realized, or if it just stands to make our failure that much more devastating.

The story of modern weather control in America, as Fleming tells it in “Fixing the Sky,” begins on a Wednesday morning in November of 1946, some 14,000 feet in the air above western Massachusetts. Sitting up there in a single-engine airplane was Vincent Schaefer, a 41-year-old scientist in the employ of the General Electric Research Laboratory, whose principal claim to fame up to that point was that he’d found a way to make plastic replicas of snowflakes. Prior to the flight, Schaefer had conducted an experiment that seemed to point toward a method for manipulating clouds using small bits of dry ice. If the technology could be exported from the lab and made to work in the real world, the potential applications would be limitless. With that in mind, flying over the Berkshires, Schaefer waited until his plane entered a suitable-looking cloud, then opened a window and let a total of six pounds of crushed dry ice out into the atmosphere. Before he knew it, “glinting crystals” of snow were falling obediently to the ground below.

GE announced the results of the demonstration the next day. “SNOWSTORM MANUFACTURED,” read the massive banner headline on the front page of The Boston Globe. The GE lab was deluged with letters and telegrams from people excited about the new technology for all sorts of reasons. One asked if it might be possible to get some artificial snow for use in an upcoming Christmas pageant. Another implored the company to aid a search-and-rescue effort at Mount Rainier by getting rid of some inconveniently located clouds. Hollywood producers inquired about doing up some blizzards for their movie sets. Separately, a state official from Kansas wrote to President Truman in hopes that GE’s snow-making technology could be used to help end a drought there. It seemed to be an all-purpose miracle, as though there was not a problem on earth it couldn’t fix.

Insofar as technological advancement in general is all about man’s triumph over his conditions, a victory over the weather is basically like beating the boss in a video game. And GE’s breakthrough came at a moment when the country was collectively keyed up on the transformative power of technology: World War II had just ended, and we suddenly had the bomb, radar, penicillin, and computers. But the results of Schaefer’s experiment would have inspired the same frenzied reaction in any era. “Think of it!” as one journalist wrote in a 1923 issue of Popular Science Monthly, about an earlier weather modification scheme, “Rain when you want it. Sunshine when you want it. Los Angeles weather in Pittsburgh and April showers for the arid deserts of the West. Man in control of the heavens — to turn on or shut them off as he wishes.”

It’s a longstanding, international fantasy — one that goes all the way back to ancient Greece, where watchmen stood guard over the skies and alerted their countrymen at the first sign of hail so that they might try to hold off the storm by quickly sacrificing some animals. The American tradition begins in the early 19th century, when the nation’s first official meteorologist, James “the Storm King” Espy, developed a theory of rainmaking that involved cutting down large swaths of forest and lighting them on fire. Espy had observed that volcanic eruptions were often followed by rainfall. He thought these fires would work the same way, causing hot air to rise into the atmosphere, cool, and thus produce precipitation.

For years he unsuccessfully sought funding from the government so that he might test his theory, describing in an 1845 open letter “To the Friends of Science” a proposal wherein 40-acre fires would be set every seven days at regular intervals along a 600-mile stretch of the Rocky Mountains. The result, he promised, would be regular rainfall that would not only ease the lives of farmers and make the country more productive but also eradicate the spread of disease and make extreme temperatures a thing of the past. He did not convince the friends of science, however, and lived out his distinguished career without ever realizing his vision.

Others had better luck winning hearts and minds. In 1871, a civil engineer from Chicago named Edward Powers published a book called “War and the Weather, or, The Artificial Production of Rain,” in which he argued that rainstorms were caused by loud noises and could be induced using explosives. He found a sympathetic collaborator in a Texas rancher and former Confederate general by the name of Daniel Ruggles, who believed strongly that all one had to do to stimulate rain was send balloons full of dynamite and gunpowder up into the sky and detonate them. Another adherent of this point of view was Robert Dyrenforth, a patent lawyer who actually succeeded in securing a federal grant to conduct a series of spectacular, but finally inconclusive, pyrotechnic experiments during the summer and fall of 1891.

A few decades after Dyrenforth’s methods were roundly discredited, an inventor named L. Francis Warren emerged with a new kind of theory. Warren, an autodidact who claimed to be a Harvard professor, believed that the trick to rainmaking wasn’t heat or noise, but electrically charged sand, which if sprinkled from the sky could not only produce rain but also break up clouds. His endgame was a squad of airplanes equipped to stop droughts, clear fog, and put out fires. It was Warren’s scheme that inspired that breathless Popular Science article, but after multiple inconclusive tests — including some funded by the US military — it lost momentum and faded away.

For the next 50 years, charlatans and snake-oil salesmen inspired by Warren, Dyrenforth, and the rest of them went around the country hawking weather control technologies that had no basis whatsoever in science. It wasn’t until after World War II, with the emergence of GE’s apparent success dropping dry ice into clouds, that the American public once again had a credible weather control scheme to get excited about. Once that happened, though, it was off to the races, and by the 1950s, commercial cloud seeding — first with dry ice, then with silver iodide — was taking place over an estimated 10 percent of US land. By the end of the decade, it was conventional wisdom that achieving mastery over the weather would be a decisive factor in the outcome of the Cold War. In 1971, it was reported that the United States had secretly used cloud-seeding to induce rain over the Ho Chi Minh Trail in hopes of disrupting enemy troop movements. In 1986, the Soviet Union is said to have used it to protect Moscow from radioactivity emanating from the Chernobyl site by steering toxic clouds to Belarus and artificially bursting them. There’s no way to know whether the seeding operation actually accomplished anything, but people in Belarus to this day hold the Kremlin in contempt for its clandestine attempt to stick them with the fallout.

You’d think, given mankind’s record of unflappable ingenuity, we would have had weather figured out by now. But after decades of dedicated experimentation and untold millions of dollars invested, the world is still dealing with droughts, floods, and 18-foot urban snowbanks. What is making this so difficult? Why is it that the best we can do when we learn of an approaching snowstorm is brace ourselves and hope our street gets properly plowed?

The problem is that weather conditions in any given place at any given time are a function of far too many independent, interacting variables. Whether it’s raining or snowing is never determined by any one overpowering force in the atmosphere: It’s always a complicated and unpredictable combination of many. Until we have the capability to micromanage the whole system, we will not be calling any shots.

Fleming, for his part, doesn’t believe that a single one of the weather modification schemes he describes in his book ever truly worked. Even cloud-seeding, he says, as widespread as it is even today, has never been scientifically proven to be effective. Kristine Harper, an assistant professor at Florida State University who is writing a book about the US government’s forays into weather modification, says that doesn’t necessarily mean cloud-seeding is a total waste of time, just that there’s no way to scientifically measure its impact.

“You’d be hard-pressed to find evidence even at this point that there’s a statistically significant difference between what you would get from a seeded cloud and an unseeded cloud,” she said.

The good news for practitioners of weather control is that amid all this complexity, they can convince themselves and others that they deserve credit for weather patterns they have probably had no role whatsoever in conjuring. The bad news for anyone who’d like to prevent the next 2-foot snow dump — or the next 2 degrees of global warming — is that there’s just no way to know. As Fleming’s account of the last 200 years suggests, it may be possible to achieve a certain amount by intervention. But it’s a long way from anything you could call control. Those who insist on continuing to shake their fists at the sky should make sure they have some warm gloves.

Leon Neyfakh is the staff writer for Ideas.



Peaked performance

The case that human athletes have reached their limits

Last summer, David Oliver tried to become one of the fastest men in the world. The American Olympic hurdler had run a time of 12.89 seconds in the 110 meters at a meet in Paris in July. The time was two hundredths of a second off the world record, 12.87, owned by Cuba’s Dayron Robles, a mark as impressive as it was absurd. Most elite hurdlers never break 13 seconds. Heck, Oliver seldom broke 13. He’d spent the majority of his career whittling down times from the 13.3 range. But the summer of 2010 was special. Oliver had become that strange athlete whose performance finally equaled his ambition and who, as a result, competed against many, sure, but was really only going against himself.

In Paris, for instance, Oliver didn’t talk about how he won the race, or even the men he ran against. He talked instead about how he came out of the blocks, how his hips dipped precariously low after clearing the sixth hurdle, how he planned to remain focused on the season ahead. For him, the time — Robles’s time — was what mattered. He had a blog and called it: “Mission: 12.85: The race for the record.” And on this blog, after Paris, Oliver wrote, “I am in a great groove right now and I can’t really pinpoint what set it off….Whatever groove I’m in, I hope I never come out of it!”

The next week, he had a meet in Monaco. The press billed it as Oliver’s attempt to smash the world record. But he had a terrible start — “ran the [worst] first couple of hurdles of the season,” as he would later write. Oliver won the race, but with a time of 13.01.

On his blog, Oliver did his best to celebrate; he titled his Monaco post, “I’m sitting on top of the world.” (And why not? The man had, after all, beaten the planet’s best hurdlers for the second-straight week, almost all of whom he’d see at the 2012 Olympics.) But the post grew defensive near the end. He reasoned that his best times should improve.

But they haven’t. That meet in Paris was the fastest he’s ever run.

Two recent, provocative studies hint at why. That Oliver has not broken Robles’s record has nothing to do with an unfortunate stumble out of the blocks or imperfect technique. It has everything to do with biology. In the sports that best measure athleticism — track and field, mostly — athletic performance has peaked. The studies show the steady progress of athletic achievement through the first half of the 20th century, and into the latter half, and always the world-record times fall. Then, suddenly, achievement flatlines. These days, athletes’ best sprints, best jumps, best throws — many of them happened years ago, sometimes a generation ago.

“We’re reaching our biological limits,” said Geoffroy Berthelot, one of the coauthors of both studies and a research specialist at the Institute for Biomedical Research and Sports Epidemiology in Paris. “We made major performance increases in the last century. And now it is very hard.”

Berthelot speaks with the bemused detachment of a French existentialist. What he predicts for the future of sport is just as indifferent, especially for the people who enjoy it: a great stagnation, reaching every event where singular athleticism is celebrated, for the rest of fans’ lives. And yet reading Berthelot’s work is not grim, not necessarily anyway. It is oddly absorbing. The implicit question that his work poses is larger than track and field, or swimming, or even sport itself. Do we dare to acknowledge our limitations? And what happens once we do?

It’s such a strange thought, antithetical to the more-more-more of American ideals. But it couldn’t be more relevant to Americans today.

In the early 1950s, the scientific community thought Roger Bannister’s attempt to break the four-minute mile might result in his death. Many scholars were certain of the limits of human achievement. If Bannister didn’t die, the thinking went, he might lose a limb. Or if no physiological barrier existed, surely a mathematical one did. The idea of one minute for one lap, and four minutes for four, imposed a beautiful, eerie symmetry — besting it seemed like an ugly distortion, and, hence, an impossibility. But Bannister broke the four-minute mark in 1954, and within three years 30 others had done it. Limitations, it seemed, existed only in the mind.

Except when they don’t. Geoffroy Berthelot began looking at track and field and swimming records in 2007. These were the sports that quantified the otherwise subjective idea of athleticism. There are no teammates in these sports, and improvement is marked scientifically, with a stopwatch or tape measure. In almost every other game, even stat-heavy games, athletic progression can’t be measured, because teammates and opponents temper results. What is achieved on these playing fields, then, doesn’t represent — can’t represent — the totality of achievement: Was Kareem Abdul-Jabbar a better basketball player than Michael Jordan because Abdul-Jabbar scored more career points? Or was Wilt Chamberlain better than them both because he scored 100 in a game? And where does this leave Bill Russell, who won more championships than anybody? By contrast, track and field and swimming are pure, the sporting world’s equivalent of a laboratory.

Berthelot wanted to know more about the progression of athletic feats over time in these sports, how and why performance improved in the modern Olympic era. So he plotted it out, every world record from 1896 onward. When placed on an L-shaped graph, the record times fell consistently, as if down a gently sloped hill. They fell because of improving nutritional standards, strength and conditioning programs, and the perfection of technique. But once Berthelot’s L-shaped graphs reached the 1980s, something strange happened: Those gently sloping hills leveled into plains. In event after event, record times began to hold.

The trend continued through the 1990s, and into the last decade. Today 64 percent of track and field world records have stood since 1993. One world record, the women’s 1,500 meters, hasn’t been broken since 1980. When Berthelot published his study last year in the online journal PLoS One, he made the simple but bold argument that athletic performance had peaked. On the whole, Berthelot said, the pinnacle of athletic achievement was reached around 1988. We’ve been watching a virtual stasis ever since.
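The shape Berthelot found — fast early gains that level off toward a floor — can be sketched as a curve-fitting exercise. The snippet below is our own illustration, not Berthelot’s actual data or model: the years and times are only loosely patterned on the men’s 100-meter record progression, and the exponential-decay-to-a-limit form is an assumption for demonstration.

```python
# Illustrative sketch (made-up numbers, not Berthelot's data): fitting
# a plateauing curve in which record times decay toward an asymptote,
# the "physiological frontier" his papers describe.
import numpy as np
from scipy.optimize import curve_fit

def plateau(year, span, rate, limit):
    # Exponential decay toward a floor: big early improvements, then stasis.
    return limit + span * np.exp(-rate * (year - 1896))

years = np.array([1896.0, 1912, 1936, 1960, 1968, 1988, 1999, 2009])
times = np.array([12.0, 10.6, 10.2, 10.0, 9.95, 9.92, 9.79, 9.58])  # 100m-ish

params, _ = curve_fit(plateau, years, times, p0=(2.5, 0.02, 9.5))
print(f"estimated limit: {params[2]:.2f} s")  # the fitted asymptote
```

The fitted `limit` parameter is the analogue of Berthelot’s frontier: once the curve is within a fraction of a percent of it, further record-breaking becomes vanishingly rare.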

Berthelot argues that performance plateaued for the same reasons it improved over all those decades. Or, put another way, because it improved over all those decades. Records used to stand because some athletes were not well nourished. And then ubiquitous nutritional standards developed, and records fell. Records used to stand because athletes had idiosyncratic forms and techniques. And then through an evolution of experimentation — think high jumper Dick Fosbury and his Fosbury Flop — the best practices were codified and perfected, and now a conformity of form rules sport. Records used to stand because only a minority of athletes lifted weights and conditioned properly. Here, at least, the reasoning is a bit more complicated. Now everybody is ripped, yes, but what strength training also introduced was steroid use. Berthelot doesn’t name names, but he wonders how many of today’s records stand because of pharmacological help, the records broken during an era of primitive testing, before the World Anti-Doping Agency was established in 1999. (This assumes, of course, that WADA catches everything these days. And it probably doesn’t.)

Berthelot isn’t the only one arguing athletic limitation. Greg Whyte is a former English Olympic pentathlete, now a renowned trainer in the United Kingdom and academic at the University of Wolverhampton, who, in 2005, coauthored a study published in the journal Medicine and Science in Sports and Exercise. The study found that athletes in track and field’s distance events were nearing their physiological limits. When reached by phone recently and asked about the broader scope of Berthelot’s study, Whyte said, “I think Geoffroy’s right on it.” In fact, Whyte had just visited Berthelot in Paris. The two hope to collaborate in the future.

It’s a convincing case Berthelot presents, but for one unaccounted fact: What to do with Usain Bolt? The Jamaican keeps torching 100- and 200-meter times, is seemingly beyond-human in some of his races and at the very least the apotheosis of progression. How do you solve a problem like Usain Bolt?

“Bolt is a very particular case,” Berthelot said. Only five track and field world records have been broken since 2008. Bolt holds — or contributed to — three of them: the 100 meters, 200 meters, and 4×100-meter relay. “All the media focus on Usain Bolt because he’s the only one who’s progressing today,” Berthelot said. He may also be the last to progress.

Another Berthelot paper, published in 2008, predicts that the end of almost all athletic improvement will occur around 2027. By that year, if current trends hold — and for Berthelot, there’s little doubt that they will — the “human species’ physiological frontiers will be reached,” he writes. To the extent that world records are still vulnerable by then, they will be improved by no more than 0.05 percent — so marginal that the fans, Berthelot reasons, will likely fail to care.

Maybe the same can be said of the athletes. Berthelot notes how our culture asks them — and in fact elite athletes expect of themselves — to always grow bigger, be stronger, go faster. But what happens when that progression stops? Or, put another way: What happens if it stopped 20 years ago? Why go on? The fame is quadrennial. The money’s not great. (Not for nothing: Usain Bolt said recently he’ll go through another Olympic cycle and then switch to pro football.) The pressure is excruciating, said Dr. Alan Goldberg, a sports psychologist who has worked with Olympic athletes, especially if they’re competing at a level where breaking a record is a possibility.

In a different sport but the same context, another individual performer, Ted Williams, looked back on his career and said, in the book “My Turn at Bat,” “I’m glad it’s over…I wouldn’t go back to being eighteen or nineteen years old knowing what was in store, the sourness and bitterness, knowing how I thought the weight of the damn world was always on my neck, grinding on me. I wouldn’t go back to that for anything.” Remember, this is from a man who succeeded, who, most important, broke records. What happens to the athlete who knows there are no records left to break? What happens when you acknowledge your limitations?

The short answer is, you create what you did not previously have. Swimming records, for instance, followed the same trend as track and field: a stasis beginning roughly in the mid-1980s. But in 2000, the sport innovated its way out of its torpor. Full-body suits hit the scene, culminating in the famous LZR suits, developed with NASA technologies and polyurethane and promising to reduce swimmers’ drag in the water. World records fell so quickly and so often that they became banal, the aquatic version of Barry Bonds hitting a home run. Since 2000, all but four of swimming’s records have been broken, many of them multiple times.

But in 2009 swimming’s governing body, FINA, banned the full-body LZR suits. FINA did not, however, ban the knee-to-navel suits men had previously worn, or the shoulder-to-knee suits women preferred. These suits were made of textiles and other woven materials. In other words, FINA acknowledged the need for technological enhancements, even as it banned the LZR suits. As a result, world records still fall. Last month in Dubai, American Ryan Lochte set one in the 400-meter individual medley.

These ancient sports are a lot like the world’s current leading economies: stagnant, and looking for a way to break through. The best in both worlds do so by innovating, improving the available resources, and when that process exhausts itself, creating new ones. However, this process — whether through an increasing reliance on computers, or NASA-designed swimsuits, or steroids that regulators can’t detect — changes the work we once loved, or the sports we once played, or the athletes we once cheered.

It may not always be for the worse, but one thing is certain. When we address our human limits these days, we actually become less human.

Paul Kix is a senior editor at Boston magazine and a contributing writer for ESPN the Magazine.



Hunkier than thou

Sexual selection

Scientists are finally succeeding where so many men have failed: in understanding why women find some guys handsome and others hideous

WHEN it comes to partners, men often find women’s taste fickle and unfathomable. But ladies may not be entirely to blame. A growing body of research suggests that their preference for certain types of male physiognomy may be swayed by things beyond their conscious control—like prevalence of disease or crime—and in predictable ways.

Masculine features—a big jaw, say, or a prominent brow—tend to reflect physical and behavioural traits, such as strength and aggression. They are also closely linked to physiological ones, like virility and a sturdy immune system.

The obverse of these desirable characteristics looks less appealing. Aggression is fine when directed at external threats, less so when it spills over onto the hearth. Sexual prowess ensures plenty of progeny, but it often goes hand in hand with promiscuity and a tendency to shirk parental duties or leave the mother altogether.

So, whenever a woman has to choose a mate, she must decide whether to place a premium on the hunk’s choicer genes or the wimp’s love and care. Lisa DeBruine, of the University of Aberdeen, believes that today’s women still face this dilemma and that their choices are affected by unconscious factors.

In a paper published earlier this year Dr DeBruine found that women in countries with poor health statistics preferred men with masculine features more than those who lived in healthier societies. Where disease is rife, this seemed to imply, giving birth to healthy offspring trumps having a man stick around long enough to help care for them. In more salubrious climes, therefore, wimps are in with a chance.

Now, though, researchers led by Robert Brooks, of the University of New South Wales, have taken another look at Dr DeBruine’s data and arrived at a different conclusion. They present their findings in the Proceedings of the Royal Society. Dr Brooks suggests that it is not health-related factors, but rather competition and violence among men that best explain a woman’s penchant for manliness. The more rough-and-tumble the environment, the researchers’ argument goes, the more women prefer masculine men, because they are better than the softer types at providing for mothers and their offspring.

An unhealthy relationship

Since violent competition for resources is more pronounced in unequal societies, Dr Brooks predicted that women would value masculinity more highly in countries with a higher Gini coefficient, which is a measure of income inequality. And indeed, he found that this was better than a country’s health statistics at predicting the relative attractiveness of hunky faces.
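The Gini coefficient Dr Brooks relied on has a compact definition: it summarizes how far a distribution of incomes departs from perfect equality, running from 0 (everyone earns the same) toward 1 (one person earns everything). A minimal sketch of the standard sorted-rank formula — our own illustration, with made-up incomes, not figures from the study:

```python
# Illustrative sketch: computing a Gini coefficient, the inequality
# measure Dr Brooks used as a predictor of masculinity preference.
def gini(incomes):
    xs = sorted(incomes)          # ascending order
    n = len(xs)
    total = sum(xs)
    # Sorted-rank formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

print(gini([10, 10, 10, 10]))  # perfectly equal incomes -> 0.0
print(gini([1, 1, 1, 97]))     # one person holds nearly everything -> roughly 0.72
```

The higher the value, the more unequal the society — and, on Dr Brooks’s reading, the stronger the female preference for masculine faces.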

The rub is that unequal countries also tend to be less healthy. So, in order to disentangle cause from effect, Dr Brooks compared Dr DeBruine’s health index with a measure of violence in a country: its murder rate. Again, he found that his chosen indicator predicts preference for facial masculinity more accurately than the health figures do (though less well than the Gini).

However, in a rejoinder published in the same issue of the Proceedings, Dr DeBruine and her colleagues point to a flaw in Dr Brooks’s analysis: his failure to take into account a society’s overall wealth. When she performed the statistical tests again, this time controlling for GNP, it turned out that the murder rate’s predictive power disappears, whereas that of the health indicators persists. In other words, the prevalence of violent crime seems to predict mating preferences only in so far as it reflects a country’s relative penury.
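The “controlling for GNP” step in Dr DeBruine’s rejoinder is, in essence, a partial correlation: regress both variables on the suspected confound and correlate what is left over. The toy sketch below uses synthetic numbers (purely illustrative, not the real cross-national data) to show how a strong raw correlation can vanish once a shared cause is removed.

```python
# Illustrative sketch (synthetic data): how "controlling for" a confound
# works. Both murder rate and masculinity preference are driven here by
# (low) national wealth; correlating the residuals after regressing out
# wealth -- a partial correlation -- reveals there is no direct link.
import numpy as np

rng = np.random.default_rng(0)
gnp = rng.normal(size=200)                   # stand-in for national wealth
murder = -gnp + 0.1 * rng.normal(size=200)   # violence driven by low wealth
pref = -gnp + 0.1 * rng.normal(size=200)     # preference driven the same way

def residuals(y, x):
    # Residuals of a least-squares fit of y on x (with an intercept).
    coeffs = np.polyfit(x, y, 1)
    return y - np.polyval(coeffs, x)

raw = np.corrcoef(murder, pref)[0, 1]
partial = np.corrcoef(residuals(murder, gnp), residuals(pref, gnp))[0, 1]
print(f"raw correlation: {raw:.2f}")          # strong: murder "predicts" preference
print(f"controlling for GNP: {partial:.2f}")  # near zero: the confound explained it
```

This is the pattern the rejoinder reports: the murder rate’s apparent predictive power dissolves once national wealth is held constant, while the health indicators’ does not.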

The statistical tussle shows the difficulty of drawing firm conclusions from correlations alone. Dr DeBruine and Dr Brooks admit as much, and agree the dispute will not be settled until the factors that shape mating preferences are tested directly.

Another recent study by Dr DeBruine and others has tried to do just that. Its results lend further credence to the health hypothesis. This time, the researchers asked 124 women and 117 men to rate 15 pairs of male faces and 15 pairs of female ones for attractiveness. Each pair of images depicted the same set of features tweaked to make one appear ever so slightly manlier than the other (if the face was male) or more feminine (if it was female). Some were also made almost imperceptibly lopsided. Symmetry, too, indicates a mate’s quality because in harsh environments robust genes are needed to ensure even bodily development.

Next, the participants were shown another set of images, depicting objects that elicit varying degrees of disgust, such as a white cloth either stained with what looked like a bodily fluid, or a less revolting blue dye. Disgust is widely assumed to be another adaptation, one that warns humans to stay well away from places where germs and other pathogens may be lurking. So, according to Dr DeBruine’s hypothesis, people shown the more disgusting pictures ought to respond with an increased preference for masculine lads and feminine lasses, and for the more symmetrical countenances.

That is precisely what happened when they were asked to rate the same set of faces one more time. But it only worked with the opposite sex; the revolting images failed to alter what either men or women found attractive about their own sex. This means sexual selection, not other evolutionary mechanisms, is probably at work.

More research is needed to confirm these observations and to see whether other factors, like witnessing violence, bear on human physiognomic proclivities. For now, though, the majority of males who do not resemble Brad Pitt may at least take comfort that this matters less if their surroundings remain spotless.



The truth about suicide bombers

Are they religious fanatics? Deluded ideologues? New research suggests something more mundane: They just want to commit suicide.

Qari Sami did something strange the day he killed himself. The university student from Kabul had long since grown a bushy, Taliban-style beard and favored the baggy tunics and trousers of the terrorists he idolized. He had even talked of waging jihad. But on the day in 2005 that he strapped the bomb to his chest and walked into the crowded Kabul Internet cafe, Sami kept walking — between the rows of tables, beyond the crowd, along the back wall, until he was in the bathroom, with the door closed.

And that is where, alone, he set off his bomb.

The blast killed a customer and a United Nations worker, and injured five more. But the carnage could have been far worse. Brian Williams, an associate professor of Islamic studies at the University of Massachusetts Dartmouth, was in Afghanistan at the time. One day after the attack, he stood before the cafe’s hollowed-out wreckage and wondered why any suicide bomber would do what Sami had done: deliberately walk away from the target before setting off the explosives. “[Sami] was the one that got me thinking about the state of mind of these guys,” Williams said.

Eventually a fuller portrait emerged. Sami was a young man who kept to himself, a brooder. He was upset by the US forces’ ouster of the Taliban in the months following 9/11 — but mostly Sami was just upset. He took antidepressants daily. One of Sami’s few friends told the media he was “depressed.”

Today Williams thinks that Sami never really cared for martyrdom; more likely, he was suicidal. “That’s why he went to the bathroom,” Williams said.

The traditional view of suicide bombers is well established, and backed by the scholars who study them. The bombers are, in the post-9/11 age, often young, ideologically driven men and women who hate the laissez-faire norms of the West — or at least the occupations and wars of the United States — because they contradict the fundamentalist interpretations that animate the bombers’ worldview. Their deaths are a statement, then, as much as they are the final act of one’s faith; and as a statement they have been quite effective. They propagate future deaths, as terrorist organizers use a bomber’s martyrdom as propaganda for still more suicide terrorism.

But Williams is among a small cadre of scholars from across the world pushing the rather contentious idea that some suicide bombers may in fact be suicidal. At the forefront is the University of Alabama’s Adam Lankford, who recently published an analysis of suicide terrorism in the journal Aggression and Violent Behavior. Lankford cites Israeli scholars who interviewed would-be Palestinian suicide bombers. These scholars found that 40 percent of the terrorists showed suicidal tendencies; 13 percent had made previous suicide attempts, unrelated to terrorism. Lankford finds Palestinian and Chechen terrorists who are financially insolvent, recently divorced, or in debilitating health in the months prior to their attacks. A 9/11 hijacker, in his final note to his wife, describing how ashamed he is to have never lived up to her expectations. Terrorist recruiters admitting they look for the “sad guys” for martyrdom.

For Lankford and like-minded thinkers, changing the perception of the suicide bomber changes the focus of any mission that roots out terrorism. If the suicide bomber can be viewed as something more than a brainwashed, religiously fervent automaton, anticipating a paradise of virgins in the clouds, then that suicide bomber can be seen as a nuanced person, encouraging a greater curiosity about the terrorist, Lankford thinks. The more the terrorist is understood, the less damage the terrorist can cause.

“Changing perceptions can save lives,” Lankford said.

Islam forbids suicide. Of the world’s three Abrahamic faiths, “The Koran has the only scriptural prohibition against it,” said Robert Pape, a professor at the University of Chicago who specializes in the causes of suicide terrorism. The phrase suicide bomber itself is a Western conception, and a pretty foul one at that: an egregious misnomer in the eyes of Muslims, especially from the Middle East. For the Koran distinguishes between suicide and, as the book says, “the type of man who gives his life to earn the pleasure of Allah.” The latter is a courageous Fedayeen — a martyr. Suicide is a problem, but martyrdom is not.

For roughly 1,400 years, since the time of the Prophet Muhammad, scholars have accepted not only the ubiquity of martyrdom in the Muslim world but the strict adherence to its principles by those who participate in it: A lot of people have died, and keep dying, for a cause. Only recently, and sometimes only reluctantly, has the why of martyrdom been challenged.

Ariel Merari is a retired professor of psychology at Tel Aviv University. After the Beirut barracks bombing in 1983 — in which a terrorist, Ismail Ascari, drove a truck bomb into a United States Marine barracks, killing 241 American servicemen — Merari began investigating the motives of Ascari and the terrorist group behind the attack, Hezbollah. Though the bombing came during the Lebanese Civil War, Merari wondered whether it was less a battle within the conflict than a means chosen by one man, Ascari, to end his life. By 1990, Merari had published a paper asking the rest of academia to consider whether suicide bombers were actually suicidal. “But this was pretty much speculative, this paper,” Merari said.

In 2002, he approached a group of 15 would-be suicide bombers — Palestinians arrested and detained moments before their attacks — and asked if he could interview them. Remarkably, they agreed. “Nobody” — no scholar — “had ever been able to do something like this,” Merari said. He also approached 14 detained terrorist organizers. Some of the organizers had university degrees and were intrigued by the fact that Merari wanted to understand them. They, too, agreed to be interviewed. Merari was ecstatic.

Fifty-three percent of the would-be bombers showed “depressive tendencies” — melancholy, low energy, tearfulness, the study found — whereas 21 percent of the organizers exhibited the same. Furthermore, 40 percent of the would-be suicide bombers expressed suicidal tendencies; one talked openly of slitting his wrists after his father died. But the study found that none of the terrorist organizers were suicidal.

The paper was published last year in the journal Terrorism and Political Violence. Adam Lankford read it in his office at the University of Alabama. The results confirmed what he’d been thinking. The criminal justice professor had published a book, “Human Killing Machines,” about the indoctrination of ordinary people as agents for terrorism or genocide. Merari’s paper touched on themes he’d explored in his book, but the paper also gave weight to the airy speculation Lankford had heard a few years earlier in Washington, D.C., while he was earning his PhD from American University. There, Lankford had helped coordinate antiterrorism forums with the State Department for high-ranking military and security personnel. And it was at these forums, from Third World-country delegates, that Lankford first began to hear accounts of suicide bombers who may have had more than martyrdom on their minds. “That’s what sparked my interest,” he said.

He began an analysis of the burgeoning, post-9/11 literature on suicide terrorism, poring over the studies that inform the thinking on the topic. Lankford’s paper was published this July. In it, he found stories similar to Merari’s: bombers who unwittingly revealed suicidal tendencies in, say, their martyrdom videos, recorded moments before the attack; and organizers who valued their own lives too much to end them, so they recruited others, often from the poorest, bleakest villages.

But despite the accounts from their own published papers, scholar after scholar had dismissed the idea of suicidality among bombers. Lankford remains incredulous. “This close-mindedness has become a major barrier to scholarly progress,” Lankford said.

Not everyone is swayed by his argument. Mia Bloom is a fellow at the International Center for the Study of Terrorism at Penn State University and the author of the book “Dying to Kill: The Allure of Suicide Terror.” “I would be hesitant to agree with Mr. Lankford,” she said. “You don’t want to conflate the Western ideas of suicide with something that is, in the Middle East, a religious ceremony.” For her, “being a little bit wistful” during a martyrdom video is not an otherwise hidden window into a bomber’s mind. Besides, most suicide bombers “are almost euphoric” in their videos, she said. “Because they know that before the first drop of blood hits the ground, they’re going to be with Allah.” (Lankford counters that euphoria, moments before one’s death, can also be a symptom of the suicidal person.)

One study in the academic literature directly refutes Lankford’s claim: “Suicide Terrorists: Are They Suicidal?” by Ellen Townsend of the University of Nottingham, published in the journal Suicide and Life-Threatening Behavior in 2007. (The answer is a resounding “no.”)

Townsend’s paper was an analysis of empirical research on suicide terrorism — the scholars who’d talked with the people who knew the attackers. In Lankford’s own paper a few years after Townsend’s, he attacked her methodology: relying as she did on the accounts of a martyr’s family members and friends, who, Lankford wrote, “may lie to protect the ‘heroic’ reputations of their loved ones.”

When reached by phone, Townsend had a wry chuckle for Lankford’s “strident” criticism of her work. Yes, in the hierarchy of empirical research, the sort of interviews on which her paper is based have weaknesses: A scholar can’t observe everything, can’t control for all biases. “But that’s still stronger evidence than the anecdotes in Lankford’s paper,” Townsend said.

Robert Pape, at the University of Chicago, agrees. “The reason Merari’s view” — and by extension, Lankford’s — “is so widely discredited is that we have a handful of incidents of what looks like suicide and we have over 2,500 suicide attackers. We have literally hundreds and hundreds of stories where religion is a factor — and revenge, too….To put his idea forward, [Lankford] would need to have 100 or more stories or anecdotes to even get in the game.”

He’s working on that. Lankford’s forthcoming study, to be published early next year, is “far more robust” than his first: a list of more than 75 suicide terrorists and why they were likely suicidal. He cites a Palestinian woman who, five months after lighting herself on fire in her parents’ kitchen, attempted a return to the hospital that saved her life. But this time she approached with a pack of bombs wrapped around her body, working as an “ideologue” in the service of the al-Aqsa Martyrs Brigade.

Lankford writes of al Qaeda-backed terrorists in Iraq who would target and rape local women, and then see to it that the victims were sent to Samira Ahmed Jassim. Jassim would convince these traumatized women that the only way to escape public scorn was martyrdom. She was so successful she became known as the Mother of Believers. “If you just needed true believers, you wouldn’t need them to be raped first,” Lankford said in an interview.

Lankford is also intrigued by the man who in some sense launched the current study of suicide terrorism: Mohammed Atta, the ringleader behind the 9/11 hijacking. “It’s overwhelming, his traits of suicidality,” Lankford said. An isolated, neglected childhood, pathologically ashamed of any sexual expression. “According to the National Institute of Mental Health there are 11 signs, 11 traits and symptoms for a man being depressed,” Lankford said. “Atta exhibited eight of them.”

If Atta were seen as something more than a martyr, or rather something other than one, the next Atta would not have the same effect on the world. That’s Lankford’s hope anyway. But transporting a line of thought from the halls of academia to the chambers of Congress or onto field agents’ dossiers is no easy task. Lankford said he has not heard from anyone in the government regarding his work. And even if the idea does reach a broader audience in the West, there is still the problem of convincing those in the Middle East of its import. Pape, at the University of Chicago, said people in the Muslim world commit suicide at half the rate they do in the Jewish or Christian world. The act is scorned, which makes it all the more difficult to accept any behaviors or recurring thoughts that might lead to it.

Still, there is reason for Lankford to remain hopeful. The Israeli government, for one, has worked closely with Merari and his work on suicidal tendencies among Palestinian terrorists. Then there is Iraq. Iraq is on the verge of autonomy for many reasons, but one of them is the United States’ decision to work with Iraqis instead of against them — and, more fundamentally, to understand them. Lankford thinks that if the same inquisitiveness were applied to suicide bombers and their motives, “the violence should decrease.”

Paul Kix is a senior editor at Boston magazine and a contributing writer for ESPN the Magazine.



All the president’s books

In my two years working in the president’s office at Harvard, before I was laid off in spring, I gave myself the job of steward of her books. Gift books would arrive in the mail, or from campus visitors, or from her hosts when she traveled; books by Harvard professors were kept on display in reception or in storage at our Massachusetts Hall office; books flowed in from publishers, or authors seeking blurbs, or self-published authors of no reputation or achievement, who sometimes sent no more than loosely bound manuscripts.

I took charge of the president’s books because it was my assigned job to write thank-you letters for them. I would send her the books and the unsigned draft replies on presidential letterhead; for each one, she sent me back the signed letter and, most of the time, the book, meaning she had no further use for it. Some books she would keep, but seldom for very long, which meant those came back to me too, in one of the smaller offices on the third floor of Mass Hall where there was no room to put them. Furthermore, they weren’t so easily disposed of. Often they bore inscriptions, to President Drew Faust or to her and her husband from people they knew; and even if the volume was something rather less exalted — a professor from India sending his management tome or a book of Hindi poems addressed, mysteriously, to “Sir” or to the “vice-chancellor of Harvard University” — these books obviously couldn’t end up in a secondhand bookshop or charity bin or anywhere they could cause embarrassment. All were soon moved to an overflow space at the very end of the hall, coincidentally looking out at a donation bin for books at a church across the street.

One might feel depressed sitting amid so many unwanted books — so much unread knowledge and overlooked experience — but tending President Faust’s books became my favorite part of the job. No one noticed or interfered in what I did, which is uncommon in a president’s office like Harvard’s, where everything is scrutinized. Even a thank-you note can say too much. I developed my own phrase for these notes — “I look forward to spending some time with it” — as a substitute for saying “I look forward to reading it,” because the president can’t possibly read all the books she receives, and there was always the chance she would run into an author who might ask if she’d read his book yet.

Any Harvard president attracts books from supplicants, and this particular president attracted her own subcategory. Many books came from publishers or authors not at all shy about requesting a presidential blurb. These were easy to decline, and became easy to decline even when they came from the president’s friends, colleagues, acquaintances, neighbors, and others met over a distinguished career as a Civil War historian. This was the subcategory: Thanks to her specialty, we were building up a large collection of Civil War books, galleys and unpublished manuscripts — not just professional monographs, but amateurish family or local histories. These soon filled the overflow space in Massachusetts Hall, where water leaking from the roof during the unusual March rainstorms resulted in our having to discard several.

For everyone who sent us a book, the signed note back from the president mattered more than the book itself; both sides presumably understood that the president could buy or obtain any book she actually needed. The replies were signed by her — no auto-pen — which meant that even if she didn’t quite read your book the president still held it in her hands even for a moment, perhaps scribbling something at the bottom of her note with a fine black pen, or crossing out the “Ms” or “Professor” heading and substituting the author’s first name.

I had all kinds of plans for these books. The inscribed books we had to keep, of course, no matter how dire or dreadful. (The archives would want its pick of them anyway, deciding which books would become keepsakes of this particular era at Harvard.) But the many good titles that remained could go to struggling small foreign universities or schools, to our soldiers and Marines overseas, or to local libraries as an act of goodwill from a powerful and oft-maligned neighbor. They could go to the Allston branch of the Boston Public Library, for instance, perhaps to be dubbed “the president’s collection,” with its own shelving but freely available to Allstonians to read or borrow.

None of these ideas came to fruition. All of them would have required me to rise to a realm where I was no longer in charge — indeed, where I didn’t have a foothold. I would have to call meetings, bring bigger players to the table. Harvard’s top bureaucracy is actually quite small, and most of it was, literally, in my immediate presence: Two doors to the left was one vice president; two doors to the right, around a tight corner, was another. But these were big-gesture folks alongside a resolutely small-gesture one (me), and without an intermediary to help build support for my ideas, my books weren’t going anywhere except, once, into a cardboard box outside my office just before Christmas, where I encouraged staff to help themselves. Perhaps two dozen books, half of what I started the box with, went out that way.

In all this, the important thing was that books were objects to be honored, not treated as tiresome throwaways, and that everyone in the building knew this. Books are how, traditionally, universities are built: John Harvard was not the founder of Harvard University but a clergyman who, two years after its founding, bequeathed it his library. I used to joke that the most boring book in our collection was the volume called the “Prince Takamado Trophy All Japan Inter-Middle School English Oratorical Contest,” but if I hear it isn’t still on a shelf somewhere in Mass Hall 20 years from now, I won’t be the only one who’s disappointed.

Eric Weinberger has reviewed books in the Globe since 2000, and taught writing at Harvard for 10 years.



In China’s Orbit

After 500 years of Western predominance, Niall Ferguson argues, the world is tilting back to the East.

“We are the masters now.” I wonder if President Barack Obama saw those words in the thought bubble over the head of his Chinese counterpart, Hu Jintao, at the G20 summit in Seoul last week. If the president was hoping for change he could believe in—in China’s currency policy, that is—all he got was small change. Maybe Treasury Secretary Timothy Geithner also heard “We are the masters now” as the Chinese shot down his proposal for capping imbalances in global current accounts. Federal Reserve Chairman Ben Bernanke got the same treatment when he announced a new round of “quantitative easing” to try to jump-start the U.S. economy, a move described by one leading Chinese commentator as “uncontrolled” and “irresponsible.”

“We are the masters now.” That was certainly the refrain that I kept hearing in my head when I was in China two weeks ago. It wasn’t so much the glitzy, Olympic-quality party I attended in the Tai Miao Temple, next to the Forbidden City, that made this impression. The displays of bell ringing, martial arts and all-girl drumming are the kind of thing that Western visitors expect. It was the understated but unmistakable self-confidence of the economists I met that told me something had changed in relations between China and the West.

One of them, Cheng Siwei, explained over dinner China’s plan to become a leader in green energy technology. Between swigs of rice wine, Xia Bin, an adviser to the People’s Bank of China, outlined the need for a thorough privatization program, “including even the Great Hall of the People.” And in faultless English, David Li of Tsinghua University confessed his dissatisfaction with the quality of Chinese Ph.D.s.

You could not ask for smarter people with whom to discuss the two most interesting questions in economic history today: Why did the West come to dominate not only China but the rest of the world in the five centuries after the Forbidden City was built? And is that period of Western dominance now finally coming to an end?

In a brilliant paper that has yet to be published in English, Mr. Li and his co-author Guan Hanhui demolish the fashionable view that China was economically neck-and-neck with the West until as recently as 1800. Per capita gross domestic product, they show, stagnated in the Ming era (1402-1626) and was significantly lower than that of pre-industrial Britain. China still had an overwhelmingly agricultural economy, with low-productivity cultivation accounting for 90% of GDP. And for a century after 1520, the Chinese national savings rate was actually negative. There was no capital accumulation in late Ming China; rather the opposite.

The story of what Kenneth Pomeranz, a history professor at the University of California, Irvine, has called “the Great Divergence” between East and West began much earlier. Even the late economist Angus Maddison may have been over-optimistic when he argued that in 1700 the average inhabitant of China was probably slightly better off than the average inhabitant of the future United States. Mr. Maddison was closer to the mark when he estimated that, in 1600, per capita GDP in Britain was already 60% higher than in China.

For the next several hundred years, China continued to stagnate and, in the 20th century, even to retreat, while the English-speaking world, closely followed by northwestern Europe, surged ahead. By 1820 U.S. per capita GDP was twice that of China; by 1870 it was nearly five times greater; by 1913 the ratio was nearly 10 to one.

Despite the painful interruption of the Great Depression, the U.S. suffered nothing so devastating as China’s wretched mid-20th century ordeal of revolution, civil war, Japanese invasion, more revolution, man-made famine and yet more (“cultural”) revolution. In 1968 the average American was 33 times richer than the average Chinese, using figures calculated on the basis of purchasing power parity (allowing for the different costs of living in the two countries). Calculated in current dollar terms, the differential at its peak was more like 70 to 1.

This was the ultimate global imbalance, the result of centuries of economic and political divergence. How did it come about? And is it over?

As I’ve researched my forthcoming book over the past two years, I’ve concluded that the West developed six “killer applications” that “the Rest” lacked. These were:

• Competition: Europe was politically fragmented, and within each monarchy or republic there were multiple competing corporate entities.

• The Scientific Revolution: All the major 17th-century breakthroughs in mathematics, astronomy, physics, chemistry and biology happened in Western Europe.

• The rule of law and representative government: This optimal system of social and political order emerged in the English-speaking world, based on property rights and the representation of property owners in elected legislatures.

• Modern medicine: All the major 19th- and 20th-century advances in health care, including the control of tropical diseases, were made by Western Europeans and North Americans.

• The consumer society: The Industrial Revolution took place where there was both a supply of productivity-enhancing technologies and a demand for more, better and cheaper goods, beginning with cotton garments.

• The work ethic: Westerners were the first people in the world to combine more extensive and intensive labor with higher savings rates, permitting sustained capital accumulation.

Those six killer apps were the key to Western ascendancy. The story of our time, which can be traced back to the reign of the Meiji Emperor in Japan (1867-1912), is that the Rest finally began to download them. It was far from a smooth process. The Japanese had no idea which elements of Western culture were the crucial ones, so they ended up copying everything, from Western clothes and hairstyles to the practice of colonizing foreign peoples. Unfortunately, they took up empire-building at precisely the moment when the costs of imperialism began to exceed the benefits. Other Asian powers—notably India—wasted decades on the erroneous premise that the socialist institutions pioneered in the Soviet Union were superior to the market-based institutions of the West.

Beginning in the 1950s, however, a growing band of East Asian countries followed Japan in mimicking the West’s industrial model, beginning with textiles and steel and moving up the value chain from there. The downloading of Western applications was now more selective. Competition and representative government did not figure much in Asian development, which instead focused on science, medicine, the consumer society and the work ethic (less Protestant than Max Weber had thought). Today Singapore is ranked third in the World Economic Forum’s assessment of competitiveness. Hong Kong is 11th, followed by Taiwan (13th), South Korea (22nd) and China (27th). This is roughly the order, historically, in which these countries Westernized their economies.

Today per capita GDP in China is 19% that of the U.S., compared with 4% when economic reform began just over 30 years ago. Hong Kong, Japan and Singapore were already there as early as 1950; Taiwan got there in 1970, and South Korea got there in 1975. According to the Conference Board, Singapore’s per capita GDP is now 21% higher than that of the U.S., Hong Kong’s is about the same, Japan’s and Taiwan’s are about 25% lower, and South Korea’s 36% lower. Only a foolhardy man would bet against China’s following the same trajectory in the decades ahead.

China’s has been the biggest and fastest of all the industrialization revolutions. In the space of 26 years, China’s GDP grew by a factor of 10. It took the U.K. 70 years after 1830 to grow by a factor of four. According to the International Monetary Fund, China’s share of global GDP (measured in current prices) will pass the 10% mark in 2013. Goldman Sachs continues to forecast that China will overtake the U.S. in terms of GDP in 2027, just as it recently overtook Japan.
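The speed comparison above is really a statement about compound annual growth, and a little arithmetic makes the gap vivid. A minimal sketch: the tenfold and fourfold growth factors come from the article, while the annualized rates are derived here for illustration.

```python
# Implied compound annual growth rates behind the article's comparison:
# China's GDP grew tenfold in 26 years; the U.K. after 1830 took 70 years
# to grow fourfold. The factors are from the article; the rates are derived.

def cagr(growth_factor: float, years: int) -> float:
    """Compound annual growth rate implied by total growth over a period."""
    return growth_factor ** (1 / years) - 1

china_rate = cagr(10, 26)  # roughly 9.3% per year
uk_rate = cagr(4, 70)      # roughly 2.0% per year

print(f"China: {china_rate:.1%} per year over 26 years")
print(f"U.K.:  {uk_rate:.1%} per year over 70 years")
```

In other words, China's industrialization has run at more than four times the annual pace of Britain's, which is what makes the projected crossover dates above plausible rather than fanciful.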

But in some ways the Asian century has already arrived. China is on the brink of surpassing the American share of global manufacturing, having overtaken Germany and Japan in the past 10 years. China’s biggest city, Shanghai, already sits atop the ranks of the world’s megacities, with Mumbai right behind; no American city comes close.

Nothing is more certain to accelerate the shift of global economic power from West to East than the looming U.S. fiscal crisis. With a debt-to-revenue ratio of 312%, Greece is in dire straits already. But the debt-to-revenue ratio of the U.S. is 358%, according to Morgan Stanley. The Congressional Budget Office estimates that interest payments on the federal debt will rise from 9% of federal tax revenues to 20% in 2020, 36% in 2030 and 58% in 2040. Only America’s “exorbitant privilege” of being able to print the world’s premier reserve currency gives it breathing space. Yet this very privilege is under mounting attack from the Chinese government.

For many commentators, the resumption of quantitative easing by the Federal Reserve has appeared to spark a currency war between the U.S. and China. If the “Chinese don’t take actions” to end the manipulation of their currency, President Obama declared in New York in September, “we have other means of protecting U.S. interests.” The Chinese premier Wen Jiabao was quick to respond: “Do not work to pressure us on the renminbi rate…. Many of our exporting companies would have to close down, migrant workers would have to return to their villages. If China saw social and economic turbulence, then it would be a disaster for the world.”

Such exchanges are a form of pi ying xi, China’s traditional shadow puppet theater. In reality, today’s currency war is between “Chimerica”—as I’ve called the united economies of China and America—and the rest of the world. If the U.S. prints money while China effectively still pegs its currency to the dollar, both parties benefit. The losers are countries like Indonesia and Brazil, whose real trade-weighted exchange rates have appreciated since January 2008 by 18% and 17%, respectively.

But who now gains more from this partnership? With China’s output currently 20% above its pre-crisis level and that of the U.S. still 2% below, the answer seems clear. American policy-makers may utter the mantra that “they need us as much as we need them” and refer ominously to Lawrence Summers’s famous phrase about “mutually assured financial destruction.” But the Chinese already have a plan to reduce their dependence on dollar reserve accumulation and subsidized exports. It is a strategy not so much for world domination on the model of Western imperialism as for reestablishing China as the Middle Kingdom—the dominant tributary state in the Asia-Pacific region.

If I had to summarize China’s new grand strategy, I would do it, Chinese-style, as the Four “Mores”: Consume more, import more, invest abroad more and innovate more. In each case, a change of economic strategy pays a handsome geopolitical dividend.

By consuming more, China can reduce its trade surplus and, in the process, endear itself to its major trading partners, especially the other emerging markets. China recently overtook the U.S. as the world’s biggest automobile market (14 million sales a year, compared to 11 million), and its demand is projected to rise tenfold in the years ahead.

By 2035, according to the International Energy Agency, China will be using a fifth of all global energy, a 75% increase since 2008. It accounted for about 46% of global coal consumption in 2009, the World Coal Institute estimates, and consumes a similar share of the world’s aluminum, copper, nickel and zinc production. Last year China used twice as much crude steel as the European Union, United States and Japan combined.

Such figures translate into major gains for the exporters of these and other commodities. China is already Australia’s biggest export market, accounting for 22% of Australian exports in 2009. It buys 12% of Brazil’s exports and 10% of South Africa’s. It has also become a big purchaser of high-end manufactured goods from Japan and Germany. Once China was mainly an exporter of low-price manufactures. Now that it accounts for fully a fifth of global growth, it has become the most dynamic new market for other people’s stuff. And that wins friends.

The Chinese are justifiably nervous, however, about the vagaries of world commodity prices. How could they feel otherwise after the huge price swings of the past few years? So it makes sense for them to invest abroad more. In January 2010 alone, the Chinese made direct investments worth a total of $2.4 billion in 420 overseas enterprises in 75 countries and regions. The overwhelming majority of these were in Asia and Africa. The biggest sectors were mining, transportation and petrochemicals. Across Africa, the Chinese mode of operation is now well established. Typical deals exchange highway and other infrastructure investments for long leases of mines or agricultural land, with no questions asked about human rights abuses or political corruption.

Growing overseas investment in natural resources not only makes sense as a diversification strategy to reduce China’s exposure to the risk of dollar depreciation. It also allows China to increase its financial power, not least through its vast and influential sovereign wealth fund. And it justifies ambitious plans for naval expansion. In the words of Rear Admiral Zhang Huachen, deputy commander of the East Sea Fleet: “With the expansion of the country’s economic interests, the navy wants to better protect the country’s transportation routes and the safety of our major sea-lanes.” The South China Sea has already been declared a “core national interest,” and deep-water ports are projected in Pakistan, Burma and Sri Lanka.

Finally, and contrary to the view that China is condemned to remain an assembly line for products “designed in California,” the country is innovating more, aiming to become, for example, the world’s leading manufacturer of wind turbines and photovoltaic panels. In 2007 China overtook Germany in terms of new patent applications. This is part of a wider story of Eastern ascendancy. In 2008, for the first time, the number of patent applications from China, India, Japan and South Korea exceeded those from the West.

The dilemma posed to the “departing” power by the “arriving” power is always agonizing. The cost of resisting Germany’s rise was heavy indeed for Britain; it was much easier to slide quietly into the role of junior partner to the U.S. Should America seek to contain China or to accommodate it? Opinion polls suggest that ordinary Americans are no more certain how to respond than the president. In a recent survey by the Pew Research Center, 49% of respondents said they did not expect China to “overtake the U.S. as the world’s main superpower,” but 46% took the opposite view.

Coming to terms with a new global order was hard enough after the collapse of the Soviet Union, which went to the heads of many Western commentators. (Who now remembers talk of American hyperpuissance without a wince?) But the Cold War lasted little more than four decades, and the Soviet Union never came close to overtaking the U.S. economically. What we are living through now is the end of 500 years of Western predominance. This time the Eastern challenger is for real, both economically and geopolitically.

The gentlemen in Beijing may not be the masters just yet. But one thing is certain: They are no longer the apprentices.

Niall Ferguson is a professor of history at Harvard University and a professor of business administration at the Harvard Business School. His next book, “Civilization: The West and the Rest,” will be published in March.


The God-Science Shouting Match: A Response

In reading the nearly 700 reader responses to my Oct. 17 essay for The Stone (“Morals Without God?”), I notice how many readers are relieved to see that there are shades of gray when it comes to the question of whether morality requires God. I believe that such a discussion needs to revolve around both the distant past, in which religion likely played little or no role if we go back far enough, and modern times, in which it is hard to disentangle morality and religion. The latter point seemed obvious to me, yet proved controversial. Even though 90 percent of my text questions the religious origins of human morality, and wonders if we need a God to be good, it is the other 10 percent — in which I tentatively assign a role to religion — that drew the most ire. Atheists, it seems (at least those who responded here), don’t accept anything less than 100 percent agreement with their position.

To have a productive debate, religion needs to recognize the power of the scientific method and the truths it has revealed, but its opponents need to recognize that one cannot simply dismiss a social phenomenon found in every major society. If humans are inherently religious, or at least show rituals related to the supernatural, there is a big question to be answered. The issue is not whether or not God exists — which I find to be a monumentally uninteresting question defined, as it is, by the narrow parameters of monotheism — but why humans universally feel the need for supernatural entities. Is this just to stay socially connected or does it also underpin morality? And if so, what will happen to morality in its absence?

Just raising such an obvious issue has become controversial in an atmosphere in which public forums seem to consist of pro-science partisans or pro-religion partisans, and nothing in between. How did we arrive at this level of polarization, this small-mindedness, as if we are taking part in the Oxford Debating Society, where all that matters is winning or losing? It is unfortunate when, in discussing how to lead our lives and why to be good — very personal questions — we end up with a shouting match. There are in fact no answers to these questions, only approximations, and while science may be an excellent source of information it is simply not designed to offer any inspiration in this regard. It used to be that science and religion went together, and in fact (as I tried to illustrate with Bosch’s paintings) Western science ripened in the bosom of Christianity and its explicit desire for truth. Ironically, even atheism may be looked at as a product of this desire, as explained by the philosopher John Gray:

Christianity struck at the root of pagan tolerance of illusion. In claiming that there is only one true faith, it gave truth a supreme value it had not had before. It also made disbelief in the divine possible for the first time. The long-delayed consequence of the Christian faith was an idolatry of truth that found its most complete expression in atheism. (Straw Dogs, 2002).

Those who wish to remove religion and define morality as the pursuit of scientifically defined well-being (à la Sam Harris) should read up on earlier attempts in this regard, such as the Utopian novel “Walden Two” by B. F. Skinner, who thought that humans could achieve greater happiness and productivity if they just paid better attention to the science of reward and punishment. Skinner’s colleague John Watson even envisioned “baby factories” that would dispense with the “mawkish” emotions humans are prone to, an idea applied with disastrous consequences in Romanian orphanages. And talking of Romania, was not the entire Communist experiment an attempt at a society without God? Apart from the question of how moral these societies turned out to be, I find it intriguing that over time Communism began to look more and more like a religion itself. The singing, marching, reciting of poems and pledges and waving in the air of Little Red Books smacked of holy fervor, hence my remark that any movement that tries to promote a certain moral agenda — even while denying God — will soon look like any old religion. Since people look up to those perceived as more knowledgeable, anyone who wants to promote a certain social agenda, even one based on science, will inevitably come face to face with the human tendency to follow leaders and let them do the thinking.

What I would love to see is a debate among moderates. Perhaps it is an illusion that this can be achieved on the Internet, given how it magnifies disagreements, but I do think that most people will be open to a debate that respects both the beliefs held by many and the triumphs of science. There is no obligation for non-religious people to hate religion, and many believers are open to interrogating their own convictions. If the radicals on both ends are unable to talk with each other, this should not keep the rest of us from doing so.

Frans B. M. de Waal is a biologist interested in primate behavior. He is C. H. Candler Professor in Psychology, and Director of the Living Links Center at the Yerkes National Primate Research Center at Emory University, in Atlanta, and a member of the National Academy of Sciences and the Royal Dutch Academy of Sciences. His latest book is “The Age of Empathy.”


Stories vs. Statistics

Half a century ago the British scientist and novelist C. P. Snow bemoaned the estrangement of what he termed the “two cultures” in modern society — the literary and the scientific. These days, there is some reason to celebrate better communication between these domains, if only because of the increasingly visible salience of scientific ideas. Still a gap remains, and so I’d like here to take an oblique look at a few lesser-known contrasts and divisions between subdomains of the two cultures, specifically those between stories and statistics.

I’ll begin by noting that the notions of probability and statistics are not alien to storytelling. From the earliest of recorded histories there were glimmerings of these concepts, which were reflected in everyday words and stories. Consider the notions of central tendency — average, median, mode, to name a few. They most certainly grew out of workaday activities and led to words such as (in English) “usual,” “typical,” “customary,” “most,” “standard,” “expected,” “normal,” “ordinary,” “medium,” “commonplace,” “so-so,” and so on. The same is true of the notions of statistical variation — standard deviation, variance, and the like. Words such as “unusual,” “peculiar,” “strange,” “original,” “extreme,” “special,” “unlike,” “deviant,” “dissimilar” and “different” come to mind. It is hard to imagine even prehistoric humans not possessing some sort of rudimentary idea of the typical or of the unusual. Any situation or entity — storms, animals, rocks — that recurred again and again would, it seems, lead naturally to these notions. These and other fundamentally scientific concepts have in one way or another been embedded in the very idea of what a story is — an event distinctive enough to merit retelling — from cave paintings to “Gilgamesh” to “The Canterbury Tales,” onward.

The idea of probability itself is present in such words as “chance,” “likelihood,” “fate,” “odds,” “gods,” “fortune,” “luck,” “happenstance,” “random,” and many others. A mere acceptance of the idea of alternative possibilities almost entails some notion of probability, since some alternatives will come to be judged more likely than others. Likewise, the idea of sampling is implicit in words like “instance,” “case,” “example,” “cross-section,” “specimen” and “swatch,” and that of correlation is reflected in “connection,” “relation,” “linkage,” “conjunction,” “dependence” and the ever too ready “cause.” Even hypothesis testing and Bayesian analysis possess linguistic echoes in common phrases and ideas that are an integral part of human cognition and storytelling. With regard to informal statistics we’re a bit like Molière’s character who was shocked to find that he’d been speaking prose his whole life.

Despite the naturalness of these notions, however, there is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled. A drily named distinction from formal statistics is relevant: we’re said to commit a Type I error when we observe something that is not really there and a Type II error when we fail to observe something that is there. There is no way to always avoid both types, and we have different error thresholds in different endeavors, but the type of error a person feels more comfortable making may be telling. It gives some indication of their intellectual personality type, of which side of the two cultures (or maybe two coutures) divide they’re most comfortable on.

People who love to be entertained and beguiled or who particularly wish to avoid making a Type II error might be more apt to prefer stories to statistics. Those who don’t particularly like being entertained or beguiled or who fear the prospect of making a Type I error might be more apt to prefer statistics to stories. The distinction is not unrelated to that between those (61.389% of us) who view numbers in a story as providing rhetorical decoration and those who view them as providing clarifying information.

The so-called “conjunction fallacy” suggests another difference between stories and statistics. After reading a novel, it can sometimes seem odd to say that the characters in it don’t exist. The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true. Congressman Smith is known to be cash-strapped and lecherous. Which is more likely? That Smith took a bribe from a lobbyist, or that Smith took a bribe from a lobbyist, has taken money before, and spends it on luxurious “fact-finding” trips with various pretty young interns? Despite the coherent story the second alternative begins to flesh out, the first alternative is more likely. For any statements A, B, and C, the probability of A is always at least as great as the probability of A, B, and C together, since whenever A, B, and C all occur, A occurs, but not vice versa.

This is one of many cognitive foibles that reside in the nebulous area bordering mathematics, psychology and storytelling. In the classic illustration of the fallacy put forward by Amos Tversky and Daniel Kahneman, a woman named Linda is described. She is single, in her early 30s, outspoken, and exceedingly smart. A philosophy major in college, she has devoted herself to issues such as nuclear non-proliferation. So which of the following is more likely?

a.) Linda is a bank teller.

b.) Linda is a bank teller and is active in the feminist movement.

Although most people choose b.), this option is less likely since two conditions must be met in order for it to be satisfied, whereas only one of them is required for option a.) to be satisfied.
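The inequality behind the fallacy is easy to check numerically. Below is a minimal Monte Carlo sketch in Python; the two probabilities are invented purely for illustration (the essay assigns none), and any values would do:

```python
import random

random.seed(1)

# Hypothetical values, for illustration only; neither number comes from the essay.
P_TELLER = 0.05                # P(Linda is a bank teller)
P_FEMINIST_GIVEN_TELLER = 0.6  # P(active feminist | bank teller)

trials = 100_000
teller = teller_and_feminist = 0
for _ in range(trials):
    if random.random() < P_TELLER:
        teller += 1
        if random.random() < P_FEMINIST_GIVEN_TELLER:
            teller_and_feminist += 1

# Option b.) can never be observed more often than option a.):
# every "teller and feminist" outcome is also a "teller" outcome.
assert teller_and_feminist <= teller
print(teller / trials, teller_and_feminist / trials)
```

Whatever probabilities are plugged in, the conjunction’s frequency is bounded by the single event’s.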

(Incidentally, the conjunction fallacy is especially relevant to religious texts. Embedding the God character in a holy book’s very detailed narrative and building an entire culture around this narrative seems by itself to confer a kind of existence on Him.)

Yet another contrast between informal stories and formal statistics stems from the extensional/intensional distinction. Standard scientific and mathematical logic is termed extensional since objects and sets are determined by their extensions, which is to say by their member(s). Mathematical entities having the same members are the same even if they are referred to differently. Thus, in formal mathematical contexts, the number 3 can always be substituted for, or interchanged with, the square root of 9 or the largest whole number smaller than pi without affecting the truth of the statement in which it appears.

In everyday intensional (with an s) logic, things aren’t so simple since such substitution isn’t always possible. Lois Lane knows that Superman can fly, but even though Superman and Clark Kent are the same person, she doesn’t know that Clark Kent can fly. Likewise, someone may believe that Oslo is in Sweden, but even though Oslo is the capital of Norway, that person will likely not believe that the capital of Norway is in Sweden. Locutions such as “believes that” or “thinks that” are generally intensional and do not allow substitution of equals for equals.

The relevance of this to probability and statistics? Since they’re disciplines of pure mathematics, their appropriate logic is the standard extensional logic of proof and computation. But for applications of probability and statistics, which are what most people mean when they refer to them, the appropriate logic is informal and intensional. The reason is that an event’s probability, or rather our judgment of its probability, is almost always affected by its intensional context.

Consider the two boys problem in probability. Given that a family has two children and that at least one of them is a boy, what is the probability that both children are boys? The most common solution notes that there are four equally likely possibilities — BB, BG, GB, GG, the order of the letters indicating birth order. Since we’re told that the family has at least one boy, the GG possibility is eliminated and only one of the remaining three equally likely possibilities is a family with two boys. Thus the probability of two boys in the family is 1/3. But how do we come to think that, learn that, believe that the family has at least one boy? What if instead of being told that the family has at least one boy, we meet the parents who introduce us to their son? Then there are only two equally likely possibilities — the other child is a girl or the other child is a boy, and so the probability of two boys is 1/2.
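Both conditioning schemes can be checked with a quick simulation. A Python sketch (assuming boys and girls are equally likely and births are independent):

```python
import random

random.seed(0)
trials = 200_000

# Scheme 1: we are told only that the family has at least one boy.
both = at_least_one_boy = 0
for _ in range(trials):
    kids = [random.choice("BG") for _ in range(2)]
    if "B" in kids:
        at_least_one_boy += 1
        if kids == ["B", "B"]:
            both += 1
print(both / at_least_one_boy)  # close to 1/3

# Scheme 2: the parents introduce us to one child, chosen at random,
# and that child happens to be a boy.
both = met_a_boy = 0
for _ in range(trials):
    kids = [random.choice("BG") for _ in range(2)]
    if random.choice(kids) == "B":
        met_a_boy += 1
        if kids == ["B", "B"]:
            both += 1
print(both / met_a_boy)  # close to 1/2
```

The only thing that changes between the two runs is how we learn about the boy, which is exactly Paulos’s point about intensional context.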

Many probability problems and statistical surveys are sensitive to their intensional contexts (the phrasing and ordering of questions, for example). Consider this relatively new variant of the two boys problem. A couple has two children and we’re told that at least one of them is a boy born on a Tuesday. What is the probability the couple has two boys? Believe it or not, the Tuesday is important, and the answer is 13/27. If we discover the Tuesday birth in slightly different intensional contexts, however, the answer could be 1/3 or 1/2.
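The 13/27 answer can be verified the same way. Here is a sketch that conditions on families with at least one boy born on a designated weekday (day 0 standing in for Tuesday):

```python
import random

random.seed(0)
trials = 500_000

both_boys = qualifying = 0
for _ in range(trials):
    # Each child: (sex, day of week born), all 14 combinations equally likely.
    kids = [(random.choice("BG"), random.randrange(7)) for _ in range(2)]
    # Condition: at least one child is a boy born on day 0 ("Tuesday").
    if any(sex == "B" and day == 0 for sex, day in kids):
        qualifying += 1
        if all(sex == "B" for sex, _ in kids):
            both_boys += 1

print(both_boys / qualifying)  # close to 13/27, about 0.481
```

Exactly: P(at least one Tuesday boy) = 27/196 and P(two boys with at least one Tuesday boy) = 13/196, giving 13/27.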

Of course, the contrasts between stories and statistics don’t end here. Another example is the role of coincidences, which loom large in narratives, where they too frequently are invested with a significance that they don’t warrant probabilistically. The birthday paradox, small world links between people, psychics’ vaguely correct pronouncements, the sports pundit Paul the Octopus, and the various bible codes are all examples. In fact, if one considers any sufficiently large data set, such meaningless coincidences will naturally arise: the best predictor of the value of the S&P 500 stock index in the early 1990s was butter production in Bangladesh. Or examine the first letters of the months or of the planets: JFMAMJ-JASON-D or MVEMJ-SUN-P. Are JASON and SUN significant? Of course not. As I’ve written often, the most amazing coincidence of all would be the complete absence of all coincidences.
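Of the coincidences listed above, the birthday paradox is one that yields to exact computation: the chance of a shared birthday passes one half with just 23 people. A short sketch (ignoring leap years and assuming all 365 birthdays equally likely):

```python
from math import prod

def shared_birthday_prob(n: int) -> float:
    """Probability that at least two of n people share a birthday."""
    all_distinct = prod((365 - k) / 365 for k in range(n))
    return 1 - all_distinct

print(shared_birthday_prob(22))  # about 0.476
print(shared_birthday_prob(23))  # about 0.507: better than even money
```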

I’ll close with perhaps the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.

In the end, whether we resonate viscerally to King Lear’s predicament in dividing his realm among his three daughters or can’t help thinking of various mathematical apportionment ideas that may have helped him clarify his situation is probably beyond calculation. At different times and places most of us can, should, and do respond in both ways.

John Allen Paulos is Professor of Mathematics at Temple University and the author of several books, including “Innumeracy,” “Once Upon a Number,” and, most recently, “Irreligion.”


The Cold-Weather Counterculture Comes to an End

Not so long ago, any young man who was so inclined could ski all winter in the mountains of Colorado or Utah on a pauper’s budget. The earnings from a part-time job cleaning toilets or washing dishes were enough to keep him gliding down the mountain by day and buzzing on cheap booze by night, during that glorious adrenaline come-down that these days often involves an expensive hot-stone massage and is unashamedly referred to as “après-ski.”

He had a pretty good run, the American ski bum, but Jeremy Evans’s “In Search of Powder” suggests that the American West’s cold-weather counterculture is pretty much cashed. From Vail to Sun Valley, corporate-owned ski resorts have driven out family-run facilities, and America’s young college grads have mostly ceded control of the lift lines to students from south of the equator on summer break.

Mr. Evans, a newspaper reporter who himself “ignored the next logical step in adult life” to live in snowy Lake Tahoe, identifies with the graying powder hounds that fill his pages, and for the most part he shares their nostalgia for the way things used to be.

During the 1960s and 1970s, in alpine enclaves like Park City, Utah, and Aspen, Colo., hippies squatted in old miners’ shacks and clashed with rednecks. In Tahoe, Bay Area youths developed the liberated style of skiing known as “hot-dogging,” while in Jackson Hole, Wyo., stylish Europeans such as Jean-Claude Killy and Pepi Stiegler went one step further, inspiring generations of American youth to become daredevils on skis—and, eventually, to get paid for it.

Whether these ski bums intended to or not, they helped popularize the sport and make it profitable. What followed was reminiscent of urban gentrification. As the second-home owners took over, property prices outpaced local wages. Today’s would-be ski bum faces prohibitive commutes, and immigrant workers have taken over the sorts of menial jobs that carefree skier types once happily performed.

Skiing and snowboarding aren’t even a ski resort’s main attraction anymore. Rather, they are marketing tools used to boost real-estate sales and entice tourists who would just as soon go on a cruise or take a trip to Las Vegas. Four corporations—Vail Resorts, Booth Creek, Intrawest and American Skiing Co.—run most of the big mountains. Even Telluride, once considered remote and wild, plays host to Oprah Winfrey, Tom Cruise and a parade of summer festivals.

In 2002, Hal Clifford took the corporate ski industry to task in “Downhill Slide.” Mr. Evans’s book incorporates Mr. Clifford’s most salient findings, but his oral-history method limits his view of the topic. Mr. Evans would have done well, in particular, to mine the obvious connections between ski and surf culture. (Dick Barrymore’s landmark 1969 film, “Last of the Ski Bums,” was more or less an alpine remake of the classic 1966 surfer documentary “The Endless Summer.”)

Like surfing, skiing first went from sport to lifestyle during the 1960s and thus came of age with the baby boomers. They made skiing sexy and rebellious, and then they made it a big business. Two of America’s most exclusive mountain resorts, Beaver Creek (in Colorado) and Deer Valley (in Utah), opened around 1980—right when baby boomers hit a sweet spot in terms of athleticism and net worth. Now, as they age, golf courses and luxury spas have become de rigueur in big ski towns.

Another boomer legacy is the entire notion that the old-fashioned ski bum was a blessed soul who somehow belonged to the natural order. More likely, his was just a brief, Shangri-La moment. For ski towns in the West, the real problem is what will happen as the prosperous, once free-spirited baby-boomer generation begins to wane.

Mr. Hartman contributes to and


Drawing Funny

This is the ninth in a series.

The subject of this column is caricature, but I’m not going to explain or demonstrate it myself. When the art god was doling out the syrup of graphic wit, he must have slipped on a banana peel just as he got to my cup and most of it spilled out on the floor. This being the case, I have chosen three artists whose cups of graphic wit truly runneth over and whose work represents caricature at its highest and most droll level of accomplishment.

Two are friends of many years and are literary wits as well as celebrated artists: Edward Sorel, whose covers for The New Yorker are legendary, and Robert Grossman, whose animated films, comic strips and sculptures are both political and hilarious. The third artist, Tom Bachtell, creates stylish drawings for The New Yorker every week and, memorably, for many months played graphic games with George Bush’s eyebrows.

I asked each of the artists to create a caricature of Pablo Picasso and to give us whatever back story on their process they chose to share. I think the results show that in order to draw funny, it really helps to be able to free-associate with fish, ex-wives and square eyes.

So here’s Picasso — three ways.

Edward Sorel

Robert Grossman

Thought process: Picasso. Intense gaze. Makes sense in his case. One of his gimmicks was to put both eyes on one side of a face, which nature had only ever done in the instance of the flounder. Can I show him as a flounder?


Refining the flounder concept until I realize I’m the one who’s floundering.

Pablo in Art Heaven glaring down at the puny efforts of mere mortals.


Tom Bachtell

“I work in brush and ink. I drew the face a dozen times, playing with various brushes, strokes, line weight and other ways of applying the ink. I started to imagine the face on the surface of the paper and chase after it with the brush, trying to capture the squat, vigorous, self-confident poser that I see when I think of Picasso, those black eyes blazing out at the viewer. Since he often broke faces into different, distorted planes I felt free to do that, as well as making his eyes into squares and his nose into a Guernica-like protuberance.”


In the next column I introduce the challenge and the possibility of drawing the figure.

James McMullan, New York Times


Philosophers Through the Lens

I have spent almost a quarter century photographing philosophers. For the most part, philosophers exist, and have always existed, outside the public spotlight. Yet when we reflect upon those eras of humankind that burn especially bright, it is largely the philosophers that we remember. Despite being unknown in their own time, the philosophers of an era survive longer in collective memory than wealthy noblemen and politicians, or the popular figures of stage, song and stadium. Because of this disconnect between living fame and later recognition, we have less of a record of these thinkers than we should. Our museums are filled with busts and paintings of long-forgotten wealth and beauty instead of the philosophers who have so influenced contemporary politics and society. My aim in this project has been the modest one of making sure that, for this era at least, there is some record of the philosophers.

Judith Thomson, Cambridge, Mass., April 14, 2010

I did not initially plan to spend more than 20 years with philosophers. It was the fall of 1988. I was working for a number of different magazines, primarily The Face, taking photographs of the kind of cultural figures who typically rise to public awareness — musicians, artists, actors and novelists. One day I received an assignment to photograph the philosopher Sir Alfred Ayer. Ayer was dimly known to me, England being one of those rare countries in which each era has a few public philosophers — respected thinkers who are called upon to comment on the issues of the day. When I was growing up in the Midlands of the United Kingdom in the 1960s, that figure was Bertrand Russell. Later, he was replaced by Ayer. Still, I knew very little about Ayer other than what I recalled from snippets of the BBC’s “Question Time.”

I was told in advance that he was very ill, and that my time with him would be limited to 10 minutes. When I walked into the room, he was wearing an oxygen mask. There were two women in the room. I can’t remember how we got beyond those evident barriers — the social and the physical — but I remained with him for four hours. We talked about many things, but mainly the Second World War. Apparently, many Oxford philosophers had been involved in the war effort, in intelligence. I recall in particular a story Ayer told me about having saved De Gaulle’s life from a faction of the French resistance.

I can’t identify why I found him such a compelling and fascinating figure. Partly it was him. But it was also the fact that philosophers come with a certain combination of mystery and weight. Our discussion left me with a burning desire to meet more philosophers. That is how my project started.

Umberto Eco, New York City, Nov. 15, 2007

Philosophy is not the only profession I have cataloged. For example, I also have taken over the years many photographs of filmmakers. But my relationship with filmmakers is very different than my relationship with philosophers. My extensive experience with film gives me the ability to make my own judgments of relative merit. A sophisticated appreciation of film is something that many of us can and do cultivate. In the case of philosophers, however, I am, like most people, at sea. The philosophers whose work is most admired by other philosophers are very different from the philosophers who occasionally float to public consciousness. These are not people with connections to the larger world of media (one thing I have learned over these many years is that the cast of mind that leads one to philosophy is rarely one that lends itself to networking). I could only hope to be guided to them by those who had themselves struggled with its problems.

After my meeting with Ayer, I devised a plan to ask each philosopher I photographed for three names of philosophers they admired. Initially, I planned to meet and photograph perhaps 15 philosophers, and publish the results in a magazine. I certainly had no plan to spend the next quarter century pursuing philosophers around the globe. But Ayer had given me the names of four — Isaiah Berlin, Michael Dummett, Ted Honderich and Peter Strawson. Each of them in turn gave me three names, and there was not as much overlap as I had expected. My initial plan had to be modified. Soon, I settled on a formula. If a philosopher was mentioned by three different philosophers as someone whose work was important, I would photograph that philosopher. Of course, employing this formula required meeting many more philosophers. The idea of a short project with 15 photographs was rapidly shelved. To date, the list of those I’ve photographed is nearly 200 names long.

Sally Haslanger and Steve Yablo, New York City, July 14, 2010

Throughout my career I have had to pursue my work with philosophers while making a living with my other professional work, and the cost has sometimes been high. But like any artist who has completed a large and demanding project, I have had good fortune.

Early in my career, I lived not far from Oxford University, a great center for philosophy for centuries. At that time, it employed many of the people whose names were mentioned most by other philosophers. In 2004, I moved to New York to take up a position at The New Yorker. The departments of philosophy at New York University and Rutgers University, like Oxford, are also staffed by many of the figures most mentioned by other philosophers. The New York area also has many other first-rate philosophy departments. Philosophers are a garrulous and argumentative species. Their chief form of social interaction is the lecture, which is typically an hour long, and followed by an hour of probing and aggressive objections from the audience. If one of the figures mentioned three times by other philosophers was not teaching at one of these departments, they almost certainly came to lecture at one of them at some point over the last seven years. My project has benefited from this happenstance. No doubt, I have missed many philosophers worthy of photographing. But had I not been in New York these past six years, I would have missed many more.

In the course of my work, I knew that most appreciators of art, even the most educated, would have but a dim window on the views of the philosophers I was photographing. So I asked each philosopher I photographed to supply 50 words summarizing their work or their view of philosophy (perhaps not surprisingly, several exceeded that limit). These statements are as much a part of the work as the pictures themselves. Statement and portrait together form a single image of a thinker.

Most philosophers have spent their entire lives in intense concentration, developing and defending lines of argument that can withstand the fearsome critical scrutiny of their peers. Perhaps this leaves some mark on their faces; that I leave to others to judge.

Steve Pyke, a contributing photographer at The New Yorker and at Vanity Fair since 1998, has recently completed a series of portraits of the Apollo astronauts. The second volume of “Philosophers,” with more than 100 portraits, will be published by Oxford University Press in May 2011.



Getting the Attention of Serena and Andre

To be an effective speaker, you first have to win the confidence of your audience. In my case, I’m usually working with very talented young tennis players, often teenagers. My goal is to help them to develop into world-class competitors. To achieve this, they need to view me as a benevolent dictator. I have to persuade them that I know what I’m talking about and that they should listen to me, but I also need them to understand that I care about them, as players and people.


Establishing your authority or expertise is not a matter of bravado. It depends on the needs of individual players and what they respond to best. Jim Courier was a player who responded to toughness, and I’d use a firm voice to kick him into gear. When I was trying to teach Serena Williams that she had to play every shot as if it were match point at Wimbledon, we were often in each other’s faces, just short of body contact, when we exchanged our thoughts about a rally or point. With Monica Seles, by contrast, I was always sure to use kid gloves.

Sometimes you show that you know your stuff by saying nothing. In the late ’90s, I was asked to be the coach of Boris Becker, who had already achieved greatness as the youngest player ever to win Wimbledon. I went to Munich with the goal of getting Boris back into physical shape and reviewing every aspect of his game. I watched and watched for two weeks, and never said a word. Boris finally turned to me and said, “Mr. B, can you talk?” My answer: “When I talk to you, I better know what to say.” His answer: “Mr. B, we will get along real well.”

You can’t remake someone’s game all at once, so my brand of instruction has always been to give simple words of advice. When I focused on footwork with Yuki Bhambri (currently the No. 1 junior men’s player in the world), I let him know that the recovery step is crucial to any level of play, especially if you hit from a neutral stance. Yuki would respond to these simple verbal tips, and then I’d demonstrate to him exactly what I meant.

At the beginning of my career, I found it very difficult to listen to anyone, but you can’t imagine how much more persuasive you become when you listen well. In today’s world, young people expect to be heard, and their input very often helps you to work with them.

As a student at my academy 25 years ago, Andre Agassi was always testing the boundaries, especially with the dress code. His hair was long and dyed, he wore nail polish, he wouldn’t wear regular tennis clothes. My first instinct was to make him conform, but I still remember one year when he stopped by my office, before going home for Christmas. “Nick,” he said, “the head of the school wants me to cut my hair and dress a little different. Can you please change his mind?” I listened, and we decided to let him be himself.

With a player like Andre, as with anyone you’re trying to motivate, you have to get a sense of their individual spirit and try to harness it to develop top talent. Benevolence was not enough with Andre, but it had to be part of the mix.

Nick Bollettieri founded Nick Bollettieri Tennis Academy in 1978 and is the president of IMG Academies.



The Lost Art of Argument

Polemic seems to have gone the way of the typewriter and the soda fountain. The word was once associated with the best practitioners of the form: Voltaire, Jonathan Swift, George Orwell, Rebecca West. Nowadays, if you say “polemic,” you get strange looks, as if you were referring not to refined argument, especially written argument, but to some sort of purgative.

Of course, polemic isn’t just an argument. It’s a literary offensive prosecuted with the goal not only of winning your point but of demolishing your opponent’s entire case. The name-calling and yelling on cable TV and the Internet have made us forget that the classic polemic is a dazzling, meticulously crafted statement.

It was precisely the intellectually demanding nature of polemic that enthralled me when I was a teenager. While my pals were out driving around, I spent dusty summers in my town library, where I lost myself in back issues of Partisan Review and Commentary. Debating with the world seemed the best way to make my way into the world. James Joyce once said that words were a poor boy’s arsenal. For me, it was words built into arguments.

My father was a professional pianist, and though I never learned to play the piano with great proficiency, I was entranced by the function of the pedals. You could make the notes quaver, persist, echo or abruptly stop. Eventually, I found that to make a forceful polemic, the argument had to have pedals. That way, you could even make it beautiful.


With the right use of rhetorical pedals, a writer can pour into polemic the care and craft that poets and novelists invest in their art. Confronted by a piece of lofty illogic, you can draw some telling counterpoint from everyday life. An armchair warrior, for instance, might be put in his place by this: “Any man who considers war an exercise in nobility is either not a father or an unfeeling one.” You can deploy a literary allusion or historical reference to show that a pompous emperor lacks clothes, as it were. Does a writer sentimentalize Africans as innocent children, corrupted by Western imperialism? Introduce her to the African role in the slave trade or the recent history of the Congo.

The most effective polemic, I have always found, is actually a kind of musical reinterpretation of someone else’s argument. Arguments are rarely wrong. Rather, they are ethically or intellectually unfinished. To return to my example above: Though it is hardly the whole story, imperialism did indeed help to set Africa on a destructive path. A successful polemic would have to concede the point, before completing it with the larger point that attributing Africa’s problems to Western imperialism is yet another way of robbing Africans of their independence.

The most successful polemic against something must also be a form of understanding the thing that you are disputing. Empathy is indispensable to a forensic drubbing. A talented polemicist should win some converts from the opposition simply by making the case that his opponent’s argument isn’t crazed or pernicious, just radically incomplete.

In our age of feral opining, however, we have no use for patient absorption in someone else’s logic and rhetoric. Better to scream obscenities into the white void of your screen, turn off your computer and….But here I go, succumbing to my great passion and starting to polemicize.

Which brings me to my final point: Disappointed love, not hatred or aversion, is the strongest motivation behind the urge to make an argument sing, rather than shout. And that is why the screamers on every side leave me so cold. They argue without love — and they are not true polemicists.

Lee Siegel is a columnist and editor at large for the New York Observer.



Purpose-Driven Prose

“Have you ever written political speeches? It’s a particularly low form of rhetoric.”

Thus did Arthur M. Schlesinger Jr., who knew a few things about writing political speeches, welcome me to the guild. It was early 1998, and I was just preparing to join the White House staff as a speechwriter for President Bill Clinton—who, by the way, did not disagree with Mr. Schlesinger. Mr. Clinton’s face soured when he said “rhetoric.” To him it was an epithet. “I don’t want rhetoric,” he’d complain. “I actually want to say something.”

Political rhetoric has a bad rap. Except in certain college courses—where the speeches of Lincoln and Churchill are pinned down like dead frogs and inspected for signs of logos, elocutio, and aposiopesis—rhetoric is held beneath contempt. And not without cause. The speeches of this wretched campaign year have brought to mind what H.L. Mencken said of President Warren G. Harding’s rhetoric: “It reminds me of a string of wet sponges…of stale bean-soup, of college yells, of dogs barking idiotically through endless nights. It is so bad that a sort of grandeur creeps into it.”


I’ll concede that speechwriters are part of the problem. We tend to blame the person at the podium, but the enemy, let’s admit it, is us, too. Speechwriters have an unhealthy affection for alliteration (“nattering nabobs of negativism”). We cling to clichés (see above, “the enemy is us”). And we tend to believe that phrases like “take back our country,” if repeated enough, might come to mean something.

But smart speechwriting still has the power to inspire, educate, entertain—and to make a difference. As a writer, the first thing to remember is that a good speech has a point. It is purpose-driven. It is ends-oriented. Its true test is not whether it gets quoted in Bartlett’s, but whether it gets people to embrace an idea, support a bill, throw a bum out, invest in plastics or stop feeding their kids potato chips for breakfast.

But how is this alchemy achieved? Google “speechwriting,” and you’ll enter a thicket of rules, tips and tricks of the trade. Many of them work as advertised. But formulas only carry you so far. Speechwriters reach for whatever tools they need—reason, emotion, repetition, humor, statistics, stories—to frame and win an argument. Organization is always important. To give the speech forward momentum, a kind of inevitability, certain ideas must be established in a certain sequence.

Every word should serve that goal. “Why is that in there?” President Clinton would ask us, before drawing a neat, black line through the offending phrase. Occasionally, we’d return the favor: under cover of darkness at the 2000 Democratic National Convention, I struck a sentence the president had dictated about his administration’s success in reducing the rate of salmonella infections.

This single-mindedness was apparent in every line of the 2008 speech in which Bill Gates introduced his concept of “creative capitalism.” Through real-world examples and a rousing call to action, he showed how institutions can “stretch the reach of market forces so that more people can make a profit, or gain recognition, doing work that eases the world’s inequities.” Or consider Steve Jobs. In his commencement address at Stanford in 2005, he told very personal stories, including one about “facing death” as he battled cancer, to inspire graduates to live by their own intuition.

Such speeches redeem the promise of the spoken word and show that, this campaign season notwithstanding, rhetoric need not be at odds with reality.

Jeff Shesol, a partner at West Wing Writers, is the author of “Supreme Power: Franklin Roosevelt vs. the Supreme Court.”



Hegel on Wall Street

As of today, the Troubled Asset Relief Program, known as TARP, the emergency bailout program born in the financial panic of 2008, is no more. Done. Finished. Kaput.

Last month the Congressional Oversight Panel issued a report assessing the program. It makes for grim reading. Once it is conceded that government intervention was necessary and generally successful in heading off an economic disaster, the narrative heads downhill quickly: TARP, the report says, was badly mismanaged, created significant moral hazard and failed miserably in providing mortgage foreclosure relief.

That may not seem like a shocking revelation. Everyone — left, right, center, red state, blue state, even Martians — hated the bailout of Wall Street, apart of course from the bankers and dealers themselves, who could not even manage a graceful moment of red-faced shame before they eagerly restocked their far from empty vaults. A perhaps bare majority, or more likely just a significant minority, nonetheless thought the bailouts were necessary. But even those who thought them necessary were grieved and repulsed. There was, I am suggesting, no moral disagreement about TARP and the bailouts — they stank. The only significant disagreement was practical and causal: would the impact of not bailing out the banks be catastrophic for the economy as a whole or not? No one truly knew the answer to this question, but that being so the government decided that it could not and should not play roulette with the future of the nation and did the dirty deed.

That we all agreed about the moral ugliness of the bailouts should have led us to implement new and powerful regulatory mechanisms. The financial overhaul bill that passed Congress in July certainly fell well short of what would be necessary to head off the next crisis. Clearly, political deal-making and the influence of Wall Street over our politicians is part of the explanation for this failure; but the failure also expressed continuing disagreement about the nature of the free market. In pondering this issue I want to, again, draw on the resources of G.W.F. Hegel. He is not, by a long shot, the only philosopher who could provide a glimmer of philosophical illumination in this area. But the primary topic of his practical philosophy was analyzing the exact point where modern individualism and the essential institutions of modern life meet. And right now, this is also where many of the hot-button topics of the day reside.

Hegel, of course, never directly wrote about Wall Street, but he was philosophically invested in the logic of market relations. Near the middle of the “Phenomenology of Spirit” (1807), he presents an argument that says, in effect: if Wall Street brokers and bankers understood themselves and their institutional world aright, they would not only accede to firm regulatory controls to govern their actions, but would enthusiastically welcome regulation. Hegel’s emphatic but paradoxical way of stating this is to say that if the free market individualist acts “in [his] own self-interest, [he] simply does not know what [he] is doing, and if [he] affirms that all men act in their own self-interest, [he] merely asserts that all men are not really aware of what acting really amounts to.” For Hegel, the idea of unconditioned rational self-interest — of, say, acting solely on the motive of making a maximal profit — simply mistakes what human action is or could be, and is thus rationally unintelligible. Self-interested action, in the sense it is used by contemporary brokers and bankers, is impossible. If Hegel is right, there may be deeper and more basic reasons for strong market regulation than we have imagined.

The “Phenomenology” is a philosophical portrait gallery that presents depictions, one after another, of different, fundamental ways in which individuals and societies have understood themselves.  Each self-understanding has two parts: an account of how a particular kind of self understands itself and, then, an account of the world that the self considers its natural counterpart.  Hegel narrates how each formation of self and world collapses because of a mismatch between self-conception and how that self conceives of the larger world.  Hegel thinks we can see how history has been driven by misshapen forms of life in which the self-understanding of agents and the worldly practices they participate in fail to correspond.  With great drama, he claims that his narrative is a “highway of despair.” 

The discussion of market rationality occurs in a section of the “Phenomenology” called “Virtue and the way of the world.”  Believing in the natural goodness of man, the virtuous self strives after moral self-perfection in opposition to the wicked self-interested practices of the marketplace, the so-called “way of the world.”  Most of this section is dedicated to demonstrating how hollow and absurd is the idea of a “knight of virtue” — a fuzzy, liberal Don Quixote tramping around a modern world in which the free market is the central institution.  Against the virtuous self’s “pompous talk about what is best for humanity and about the oppression of humanity, this incessant chatting about the sacrifice of the good,” the “way of the world” is easily victorious.

However, what Hegel’s probing account means to show is that the defender of holier-than-thou virtue and the self-interested Wall Street banker are making the same error from opposing points of view.  Each supposes he has a true understanding of what naturally moves individuals to action.  The knight of virtue thinks we are intrinsically good and that acting in the nasty, individualist, market world requires the sacrifice of natural goodness; the banker believes that only raw self-interest, the profit motive, ever leads to successful actions.

Both are wrong because, finally, it is not motives but actions that matter, and how those actions hang together to make a practical world.  What makes the propounding of virtue illusory — just so much rhetoric — is that there is no world, no interlocking set of practices into which its actions could fit and have traction: propounding peace and love without practical or institutional engagement is delusion, not virtue.  Conversely, what makes self-interested individuality effective is not its self-interested motives, but that there is an elaborate system of practices that supports, empowers, and gives enduring significance to the banker’s actions.  Actions only succeed as parts of practices that can reproduce themselves over time.  To will an action is to will a practical world in which actions of that kind can be satisfied — no corresponding world, no satisfaction.  Hence the banker must have a world-interest as the counterpart to his self-interest or his actions would become as illusory as those of the knight of virtue.  What bankers do, Hegel is urging, is satisfy a function within a complex system that gives their actions functional significance.

Actions are elements of practices, and practices give individual actions their meaning. Without the game of basketball, there are just balls flying around with no purpose.  The rules of the game give the action of putting the ball through the net the meaning of scoring, where scoring is something one does for the sake of the team.   A star player can forget all this and pursue personal glory, his private self-interest.  But if that star — say, Kobe Bryant — forgets his team in the process, he may, in the short term, get rich, but the team will lose.  Only by playing his role on the team, by having an L.A. Laker interest as well as a Kobe Bryant interest, can he succeed.  I guess in this analogy, Phil Jackson has the role of “the regulator.”

The series of events leading up to the near economic collapse has shown Wall Street traders and bankers to be essentially knights of self-interest — bad Kobe Bryants. The function of Wall Street is the allocation of capital; as Adam Smith instructed, Wall Street’s task is to get capital to wherever it will do the most good in the production of goods and services. When the financial sector is fulfilling its function well, an individual banker succeeds only if he is routinely successful in placing investors’ capital in businesses that over time are profitable. Time matters here because what must be promoted is the practice’s capacity to reproduce itself. In this simplified scenario, Wall Street profits are tightly bound to the extra wealth produced by successful industries.

Every account of the financial crisis points to a terrifying series of structures that all have the same character: the profit-driven actions of the financial sector became increasingly detached from their function of supporting and advancing the growth of capital. What thus emerged were patterns of action which may have seemed to reflect the “ways of the world” but, in financial terms, were as empty as those of a knight of virtue, leading to the near collapse of the system as a whole. A system of compensation that provides huge bonuses based on short-term profits necessarily ignores the long-term interests of investors. As does a system that ignores the creditworthiness of borrowers, allows credit rating agencies to be paid by those they rate, and encourages the creation of highly complex and deceptive financial instruments. In each case, the actions — and profits — of the financial agents became insulated from both the interests of investors and the wealth-creating needs of industry.

Despite the fact that we have seen how current practices are practically self-defeating for the system as a whole, the bill that emerged from Congress comes nowhere near putting an end to the practices that necessitated the bailouts. Every one of those practices will remain in place with just a veneer of regulation giving them the look of legitimacy.

What market regulations should prohibit are practices in which profit-taking can routinely occur without wealth creation; wealth creation is the world-interest that makes bankers’ self-interest possible.  Arguments that market discipline, the discipline of self-interest, should allow Wall Street to remain self-regulating only reveal that Wall Street, as Hegel would say, “simply does not know what it is doing.”

We know that nearly all the financial conditions that led to the economic crisis were the same in Canada as they were in the United States with a single, glaring exception: Canada did not deregulate its banks and financial sector, and, as a consequence, Canada avoided the worst of the economic crisis that continues to warp the infrastructure of American life.  Nothing but fierce and smart government regulation can head off another American economic crisis in the future.  This is not a matter of “balancing” the interests of free-market inventiveness against the need for stability; nor is it a matter of a clash between the ideology of the free-market versus the ideology of government control.  Nor is it, even, a matter of a choice between neo-liberal economic theory and neo-Keynesian theory.  Rather, as Hegel would have insisted, regulation is the force of reason needed to undo the concoctions of fantasy.

J.M. Bernstein is University Distinguished Professor of Philosophy at the New School for Social Research and the author of five books. He is now completing a book entitled “Torture and Dignity.”



Don’t touch my junk

Ah, the airport, where modern folk heroes are made. The airport, where that inspired flight attendant did what everyone who’s ever been in the spam-in-a-can crush of a flying aluminum tube – where we collectively pretend that a clutch of peanuts is a meal and a seat cushion is a “flotation device” – has always dreamed of doing: pull the lever, blow the door, explode the chute, grab a beer, slide to the tarmac and walk through the gates to the sanity that lies beyond. Not since Rick and Louis disappeared into the Casablanca fog headed for the Free French garrison in Brazzaville has a stroll on the tarmac thrilled so many.

Who cares that the crazed steward got arrested, pleaded guilty to sundry charges, and probably was a rude, unpleasant SOB to begin with? Bonnie and Clyde were psychopaths, yet what child of the ’60s did not fall in love with Faye Dunaway and Warren Beatty?

And now three months later, the newest airport hero arrives. His genius was not innovation in getting out, but deconstructing the entire process of getting in. John Tyner, cleverly armed with an iPhone to give YouTube immortality to the encounter, took exception to the TSA guard about to give him the benefit of Homeland Security’s newest brainstorm – the upgraded, full-palm, up the groin, all-body pat-down. In a stroke, the young man ascended to myth, or at least the next edition of Bartlett’s, warning the agent not to “touch my junk.”

Not quite the 18th-century elegance of “Don’t Tread on Me,” but the age of Twitter has a different cadence from the age of the musket. What the modern battle cry lacks in archaic charm, it makes up for in full-body syllabic punch.

Don’t touch my junk is the anthem of the modern man, the Tea Party patriot, the late-life libertarian, the midterm election voter. Don’t touch my junk, Obamacare – get out of my doctor’s examining room, I’m wearing a paper-thin gown slit down the back. Don’t touch my junk, Google – Street View is cool, but get off my street. Don’t touch my junk, you airport security goon – my package belongs to no one but me, and do you really think I’m a Nigerian nut job preparing for my 72-virgin orgy by blowing my johnson to kingdom come?

In “Up in the Air,” that ironic take on the cramped freneticism of airport life, George Clooney explains why he always follows Asians in the security line:

“They pack light, travel efficiently, and they got a thing for slip-on shoes, God love ’em.”

“That’s racist!”

“I’m like my mother. I stereotype. It’s faster.”

That riff is a crowd-pleaser because everyone knows that the entire apparatus of the security line is a national homage to political correctness. Nowhere do more people meekly acquiesce to more useless inconvenience and needless indignity for less purpose. Wizened seniors strain to untie their shoes; beltless salesmen struggle comically to hold up their pants; 3-year-olds scream while being searched insanely for explosives – when everyone, everyone, knows that none of these people is a threat to anyone.

The ultimate idiocy is the full-body screening of the pilot. The pilot doesn’t need a bomb or box cutter to bring down a plane. All he has to do is drive it into the water, like the EgyptAir pilot who crashed his plane off Nantucket while intoning “I rely on God,” killing all on board.

But we must not bring that up. We pretend that we go through this nonsense as a small price paid to ensure the safety of air travel. Rubbish. This has nothing to do with safety – 95 percent of these inspections, searches, shoe removals and pat-downs are ridiculously unnecessary. The only reason we continue to do this is that people are too cowed to even question the absurd taboo against profiling – when the profile of the airline attacker is narrow, concrete, uniquely definable and universally known. So instead of seeking out terrorists, we seek out tubes of gel in stroller pouches.

The junk man’s revolt marks the point at which a docile public declares that it will tolerate only so much idiocy. Metal detector? Back-of-the-hand pat? Okay. We will swallow hard and pretend airline attackers are randomly distributed in the population.

But now you insist on a full-body scan, a fairly accurate representation of my naked image to be viewed by a total stranger? Or alternatively, the full-body pat-down, which, as the junk man correctly noted, would be sexual assault if performed by anyone else?

This time you have gone too far, Big Bro’. The sleeping giant awakes. Take my shoes, remove my belt, waste my time and try my patience. But don’t touch my junk.

Charles Krauthammer, Washington Post



Writing wrongs

THE politics of the latest attacks by Hindu nationalists on Indian authors is not terribly hard to divine. One extremist bunch, the Rashtriya Swayamsevak Sangh (RSS), an outfit often banned by India’s government, has threatened Arundhati Roy, a prize-winning Indian novelist turned political activist. Ms Roy’s crime? That in recent weeks she dared to speak out in favour of protesting (Muslim) Kashmiris, some 110 of whom have been killed in a police crackdown that began in the summer. Ms Roy’s call for an inquiry into those deaths has led the RSS to demand that she be charged with sedition. Hindu nationalists reportedly attacked Ms Roy’s home in Delhi at the end of October, determined to settle scores personally.
This followed a similar move by another Hindu outfit to ban a book by Rohinton Mistry, an Indian-born Canadian novelist. In this case the thuggish Shiv Sena, a powerful political party in the western state of Maharashtra, has fiercely objected to Mr Mistry’s “Such a Long Journey”, a novel that has become part of the university curriculum in Mumbai, the state capital. At issue is the fact that the book lampoons Bal Thackeray, a Mumbai kingpin who founded the Hindu nationalist Shiv Sena party over four decades ago. Aditya Thackeray, his grandson and a student in Mumbai, helped to whip up a storm against the novel, ultimately encouraging the university to drop the book from its classes. Even the chief minister of the state has called “Such a Long Journey” abusive.
But why make a fuss now, considering the book was published nearly 20 years ago? It seems that the young Mr Thackeray has political ambitions of his own, and this was a handy way to draw attention to his Hindu nationalist credentials. Indeed, Shiv Sena has a reputation for being tetchy towards even moderate Hindus who dare to suggest that Muslims or Pakistanis might have views worth listening to. Early this year when Shah Rukh Khan, a Bollywood star, pointed out the stupidity of leaving Pakistani cricketers out of the Indian Premier League, the elder Mr Thackeray threatened to disrupt the release of his latest film.
Hindu nationalists work up a lather in such cases to put pressure on the Congress party, which looks powerful at the national level but much less so at the state level. If Congress slips in Maharashtra, where a property scandal could yet bring down many of Congress’s leaders, the likely political beneficiaries would be Hindu nationalists of various stripes, including Shiv Sena and the Thackerays.
As for Mr Mistry, the 58-year-old author has published just three novels, but each has been shortlisted for the Man Booker prize. Though slow, he is an elegant writer. When his last novel, “Family Matters”, came out in 2002, The Economist praised him as “one of the best of the Indian writers in English”. His publisher is eagerly awaiting his latest book, which is nearly finished. A new book would raise Mr Mistry’s profile among critics and readers, and perhaps stiffen the spine of Mumbai university.



Too Good to Check

On Nov. 4, Anderson Cooper did the country a favor. He expertly deconstructed on his CNN show the bogus rumor that President Obama’s trip to Asia would cost $200 million a day. This was an important “story.” It underscored just how far ahead of his time Mark Twain was when he said a century before the Internet, “A lie can travel halfway around the world while the truth is putting on its shoes.” But it also showed that there is an antidote to malicious journalism — and that’s good journalism.

In case you missed it, a story circulated around the Web on the eve of President Obama’s trip that it would cost U.S. taxpayers $200 million a day — about $2 billion for the entire trip. Cooper said he felt impelled to check it out because the evening before he had had Representative Michele Bachmann of Minnesota, a Republican and Tea Party favorite, on his show and had asked her where exactly Republicans will cut the budget.

Instead of giving specifics, Bachmann used her airtime to inject a phony story into the mainstream. She answered: “I think we know that just within a day or so the president of the United States will be taking a trip over to India that is expected to cost the taxpayers $200 million a day. He’s taking 2,000 people with him. He’ll be renting over 870 rooms in India, and these are five-star hotel rooms at the Taj Mahal Palace Hotel. This is the kind of over-the-top spending.”

The next night, Cooper explained that he felt compelled to trace that story back to its source, since someone had used his show to circulate it. His research, he said, found that it had originated from a quote by “an alleged Indian provincial official,” from the Indian state of Maharashtra, “reported by India’s Press Trust, their equivalent of our A.P. or Reuters. I say ‘alleged,’ provincial official,” Cooper added, “because we have no idea who this person is, no name was given.”

It is hard to get any more flimsy than a senior unnamed Indian official from Maharashtra talking about the cost of an Asian trip by the American president.

“It was an anonymous quote,” said Cooper. “Some reporter in India wrote this article with this figure in it. No proof was given; no follow-up reporting was done. Now you’d think if a member of Congress was going to use this figure as a fact, she would want to be pretty darn sure it was accurate, right? But there hasn’t been any follow-up reporting on this Indian story. The Indian article was picked up by The Drudge Report and other sites online, and it quickly made its way into conservative talk radio.”

Cooper then showed the following snippets: Rush Limbaugh talking about Obama’s trip: “In two days from now, he’ll be in India at $200 million a day.” Then Glenn Beck, on his radio show, saying: “Have you ever seen the president, ever seen the president go over for a vacation where you needed 34 warships, $2 billion — $2 billion, 34 warships. We are sending — he’s traveling with 3,000 people.” In Beck’s rendition, the president’s official state visit to India became “a vacation” accompanied by one-tenth of the U.S. Navy. Ditto the conservative radio talk-show host Michael Savage. He said, “$200 million? $200 million each day on security and other aspects of this incredible royalist visit; 3,000 people, including Secret Service agents.”

Cooper then added: “Again, no one really seemed to care to check the facts. For security reasons, the White House doesn’t comment on logistics of presidential trips, but they have made an exception this time.” He then quoted Robert Gibbs, the White House press secretary, as saying, “I am not going to go into how much it costs to protect the president, [but this trip] is comparable to when President Clinton and when President Bush traveled abroad. This trip doesn’t cost $200 million a day.” Geoff Morrell, the Pentagon press secretary, said: “I will take the liberty this time of dismissing as absolutely absurd, this notion that somehow we were deploying 10 percent of the Navy and some 34 ships and an aircraft carrier in support of the president’s trip to Asia. That’s just comical. Nothing close to that is being done.”

Cooper also pointed out that, according to the Congressional Budget Office, the entire war effort in Afghanistan was costing about $190 million a day, and that President Bill Clinton’s 1998 trip to Africa, with 1,300 people and of roughly similar duration, cost, according to the Government Accountability Office and adjusted for inflation, “about $5.2 million a day.”

When widely followed public figures feel free to say anything, without any fact-checking, we have a problem. It becomes impossible for a democracy to think intelligently about big issues — deficit reduction, health care, taxes, energy/climate — let alone act on them. Facts, opinions and fabrications just blend together. But the carnival barkers that so dominate our public debate today are not going away — and neither is the Internet. All you can hope is that more people will do what Cooper did — so when the next crazy lie races around the world, people’s first instinct will be to doubt it, not repeat it.

Thomas L. Friedman, New York Times



Bill O’Reilly’s threats

Bill O’Reilly wants my head.

On Thursday night, the Fox News host asked, as part of a show that would be seen by 5.5 million people: “Does sharia law say we can behead Dana Milbank?” He then added, “That was a joke.”

Hilarious! Decapitation jokes just slay me, and this one had all the more hilarity because the topic of journalist beheadings brings to mind my late friend and colleague Danny Pearl, who replaced me in the Wall Street Journal’s London bureau and later was murdered in Pakistan by people who thought sharia justified it.

The next night, O’Reilly read a complaint from one of his viewers, Heidi Haverlock of Cleveland, who said: “I thought the joke about whether sharia law would allow the beheading of the Washington Post guy was completely inappropriate.” O’Reilly replied to her on air: “Well, let me break this to you gently, Heidi. If Dana Milbank did in Iran what he does in Washington, he’d be hummus.”

O’Reilly is partly right about that. As an American and a Jew, I probably wouldn’t last long in Iran. And criticizing the government there, as I do here, wouldn’t add to my life expectancy. But what was he trying to say? That America would be better if it were more like Iran?

O’Reilly’s on-air fantasizing about violent ends for me was precipitated by a column I wrote describing Fox News’s election-night coverage as a victory party for the Republicans. This didn’t strike me as a terribly controversial point, but it evidently offended O’Reilly. “He said there were no Democrats except for Schoen on,” O’Reilly complained. “It was an outright lie.”

That would have been an outright lie, except that I said no such thing. I wrote: “To be fair and balanced, Fox brought in a nominal Democrat, pollster Doug Schoen. ‘This is a complete repudiation of the Democratic Party,’ he proclaimed.”

Though I didn’t claim Schoen was the sole Democrat, in hindsight I should have quoted other putative liberals who appeared on Fox that night – and sounded much like Schoen. There was Bob Beckel, proclaiming: “I feel like the blind guy whose guide dog died” and “I give all the credit to Republicans on this.” Or Juan Williams on President Obama: “I just don’t think he gets it.”

I suspect O’Reilly’s fury – he went after me on three consecutive nights last week – has less to do with one sentence in one column than with a book and a series of columns I’ve written about O’Reilly’s colleague Glenn Beck. I’ve argued that Beck, with his talk of violence, Nazis and conspiracy theories, is all but inviting fringe characters to take up arms. I’ve held O’Reilly up as a responsible alternative to Beck – but O’Reilly seems determined to prove this wrong.

On Thursday night, he made an eerie reference to The Post’s editorial page editor. “Would you put Fred Hiatt’s picture up on the screen here?” he asked. “This is the editor, Milbank’s editor, Fred Hiatt. And, Fred won’t do anything about Milbank lying in his column. I just want everybody in America to know what The Washington Post has come to. All right, you can take Fred’s picture off. Fred, have a nice weekend, buddy.”

Shortly after this, O’Reilly proposed to his fellow Fox News host, Megyn Kelly, a way to handle their disagreement with me: “I think you and I should go and beat him up.”

The two continued on to a discussion of the attempt to bar sharia law in Oklahoma. That’s when he made his little “joke” about beheading me, which led to his talk the next night about garbanzo puree.

Kelly, too, took issue with what I wrote, but to her credit she didn’t join in O’Reilly’s violent fantasies. “When somebody missteps, especially when it comes to any sort of speech or expression of opinion, the answer is to have more speech and opinion,” she said.

“I’m not trying to muzzle the guy,” O’Reilly replied.

True. You don’t need a muzzle if your head has been cut off.

O’Reilly has every right to quarrel with my opinion or question my accuracy. But why resort to intimidation and violent imagery? I don’t believe O’Reilly really wants to sever my head, but if only one of his millions of viewers interprets his message otherwise, that’s still a problem for me. Already, Beck fans have been accused of a police killing, threatening to kill a senator and having a highway shootout en route to an alleged attack on liberal groups.

Let’s drop the thuggish tactics – before more people get hurt.

Dana Milbank, Washington Post



Walking to the Heart of Greece

As a boy in 1940s Greece, my friend Costas, now a retired banker, had a pistol shoved in his face by a communist guerrilla screaming that he wanted to requisition the family mule. Knowing that the animal meant his family’s survival in desperate times, Costas refused. He might have been shot then and there if the guerrilla had not been restrained by more compassionate comrades. Many years later, attending his nephew’s wedding in Athens, Costas was stunned to recognize the best man. It was the very fellow who had nearly killed him over a mule.

Such stories are common in Greece, where a merciless occupation by Germans and Italians during World War II, violence between left and right, and foreign meddling during the civil war (roughly 1945-49) and the Junta years (1967-74) left Greeks living cheek by jowl with people they could never forgive.

Kevin Andrews experienced the dangers of the countryside during the civil war. “The Flight of Ikaros,” the book he produced from his travels, remains not only one of the greatest we have about postwar Greece—memorializing a village culture that has almost vanished—but also one of the most moving accounts I have ever read of people caught up in political turmoil. (It is richer than George Orwell’s “Homage to Catalonia” because Andrews spent more time getting to know the people he wrote about.) “Flight” was first published in 1959 and last reprinted by Penguin in 1984. For too many years, this rare account has languished out of print.

Kevin Andrews posing in the ruins of Mistras during his travels through the Peloponnese in the early 1950s.

Born half English, half American in China in 1924, Andrews saw combat with the 10th Mountain Division in Italy, graduated from Harvard in 1947, and set out to study archaeology in Greece. A fellowship allowed him to spend years in country, working on a study of the ruins of the medieval fortresses of the Peloponnese. The research was largely conducted on foot in perilous times, when the mountains hid bands of guerrillas and the rugged villages were full of soldiers and suspicious police.

Though “Flight” occasionally sketches the larger political picture—the sources of the civil war and effects of the Marshall Plan—Andrews’s interests are consistently in the ordinary people he encounters. His politics were clearly of the left, yet many of the villagers he befriended were rightists, royalists or worse. Kostandí, a hardened killer living near the ruins of the Byzantine city of Mistras, is exuberantly generous with his foreign friend. His wife grows to trust Andrews enough to tell him the gripping story of a recent battle for the hilltop castle, where the guerrillas charged outnumbered soldiers. In her telling, it merges with a crazy feud between Kostandí and his brother: “I said, ‘Eh, Kotso, did you kill him?’ because I saw his clothes, his hands, his whole body covered with blood, but he only laughed. ‘Him? No, he got away through the upper gate. He’ll be halfway across Taygetos by now. Why do you look at me like that? This morning I killed sixteen men near Pend’ Alónia. Now give me my baby and take my clothes and wash them.’ ”

The villagers were intrigued by the foreigner dressed in rags who was as happy sleeping under the stars as in their homes, and he captured their speech and manners. He had a perfect ear for the conversation—the rumors, the paranoia, the generosity—of rural Greeks. Roger Jinkinson’s biography of Andrews, “American Ikaros” (2010), suggests that he was a difficult man, lacking empathy for others. You would never guess it from the affectionate portraits in “Flight.” The book is full of intimate dramas. On a train to Athens he meets an old man and his youngest son, Greeks who had been forced to leave Romania after the war and were essentially interned by the Greek government. “ ‘You who come from America,’ said the boy, ‘tell me, is it possible to live there like a human being? Is there a place in the world where one can live like a human being?’—he repeated the phrase bitterly.” After the father explains their woes—“ ‘Sorrow lasts as long as life. Life is long; I can never remember a time when I was not alive,’ he murmured absently”—Andrews shares some “dusty, half-squashed grapes” with them so they can satisfy the demands of hospitality.

Later, Andrews becomes koumbáros (godfather) to the child of a royalist shepherd, Andoni, a man who finds a visit to Athens utterly baffling:

He looked across the shabby, humble, sprawling little town to the blurred outlines of the mountains he knew better, and then back up at the columns of the Parthenon, and said, “Who made these things, Koumbáre?”

“People who lived here thousands of years ago.”

And he said, “Things like this are from God.” 

Distracted by his study of castles and a climb up Mount Olympus, Andrews took a long time getting back to his godson’s family in the book. When he finally did, it was to acknowledge that he would soon return to America and did not know whether he would see them again. “At last Andoni and I sat alone over the end of our meal. One of the girls came in and put on the table a bag full of biscuits she and her mother had baked that morning, a bottle of some kind of red syrup and a jar of sweets. ‘For you, Godfather,’ she murmured softly, looking at me; then she lowered her eyes and went out of the room. I sat gazing at the objects on the table and suddenly turned my face to the wall. Andoni leaped up and clasped my head in his arm.”

A life in America did not work out. Andrews eventually married a daughter of the poet E.E. Cummings and returned to Greece. The couple separated during the Junta years, she taking their children abroad while he stubbornly stayed on—in 1975 he renounced his American citizenship in fury over our support of the absurd and incompetent government. Among his books, most of which remain nearly impossible to find, are two studies of Athens, two longer poems and a volume called “Greece in the Dark: 1967-1974,” perhaps the best account in English of resistance to the colonels’ regime. It stirringly re-creates the major protest marches as well as the funeral of Greece’s first Nobel Prize-winning poet, George Seferis. One day in 1989, Andrews set out to swim the rough waters off Kythera. He was heading for Avgó (Egg), a little islet said to be Aphrodite’s birthplace. His body was recovered the next day.

“Does anything impoverish like caution?” Andrews asked, and reading his books most of us will feel a twinge of regret about our more conventional paths, likely combined with relief at having avoided many of his mistakes. Few would call Andrews’s life a success. He was too much a loner, too contrary, and though he wrote much—including a long-labored-over, probably unfinished novel—he published little and obscurely. But he left behind at least one indisputably great book. “The Flight of Ikaros” is evocative and painful; restrained and full of compassionate feeling. Here are Greeks in all their flinty reality, their contradiction, their resistance.

Mr. Mason teaches at Colorado College. His latest book is a memoir, “News From the Village: Aegean Friends.”



Justice Stevens on ‘Invidious Prejudice’

A great deal of what public figures have said about the proposed Islamic cultural center near ground zero in Lower Manhattan has been aimed at playing off fear and intolerance for political gain. Former Justice John Paul Stevens of the Supreme Court, on the other hand, delivered one of the sanest and most instructive arguments for tolerance that we have heard in a long time.

Justice Stevens, who retired at the end of the court’s last term, served for two and a half years as an intelligence officer at Pearl Harbor during World War II. In a speech on Thursday in Washington, he confessed his initial negative reaction decades later at seeing dozens of Japanese tourists visiting the U.S.S. Arizona memorial.

“Those people don’t really belong here,” he recalled thinking about the Japanese tourists. “We won the war. They lost it. We shouldn’t allow them to celebrate their attack on Pearl Harbor even if it was one of their greatest victories.”

But then Justice Stevens said that he recognized his mistake in “drawing inferences” about the group of tourists that might not apply to any of them. “The Japanese tourists were not responsible for what some of their countrymen did decades ago,” he said, just as “the Muslims planning to build the mosque are not responsible for what an entirely different group of Muslims did on 9/11.”

Many Muslims who pray in New York City mosques, he added, “may well have come to America to escape the intolerance of radicals like those who dominate the Taliban.” Descendants of pilgrims “who came to America in the 17th century to escape religious persecutions” and helped establish our democracy should get that, he said.

Justice Stevens ended with a powerful message that participants in the debate over the mosque and community center in Lower Manhattan should heed: “Ignorance — that is to say, fear of the unknown — is the source of most invidious prejudice.”



Radio Renegades

Taking on statism’s pride and joy, the BBC.

On the night of June 21, 1966, Oliver Smedley, who operated a pirate radio station off the coast of England, shot a rival named Reg Calvert during a heated confrontation at Smedley’s home outside London. Calvert died instantly, but there were other victims—pirate radio itself and, it seemed, Smedley’s dream of using that colorful, ephemeral medium to help roll back the British welfare state.

The phrase pirate radio conjures an image of wild times on the high seas as free-spirited DJs in the 1960s stick it to The Man by giving the kids their rock ‘n’ roll. But Adrian Johns’s “Death of a Pirate” is more concerned with Friedrich von Hayek and “The Road to Serfdom” than with Mick Jagger and the Rolling Stones. Mr. Johns, a University of Chicago history professor who specializes in intellectual property, portrays the British radio pirates not in the warm glow of sentimental memory that the period usually enjoys but in the historian’s cold bright light. “Death of a Pirate” is, in its way, a treasure.

At the center of the tale stands Oliver Smedley, a conservative political activist and entrepreneur determined to stop what he saw as Britain’s slide toward socialism. After dabbling in politics and journalism in the 1950s, he launched a network of think tanks and political organizations that pressed his call to cut taxes, slash public spending, eliminate tariffs and reduce government’s role in economic life. When in 1964 two like-minded acquaintances pitched him on the idea of launching a pirate-radio ship, Smedley seized on the project as a chance to trade talk for action by taking on statism’s pride and joy, the BBC.

The BBC is a nonprofit “state corporation” funded primarily by an annual license fee (currently about $200) charged to every television owner. At its founding in 1922, the BBC was designated as the sole provider of radio programming in the United Kingdom. Unofficially, the Beeb was expected to reinforce a traditional view of British culture and life. The programming was a highbrow blend of mostly classical music and lectures. Commercials were forbidden for their alleged coarsening effect. Critics of laissez-faire capitalism, including John Maynard Keynes, cited the BBC’s “success” in delivering a vital service to the masses as proof that public corporations were the answer to the free market’s problems.

Oliver Smedley was eager to demonstrate otherwise. His Radio Atlanta would show the benefits of giving people what they desired instead of what central planners thought they should get. The station would sell commercials not only to make a profit but also to deliver knowledge that is essential to the efficient operation of a market economy. Smedley raised capital, created the convoluted corporate structure necessary to skirt British law, set up an advertising sales operation, bought a ship, fitted it with the necessary broadcast gear and sent it to sea—where it immediately began leaking money.

Radio Atlanta wasn’t alone in that predicament. Advertisers were reluctant to spend money with pirate stations—there were about 10—that might be made to disappear the following week by forces of nature or government. Radio Atlanta was also hampered by its programming. Contrary to myth, not all British pirates were full-time rockers, even if plenty of British kids were dying to hear rock music on that must-have new gadget, the transistor radio. The legendary Radio Caroline, for example, featured a music mix that ranged from the Beatles and the Searchers to the Mantovani Orchestra and West End show tunes. Radio Atlanta’s offerings were so staid, says Mr. Johns, that “at times they could even sound distinctly similar to BBC fare.”

What’s more, many of the pirate-radio operators were dreadful businessmen. Smedley and other owners seriously underestimated the cost of building and operating a pirate station. In July 1964, just weeks after launching Radio Atlanta, Smedley entered an uneasy partnership with the rival Radio Caroline. The following year, he sold his station’s meager assets to Caroline in a bid to pay off his creditors. Undeterred, Smedley then formed an informal “alliance” with another pirate operation, Radio City, which broadcast from Shivering Sands, an abandoned antiaircraft gun emplacement in the Thames estuary.

Radio City owner Reg Calvert was a streetwise dance-hall impresario who used the airwaves to promote his stable of aspiring rock and pop stars, including Screaming Lord Sutch, who became Radio City’s star DJ. Calvert unwisely regarded the tapped-out Smedley as a potential source of capital; Smedley coveted the Shivering Sands facility. But Calvert, frustrated with Smedley’s failure to deliver promised equipment and payments, soon began talks with yet another pirate, American-owned Radio London. Matters came to a head in June 1966 when a gang of strike-idled dockworkers hired by Smedley seized Shivering Sands and expelled the Radio City staff. The move prompted the fatal confrontation at Smedley’s house.

Smedley pleaded self-defense and was acquitted. But Reg Calvert’s death and the resulting headlines forced the British government to address what appeared to be an out-of-control situation. Unfortunately for the authorities, radio piracy wasn’t illegal. Parliament rectified that situation by passing a marine “broadcast offences” act outlawing offshore radio stations and, more important, forbidding British companies to advertise on them. By late 1967, the pirate armada had largely been swept from the seas.

While Radio Caroline is the best-remembered of the renegade stations—the 2009 film “Pirate Radio” is loosely based on its story—the nearly forgotten Oliver Smedley, who died in 1989, was arguably the most successful buccaneer of the bunch. After all, as Mr. Johns notes, the pirate-radio episode sparked just the sort of transformation in British broadcasting that Smedley had envisioned. The government soon licensed commercial radio stations, the BBC accepted pop music and even adopted a more skeptical stance toward officialdom. Smedley succeeded beyond any reasonable expectation in spotlighting the flaws of state media and, by extension, state-controlled business.

Mr. Bloomquist is president of the consulting firm Talk Frontier Media.



Utopia, With Tears

No meat, no wool, no coffee or candles to read by, but plenty of high aspirations—and trouble.

In 1843, in the quiet middle of Massachusetts, a group of high-minded people set out to create a new Eden they called Fruitlands. The embryonic community miscarried, lasting only seven months, from June to January. Fruitlands now has a new chronicler in Richard Francis, a historian of 19th-century America. “This is the story,” he writes, “of one of history’s most unsuccessful utopias ever—but also one of the most dramatic and significant.” As we learn in his thorough and occasionally hilarious account, the claim is about half right.

The utopian community of Fruitlands had two progenitors: the American idealist Bronson Alcott and the English socialist Charles Lane. Alcott was a farm boy from Connecticut who had turned from the plough to philosophy. According to Ralph Waldo Emerson, his friend, Alcott could not chat about anything “less than A New Solar System & the prospective Education in the nebulae.” Airy as his thoughts were, Alcott could be a mesmerizing speaker. Indeed, his words partly inspired an experimental community in England, where he met Lane.

Lane has often been considered the junior partner in the Fruitlands story, merely the guy who put up the money (for roughly 100 acres, only 11 of which were arable). But Mr. Francis fleshes him out, showing him to be a tidier and more bitter thinker than Alcott, with a practical streak that could be overrun by his hopes for humanity.

As Mr. Francis notes, Alcott and Lane shared a “tendency to take moderation to excess,” pushing their first principles as far as they could go. One such principle was that you should do no harm to living things, including plants. As Mr. Francis explains: “If you cut a cabbage or lift a potato you kill the plant itself, just as you kill an animal in order to eat its meat. But pluck an apple, and you leave the tree intact and healthy.”

The Fruitlands community never numbered more than 14 souls, five of them children. The members included a nudist, a former inmate of an insane asylum, and a man who had once gotten into a knife fight to defend his right to wear a beard. Then there was the fellow who thought swearing elevated the spirit. He would greet the Alcott girls: “Good morning, damn you.” Lane thought the members should be celibate; Alcott’s wife, Abigail, the mother of his four daughters and the sole permanent woman resident, was a living reproach to this view.

All of the Fruitlands members, however, agreed to certain restrictions: No meat or fish; in fact nothing that came from animals, so no eggs and no milk. No leather or wool, and no whale oil for lamps or candles made from tallow (rendered animal fat). No stimulants such as coffee or tea, and no alcohol. Because the Fruitlanders were Abolitionists, cane sugar and cotton were forbidden (slave labor produced both). The members of the community wore linen clothes and canvas shoes. The library was stocked with a thousand books, but no one could read them after dark.

And how did the whole experiment go? Well, most of the men at Fruitlands had little farming experience. Alcott, who did, impressed Lane with his ability to plow a straight furrow; but Alcott was always a better talker than worker. The community rejected animal labor—and even manure, a serious disadvantage if you want to produce enough food to be self-sufficient. The farming side of Fruitlands was a dud.

But the experiment was indeed, as Mr. Francis claims, “dramatic.” The drama came from a common revolutionary trajectory in which “a group of idealists ends by trying to destroy each other.” “Of spiritual ties she knows nothing,” Lane wrote of Abigail. “All Mr. Lane’s efforts have been to disunite us,” she confided to a friend, referring to her relations with Bronson. Even the usually serene Bronson agonized: “Can a man act continually for the universal end,” he asked Lane, “while he cohabits with a wife?” By Christmas, which he spent in Boston, Bronson seemed on the verge of dissolving his family. In the new year he returned to Fruitlands, but he had a breakdown. This was no way to run a utopia, and the experiment ended.

Was Fruitlands “significant”? In Mr. Francis’s reading, the community “intuited the interconnectedness of all living things.” That intuition, he believes, underlies our notions of the evils of pollution and the imminence of environmental catastrophe, as well as our concerns about industrialized farming. The Fruitlanders’ understanding of the world, he argues, helped create a parallel universe—an alternative to scientific empiricism—that is still humming along in the current day.

Perhaps so. Certainly many New Age and holistic notions, in their fuzzy and well-meaning romanticism, share a common ancestor with the Fruitlands outlook. But the result is not always benign. It was the Fruitlanders’ belief, for instance, that “all disease originates in the soul.” One descendant of this idea is the current loathsome view that cancer is caused by bad thoughts.

Though obviously sympathetic to the Fruitlands experiment, Mr. Francis gives us enough facts to let us draw our own conclusions. He records Bronson and Abigail’s acts of charity, already familiar to us from their daughter Louisa’s novel “Little Women” (1868). But he also retells less admiring stories, of their petty vindictiveness and casual callousness. Along the way he adumbrates the ways in which idealism can slide into megalomania.

Mr. Francis reports a conversation that Alcott once had with Henry James Sr., the father of the novelist Henry and the philosopher William. Alcott let it drop that he, like Jesus and Pythagoras before him, had never sinned. James asked whether Alcott had ever said, “I am the Resurrection and the Life.” “Yes, often,” Alcott replied. Unfortunately, Mr. Francis fails to record James’s rejoinder: “And has anyone ever believed you?”

Ms. Mullen writes for the Barnes & Noble Review.



Bungled bungled

SILVIO BERLUSCONI’S opponents have tried everything to get rid of him. They have manoeuvred against him, decried his policies, condemned his methods and, he has long claimed, incited left-wing prosecutors to try and jail him.

But since October 26th, a new possibility has emerged: that the 74-year-old Mr Berlusconi might just be laughed off the Italian political stage. Some of the details of the latest scandal to engulf him are such that even his most faithful supporters must realise he makes Italy an object of derision.

The girl at the centre of the affair—a 17-year-old Moroccan runaway—calls herself Ruby Rubacuori, or “Ruby Heartstealer”. On her Facebook page, her activities include belly dancing, and before she became involved with Italy’s prime minister she appears to have worked in Milan nightclubs.

The precise nature of their involvement is unclear. “Ruby”—whose real name appears to be Karima El Mahroug—said in an interview published on Saturday that she visited Mr Berlusconi’s home outside Milan only once, on Valentine’s Day this year, and that after giving him an account of her misfortunes, he gave her €7,000 ($9,770) and some jewellery. But, according to leaked details from an inquiry in Milan, she had earlier told police and prosecutors that she had been there three times, and that one of the parties ended in an erotic game called “Bunga, Bunga”.

Unsurprisingly, this has led to any number of jokes and even a song performed on Italian network television to the tune of Shakira’s Waka Waka World Cup anthem.

However amusing to others, the affair is potentially serious for Mr Berlusconi. Three close associates of the prime minister are reportedly under investigation on suspicion of aiding and abetting prostitution on the basis of Ms El Mahroug’s depositions. She denies having had sex with the prime minister, but the investigators are looking into whether others did, and were rewarded for doing so.

That would not incriminate Mr Berlusconi. But it might be enough to bring charges against his associates, who are suspected of procuring the women. One, a former showgirl called Nicole Minetti who is now a regional parliamentarian for Mr Berlusconi’s party, collected Ms El Mahroug after she was released from a police station in May.

The young Moroccan had been detained on suspicion of stealing €3,000, but was let go. The station commander said in an interview on October 29th that one of his officers had earlier received a call from the prime minister’s office informing them, erroneously, that Ms El Mahroug was the grand-daughter of Egypt’s president, Hosni Mubarak.

As opposition politicians swiftly noted, that could mean Mr Berlusconi had abused his position and thus committed an offence under Italian law. Far from denying it, the prime minister appears bent on defiance.

On October 29th, he admitted he had sent Ms Minetti “to provide help to someone who could have been consigned not to a home or the jails… but fostered”. Mr Berlusconi added that he had no intention of changing his lifestyle or explaining what went on at his home.

That sort of brazenness got him through the last bout of sex scandals in 2009. But there are several reasons for questioning whether it will work this time.

Mr Berlusconi is much weaker now. His poll ratings have fallen as Italians have become increasingly sceptical about his blithe assurances on the state of the economy. That has made them less tolerant of evidence of corruption in his government. And since July, when his former ally, Gianfranco Fini, formed a separate parliamentary group, the prime minister has been without an assured majority in the lower chamber.

Last year, most of the prime minister’s supporters went along with him as he ignored calls for a parliamentary statement, shrugged off claims he had laid himself open to blackmail and jauntily admitted he was no angel. But it was expected that he would save them from future embarrassment by being, if not more virtuous, then at least more discreet.

Mr Berlusconi has confounded that expectation, calling into question not just his private life but his judgement.


Unpopular Science

Whether we like it or not, human life is subject to the universal laws of physics.

My day, for example, starts with a demonstration of Newton’s First Law of Motion.

Christoph Niemann - Physics

It states, “Every body continues in its state of rest, or of uniform motion in a straight line…”

“…unless it is compelled to change that state by forces impressed upon it.”

Based on supercomplicated physical observations, Einstein concluded that two objects may perceive time differently.

Based on simple life experience, I have concluded that this is true.

Newtonʼs Cradle shows how energy travels through a series of objects.

In our particular arrangement, kinetic energy is ultimately converted into a compression of the forehead.

The forehead can be uncrumpled by a downward movement of the jaw.

Excessive mechanical strain will compromise the elasticity of most materials, though.

The human body functions like a combustion engine. To produce energy, we need two things:
– Oxygen, supplied through the nostrils (once the toy car is removed, that is).
– Carbohydrates, which come in various forms (vanilla, chocolate, dulce de leche).

By the by: I had an idea for a carb-neutral ice cream.
All you need is to freeze a pint of ice cream to -3706 F.
The energy it will take your system to bring the ice cream up to a digestible temperature is roughly 1,000 calories, neatly burning away all those carbohydrates from the fat and sugar.
The only snag is the Third Law of Thermodynamics, which says it’s impossible to go below -459 F.

But back to Newton: he discovered that any two objects in the universe attract each other, and that this force is proportional to their mass.

The Earth is heavier than the Moon, and therefore attracts our bodies with a much greater force.

This explains why an empty refrigerator exerts a much smaller gravitational pull than, say, one thatʼs stacked with 50 pounds of delicious leftovers. Great: that means we can blame the leftovers.

(Fig. A): Letʼs examine the behavior of particles in a closed container.

(Fig. B): The more particles we squeeze into the container, the testier they will become, especially if the container happens to be a rush-hour downtown local at 86th and Lex.

(Fig. C): Usually the particles will distribute evenly, unless there is a weird-looking puddle on the floor.

The probability of finding a seat on the subway is inversely proportional to the number of people on the platform.

Even worse, the utter absence of people is 100 percent proportional to just having missed the train.

To describe different phenomena, physicists use various units.

PASCALS, for example, measure the pressure applied to a certain area.

COULOMBS measure electric charge (which can occur if said area is a synthetic carpet).

DECIBELS measure the intensity of the trouble the physicist gets into because he didnʼt take off his shoes first.

Often those units are named after people to recognize historic contributions to their field of expertise. One NEWTON, for example, describes the force that is necessary to accelerate 1 kilogram of mass by one meter per second squared.

This is not to be confused with one NIEMANN, which describes the force necessary to make a three-year-old put on his shoes and jacket when weʼre already late for kindergarten.

Once the child is ready to go, I search for my keys. I start spinning around to scan my surroundings. This rotation exposes my head and all its contents to centrifugal forces, resulting in loss of hair and elongated eyeballs. That’s why I need to wear prescription glasses, which are yet another thing I constantly misplace.

Obviously, the hair loss theory I just presented is bogus. Hair canʼt be “lost.” Since Antoine Lavoisier, we all know that “matter can be neither created nor destroyed, though it can be rearranged,” which, sadly, it eventually will.

Not everything can be explained through physics, though. Iʼve spent years searching for a rational explanation for the weight of my wifeʼs luggage. There is none. It is just a cruel joke of nature.

Christoph Niemann, New York Times


Chilean President Wrote ‘Deutschland Über Alles’ in German Guest Book

Diplomatic Gaffe

“Deutschland Über Alles:” Chilean President Sebastian Pinera wrote his controversial dedication into the official guest book of German President Christian Wulff (left).

In a gesture of thanks for Germany’s help in rescuing the 33 Chilean miners, President Sebastián Piñera wrote the historically charged slogan ‘Deutschland Über Alles’ into the guest book of German President Christian Wulff last week. Now Wulff’s office is pondering how to remove the words.

Chilean President Sebastián Piñera has apologized for writing the words “Deutschland Über Alles,” a phrase frowned on in Germany because of its association with the Nazi era, into the official guest book of German President Christian Wulff during a visit to Berlin last week.

Media reports claimed Piñera had said on Monday that he had learned the slogan in school in the 1950s and 1960s and understood it to be a celebration of German unification in the 19th century under Chancellor Otto von Bismarck. He said he was unaware that it was “linked to that country’s dark past.”

The first verse of Germany's national anthem was dropped after World War II because it is deemed too nationalistic. Piñera had been on a European trip to thank countries for their help in freeing the 33 Chilean miners. A spokesman for Wulff’s office played down the gaffe on Monday, saying the president had no doubt intended to express something positive about Germany.

Bild’s Loser of the Day

Piñera isn’t the only one to have unwittingly broken the taboo. Even experienced Europeans have done so. Last year, the French presidential office was so excited at the prospect that Chancellor Angela Merkel would attend the official celebrations to mark the French victory in World War I, the first German leader ever to do so, that its press department announced that the choir of the French army would sing “Deutschland Über Alles” at the event, the Frankfurter Allgemeine Zeitung newspaper reported at the time.

The mistake was spotted in time and the choir confined itself to singing the third verse, which has been officially used since the end of World War II and starts with the inoffensive words: “Unity and justice and freedom for the German fatherland!”

Bild, Germany’s best-selling tabloid newspaper, responded to the faux pas by declaring Piñera its loser of the day, a regular item on its front page, on Tuesday. “He’s better at rescuing miners,” the paper declared.

Meanwhile, “Deutschland Über Alles” continues to sully the pages of Wulff’s guest book. Wulff’s office now plans to discuss the matter with the Chilean embassy in Berlin. Piñera may get a chance to revise his entry.


Stories vs. Statistics

Half a century ago the British scientist and novelist C. P. Snow bemoaned the estrangement of what he termed the “two cultures” in modern society — the literary and the scientific. These days, there is some reason to celebrate better communication between these domains, if only because of the increasingly visible salience of scientific ideas. Still a gap remains, and so I’d like here to take an oblique look at a few lesser-known contrasts and divisions between subdomains of the two cultures, specifically those between stories and statistics.

I’ll begin by noting that the notions of probability and statistics are not alien to storytelling. From the earliest of recorded histories there were glimmerings of these concepts, which were reflected in everyday words and stories. Consider the notions of central tendency — average, median, mode, to name a few. They most certainly grew out of workaday activities and led to words such as (in English) “usual,” “typical,” “customary,” “most,” “standard,” “expected,” “normal,” “ordinary,” “medium,” “commonplace,” “so-so,” and so on. The same is true about the notions of statistical variation — standard deviation, variance, and the like. Words such as “unusual,” “peculiar,” “strange,” “original,” “extreme,” “special,” “unlike,” “deviant,” “dissimilar” and “different” come to mind. It is hard to imagine even prehistoric humans not possessing some sort of rudimentary idea of the typical or of the unusual. Any situation or entity — storms, animals, rocks — that recurred again and again would, it seems, lead naturally to these notions. These and other fundamentally scientific concepts have in one way or another been embedded in the very idea of what a story is — an event distinctive enough to merit retelling — from cave paintings to “Gilgamesh” to “The Canterbury Tales,” onward.

The idea of probability itself is present in such words as “chance,” “likelihood,” “fate,” “odds,” “gods,” “fortune,” “luck,” “happenstance,” “random,” and many others. A mere acceptance of the idea of alternative possibilities almost entails some notion of probability, since some alternatives will come to be judged more likely than others. Likewise, the idea of sampling is implicit in words like “instance,” “case,” “example,” “cross-section,” “specimen” and “swatch,” and that of correlation is reflected in “connection,” “relation,” “linkage,” “conjunction,” “dependence” and the ever too ready “cause.” Even hypothesis testing and Bayesian analysis possess linguistic echoes in common phrases and ideas that are an integral part of human cognition and storytelling. With regard to informal statistics we’re a bit like Molière’s character who was shocked to find that he’d been speaking prose his whole life.

Despite the naturalness of these notions, however, there is a tension between stories and statistics, and one under-appreciated contrast between them is simply the mindset with which we approach them. In listening to stories we tend to suspend disbelief in order to be entertained, whereas in evaluating statistics we generally have an opposite inclination to suspend belief in order not to be beguiled. A drily named distinction from formal statistics is relevant: we’re said to commit a Type I error when we observe something that is not really there and a Type II error when we fail to observe something that is there. There is no way to always avoid both types, and we have different error thresholds in different endeavors, but the type of error people feel more comfortable with may be telling. It gives some indication of their intellectual personality type, on which side of the two cultures (or maybe two coutures) divide they’re most comfortable.

People who love to be entertained and beguiled or who particularly wish to avoid making a Type II error might be more apt to prefer stories to statistics. Those who don’t particularly like being entertained or beguiled or who fear the prospect of making a Type I error might be more apt to prefer statistics to stories. The distinction is not unrelated to that between those (61.389% of us) who view numbers in a story as providing rhetorical decoration and those who view them as providing clarifying information.

The so-called “conjunction fallacy” suggests another difference between stories and statistics. After reading a novel, it can sometimes seem odd to say that the characters in it don’t exist. The more details there are about them in a story, the more plausible the account often seems. More plausible, but less probable. In fact, the more details there are in a story, the less likely it is that the conjunction of all of them is true. Congressman Smith is known to be cash-strapped and lecherous. Which is more likely? Smith took a bribe from a lobbyist or Smith took a bribe from a lobbyist, has taken money before, and spends it on luxurious “fact-finding” trips with various pretty young interns. Despite the coherent story the second alternative begins to flesh out, the first alternative is more likely. For any statements, A, B, and C, the probability of A is always greater than the probability of A, B, and C together since whenever A, B, and C all occur, A occurs, but not vice versa.

This is one of many cognitive foibles that reside in the nebulous area bordering mathematics, psychology and storytelling. In the classic illustration of the fallacy put forward by Amos Tversky and Daniel Kahneman, a woman named Linda is described. She is single, in her early 30s, outspoken, and exceedingly smart. A philosophy major in college, she has devoted herself to issues such as nuclear non-proliferation. So which of the following is more likely?

a.) Linda is a bank teller.

b.) Linda is a bank teller and is active in the feminist movement.

Although most people choose b.), this option is less likely since two conditions must be met in order for it to be satisfied, whereas only one of them is required for option a.) to be satisfied.
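The arithmetic behind this is just the rule that a conjunction can never be more probable than either of its conjuncts. A minimal simulation makes the inequality concrete; the base rates below are invented purely for illustration, not taken from any study:

```python
import random

random.seed(0)

trials = 100_000
teller = 0           # count of "Linda is a bank teller"
teller_feminist = 0  # count of "bank teller AND active in the feminist movement"

for _ in range(trials):
    is_teller = random.random() < 0.05    # invented base rate
    is_feminist = random.random() < 0.30  # invented base rate
    if is_teller:
        teller += 1
        if is_feminist:
            teller_feminist += 1

# The conjunction is satisfied only when the single condition is too,
# so its count (and hence its probability) can never be larger.
assert teller_feminist <= teller
```

Whatever base rates you plug in, the assertion holds: every outcome satisfying both conditions also satisfies the first one alone.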

(Incidentally, the conjunction fallacy is especially relevant to religious texts. Embedding the God character in a holy book’s very detailed narrative and building an entire culture around this narrative seems by itself to confer a kind of existence on Him.)

Yet another contrast between informal stories and formal statistics stems from the extensional/intensional distinction. Standard scientific and mathematical logic is termed extensional since objects and sets are determined by their extensions, which is to say by their member(s). Mathematical entities having the same members are the same even if they are referred to differently. Thus, in formal mathematical contexts, the number 3 can always be substituted for, or interchanged with, the square root of 9 or the largest whole number smaller than pi without affecting the truth of the statement in which it appears.

In everyday intensional (with an s) logic, things aren’t so simple since such substitution isn’t always possible. Lois Lane knows that Superman can fly, but even though Superman and Clark Kent are the same person, she doesn’t know that Clark Kent can fly. Likewise, someone may believe that Oslo is in Sweden, but even though Oslo is the capital of Norway, that person will likely not believe that the capital of Norway is in Sweden. Locutions such as “believes that” or “thinks that” are generally intensional and do not allow substitution of equals for equals.

The relevance of this to probability and statistics? Since they’re disciplines of pure mathematics, their appropriate logic is the standard extensional logic of proof and computation. But for applications of probability and statistics, which are what most people mean when they refer to them, the appropriate logic is informal and intensional. The reason is that an event’s probability, or rather our judgment of its probability, is almost always affected by its intensional context. 

Consider the two boys problem in probability. Given that a family has two children and that at least one of them is a boy, what is the probability that both children are boys? The most common solution notes that there are four equally likely possibilities — BB, BG, GB, GG, the order of the letters indicating birth order. Since we’re told that the family has at least one boy, the GG possibility is eliminated and only one of the remaining three equally likely possibilities is a family with two boys. Thus the probability of two boys in the family is 1/3. But how do we come to think that, learn that, believe that the family has at least one boy? What if instead of being told that the family has at least one boy, we meet the parents who introduce us to their son? Then there are only two equally likely possibilities — the other child is a girl or the other child is a boy, and so the probability of two boys is 1/2.
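Both answers fall out of a direct enumeration of the equally likely cases; a short sketch:

```python
from fractions import Fraction
from itertools import product

# All equally likely two-child families, ordered (older, younger):
# BB, BG, GB, GG.
families = list(product("BG", repeat=2))

# Told only "at least one is a boy": GG is eliminated, three cases remain.
with_boy = [f for f in families if "B" in f]
told = Fraction(with_boy.count(("B", "B")), len(with_boy))

# Introduced to a specific child who is a boy (say, the older one):
# only two cases remain.
met = [f for f in families if f[0] == "B"]
met_boy = Fraction(met.count(("B", "B")), len(met))

print(told, met_boy)  # 1/3 1/2
```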

Many probability problems and statistical surveys are sensitive to their intensional contexts (the phrasing and ordering of questions, for example). Consider this relatively new variant of the two boys problem. A couple has two children and we’re told that at least one of them is a boy born on a Tuesday. What is the probability the couple has two boys? Believe it or not, the Tuesday is important, and the answer is 13/27. If we discover the Tuesday birth in slightly different intensional contexts, however, the answer could be 1/3 or 1/2.
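Skeptics can verify the 13/27 figure by brute force: enumerate all 196 equally likely (sex, weekday) combinations for the two children and condition on a Tuesday-born boy being present. A sketch:

```python
from fractions import Fraction
from itertools import product

DAYS = range(7)  # 0 stands for Tuesday, by convention
children = list(product("BG", DAYS))          # 14 combinations per child
families = list(product(children, children))  # 196 equally likely families

def is_boy_tuesday(child):
    return child == ("B", 0)

# Condition: at least one child is a boy born on a Tuesday (27 families).
qualifying = [f for f in families
              if is_boy_tuesday(f[0]) or is_boy_tuesday(f[1])]
# Of those, how many have two boys? (13 families)
both_boys = [f for f in qualifying if f[0][0] == "B" and f[1][0] == "B"]

answer = Fraction(len(both_boys), len(qualifying))
print(answer)  # 13/27
```

The Tuesday detail matters because it nearly identifies one child: the overlap case, where both children are Tuesday-born boys, is counted only once.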

Of course, the contrasts between stories and statistics don’t end here. Another example is the role of coincidences, which loom large in narratives, where they too frequently are invested with a significance that they don’t warrant probabilistically. The birthday paradox, small world links between people, psychics’ vaguely correct pronouncements, the sports pundit Paul the Octopus, and the various bible codes are all examples. In fact, if one considers any sufficiently large data set, such meaningless coincidences will naturally arise: the best predictor of the value of the S&P 500 stock index in the early 1990s was butter production in Bangladesh. Or examine the first letters of the months or of the planets: JFMAMJ-JASON-D or MVEMJ-SUN-P. Are JASON and SUN significant? Of course not. As I’ve written often, the most amazing coincidence of all would be the complete absence of all coincidences.

I’ll close with perhaps the most fundamental tension between stories and statistics. The focus of stories is on individual people rather than averages, on motives rather than movements, on point of view rather than the view from nowhere, context rather than raw data. Moreover, stories are open-ended and metaphorical rather than determinate and literal.

In the end, whether we resonate viscerally to King Lear’s predicament in dividing his realm among his three daughters or can’t help thinking of various mathematical apportionment ideas that may have helped him clarify his situation is probably beyond calculation. At different times and places most of us can, should, and do respond in both ways.

John Allen Paulos is Professor of Mathematics at Temple University and the author of several books, including “Innumeracy,” “Once Upon a Number,” and, most recently, “Irreligion.”



Attention passengers: It’s perfectly safe to use your cellphones

With more than 28,000 commercial flights in the skies over the United States every day, there are probably few sentences in the English language that are spoken more often and insistently than this: “Please turn off all electronic devices.”

Asking why passengers must turn off their mobile phones on airplanes seems like an odd question. Because! With a sentence said so often there simply must be a reason for it. Or — is there not?

Flight attendants are required by the Federal Communications Commission to make their preflight safety announcement because of “potential interference to the aircraft’s navigation and communication systems.” Perhaps this seems like a no-brainer: of course you turn off your cellphone inside a piece of technology as sensitive as an airplane. In our civilized times, few things are more likely to lead to direct physical conflict with the person in the seat next to you than turning on your cellphone during takeoff and nonchalantly calling your hairdresser to reschedule that appointment next Wednesday. In Great Britain, a 28-year-old oil worker was sentenced to 12 months in prison in 1999 for refusing to switch off his cellphone on a flight from Madrid to Manchester. He was convicted of “recklessly and negligently endangering” an aircraft.

Yet with people losing their freedom over the rule, it may come as a bit of a surprise that scientific studies have never actually proven a serious risk associated with the use of mobile phones on airplanes. In the late 1990s, when cellphones and mobile computers became mainstream, Boeing received reports from concerned pilots who had experienced system failures and suggested the problems may have been caused by laptops and phones the cabin crew had seen passengers using in-flight. Boeing actually bought the equipment from the passengers but was unable to reproduce any of the problems, concluding it had “not been able to find a definite correlation between passenger-carried portable electronic devices and the associated reported airplane anomalies.”

The National Aeronautics and Space Administration released a study in 2003, stating that of eight tested cellphone models, none would be likely to interfere with navigation or radio systems of the aircraft — systems which are, of course, carefully shielded against all sources of natural or artificial radiation by design. Another study by the IEEE Electromagnetic Compatibility Society concluded in 2006 that “there is no definitive instance of an air accident known to have been caused by a passenger’s use of an electronic device.”

The same study also found that, on average, one to four calls are illegally made during every flight, meaning that there are tens of thousands of phone calls from American airplanes every day — and still no definitive evidence of a problem.

What makes the ban of mobile phones in the United States look even more odd is that it doesn’t exist in other parts of the world. The European Aviation Safety Agency lifted the ban in 2007. “EASA does not ban the use of mobile phones on board as they are not considered to be a threat to safety,” says EASA spokesman Dominique Fouda. Several airlines like Ryanair and Emirates have since allowed passengers to use their phones during flights. According to EASA, some American airlines will soon allow the use of cellphones outside of US airspace.

While the safety argument sounds like a neat story every passenger would understand, there seems to be a second, more important reason for the ban. According to the Federal Aviation Administration, the current ban by the Federal Communications Commission has not been issued for security concerns, but “because of potential interference with ground networks,” says FAA spokeswoman Arlene Salac. An airplane with activated mobile phones flying over a city could cause these several hundred phones to simultaneously log into a base station on the ground, perhaps overloading it and threatening the network.

Europeans seem to not worry about this problem, since European airlines allowing cellphones install base stations inside each aircraft, forwarding all calls through the plane’s satellite system, charging passengers by the minute. If all phones are logged into the base station on the airplane, they will not cause trouble on the ground.

But even if the FCC were to revoke the ban, the FAA’s current regulations for the certification of electronic equipment would apply. This would mean air carriers would have to show that every particular cellphone model is compatible with every particular airplane type. With hundreds of cellphone models released every year, this would mean a continuing source of cost for airlines, while the only benefit would be the convenience of passengers.

In the end, the ban of mobile phones on airplanes might not be a story about safety concerns, but about the psychology of governmental agencies. Bureaucracy, in theory, is designed to eliminate irrationality by replacing the biased judgment of individuals with a system of fixed requirements. Bureaucracies are machines to make judgments according to the best objective knowledge available. Given that, and the suspicion that the threat by mobile phones is indeed minor, how is it possible that two bureaucratic agencies, the FAA and the FCC, act with disproportionate caution? Is the apparatus not so rational after all?

“The point of bureaucracy is to have a less emotive discussion. But that doesn’t mean you get rid of that factor,” says Daniel Carpenter, professor of government at Harvard University.

When it comes to the question of allowing people to use their mobile phones, the bureaucratic incentive to do so could not be weaker. For any agency involved in this, two errors are possible. The first is what Carpenter calls an error of commission: The agency allows mobile phones and something bad happens, either an airplane crash or a network failure on the ground. The other possible error is one of omission: The agency fails to allow the use of mobile phones, though they are safe, and people subsequently cannot make phone calls while on the airplane.

“One of these errors is much more vivid and evocative. The error of not letting people talk on cellphones when they should — it’s hard to see people dying from that,” says Carpenter.

This suggests the most important reason mobile phones are still banned on airplanes might be the absence of anger — the fact that passengers are not organizing and demanding the right to make calls.

Still, there might be yet another way of thinking about the issue. Despite the current ban, Congress debated the “Halting Airplane Noise to Give Us Peace Act” (also known as the “Hang Up Act”) in 2008, prohibiting all voice communications on commercial flights. The bill was never voted on, but the reasoning behind it was simple: No calls in airplanes, not because the calls are dangerous — but because they are so annoying.

Justus Bender is a reporter with Die Zeit, a weekly newspaper based in Hamburg, Germany.


The Seafarer

Rescue ship: Joshua Slocum (at left), his wife and sons Victor and Garfield aboard the Liberdade, the 35-foot ‘sailing canoe’ he built to get them home after they were shipwrecked on the coast of Brazil in 1888.

Joshua Slocum is remembered for two things—being the first person to sail single-handedly around the world and writing a marvelous account of the journey. In his biography of Slocum, “The Hard Way Around,” Geoffrey Wolff focuses less on the nautical and literary achievements than on what Slocum did before them.

It is, for the most part, not a pretty picture. The New York Times called Slocum a barbarian after he was imprisoned for allegedly mistreating a sailor. On one of the vessels he commanded, in the 1880s, several crewmen contracted smallpox, and Slocum was arrested again, this time for killing a mutinous member of the crew. Although he eventually resumed command of that ship, it then went aground and was lost in Brazil. By age 45, two of the ships Slocum commanded had been wrecked, his first wife and three of his children had died, and he was unemployed and broke.

I confess that, halfway into this tale of woe, I found myself thinking about bailing out. The early chapters seemed slow-moving, especially for anyone expecting an adventure story. There are also some odd change-ups in style, from carefully considered, grown-up prose to informal sentences such as this one: “It was a miracle the hulk didn’t sink, though if you wait a bit, she will.”

But Mr. Woolf’s writing was not my problem. I was troubled by his overall approach to his subject. Slocum’s solo circumnavigation—he set out from Boston in April 1895 and arrived back in Newport, R.I., in June 1898—was an extraordinary feat, and Slocum’s book about it all, “Sailing Alone Around the World” (1899), is an intoxicating masterpiece. I saw no purpose in exposing the great man’s failings more than a century after his death.

But I kept reading, propelled by Mr. Wolff’s engaging description of the life of a young seaman during the great age of sail. Slocum was 16 when he went to sea in 1860. He wanted to command one of the tall-masted clipper ships, and once he achieved his objective, 10 years later, he didn’t just chart the ship’s course and direct its crew. He also functioned as the resident entrepreneur, identifying cargos to carry and negotiating the terms. He called on exotic ports throughout the world, with his wife and children onboard most of the time.

But Slocum was born too late. The clipper-ship era is probably the most celebrated period of marine history—the inspiration for the paintings and prints that seem to hang everywhere, from stodgy clubs to fast-food restaurants. But it didn’t last long. In 1860, wood-hulled sailing vessels were already being displaced by steel ships powered by steam. By the time Slocum took over his most impressive ship, the 233-foot-long Northern Lights, in 1881, the tide was flowing swiftly against him.

It is in the attempt to connect Slocum’s circumstances and choices to his failures and his immortalizing achievements that Mr. Wolff finds book-worthy purpose. After Slocum lost his ship in Brazil in 1887, he built a 35-foot “sailing canoe” and set out on a 5,000-mile journey back to the U.S., this time with his second wife, Hettie (his first wife, Virginia, had died three years before), and two of his children. This is how Slocum, in his book, explained the switch to small-boat sailing: “The old boating trick came back fresh to me, the love of the thing itself gaining on me as the little ship stood out; and my crew with one voice said, ‘Go on.’ ”

Not far into the journey, the little boat ran into a squall and the sails, which had been sewn by Hettie, shredded. Seeking to answer the question of what Slocum was thinking at such times, Mr. Wolff bores into Slocum’s prose like a literary detective. Of Slocum’s lifetime sailing obsession and his arresting phrase “the love of the thing itself” he writes that it came from “irreducible, hard-nut recognition and radiant sentiment.”

Mr. Wolff doesn’t get around to describing Slocum’s 46,000-mile lap around the planet until his book’s penultimate chapter. By then many readers will be so fascinated by the man and the why-did-he-do-it question that they may be eager to read Slocum’s own book, which has never gone out of print.

What is it that drives some people to undertake the audacious? We live at a time when many of the most important firsts have already been claimed, but people seem more obsessed than ever with establishing records, some of them of dubious distinction. Businessmen-climbers search for mountain peaks that have never been surmounted, marathoners go to Antarctica to run, and a procession of teenagers seeks to replicate Slocum’s circumnavigation (with the benefit of high-tech boats, push-button navigational equipment and satellite telephones).

Was Slocum like these people? Before I read Mr. Wolff’s book, I would have said no, that his motives and achievement were more pure and singular. Now I am unsure. Many modern-day adventurers are driven by ego. And ego probably played a role with Slocum, who was no doubt eager to demonstrate that he was, in spite of his many setbacks, exceptionally skilled at what he did, to the point, as he put it, of “neglecting all else.” And aren’t some contemporary adventurers individuals who, like Slocum, feel as if they have run out of other options?

Then again, perhaps Slocum was different. Maybe it was all about “the thing itself.” In November 1908, Slocum sailed from his home on Martha’s Vineyard to undertake a solo exploration of the Venezuelan coast and the Amazon. Somewhere along the way he disappeared. No one knows exactly what happened.

Mr. Knecht is the author of “The Proving Ground: The Inside Story of the 1998 Sydney to Hobart Race.”


Full article and photo:

The Other ‘G’ Spot

At the beginning of the 20th century the British psychologist Charles Spearman “discovered” the idea of general intelligence. Spearman observed that students’ grades in different subjects, and their scores on various tests, were all positively correlated. He then showed that this pattern could be explained mathematically by assuming that people vary in special abilities for the different tests as well as a single general ability—or “g”—that is used for all of them.
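Spearman’s one-factor account is easy to see in miniature. The following toy simulation is a sketch of my own, not anything from the review or from Spearman’s data: if each person’s score on every test is a shared general ability plus independent test-specific noise, then every pair of tests comes out positively correlated, just as Spearman observed in students’ grades.

```python
import random

random.seed(0)

# One-factor model: score = g (shared across tests) + test-specific noise.
# Sample sizes and noise scales here are arbitrary, purely illustrative.
n_people, n_tests = 1000, 4
g = [random.gauss(0, 1) for _ in range(n_people)]
scores = [[g[p] + random.gauss(0, 1) for _ in range(n_tests)]
          for p in range(n_people)]

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Every pairwise correlation between tests is positive — the "positive
# manifold" that a single latent g explains mathematically.
cols = list(zip(*scores))
pairs = [(i, j) for i in range(n_tests) for j in range(i + 1, n_tests)]
print(all(corr(cols[i], cols[j]) > 0 for i, j in pairs))  # prints True
```

With equal variance for g and noise, the model predicts a true correlation of 0.5 between any two tests; the simulation’s sample correlations cluster around that value.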

John Duncan, one of the world’s leading cognitive neuroscientists, explains Spearman’s work early in “How Intelligence Happens,” before moving on to his own attempts to locate the source of Spearman’s “g” in the brain. To get us grounded, Mr. Duncan also provides a wonderfully compact summary of brain architecture and function. Throughout the book, he makes it clear that his fascination with intelligent behavior has to do with how the brain brings it about—he leaves it to others to ponder things like the economic import of intelligence and how it is influenced by genes, upbringing, and education.

He also doesn’t waste time dilating on the question of what, precisely, we mean by “intelligence.” Defining terms is not scientists’ strong suit, but their attempts can be thought-provoking. Two decades ago, the cognitive-science and artificial-intelligence pioneer Allen Newell proposed that an entity should be considered intelligent to the extent that it uses all the information it has when making decisions. But by that definition, a device as simple as a thermostat would have perfect intelligence—not terribly helpful when trying to understand human differences.

I have been doing research on intelligence for more than a decade, and I have to confess that I do not know of a perfect definition. But most psychologists consider intelligence a general ability to perform well on a wide variety of mental tasks and challenges. In everyday speech, it sometimes means roughly the same thing: We call someone “intelligent” if we believe that their mental abilities are generally high—not if they are skilled in just one narrow field.

Mr. Duncan’s early work on intelligence and the brain resolved an old paradox. Before imaging technologies like MRI were invented, neuropsychologists used IQ tests to determine what parts of the brain were damaged in patients suffering from strokes and closed-head injuries. If the patient had trouble with the verbal parts of the test, the damage was probably in the left hemisphere; if the trouble was in the visual parts, the damage was probably in the back of the brain; and so on. But oddly, damage to the frontal lobes seemed to have very little effect on IQ—despite the frontal lobes’ constituting nearly 40% of the cerebral cortex.

Mr. Duncan found that patients with frontal-lobe damage were impaired on tests of “fluid intelligence” that, until recently, were not part of standard IQ tests. These tests measure the ability to solve abstract nonverbal problems in which prior knowledge of language or facts is of no help. For example, a “matrix reasoning” problem presents a grid of complex shapes with one empty space that the test-taker must fill by choosing the correct option from a set of up to eight alternatives. Such tests seem to reveal a raw ability to make optimal use of the information contained within a problem or situation.

Later, Mr. Duncan used PET scanning to measure the brain activity of people without brain damage as they solved problems that varied in difficulty. Regardless of content, as the tests got harder, the subjects made more use of areas in their frontal lobes, as well as in their parietal lobes, which are farther toward the back of the brain.

Mr. Duncan makes a convincing case that these brain areas constitute a special circuit that is crucial both for Spearman’s “g” and for intelligent behavior more generally. But his book elides the question of whether this circuit is also the source of IQ differences. That is, do people who score high on IQ tests use the frontal and parietal areas of their brains differently from people who score lower? The answer, discovered by other researchers, turns out to be yes.

There are other properties of the brain that contribute to “g,” including the speed of basic information-processing (measured by how fast people can press buttons in response to flashing lights) and even the total size of the brain (larger is better). One of the next steps in understanding “g” is to figure out how all these factors interact and combine to produce the wide range of differences we see in human intelligence. Mr. Duncan no doubt will be a key player in this effort, frontal and parietal lobes firing away.

Mr. Chabris is a psychology professor at Union College and the co-author, with Daniel Simons, of “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us” (Crown).



Magic by Numbers

I RECENTLY wound up in the emergency room. Don’t worry, it was probably nothing. But to treat my case of probably nothing, the doctor gave me a prescription for a week’s worth of antibiotics, along with the usual stern warning about the importance of completing the full course.

I understood why I needed to complete the full course, of course. What I didn’t understand was why a full course took precisely seven days. Why not six, eight or nine and a half? Did the number seven correspond to some biological fact about the human digestive tract or the life cycle of bacteria?

My doctor seemed smart. She probably went to one of the nation’s finest medical schools, and regardless of where she trained, she certainly knew more about medicine than I did. And yet, as I walked out of the emergency room that night with my prescription in hand, I couldn’t help but suspect that I’d just been treated with magic.

Certain numbers have magical properties. E, pi and the Fibonacci series come quickly to mind — if you are a mathematician, that is. For the rest of us, the magic numbers are the familiar ones that have something to do with the way we keep track of time (7, say, and 24) or something to do with the way we count (namely, on 10 fingers). The “time numbers” and the “10 numbers” hold remarkable sway over our lives. We think in these numbers (if you ask people to produce a random number between one and a hundred, their guesses will cluster around the handful that end in zero or five) and we talk in these numbers (we say we will be there in five or 10 minutes, not six or 11).

But these magic numbers don’t just dominate our thoughts and dictate our words; they also drive our most important decisions.

Consider my prescription. Antibiotics are a godsend, but just how many pills should God be sending? A recent study of antibiotic treatment published in a leading medical journal began by noting that “the usual treatment recommendation of 7 to 10 days for uncomplicated pneumonia is not based on scientific evidence” and went on to show that an abbreviated course of three days was every bit as effective as the usual course of eight.

My doctor had recommended seven. Where in the world had seven come from?

Italy! Seven is a magic number because only it can make a week, and it was given this particular power in 321 A.D. by the Roman emperor Constantine, who officially reduced the week from eight days to seven. The problem isn’t that Constantine’s week was arbitrary — units of time are often arbitrary, which is why the Soviets adopted the five-day week before they adopted the six-day week, and the French adopted the 10-day week before they adopted the 60-day vacation.

The problem is that Constantine didn’t know a thing about bacteria, and yet modern doctors continue to honor his edict. If patients are typically told that every 24 hours (24 being the magic number that corresponds to the rotation of the earth) they should take three pills (three being the magic number that divides any time period into a beginning, middle and end) and that they should do this for seven days, they will end up taking 21 pills.

If even one of those pills is unnecessary — that is, if people who take 20 pills get just as healthy just as fast as people who take 21 — then millions of people are taking at least 5 percent more medication than they actually need. This overdose contributes not only to the punishing costs of health care, but also to the evolution of the antibiotic-resistant strains of “superbugs” that may someday decimate our species. All of which seems like a rather high price to pay for fealty to ancient Rome.

Magic “time numbers” cost a lot, but magic “10 numbers” may cost even more. In 1962, a physicist named M. F. M. Osborne noticed that stock prices tended to cluster around numbers ending in zero and five. Why? Well, on the one hand, most people have five fingers, and on the other hand, most people have five more. It isn’t hard to understand why an animal with 10 fingers would use a base-10 counting system. But according to economic theory, a stock’s price is supposed to be determined by the efficient workings of the free market and not by the phalanges of the people trading it.

And yet, research shows that fingers affect finances. For example, a stock that closed the previous day at $10.01 will perform about as well as a stock that closed at $10.03, but it will significantly outperform a stock that closed at $9.99. If stocks close two pennies apart, then why does it matter which pennies they are? Because for animals that go from thumb to pinkie in four easy steps, 10 is a magic number, and we just can’t help but use it as a magic marker — as a reference point that $10.01 exceeds and $9.99 does not. Retailers have known this for centuries, which is why so many prices end in nine and so few in one.

The hand is not the only part of our anatomy that gives certain numbers their magical powers. The tongue does too. Because of the acoustic properties of our vocal apparatus, some words just sound bigger than others. The back vowels (the “u” in buck) sound bigger than the front vowels (the “i” in sis), and the stops (the “b” in buck) sound bigger than the fricatives (the “s” in sis). As it turns out, in well over 100 languages, the words that denote bigness are made with bigger sounds.

The sound a number makes can influence our decisions about it. In a recent study, one group was shown an ad for an ice-cream scoop that was priced at $7.66, while another was shown an ad for a $7.22 scoop. The lower price is the better deal, of course, but the higher price (with its silky s’s) makes a smaller sound than the lower price (with its rattling t’s).

And because small sounds usually name small things, shoppers who were offered the scoop at the higher but whispery price of $7.66 were more likely to buy it than those offered the noisier price of $7.22 — but only if they’d been asked to say the price aloud.

The magic that magic numbers do is all too often black. They hold special significance for terrestrial mammals with hands and watches, but they mean nothing to streptococcus or the value of Google. Which is why we should be suspicious when the steps to sobriety correspond to a half turn of our planet, when the eternal commandments of God correspond to the architecture of our paws and when the habits of highly effective people — and highly trained doctors — correspond to the whims of a dead emperor.

Daniel Gilbert is a professor of psychology at Harvard, the author of “Stumbling on Happiness” and the host of the television series “This Emotional Life.”



Everyman’s Gun

Sheer numbers have made the AK-47 the world’s primary tool for killing

The AK-47 is the most numerous and widely distributed weapon in history, with a name and appearance that are instantly recognized worldwide. Designed in the late 1940s for the Soviet Army, the Avtomat Kalashnikova 47 (“Automatic of Kalashnikov 1947”) became the universal weapon by the late 20th century, used by armies, militias and terrorists in practically every armed conflict, and by all sides in most of them. Even the United States has purchased mass quantities of AK-47s for friendly forces in Iraq and Afghanistan, and the armed services and the State Department train U.S. military and civilian personnel to handle and fire AK-47s in emergencies as part of their training for deployment to the war zones.

How did the AK-47 become as fundamental to contemporary warfare as Microsoft operating systems are to corporate computing? C.J. Chivers sets out to tell the story in “The Gun.” A Pulitzer Prize-winning reporter for the New York Times and a former infantry officer in the Marine Corps, he has seen the AK-47 in action while covering wars from Iraq and Afghanistan to Chechnya and Central Asia, and his experiences enhance his account.

The world’s most popular gun in a model with a side-folding stock

The AK-47’s origins are shrouded in the sort of mystery familiar to any historian researching Cold War issues in Russia. The Soviet state built up myths around its chosen heroes, and the man credited with the creation of the AK-47 was one of the leading figures in the pantheon. Mikhail Kalashnikov (born 1919) received not only the Soviet Union’s highest honors but also a suitable official story: a sergeant of modest origins wounded in battle against the Germans in 1941, who during months spent recovering in a hospital turns his previously unrecognized creative genius to designing a weapon that would better defend his homeland. The post-communist Russian government has kept up the accolades—the nonagenarian Mr. Kalashnikov is now a lieutenant general—and Mr. Chivers received little cooperation in his search for authoritative information on the development of the AK-47.

What Mr. Chivers can relate with certainty is the weapon’s place in the evolution of warfare and its ongoing impact on the world. He presents the AK-47 as a final stage in the development of automatic weapons—a compact, simple to manufacture, easily handled and almost indestructible rapid-firing rifle—tracing its story from the first attempts to create machine guns a century earlier. He describes at length Richard Gatling’s invention of hand-cranked rapid-fire weapons during the Civil War. The Gatling gun was little used during that conflict, and the U.S. Army was slow to adopt it afterward, but European armies used Gatling guns to devastating effect in colonial wars. Then Hiram Maxim’s fully automatic machine gun appeared in the 1880s. Capable of firing 600 rounds per minute with the press of a trigger, the Maxim gun in its many derivations (the German Spandau, the British Vickers, the Russian Sokolov, and others) created the dense walls of fire that defined the trench warfare of World War I.

But the Maxim was unwieldy for use by solitary soldiers, and even before the war ended the search for an effective one-man automatic weapon was underway. Germany introduced the MP18, a nine-pound submachine gun designed by Hugo Schmeisser. By using low-powered pistol ammunition, Schmeisser had made a small and portable weapon, but one with a severely limited effective range. Interest in such weapons dwindled after 1918, and the first practical automatic rifle did not appear until late in World War II. The StG44—also designed by Schmeisser and dubbed the “assault rifle” (Sturmgewehr) by Hitler himself—appeared too late for wide distribution during the war. But as Mr. Chivers notes, the StG44 may have had a direct influence on the AK-47.

The Soviet Union began trying to design an automatic rifle just after World War II ended. Mr. Kalashnikov was an obscure 26-year-old sergeant with little formal education and only a few years of experience designing weapons. He headed one of several teams of engineers competing to win the contest to design the automatic rifle, most of them led by established arms designers who had won high honors for their work during the war. After two years of competitive tests and design modifications, the AK-47 emerged the winner.

Mr. Chivers emphasizes that competition between teams of designers and a long back-and-forth process of modification and improvement under army supervision—not the individual brilliance of one man—created the AK-47. Borrowing from the StG44 may have occurred as well. The two weapons share many distinctive design features: the gas piston above the barrel that powers the rifle’s action, the curved 30-round magazine, the stock meant for controlling the weapon when firing on full automatic. Suspicions that the AK-47 was based on the StG44 are reinforced by the fact that Hugo Schmeisser was captured by the Soviet Army in 1945 and spent years in the city of Izhevsk, the main center of AK-47 production to this day.

Regardless of the details of its origins, the AK-47 brought the spread of automatic firepower to its logical conclusion. Like the StG44, the AK-47 used an intermediate-size cartridge, scaled down from the rifle rounds of the two world wars, that gave it sufficient range for any realistic battlefield target but recoil low enough to permit automatic fire from a one-man, hand-held weapon.

The Soviet Army was also obsessed with simplicity and ruggedness in its weapons and so the winning design used a minimum of parts, was built far more strongly than necessary and was constructed with a relatively loose fit between its major moving parts, allowing the AK-47 to continue firing even when clogged with powder residue and dirt.

The result: a practically foolproof weapon that works in the most extreme conditions despite neglect and abuse. Mass production of the AK-47 began by 1950, 15 years before the U.S. introduced its own automatic rifle, the M-16. In addition to cranking out AK-47s by the millions, the Soviet Union set up factories to produce them in Warsaw Pact countries and the People’s Republic of China, and eventually in states such as Egypt and Iraq, where the Soviets sought influence. The outpouring of AK-47s is estimated at more than 100 million and still rising—one for every 70 people in the world and more than 10 times the number of M-16s produced. Mr. Chivers notes that this vast supply of AK-47s has made them widely and cheaply available—readily purchased for less than $200 (including delivery by air) in the international arms market.

Mr. Chivers’s efforts to put the AK-47 in a broad historical context are both the great strength and great weakness of “The Gun.” He devotes the first several chapters to the history of the machine gun and biographies of Gatling and Maxim; the book’s longest chapter concerns the M-16’s origins and its early problems during the Vietnam War. The reader spends fully half of the book not reading about the AK-47 at all. Yet the digressive chapters are the more interesting, displaying impressive research—of a kind not possible on the AK-47 and Mikhail Kalashnikov—and deft descriptions of individuals and their experiences.

The author shows equal skill in discussing how lives were changed by the AK-47. He writes about a Hungarian who during the 1956 Soviet invasion became one of the first insurgents to use the AK-47; East Germans shot trying to escape over the Berlin Wall; American soldiers under fire in Vietnam; Israeli athletes murdered in the Munich Olympic Village in 1972; child soldiers in Uganda’s Lord’s Resistance Army; and a Kurdish bodyguard wounded during an attempted assassination in northern Iraq in 2002. Mr. Chivers reminds the reader constantly of the human consequences of the firepower that the AK-47 has made cheap and widely available.

Sheer numbers have made the AK-47 the world’s primary tool for killing—an “everyman’s gun,” Mr. Chivers calls it. The proliferation of weapons of mass destruction has for decades been a primary U.S. and international concern, and much press attention in recent years has been focused on the fashionable campaign against landmines. Mr. Chivers focuses our attention on an ordinary item that has been vastly more destructive and done more to define the character of warfare today than any other weapon.

Mr. Kim, a lawyer, recently returned from a year in Iraq working for the U.S. Treasury Department.



A Kimjongunia would smell as sweet

SOMETIMES there are Kimilsungia exhibitions. Sometimes there are Kimjongilia ones. Citizens of Pyongyang are also treated to combined Kimilsungia and Kimjongilia shows. One such got underway at the beginning of this month, at the Kimilsungia-Kimjongilia Exhibition House: innumerable pots filled with the same two kinds of plant, a monotony alleviated only by a guide’s prediction that North Korea will one day get a third variety.

Kim Jong Il has resisted his late father Kim Il Sung’s predilection for studding North Korea with statues of himself (Pyongyang’s first of Kim Jong Il was reportedly unveiled earlier this year, 16 years after he succeeded his father as North Korea’s leader). Instead, Kim Jong Il says it with flowers. Foreign correspondents invited in for celebrations of the ruling party’s 65th birthday on October 10th saw them everywhere: on billboards, on huge digital screens erected for the festivities on Kim Il Sung Square, in a cascading display in the hotel lobby and in endless profusion at the exhibition (along with huge portraits of the two Kims).

Kim Il Sung officially remains president, against the odds, but the Kimjongilia, a giant red begonia, somehow leaves its visual stamp on Pyongyang even more pervasively than the Kimilsungia, a normal-sized purple orchid. It might be said that the Kimjongilia’s bouffant petals echo the hairstyle of North Korea’s eponymous ruler, but a guide at the exhibition has a more politically correct explanation of the flower’s appearance. Its bright red hue, she says, reflects Kim Jong Il as a “person of passion, with a very strong character”.  

A journalist asked whether different temperature requirements made it difficult to keep begonias and orchids together. “We grow them with our hearts”, said the guide.  In August North Korea’s Kimilsungia and Kimjongilia Research Centre came up with what might be a more reliable way of getting the best out of the Kimjongilia. After “years of research”, said the state news agency KCNA, it devised a chemical agent that could lengthen the blooming period by a week in summer or by 20 days in winter.

Interspersed among the potted plants were occasional models of items representing the two leaders’ great achievements: “a nuclear weapon” was how the guide described one missile-like object. Another was a model of a rocket supposedly carrying a satellite into space (the actual rocket blew up after launch in April 2009, but North Korean officials resolutely insist that it successfully put a satellite into orbit).  Another represented a hand grenade, rifle and rocket launcher. But, no doubt deliberately, it was the Kimjongilia’s redness that struck the eye.

One display was of potted Kimjongilias supposedly donated by foreign diplomatic missions. China’s was uppermost, together with a photograph of Kim Jong Il shaking hands with China’s president, Hu Jintao. Individual European countries were conspicuous by their absence, but there was one pot plant there in the name of the European Union. (The North Koreans had tried to gouge each of the seven European embassies in Pyongyang for flower contributions—though hard currency, it was understood, would do nicely in lieu. The single Kimjongilia was their cost-saving solution.)

Oddly for plants that have acquired such crucial political significance in North Korea—the army has its own huge breeding centre for them—both are actually foreign creations. The Kimilsungia was presented in 1965 by Indonesia’s founding president, Sukarno, and the Kimjongilia arrived in 1988, courtesy of a Japanese botanist. Kim Jong Un, Kim Jong Il’s anointed successor, who was seen by foreign journalists for the first time on October 9th and 10th, has yet to acquire a flower. “In future we will have one”, assures the guide.



Read This Review or . . .

Forgive me if I open on a personal note: The other night I started laughing so hard I had to leave the room. My daughter was trying to study, and I could see she was getting alarmed. It was kind of scary to me, too, if you want to know the truth. For a moment there, as I made it into the bathroom and shut the door, I thought my body was approaching organ failure, not that I know what organ failure feels like, thank God. You hear people say things like “I laughed so hard I cried” and “I nearly fell out of my chair,” but I had gone well beyond the crying stage by the time my metabolism began to return to equilibrium. And then I realized that I hadn’t laughed so hard in 35 years, since I was a teenager, reading National Lampoon.

American men of a certain age will recall the feeling. What I’d been reading the other night was, no coincidence, National Lampoon—specifically the monologue of a fictional New York cabbie named Bernie X. He was the creation of Gerald Sussman, a writer and editor for the Lampoon from its early days in the 1970s to its sputtering death in 1998. Sussman, it is said, wrote more words for the magazine than any other contributor. I’m sorry I can’t quote any of his pieces here. They’re filthy.

If I’d gone ahead and died the other night, my wife would have known whom to sue. “Drunk Stoned Brilliant Dead,” in which Bernie X appears, is the work of Rick Meyerowitz, himself a valued contributor to the Lampoon who had the bright idea to gather his favorite pieces from the magazine into a handsomely produced coffee-table book. Mr. Meyerowitz is best known as the man who painted Mona Gorilla, a shapely, primly dressed primate with come-hither eyes and a smile far more unsettling than Leonardo’s original. That ape may be the most celebrated magazine illustration of the 1970s, its only competition being the Lampoon cover from January 1973. The photograph showed a cowering pup with a revolver to its head next to the timeless tagline: “If You Don’t Buy This Magazine, We’ll Kill This Dog.”

As an illustrator, Mr. Meyerowitz has a bias toward pieces with a strong graphic element. This is altogether fitting. The production values of the earliest issues of National Lampoon were rag-tag, but with the hiring of the art director Michael Gross and gifted painters and designers like Mr. Meyerowitz and Bruce McCall, the presentation of a piece of writing on the page became as essential to the joke as the writing itself.

In parodies of everything from comic books to Babylonian hieroglyphs, the Lampoon technique was a dead-on verisimilitude, exquisitely detailed. No matter how absurd the jokes were, how incongruous, abstract, whimsical or—I repeat myself—filthy, they were delivered with the straightest possible face. Great performers, old showfolk say, never let you see them sweat. National Lampoon writers never let you hear them chuckle.

The classic marriage of word and picture, which Mr. Meyerowitz reprints in full, was a 10-page spoof of travel magazines titled “Stranger in Paradise.” The soft-focus prose of the travel writer (“Wild fruits hang from the branches, waiting to be plucked”) transports us to a lush South Sea island where a “modern day Robinson Crusoe” lives in idyllic retirement. Sumptuous, full-color photographs show him dodging the surf, frolicking with the natives, sunbathing nude on the beach. Our Crusoe is Adolf Hitler, complete with the toothbrush mustache, the penetrating stare and a bottom as pale as a baby’s. No one who has seen the sunbathing photograph has ever been able to forget it. I’ve tried.

Amid the belly laughs was an irony so cool that it could sink to absolute zero. “Making people laugh is the lowest form of humor,” said Michael O’Donoghue, who founded the magazine with some Harvard pals in 1969 and later gained TV fame with “Saturday Night Live.” And it’s true that you—meaning me and my friends —sometimes had trouble finding the joke. Mr. Meyerowitz includes all 12,000 words of a parody by Henry Beard, another founding editor, of a typically grim law-review article. It’s called “Law of the Jungle,” by which he means the real law of the jungle, covering torts, trusts and property rights as understood by hippos and boa constrictors. With its high rhetoric, labyrinthine arguments and endless footnotes, it is as flawlessly rendered as any parody ever written—so precise that it becomes as tedious as the articles it was meant to send up.

You have to be very good to fail in this way, and nobody could have doubted the vast talent assembled behind that grinning gorilla. In the 1970s, however, old-fashioned moralists (soon to be extinct) complained about a deep vein of nihilism running through the magazine. Out in the suburbs we irony-soaked, pseudo-sophisticated teenage boys could only roll our eyes at the tut-tutting. We knew, or thought we did, that every sex joke in Bernie X’s monologues was redeemed by the tonally perfect rendering of the cabbie’s patois (I don’t think we used the word patois).

But from this distance the justice of the moralists’ charge looks glaringly obvious. In their more pompous moments, the Lampoon editors could have defended an appallingly tasteless joke about, say, the My Lai massacre or the Kennedy assassination as an effort to shake the bourgeois out of their complacency. Now it just looks tasteless or worse: an assault on the very notion of tastelessness, on our innate belief that sometimes some subjects should be off-limits.

Tony Hendra, one of the most pretentious of the original editors—quite a distinction in an office full of Harvard boys—writes here of the magazine’s “unique high-low style of comedy, incredible disgustingness paired with intellectual and linguistic fireworks.” The juxtaposition, as they proved every month and as Mr. Meyerowitz’s collection reconfirms, can be side-splitting. The mix is hard to sustain, though, and it makes for a terrible legacy. The high, being so hard to pull off, inevitably fades away, leaving only the low. Gresham’s Law—the bad driving out the good—holds true for comedy too.

With a few exceptions—the Onion, a sitcom or two—this seems to be where American humor finds itself now. You have only to wade into the opening minutes of any Will Ferrell movie to be rendered numb by the body-part jokes, unredeemed by the Lampoon’s intellectual or linguistic fireworks. The unhappy state of humor today gives this dazzling book the feel of a nostalgic excursion—back to a purer era, when all you had to do to make someone laugh was threaten to shoot a dog.

Mr. Ferguson is a senior editor at the Weekly Standard.



The Traveling Salesmen of Climate Skepticism

‘Science as the Enemy’

A dried-up reservoir in Spain (May 2005 photo): The professional skeptics tend to use inconsistent arguments. Sometimes they say that there is no global warming. At other times, they point out that while global warming does exist, it is not the result of human activity.

A handful of US scientists have made names for themselves by casting doubt on global warming research. In the past, the same people have also downplayed the dangers of passive smoking, acid rain and the ozone hole. In all cases, the tactics are the same: Spread doubt and claim it’s too soon to take action.

With his sonorous voice, Fred Singer, 86, sounded like a grandfather explaining the obvious to a dim-witted child. “Nature, not human activity, rules the climate,” the American physicist told an audience that included members of the German parliament from the business-friendly Free Democratic Party (FDP) three weeks ago.

Marie-Luise Dött, the environmental policy spokeswoman for the parliamentary group of Angela Merkel’s center-right Christian Democratic Union (CDU), also attended Singer’s presentation. She said afterwards that it was “extremely illuminating.” She later backpedaled, saying that her comments had been quoted out of context, and that of course she supports an ambitious climate protection policy — just like Chancellor Merkel.

Merkel, as it happens, was precisely the person Singer was trying to reach. “Our problem is not the climate. Our problem is politicians, who want to save the climate. They are the real problem,” he says. “My hope is that Merkel, who is not stupid, will see the light,” says Singer, who has since left for Paris. Noting that he liked the results of his talks, he adds: “I think I achieved something.”

Salesman of Skepticism

Singer is a traveling salesman of sorts for those who question climate change. On this year’s summer tour, he gave speeches to politicians in Rome, Paris and the Israeli port city of Haifa. Paul Friedhoff, the economic policy spokesman of the FDP’s parliamentary group, had invited him to Berlin. Singer and the FDP get along famously. The American scientist had already presented his contrary theories on the climate to FDP politicians at the Institute for Free Enterprise, a Berlin-based free-market think tank, last December.

Singer is one of the most influential deniers of climate change worldwide. In his world, respected climatologists are vilified as liars, people who are masquerading as environmentalists while, in reality, having only one goal in mind: to introduce socialism. Singer wants to save the world from this horror. For some, the fact that he made a name for himself as a brilliant atmospheric physicist after World War II lends weight to his words.

Born in Vienna, Singer fled to the United States in 1940 and soon became part of an elite group fighting the Cold War on the science front. After the collapse of the Soviet Union, Singer continued his struggle — mostly against environmentalists, and always against any form of regulation.

Whether it was the hole in the ozone layer, acid rain or climate change, Singer always had something critical to say, and he always knew better than the experts in their respective fields. But in doing so he strayed far away from the disciplines in which he himself was trained. For example, his testimony aided the tobacco lobby in its battle with health policy experts.

‘Science as the Enemy’

The Arlington, Virginia-based Marshall Institute took an approach very similar to Singer’s. Founded in 1984, its initial mission was to champion then US President Ronald Reagan’s Strategic Defense Initiative (SDI), better known as “Star Wars.” After the fall of the Iron Curtain, the founders abruptly transformed their institute into a stronghold for deniers of environmental problems.

“The skeptics thought, if you give up economic freedom, it will lead to losing political freedom. That was the underlying ideological current,” says Naomi Oreskes, a historian of science at the University of California, San Diego, who has studied Singer’s methods. As scientists uncovered more and more environmental problems, the skeptics “began to see science as the enemy.”

Oreskes is referring to only a handful of scientists and lobbyists, and yet they have managed to convince many ordinary people — and even some US presidents — that science is deeply divided over the causes of climate change. Former President George H.W. Bush even referred to the physicists at the Marshall Institute as “my scientists.”

Whatever the issue, Singer and his cohorts have always used the same basic argument: that the scientific community is still in disagreement and that scientists don’t have enough information. For instance, they say that genetics could be responsible for the cancers of people exposed to secondhand smoke, volcanoes for the hole in the ozone layer and the sun for climate change.

Cruel Nature

It almost seems as if Singer were trying to disguise himself as one of the people he is fighting. With his corduroy trousers, long white hair and a fish fossil hanging from a leather band around his neck, he comes across as an amiable old environmentalist. But the image he paints of nature is not at all friendly. “Nature is much to be feared, very cruel and very dangerous,” he says.

At conferences, Singer likes to introduce himself as a representative of the Nongovernmental International Panel on Climate Change (NIPCC). As impressive as this title sounds, the NIPCC is nothing but a collection of like-minded scientists Singer has gathered around himself. A German meteorologist in the group, Gerd Weber, has worked for the German Coal Association on and off for the last 25 years.

According to a US study, 97 percent of all climatologists worldwide assume that greenhouse gases produced by humans are warming the Earth. Nevertheless, one third of Germans and 40 percent of Americans doubt that the Earth is getting warmer. And many people are convinced that climatologists are divided into two opposing camps on the issue — which is untrue.

So how is it that people like Singer have been so effective in shaping public opinion?

Experience Gained Defending Big Tobacco

Many scientists do not sufficiently explain the results of their research. Some climatologists have also been arrogant or have refused to turn over their data to critics. Some overlook inconsistencies or conjure up exaggerated horror scenarios that are not always backed by science. For example, sloppy work was responsible for a prediction in an Intergovernmental Panel on Climate Change (IPCC) report that all Himalayan glaciers would have melted by 2035. It was a grotesque mistake that plunged the IPCC into a credibility crisis.

Singer and his fellow combatants take advantage of such mistakes and utilize their experience defending the tobacco industry. For decades, Big Tobacco managed to cast doubt on the idea that smoking kills. An internal document produced by tobacco maker Brown & Williamson states: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.”

In 1993, tobacco executives handed around a document titled “Bad Science — A Resource Book.” In the manual, PR professionals explain how to discredit inconvenient scientific results by labeling them “junk.” For example, the manual suggested pointing out that “too often science is manipulated to fulfill a political agenda.” According to the document: “Proposals that seek to improve indoor air quality by singling out tobacco smoke only enable bad science to become a poor excuse for enacting new laws and jeopardizing individual liberties.”

‘Junk Science’

In 1993, the US Environmental Protection Agency (EPA) published what was then the most comprehensive study on the effects of tobacco smoke on health, which stated that exposure to secondhand smoke was responsible for about 3,000 deaths a year in the United States. Singer promptly called it “junk science.” He warned that the EPA scientists were secretly pursuing a communist agenda. “If we do not carefully delineate the government’s role in regulating … dangers, there is essentially no limit to how much government can ultimately control our lives,” Singer wrote.

Reacting to the EPA study, the Philip Morris tobacco company spearheaded the establishment of “The Advancement of Sound Science Coalition” (TASSC). Its goal was to raise doubts about the risks of passive smoking and climate change, and its message was to be targeted at journalists — but only those with regional newspapers. Its express goal was “to avoid cynical reporters from major media.”

Singer, Marshall Institute founder Fred Seitz and Patrick Michaels, who is now one of the best known climate change skeptics, were all advisers to TASSC.

Not Proven

The Reagan administration also appointed Singer to a task force on acid rain. In that group, Singer insisted that it was too early to take action and that it hadn’t even been proven yet that sulfur emissions were in fact the cause. He also said that some plants even benefited from acid rain.

After acid rain, Singer turned his attention to a new topic: the “ozone scare.” Once again, he applied the same argumentative pattern, noting that although it was correct that the ozone concentration in the stratosphere was declining, the effect was only local. Besides, he added, it wasn’t clear yet whether chlorofluorocarbons (CFCs) from aerosol cans were even responsible for ozone depletion.

As recently as 1994, Singer claimed that evidence “suggested that stratospheric chlorine comes mostly from natural sources.” Testifying before the US Congress in 1996, he said there was “no scientific consensus on ozone depletion or its consequences” — even though in 1995 the Nobel Prize had been awarded to three chemists who had demonstrated the influence of CFCs on the ozone layer.

The Usual Suspects

Multinational oil companies also soon adopted the tried-and-true strategies of disinformation. Once again, lobbying groups were formed that were designed to look as scientific as possible. First there was the Global Climate Coalition, and then ExxonMobil established the Global Climate Science Team. One of its members was lobbyist Myron Ebell. Another one was a veteran of the TASSC tobacco lobby who already knew the ropes. According to a 1998 Global Climate Science Team memo: “Victory will be achieved when average citizens ‘understand’ (recognize) uncertainties in climate science.”

It soon looked as though there were a broad coalition opposing the science of climate change, supported by organizations like the National Center for Policy Analysis, the Heartland Institute and the Center for Science and Public Policy. In reality, these names were often little more than a front for the same handful of questionable scientists — and Exxon funded the whole illusion to the tune of millions of dollars.

It was an excellent investment.

In 2001, the administration of then-President George W. Bush reneged on previous climate commitments. After that, the head of the US delegation to the Kyoto negotiations met with the oil lobbyists from the Global Climate Coalition to thank them for their expertise, saying that President Bush had “rejected Kyoto in part based on input from you.”

Singer’s comrade-in-arms Patrick Michaels waged a particularly sharp-tongued campaign against the phalanx of climatologists. One of his books is called: “The Satanic Gases: Clearing the Air about Global Warming.” Michaels has managed to turn doubt into a lucrative business. The German Coal Association paid him a hefty fee for a study in the 1990s, and a US electric utility once donated $100,000 to his PR firm.

Inconsistent Arguments

Both Michaels and Ebell are members of the Cooler Heads Coalition. Unlike Singer and Seitz, they are not anti-communist crusaders from the Cold War era, but smooth communicators. Ebell, a historian, argues that life was not as comfortable for human beings in the Earth’s cold phases as in the warm ones. Besides, he adds, there are many indications that we are at the beginning of a cooling period.

The professional skeptics tend to use inconsistent arguments. Sometimes they say that there is no global warming. At other times, they point out that while global warming does exist, it is not the result of human activity. Some climate change deniers even concede that man could do something about the problem, but that it isn’t really much of a problem. There is only one common theme to all of their prognoses: Do nothing. Wait. We need more research.

People like Ebell cannot simply be dismissed as cranks. He has been called to testify before Congress eight times, and he unabashedly crows about his contacts at the White House, saying: “We knew whom to call.”

Ebell faces more of an uphill battle in Europe. In his experience, he says, Europe is controlled by elites who — unlike ordinary people — happen to believe in climate change.

Einstein on a Talk Show

But Fred Singer is doing his best to change that. He has joined forces with the European Institute for Climate and Energy (EIKE). Despite its impressive-sounding name, the institute is little more than a P.O. box address in the eastern German city of Jena. The group’s president, Holger Thuss, is a local politician with the conservative Christian Democratic Union (CDU).

Hans Joachim Schellnhuber, director of the respected Potsdam Institute for Climate Impact Research and an adviser to Chancellor Merkel on climate-related issues, says he has no objection to sharing ideas with the EIKE, as long as its representatives can stick to the rules of scientific practice. But he refuses to join EIKE representatives in a political panel discussion, noting that this is precisely what the group hopes to achieve, namely to create the impression among laypeople that experts are discussing the issues on a level playing field.

Ultimately, says Schellnhuber, science has become so complicated that large segments of the population can no longer keep up. The climate skeptics, on the other hand, satisfy “a desire for simple truths,” Schellnhuber says.

This is precisely the secret of their success, according to Schellnhuber, and unfortunately no amount of public debate can change that. “Imagine Einstein having to defend the theory of relativity on a German TV talk show,” he says. “He wouldn’t have a snowball’s chance in hell.”


The Beagle Vanishes

In the second column we freed the circle from being a flat-on geometric shape so that it could move out into space as the ellipse. We’ve used it to help us draw a pot and to see the roundness of forms, and now we’re going to use that ellipse to fly us into an imaginary scene that introduces us to the principles of perspective.

We follow that flying Frisbee of an ellipse as it settles down as a perfect little pond on a vast Kansas prairie. A man walks out onto that plain with a picnic basket, a blanket and a beagle. He sits down on his blanket to admire the view and the improbably perfect pond.


The beagle catches the scent of the little rabbit on the other side of the pond and takes off after it. Ignoring the shouts of his master, the dog paddles through the pond, bounds across the vast expanse and disappears over the horizon. (Two nice farmers in the next town find him and call the ASPCA.)


The runaway beagle’s trajectory has given us a vanishing point, the first element in the geometry of perspective: the point on the horizon towards which objects in the picture converge. In the first drawing, the man is sitting down so his viewpoint is low (and let’s imagine that we’re in a slightly elevated position behind him), and because the horizon line occurs roughly at eye level, the horizon line is also low and all the shapes appear relatively flattened out. Also, in one point perspective, all the lines running from left to right are parallel to the horizon line.

In the second diagram, where the man stands up to call his dog, he sees the scene from a higher viewpoint and thus the horizon line is also higher within the rectangle of our image. Now the blanket and the pool become wider, front to back, as does the perceived distance between the man’s feet and the horizon. It’s just as the ellipses in the drawing of the pot became wider in the same way the more we looked down on them.
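
The convergence McMullan describes can be checked with a few lines of arithmetic. The sketch below is my own illustration, not anything from the column: it assumes a simple pinhole camera with the eye at the origin looking down the z-axis, and an arbitrary focal length f.

```python
def project(x, y, z, f=1.0):
    # Pinhole camera: eye at the origin, looking down the +z axis.
    # A 3D point (x, y, z) lands at (f*x/z, f*y/z) on the picture plane.
    return (f * x / z, f * y / z)

# Two rails running parallel to the line of sight, one unit to the
# left and right of the viewer and one unit below eye level.
depths = (2.0, 8.0, 32.0, 128.0)
left = [project(-1.0, -1.0, z) for z in depths]
right = [project(1.0, -1.0, z) for z in depths]

# The farther away a point is, the closer its image sits to (0, 0).
for (lx, ly), (rx, ry) in zip(left, right):
    print(f"left ({lx:+.4f}, {ly:+.4f})  right ({rx:+.4f}, {ry:+.4f})")
```

Because both rails share the direction of the line of sight, their images close in on a single point at (0, 0): the vanishing point, sitting on the horizon at eye level (y = 0), just as the column says.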

As useful as one-point perspective is in drawing a Kansas picnic or highways in the Nevada desert leading straight to the sunset, other scenes require more complicated angles. For these images we need two-point perspective.

Let’s start by going back to the circle and plotting it in two-point perspective so we know how to make an official ellipse. It may not be as fluid or interesting as your free-hand ellipse, but you should know how to do it so you can move on with your life.

Get Giotto to draw you a circle. Or use a compass or trace around a glass. Then, with T square and ruler, draw a box around the circle. Draw a horizon line above the box. Now draw a vertical line through the middle of the box up to the horizon line (A). Draw another line bisecting the box horizontally (B). Then draw two lines from corner to corner to bisect the box diagonally. Now draw two more vertical lines through the points where the diagonals bisect the circle (C and D). This will give you four intersection points, E, F, G and H, around the circumference of the circle.

Are you still with me? Now, something a little easier to do. Choose two vanishing points, left and right (J and I), along the horizon and roughly equidistant from the center. Now draw lines from the right vanishing point (I) to the top corners of the box and to the intersection points C, A and D. Count the lines you have just made — there should be five.

Now, by drawing a line from vanishing point J to the right corner of the box, we are crossing four lines (check the diagram) that give us important intersections. The first is intersection K, the point at which you can make a horizontal line to complete the perspective square. Think of it as a flap bending away from the bottom box. I’ll get to the second intersection in a minute. The third is intersection L, which shows you where to make another horizontal line to establish the center of the perspective box; mark both left and right intersections “point B” to match how they are identified in the lower box.

The second and fourth intersections, E and H, along with intersections G, F, B and A, match the same points in the lower box and give you the theoretical means to draw a circle seen in perspective. The theory is that you simply connect these eight points (A and B are doubled) with curved lines and, voila, you have the correct ellipse. However, I find it takes a certain amount of fiddling to swing these curves around the corners to make them look right. In other words, you already have to have some sense of what a perspective circle looks like in order to carry out this last bit of the procedure. Whew! Work on your free-hand ellipses.

Now, back to our Kansas prairie picnic. This time, we’ll let our beagle run off at an angle, which will give us a vanishing point, A, to the right of our picture frame. Establish the left-hand vanishing point, B, along the horizon at roughly the same distance from the center as the first vanishing point (as you did in plotting the perspective circle). Choose a point in the lower left of the picture frame along the angle of the first vanishing point for the corner of the blanket (C). Now join that point to the second vanishing point. This gives you the angles of two sides of the blanket. Now choose two points that seem reasonable for the width (D) and length (E) of the blanket and join those points to the appropriate vanishing points. Now you have completed the perspective view of the rectangle of the blanket as it has turned to match the trajectory of the beagle’s flight.

Since we’ve spent so much time plotting the circle in perspective (a.k.a. the ellipse), let’s turn our pond into a little house on the prairie to get some practice with rectilinear shapes.

First, choose a point (F) above and to the right of the blanket for the near corner of the house (as you did with the blanket), and extend lines along the vanishing point angles to establish the length (H) and width (G) of the house. From the point F draw a vertical line to establish the height (I) of the house.

Using the vanishing point trajectories you can now complete the basic box of the house. In order to establish the center points of the two visible walls, make a horizontal base line, J to K, running through point F at the corner of the house. Using lines running from both vanishing points through the corners of the house, establish the width and length along the base line, J to K. Measure the halfway points along that line. By extending vanishing point lines back to the house from the two midpoints you can figure out where to put the centered roof peak and the centered door, and where to center windows in the remaining spaces. You have now completed a scene of a blanket and a house viewed from the same vantage point.

My personal take on perspective is that one should understand enough of the basic idea of vanishing points to substantiate how you see objects and buildings recede in space in your everyday life, so that it helps you to draw a convincing image without having to do a lot of plotting. An easy little exercise you can do is to draw a rectangle with a horizon line and then, free-hand, draw a series of boxes aligning with the same one or two vanishing points. It will help you, too, with understanding what things look like when they are low or high in an image field.


For those of you anxious to move deeper into the labyrinth of perspective, I offer this little taste of what you’re in for.

But to see what a great artist can create playing around with very simple perspective, I include this painting, “Melancholy and Mystery of a Street,” by Giorgio de Chirico.

Giorgio de Chirico
Giorgio de Chirico’s “Melancholy and Mystery of a Street”
James McMullan, New York Times

The Frisbee of Art

Pope Boniface VIII was looking for a new artist to work on the frescoes in St. Peter’s Basilica, so he sent a courtier out into the country to interview artists and collect samples of their work that he could judge. The courtier approached the painter Giotto and asked for a drawing to demonstrate his skill. Instead of a study of angels and saints, which the courtier expected, Giotto took a brush loaded with red paint and drew a perfect circle. The courtier was furious, thinking he had been made a fool of; nonetheless, he took the drawing back to Boniface. The Pope understood the significance of the red circle, and Giotto got the job.

This is often told as the story of the ultimate test of drawing, and I don’t dispute that it is very hard to draw a perfect circle. However, I would argue that it is much more useful to be able to draw a circle existing in space, a circle seen turned at various angles as we usually encounter it in the world. We need to be able to draw an ellipse.

The ellipse is the Frisbee of art, the circle freed from its flatness that sails out into imagined space tilting this way and that and ending up on the top of the soup bowl and silver cup in Jean-Baptiste Chardin’s still life or, imagine this, on the wheels of the speeding Batmobile.

Jean-Baptiste Chardin The Silver Goblet
Pablo Picasso,  Mother and Child

Once you tune into ellipses, you will begin to see them everywhere: in art, as in the Chardin painting, or in life, in your morning coffee cup or the table top on which the cup sits. The ellipse is also implicit in every cylindrical form whether or not we see its end exposed (as it would be in a can or a cup or a length of pipe). Just look at the Picasso “Mother and Child.” Highlighting the ellipses, as I have done, helps you to understand the basic roundness of those limbs, encouraging you to see and to draw with a volumetric rather than a flat perception of what you are observing. So the ellipse is important because it exists in so many places as an actual shape, and because it is “buried” in so many round forms that we are likely to draw.

The challenge of drawing an ellipse is that it must be done with enough speed to engage the natural “roundingness” of your reflexes. In essence, you are deciding to make an ellipse of a particular shape and then letting your hand and wrist move autonomously to accomplish the job. Much of what you are practicing in learning to draw is engaging your fine motor skills in this way, so that the hand moves to do your bidding without a “controlling” space between deciding to make a particular line and the hand moving to do it. Before this kind of almost simultaneous cooperation between your brain and your hand occurs, you will tend to worry the line out in slow incremental steps. In this hand-eye coordination, drawing is an athletic activity that benefits from practice, like golf or tennis. On a page of your drawing pad, make various kinds of ellipses as a warmup for the exercise below. Keep the movement of your hand fluid and relatively fast.

Let’s begin by drawing a pot. A few words on perspective are in order before we start. Think of looking at a can of soda on a table in front of you: the implied ellipse at the bottom of the can where it sits on the table is rounder than the ellipse at the top of the can because you are looking down on it more. If it’s easier to observe this in a straight-sided drinking glass, then use that as an example. This describes the basic idea, illustrated by the diagram below, that as you look down on an ellipse you see more of it than if the ellipse is higher up relative to your eye level.
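
The soda-can observation can be reproduced numerically. The sketch below is my own, under an assumed pinhole model (eye at the origin, looking down the z-axis); the radius, distance, and the three eye heights are arbitrary illustrative values, not anything from the column.

```python
import math

def project(x, y, z, f=1.0):
    # Pinhole camera: eye at the origin, looking down the +z axis.
    return (f * x / z, f * y / z)

def ground_circle(r=1.0, h=2.0, d=10.0, n=360):
    # Project a circle of radius r lying flat on the plane y = -h
    # (h units below eye level), centered d units in front of the eye.
    pts = []
    for i in range(n):
        t = 2.0 * math.pi * i / n
        pts.append(project(r * math.cos(t), -h, d + r * math.sin(t)))
    return pts

def extent(pts):
    # Width and height of the projected shape's bounding box.
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return max(xs) - min(xs), max(ys) - min(ys)

# The flat circle projects as an ellipse. Seen from low down (small h)
# it is wide and shallow; raising the eye level makes it rounder.
for h in (1.0, 2.0, 6.0):
    w, hgt = extent(ground_circle(h=h))
    print(f"eye height {h}: height/width = {hgt / w:.3f}")
```

Raising h plays the role of looking down on the can more: the height-to-width ratio of the projected ellipse grows, while the ellipse stays wider than it is tall.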

Start your drawing by looking at the top of the pot and making an ellipse as close as you can to the shape you see. Give it a couple of tries if you need to. Now bring down two outside-edge lines to where the pot bulges out. Add a center line all the way down to where you think the bottom of the pot is. Now add two horizontal lines, one at the bulge point and one a little above the bottom of the pot. These lines will guide you as you make the two ellipses that describe the cylindrical shape of the pot. Make the ellipse at the bulge point a little rounder than the top ellipse. Make the bottom ellipse rounder still. Now, looking at the outside edges at the bottom of the pot, draw connecting curves between the two ellipses, trying to capture the nature of the shapes in the way that the bulge is more pronounced at the top, like shoulders, and then curves inward.

Congratulations! You have now made a basic linear drawing of a pot. I encourage you to strengthen your understanding of analyzing round forms by doing an additional exercise; choose a basically cylindrical object from your surroundings and draw it using ellipses in the same way I have just demonstrated. Because you will be studying an object in three dimensions rather than in a photograph it may be easier to see the ellipses. I have photographed a group of household objects to suggest some of the things that you might consider.

Household Objects

In the next column I’ll show the same pot we’ve just drawn in a more dramatic light to make it easier to understand its volumes, so you can see how the direction the light comes from affects the shadows. You’ll have an opportunity to practice the logic and art of shading.

James McMullan, New York Times


Serendipitous Connections

Innovation occurs when ideas from different people bang against each other.

In the physical universe, chemical reactions are limited by the molecules that are close to one another and the ease with which they can meet up. You can run an electrical current through a chemical bath and synthesize the basic amino acids that form the building-blocks of human life. You cannot synthesize a llama.

So it is in the field of human knowledge. New ideas are limited by the supply of existing ideas and by the speed with which those ideas can combine to form new ones. The ancients could build accurate astronomical models but could not generate a theory of gravity; they needed better telescopes, better measurements and a theory of calculus. “If I see farther than other men,” said Isaac Newton, “it is because I stood on the shoulders of giants.”

This idea, the importance of proximity, is one of the first concepts that Steven Johnson introduces in “Where Good Ideas Come From.” In many ways, it is the heart of the book, defining not just what innovations are possible at a given time, but also how innovation gets done within the current frontiers of human knowledge. In Mr. Johnson’s telling, innovation is most likely to occur when ideas from different people, and even different fields, are rapidly banging against one another; every so often the ideas will spawn some radical new combination. The most innovative institutions will create settings where ideas are free to move, and connect, in unexpected ways.

Anyone who has written about business or science knows how often stories about inventions start with some chance encounter: “I was sitting next to this guy on an airplane, and he said . . .” The pacemaker was invented by an electronics technician who happened to have lunch with two heart surgeons; McDonald’s became a national chain after Ray Kroc stopped by the original hamburger shack to sell milkshake machines and realized that he had stumbled onto a good thing.

Mr. Johnson thinks that the “adjacent possible” (his term for the space of new combinations within reach of existing ideas) explains why cities foster much more innovation than small towns: Cities abound with serendipitous connections. Industries, he says, may tend to cluster for the same reason. A lone company in the middle of nowhere has only the mental resources of its employees to fall back on. When there are hundreds of companies around, with workers more likely to change jobs, ideas can cross-fertilize.

The author outlines other factors that make innovation work: the tolerance of failure, as in Thomas Edison’s inexorable process-of-elimination approach to finding a workable light-bulb filament; the way that ideas from one field can be transformed in another; and the power of information platforms to connect disparate data and research. “Where Good Ideas Come From” is filled with fascinating, if sometimes tangential, anecdotes from the history of entrepreneurship and scientific discovery. The result is that the book often seems less a grand theory of innovation than a collection of stories and theories about creativity that Steven Johnson happens to find interesting.

It turns out that Mr. Johnson himself has a big idea, but it’s not a particularly incisive one: He proposes that competition and market forces are less important to innovation than openness and inspiration. The book includes a list of history’s most important innovations and divides them along two axes: whether the inventor was working alone or in a network; and whether he was working for a market reward or for some other reason. Market-led innovations, it turns out, are in the minority.

Certainly it is true that great discoveries happen in government projects or academic labs; it would be foolish to declare that only market incentives can produce transformative ideas. But Mr. Johnson’s list ultimately proves less about the market’s shortcomings than about the shortcomings of the great-discovery model of innovation on which he dwells. Markets may be less effective at delivering radical new ideas, but they excel at converting those ideas into useful tools.

Reverence for the great-discovery model of innovation is what prompts critics of the pharmaceutical industry to declare that all the “real work” of drug discovery is done in university labs, often with taxpayer funding. Drug companies, we are often told, simply steal the ideas and monetize them. And yet what “Big Pharma” does is no less crucial to drug discovery than the basic research that takes place in academia. It is not enough to learn that a certain disease process can be thwarted by a given molecule. You also have to figure out how to cheaply mass-produce that chemical, in a form that can be easily taken by ordinary patients (no IV drugs for acid reflux, please). And before the drug can be approved, it must be run through the expensive human trials required by the Food and Drug Administration.

The endless creativity of the human animal is one of the differences between us and a chimpanzee poking sticks into an anthill in search of a juicy meal. But another one is our capacity for the endless elaboration and refinement of ideas—particularly in a modern economy. Toyota’s prowess at this sort of incremental improvement is legendary, even radical. Wal-Mart, it is said, was responsible for 25% of U.S. productivity growth in the 1990s. That’s not because Sam Walton emerged from his lab one night waving blueprints for a magic productivity machine. The company made continual, often tiny, improvements in the management of its supply chain, opening thousands of stores along the way and putting the benefits within reach of virtually every American.

We are all of us, every day, discovering many things that don’t work very well and a few things that do. Reducing the history of innovation to a few “big ideas” misses the full power of human ingenuity.

Ms. McArdle is the business and economics editor of The Atlantic and a fellow at the New America Foundation.



Kant on a Kindle?

The technology of the book—sheaves of paper covered in squiggles of ink—has remained virtually unchanged since Gutenberg. This is largely a testament to the effectiveness of books as a means of transmitting and storing information. Paper is cheap, and ink endures.

In recent years, however, the act of reading has undergone a rapid transformation, as devices such as the Kindle and iPad account for a growing share of book sales. (Amazon, for instance, now sells more e-books than hardcovers.) Before long, we will do most of our reading on screens—lovely, luminous screens.

The displays are one of the main selling points of these new literary gadgets. Thanks to dramatic improvements in screen resolution, the words shimmer on the glass; every letter is precisely defined, with fully adjustable fonts. Think of it as a beautifully printed book that’s always available in perfect light. For contrast and clarity, it’s hard for Gutenberg to compete.

And these reading screens are bound to get better. One of the longstanding trends of modern technology is to make it easier and easier to perceive fine-grained content. The number of pixels in televisions has increased fivefold in the last 10 years, VHS gave way to the Blu-ray, and computer monitors can display millions of vibrant colors.

I would be the last to complain about such improvements—I shudder to imagine a world without sports on HDTV—but it’s worth considering the ways in which these new reading technologies may change the nature of reading and, ultimately, the content of our books.

Let’s begin by looking at how reading happens in the brain. Stanislas Dehaene, a neuroscientist at the Collège de France in Paris, has helped to demonstrate that the literate brain contains two distinct pathways for making sense of words, each activated in different contexts. One pathway, known as the ventral route, is direct and efficient: We see a group of letters, convert those letters into a word and then directly grasp the word’s meaning. When you’re reading a straightforward sentence in a clear format, you’re almost certainly relying on this neural highway. As a result, the act of reading seems effortless. We don’t have to think about the words on the page.

But the ventral route is not the only way to read. The brain’s second reading pathway, the dorsal stream, is turned on when we have to pay conscious attention to a sentence. Perhaps we’ve encountered an obscure word or a patch of smudged ink. (In his experiments, Mr. Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Mr. Dehaene’s research demonstrates that even adults are still forced to occasionally decipher a text.

The lesson of his research is that the act of reading observes a gradient of awareness. Familiar sentences rendered on lucid e-ink screens are read quickly and effortlessly. Unusual sentences with complex clauses and odd punctuation tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra cognitive work wakes us up; we read more slowly, but we notice more. Psychologists call this the “levels-of-processing” effect, since sentences that require extra levels of analysis are more likely to get remembered.

E-readers have yet to dramatically alter the reading experience; e-ink still feels a lot like old-fashioned ink. But it seems inevitable that the same trends that have transformed our televisions will also affect our reading gadgets. And this is where the problems begin. Do we really want reading to be as effortless as possible? The neuroscience of literacy suggests that, sometimes, the best way to make sense of a difficult text is to read it in a difficult format, to force our brain to slow down and process each word. After all, reading isn’t about ease—it’s about understanding. If we’re going to read Kant on the Kindle, or Proust on the iPad, then we should at least experiment with an ugly font.

Every medium eventually influences the message that it carries. I worry that, before long, we’ll become so used to the mindless clarity of e-ink that the technology will feed back onto the content, making us less willing to endure challenging texts. We’ll forget what it’s like to flex those dorsal muscles, to consciously decipher a thorny stretch of prose. And that would be a shame, because not every sentence should be easy to read.

Jonah Lehrer is the author, most recently, of “How We Decide.”



Descent Into Legal Hell

On the afternoon of Sept. 28, 1999, sheriff’s deputies pulled into the driveway of Cynthia Stewart’s Ohio home and arrested her. Her crime: taking pictures of her 8-year-old daughter playing in the bathtub. She had sent the photos to a film-processing lab, and the lab called the police. The police took the pictures to the town prosecutor, who viewed them as harmless and declined to press charges. The police then turned to the county prosecutor, who was all too happy to take the case. He promptly brought child-pornography charges against Ms. Stewart.

“Framing Innocence” is Lynn Powell’s reported account of Ms. Stewart’s descent into legal hell. For two years, the case meandered through the justice system. Child Services filed suit, seeking custody of Ms. Stewart’s daughter on the grounds that the young girl had been abused. Ms. Stewart was threatened with 16 years in jail. Her legal bills ran upwards of $40,000. In the end an intense public campaign on her behalf forced the ambitious prosecutor (who is now a federal judge) to cut a deal in which Ms. Stewart was absolved of wrongdoing.

The case is not unique. By the time Ms. Stewart’s saga ended, another mother in Ohio and a grandmother in New Jersey had also been arrested on similarly absurd charges. The relevant case law, Osborne v. Ohio, gives such a loosely worded standard for child pornography that it allows the state nearly unfettered intrusion into family life. Like the Supreme Court’s eminent-domain decision (Kelo v. City of New London), Osborne has had the effect of unleashing the power of the state on unsuspecting individuals. If you have ever taken a picture of a naked toddler, the only thing standing between you and criminal prosecution is the good judgment of government workers.

The Stewart case is particularly notable in that it took place in Oberlin, Ohio, which is Middle America’s version of Berkeley, Calif. Nearly every character in the cast is liberal to the point of self-parody. Before her court appearance Ms. Stewart had never owned a bra. Her cat was named after a Sandinista spy. Her then-partner worked for the Nation magazine. (They have split up since.) And yet the liberals who rallied to Ms. Stewart’s defense are undiverted from their belief that government should have a great deal to say about how people live their lives.

“Framing Innocence” is thoroughly and fairly reported, without a strong polemical thrust. If there is a point to this morality tale, in Ms. Powell’s telling, it seems to be that in the justice system mistakes can be made—sometimes terrible mistakes. True enough, but more could be said. If a well-meaning law about child pornography can wreak such havoc on families, anyone care to speculate on what a 2,000-page health-care law might do?

Mr. Last is a senior writer at The Weekly Standard.



Too Funny for Words

WHEN my dad, Allen Funt, produced “Candid Microphone” back in the mid-1940s, he used a clever ruse to titillate listeners. A few times per show he’d edit out an innocent word or phrase and replace it with a recording of a sultry woman’s voice saying, “Censored.” Audiences always laughed at the thought that something dirty had been said, even though it hadn’t.

When “Candid Camera” came to television, the female voice was replaced by a bleep and a graphic that flashed “Censored!” As my father and I learned over decades of production, ordinary folks don’t really curse much in routine conversation — even when mildly agitated — but audiences love to think otherwise.

By the mid-1950s, TV’s standards and practices people decided Dad’s gimmick was an unacceptable deception. There would be no further censoring of clean words.

I thought about all this when CBS started broadcasting a show last week titled “$#*! My Dad Says,” which the network insists with a wink should be pronounced “Bleep My Dad Says.” There is, of course, no mystery whatsoever about what the $-word stands for, because the show is based on a highly popular Twitter feed, using the real word, in which a clever guy named Justin Halpern quotes the humorous, often foul utterances of his father, Sam.

Bleeping is broadcasting’s biggest deal. Even on basic cable, the new generation of “reality” shows like “Jersey Shore” bleep like crazy, as do infotainment series like “The Daily Show With Jon Stewart,” where scripted curses take on an anti-establishment edge when bleeped in a contrived bit of post-production. This season there is even a cable series about relationships titled “Who the (Bleep) Did I Marry?” — in which “bleep” isn’t subbing for any word in particular. The comedian Drew Carey is developing a series that CBS has decided to call “WTF!” Still winking, the network says this one stands for “Wow That’s Funny!”

Although mainstream broadcasters won a battle against censorship over the summer when a federal appeals court struck down some elements of the Federal Communications Commission’s restrictions on objectionable language, they’ve always been more driven by self-censorship than by the government-mandated kind. Eager to help are advertisers and watchdog groups, each appearing to take a tough stand on language while actually reveling in the double entendre.

For example, my father and I didn’t run across many dirty words when recording everyday conversation, but we did find that people use the terms “God” and “Jesus” frequently — often in a gentle context, like “Oh, my God” — and this, it turned out, worried broadcasting executives even more than swearing. If someone said “Jesus” in a “Candid Camera” scene, CBS made us bleep it, leaving viewers to assume that a truly foul word had been spoken. And that seemed fine with CBS, because what mainstream TV likes best is the perception of naughtiness.

TV’s often-hypocritical approach to censorship was given its grandest showcase back in 1972, when the comedian George Carlin first took note of “Seven Words You Can Never Say on Television.” The bit was recreated on stage at the Kennedy Center a few years ago in a posthumous tribute to Carlin, but all the words were bleeped — not only for the PBS audience but for the theatergoers as well.

Many who saw the show believed the bleeped version played funnier. After all, when Bill Maher and his guests unleash a stream of nasty words on HBO, it’s little more than barroom banter. But when Jon Stewart says the same words, knowing they’ll be bleeped, it revs up the crowd while also seeming to challenge the censors.

In its July ruling, the appeals court concluded, “By prohibiting all ‘patently offensive’ references to sex … without giving adequate guidance as to what ‘patently offensive’ means, the F.C.C. effectively chills speech, because broadcasters have no way of knowing what the F.C.C. will find offensive.” That’s quite reasonable — and totally beside the point. Most producers understand that when it comes to language, the sizzle has far more appeal than the steak. Broadcasters keep jousting with the F.C.C. begging not to be thrown in the briar patch of censorship, because that’s really where they most want to be.

Jimmy Kimmel has come up with a segment for his late-night ABC program called “This Week in Unnecessary Censorship.” He bleeps ordinary words in clips to make them seem obscene. How bleepin’ dare he! Censorship, it seems, remains one of the most entertaining things on television.

Peter Funt writes about social issues on his Web site, Candid Camera.



The Genius of the Tinkerer

The secret to innovation is combining odds and ends, writes Steven Johnson.

In the year following the 2004 tsunami, the Indonesian city of Meulaboh received eight neonatal incubators from international relief organizations. Several years later, when an MIT fellow named Timothy Prestero visited the local hospital, all eight were out of order, victims of power surges, tropical humidity, and the hospital staff’s inability to read the English repair manual.


Mr. Prestero and the organization he cofounded, Design That Matters, had been working for several years on a more reliable, and less expensive, incubator for the developing world. In 2008, they introduced a prototype called the NeoNurture. It looked like a streamlined modern incubator, but its guts were automotive. Sealed-beam headlights supplied the crucial warmth; dashboard fans provided filtered air circulation; door chimes sounded alarms. You could power the device with an adapted cigarette lighter or a standard-issue motorcycle battery. Building the NeoNurture out of car parts was doubly efficient, because it tapped both the local supply of parts and the local knowledge of automobile repair. You didn’t have to be a trained medical technician to fix the NeoNurture; you just needed to know how to replace a broken headlight.

The NeoNurture incubator is a fitting metaphor for the way that good ideas usually come into the world. They are, inevitably, constrained by the parts and skills that surround them. We have a natural tendency to romanticize breakthrough innovations, imagining momentous ideas transcending their surroundings, a gifted mind somehow seeing over the detritus of old ideas and ossified tradition.

But ideas are works of bricolage. They are, almost inevitably, networks of other ideas. We take the ideas we’ve inherited or stumbled across, and we jigger them together into some new shape. We like to think of our ideas as a $40,000 incubator, shipped direct from the factory, but in reality they’ve been cobbled together with spare parts that happened to be sitting in the garage.

As a tribute to human ingenuity, the evolutionary biologist Stephen Jay Gould maintained an odd collection of sandals made from recycled automobile tires, purchased during his travels through the developing world. But he also saw them as a metaphor for the patterns of innovation in the biological world. Nature’s innovations, too, rely on spare parts.

Evolution advances by taking available resources and cobbling them together to create new uses. The evolutionary theorist Francois Jacob captured this in his concept of evolution as a “tinkerer,” not an engineer; our bodies are also works of bricolage, old parts strung together to form something radically new. “The tires-to-sandals principle works at all scales and times,” Mr. Gould wrote, “permitting odd and unpredictable initiatives at any moment—to make nature as inventive as the cleverest person who ever pondered the potential of a junkyard in Nairobi.”

You can see this process at work in the primordial innovation of life itself. Before life emerged on Earth, the planet was dominated by a handful of basic molecules: ammonia, methane, water, carbon dioxide, a smattering of amino acids and other simple organic compounds. Each of these molecules was capable of a finite series of transformations and exchanges with other molecules in the primordial soup: methane and oxygen recombining to form formaldehyde and water, for instance.

Think of all those initial molecules, and then imagine all the potential new combinations that they could form spontaneously, simply by colliding with each other (or perhaps prodded along by the extra energy of a propitious lightning strike). If you could play God and trigger all those combinations, you would end up with most of the building blocks of life: the proteins that form the boundaries of cells; sugar molecules crucial to the nucleic acids of our DNA. But you would not be able to trigger chemical reactions that would build a mosquito, or a sunflower, or a human brain. Formaldehyde is a first-order combination: You can create it directly from the molecules in the primordial soup. Creating a sunflower, however, relies on a whole series of subsequent innovations: chloroplasts to capture the sun’s energy, vascular tissues to circulate resources through the plant, DNA molecules to pass on instructions to the next generation.



Big new ideas more often result from recycling and combining old ideas than from eureka moments. Consider the origins of some familiar innovations.

Double-entry accounting

One of the essential instruments of modern capitalism appears to have been developed collectively in Renaissance Italy. Now the cornerstone of bookkeeping, the practice of recording every transaction twice, as a debit in one account and a credit in another, allowed merchants to accurately track the health of their businesses. It was first codified by the Franciscan friar Luca Pacioli in 1494, but it had been used for at least two centuries by Italian bankers and merchants.

Gutenberg press

The printing press is a classic combinatorial innovation. Each of its key elements—the movable type, the ink, the paper and the press itself—had been developed separately well before Johannes Gutenberg printed his first Bible in the 15th century. Movable type, for instance, had been independently conceived by a Chinese blacksmith named Pi Sheng four centuries earlier. The press itself was adapted from a screw press that was being used in Germany for the mass production of wine.

Air conditioning

AC counts as a rare instance of innovation through sheer individual insight. After summer heat waves in 1900 and 1901, the owners of a printing company asked the heating-systems specialist Buffalo Forge Co. for a way to make the air in its press rooms less humid. The project fell to a 25-year-old electrical engineer named Willis Carrier, who built a system that cooled the air to a temperature that would produce 55% humidity. His idea ultimately rearranged the social and political map of America.


The scientist Stuart Kauffman has a suggestive name for the set of all those first-order combinations: “the adjacent possible.” The phrase captures both the limits and the creative potential of change and innovation. In the case of prebiotic chemistry, the adjacent possible defines all those molecular reactions that were directly achievable in the primordial soup. Sunflowers and mosquitoes and brains exist outside that circle of possibility. The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.

The strange and beautiful truth about the adjacent possible is that its boundaries grow as you explore them. Each new combination opens up the possibility of other new combinations. Think of it as a house that magically expands with each door you open. You begin in a room with four doors, each leading to a new room that you haven’t visited yet. Once you open one of those doors and stroll into that room, three new doors appear, each leading to a brand-new room that you couldn’t have reached from your original starting point. Keep opening new doors and eventually you’ll have built a palace.

Basic fatty acids will naturally self-organize into spheres lined with a dual layer of molecules, very similar to the membranes that define the boundaries of modern cells. Once the fatty acids combine to form those bounded spheres, a new wing of the adjacent possible opens up, because those molecules implicitly create a fundamental division between the inside and outside of the sphere. This division is the very essence of a cell. Once you have an “inside,” you can put things there: food, organelles, genetic code.

The march of cultural innovation follows the same combinatorial pattern: Johannes Gutenberg, for instance, took the older technology of the screw press, designed originally for making wine, and reconfigured it with metal type to invent the printing press.

More recently, a graduate student named Brent Constantz, working on a Ph.D. that explored the techniques that coral polyps use to build amazingly durable reefs, realized that those same techniques could be harnessed to heal human bones. Several IPOs later, the cements that Mr. Constantz created are employed in most orthopedic operating rooms throughout the U.S. and Europe.

Mr. Constantz’s cements point to something particularly inspiring in Mr. Kauffman’s notion of the adjacent possible: the continuum between natural and man-made systems. Four billion years ago, if you were a carbon atom, there were a few hundred molecular configurations you could stumble into. Today that same carbon atom can help build a sperm whale or a giant redwood or an H1N1 virus, along with every single object on the planet made of plastic.

The premise that innovation prospers when ideas can serendipitously connect and recombine with other ideas may seem logical enough, but the strange fact is that a great deal of the past two centuries of legal and folk wisdom about innovation has pursued the exact opposite argument, building walls between ideas. Ironically, those walls have been erected with the explicit aim of encouraging innovation. They go by many names: intellectual property, trade secrets, proprietary technology, top-secret R&D labs. But they share a founding assumption: that in the long run, innovation will increase if you put restrictions on the spread of new ideas, because those restrictions will allow the creators to collect large financial rewards from their inventions. And those rewards will then attract other innovators to follow in their path.


The problem with these closed environments is that they make it more difficult to explore the adjacent possible, because they reduce the overall network of minds that can potentially engage with a problem, and they reduce the unplanned collisions between ideas originating in different fields. This is why a growing number of large organizations—businesses, nonprofits, schools, government agencies—have begun experimenting with more open models of idea exchange.

Organizations like IBM and Procter & Gamble, which have a long history of profiting from patented, closed-door innovations, have embraced open innovation platforms over the past decade, sharing their leading-edge research with universities, partners, suppliers and customers. Modeled on the success of services like Twitter and Flickr, new Web startups now routinely make their software accessible to programmers who are not on their payroll, allowing these outsiders to expand on and remix the core product in surprising new ways.

Earlier this year, Nike announced a new Web-based marketplace it calls the GreenXchange, where it has publicly released more than 400 of its patents that involve environmentally friendly materials or technologies. The marketplace is a kind of hybrid of commercial self-interest and civic good. This makes it possible for outside firms to improve on those innovations, creating new value that Nike might ultimately be able to put to use itself in its own products.

In a sense, Nike is widening the network of minds who are actively thinking about how to make its ideas more useful, without adding any more employees. But some of its innovations might well turn out to be advantageous to industries or markets in which it has no competitive involvement whatsoever. By keeping its eco-friendly ideas behind a veil of secrecy, Nike was holding back ideas that might, in another context, contribute to a sustainable future—without any real commercial justification.

A hypothetical scenario invoked by the company at the launch of the GreenXchange would have warmed the heart of Stephen Jay Gould: perhaps an environmentally sound rubber originally invented for use in running shoes could be adapted by a mountain bike company to create more sustainable tires. Apparently, Gould’s tires-to-sandals principle works both ways. Sometimes you make footwear by putting tires to new use, and sometimes you make tires by putting footwear to new use.

There is a famous moment in the story of the near-catastrophic Apollo 13 mission—wonderfully captured in the Ron Howard film—in which the mission control engineers realize they need to create an improvised carbon dioxide filter, or the astronauts will poison the lunar module atmosphere with their own exhalations before they return to Earth. The astronauts have plenty of carbon “scrubbers” onboard, but these filters were designed for the original, damaged spacecraft and don’t fit the ventilation system of the lunar module they are using as a lifeboat to return home. Mission control quickly assembles a “tiger team” of engineers to hack their way through the problem.

In the movie, Deke Slayton, head of flight crew operations, tosses a jumbled pile of gear on a conference table: hoses, canisters, stowage bags, duct tape and other assorted gadgets. He holds up the carbon scrubbers. “We gotta find a way to make this fit into a hole for this,” he says, and then points to the spare parts on the table, “using nothing but that.”

The space gear on the table defines the adjacent possible for the problem of building a working carbon scrubber on a lunar module. (The device they eventually concocted, dubbed the “mailbox,” performed beautifully.) The canisters and nozzles are like the ammonia and methane molecules of the early Earth, or those Toyota parts heating an incubator: They are the building blocks that create—and limit—the space of possibility for a specific problem. The trick to having good ideas is not to sit around in glorious isolation and try to think big thoughts. The trick is to get more parts on the table.

Steven Johnson is the author of seven books, including “The Invention of Air.” This essay is adapted from “Where Good Ideas Come From: The Natural History of Innovation.”



Lost libraries

The strange afterlife of authors’ book collections

A few weeks ago, Annecy Liddell was flipping through a used copy of Don DeLillo’s “White Noise” when she saw that the previous owner had written his name inside the cover: David Markson. Liddell bought the novel anyway and, when she got home, looked the name up on Wikipedia.

Markson, she discovered, was an important novelist himself–an experimental writer with a cult following in the literary world. David Foster Wallace considered Markson’s “Wittgenstein’s Mistress”–a novel that had been rejected by 54 publishers–“pretty much the high point of experimental fiction in this country.” When it turned out that Markson had written notes throughout Liddell’s copy of “White Noise,” she posted a Facebook update about her find. “i wanted to call him up and tell him his notes are funny, but then i realized he DIED A MONTH AGO. bummer.”

The news of Liddell’s discovery quickly spread through Facebook and Twitter’s literary districts, and Markson’s fans realized that his personal library, about 2,500 books in all, had been sold off and was now anonymously scattered throughout The Strand, the vast Manhattan bookstore where Liddell had bought her book. And that’s when something remarkable happened: Markson’s fans began trying to reassemble his books. They used the Internet to coordinate trips to The Strand, to compile a list of their purchases, to swap scanned images of his notes, and to share tips. (The easiest way to spot a Markson book, they found, was to look for the high-quality hardcovers.) Markson’s fans told stories about watching strangers buy his books without understanding their origin, even after Strand clerks pointed out Markson’s signature. They also started asking questions, each one a variation on this: How could the books of one of this generation’s most interesting novelists end up on a bookstore’s dollar clearance carts?

What Markson’s fans had stumbled on was the strange and disorienting world of authors’ personal libraries. Most people might imagine that authors’ libraries matter–that scholars and readers should care what books authors read, what they thought about them, what they scribbled in the margins. But far more libraries get dispersed than saved. In fact, David Markson can now take his place in a long and distinguished line of writers whose personal libraries were quickly, casually broken down. Herman Melville’s books? One bookstore bought an assortment for $120, then scrapped the theological titles for paper. Stephen Crane’s? His widow died a brothel madam, and her estate (and his books) were auctioned off on the steps of a Florida courthouse. Ernest Hemingway’s? To this day, all 9,000 titles remain trapped in his Cuban villa.

The issues at stake when libraries vanish are bigger than any one author and his books. An author’s library offers unique access to a mind at work, and the treatment of these collections provides a look at what exactly the literary world decides to value in an author’s life. John Wronoski, a longtime book dealer in Cambridge, has seen the libraries of many prestigious authors pass through his store without securing a permanent home. “Most readers would see these names and think, ‘My god, shouldn’t they be in a library?’” Wronoski says. “But most readers have no idea how this system works.”

The literary world is full of treasures and talismans, not all of them especially literary–a lock of Byron’s hair has been sold at auction; Harvard has archived John Updike’s golf score cards.

For private collectors and university libraries, though, the most important targets are manuscripts and letters and research materials–what’s collectively known as an author’s papers–and rare, individually valuable books. In the first category, especially, things can get expensive. The University of Texas’s Harry Ransom Center recently bought Bob Woodward and Carl Bernstein’s papers for $5 million and Norman Mailer’s for $2.5 million. Compared to the papers, the author’s own library takes a back seat. “An author’s books are important,” says Tom Staley, the Ransom Center’s director, “but they’re no substitute for the manuscripts and the correspondence. The books are gravy.”

Updike would seem to have agreed. After his death in 2009, Harvard’s Houghton Library bought Updike’s archive, more than 125 shelves of material that he assembled himself. Updike chose to include 1,500 books, but that number is inflated by his own work–at least one copy of every edition of every book in every language in which it was issued. “He was not so comprehensive in the books that he read,” says Leslie Morris, Harvard’s curator for the Updike archive. In fact, Updike was known to donate old books to church book sales and to hand them out to friends’ wives. Late in life, he made a deal with Mark Stolle, who owns a bookstore in Manchester-by-the-Sea. “He would call me once his garage was filled,” Stolle remembers, “and I would go over and buy them.”

While he didn’t seem to value them, Updike’s books begin to show how and why an author’s library does matter. In his copy of Tom Wolfe’s “A Man in Full,” which was one of Stolle’s garage finds, Updike wrote comments like “adjectival monotony” and “semi cliché in every sentence.” A comparison with Updike’s eventual New Yorker review suggests that authors will write things in their books that they won’t say in public.

An author’s library, like anyone else’s, reveals something about its owner. Mark Twain loved to present himself as self-taught and under-read, but his carefully annotated books tell a different story. Books can offer hints about an author’s social and personal life. After David Foster Wallace’s death in 2008, the Ransom Center bought his papers and 200 of his books, including two David Markson novels that Wallace not only annotated, but also had Markson sign when they met in New York in 1990. Most of all, though, authors’ libraries serve as a kind of intellectual biography. Melville’s most heavily annotated book was an edition of John Milton’s poems, and it proves he reread “Paradise Lost” while struggling with “Moby-Dick.”

And yet these libraries rarely survive intact. The reasons for this can range from money problems to squabbling heirs to poorly executed auctions. Twain’s library makes for an especially cringe-worthy case study because, unlike a lot of now-classic authors, he saw no ebb in his reputation–and, thus, there was no excuse for the handling of his books. In 1908, Twain donated 500 books to the library he helped establish in Redding, Conn. After Twain’s death in 1910, his daughter, Clara, gave the library another 1,700 books. The Redding library began circulating Twain’s books, many of which contained his notes, and souvenir hunters began cutting out every page that had Twain’s handwriting. This was bad enough, but in the 1950s the library decided to thin its inventory, unloading the unwanted books on a book dealer who soon realized he now possessed more than 60 titles annotated by Mark Twain. Today, academic libraries across the country own Twain books in which “REDDING LIBRARY” has been stamped in purple ink.

But the 1950s also marked the start of a shift in the way many scholars and librarians appraised an author’s books. They began trying to reassemble the most famous authors’ libraries–or, in worst-case scenarios like Twain’s, to compile detailed lists of every book a writer had owned. The effort and ingenuity behind these lists can be astounding, as scholars will sift through diaries, receipts, even old library call slips. A good example is Alan Gribben’s “Mark Twain’s Library: A Reconstruction,” which runs to two volumes and took nine years to complete.

This raises an obvious question: Why not make the list of an author’s books before dispersing them? The answer, usually, is time. Book dealers, Wronoski says, can’t assemble scholarly lists while also moving enough inventory to stay in business. When Wallace’s widow and his literary agent, Bonnie Nadell, sorted through his library, they sent only the books he had annotated to the Ransom Center. The others, more than 30 boxes’ worth, they donated to charity. There was no chance to make a list, Nadell says, because another professor needed to move into Wallace’s office. “We were just speed skimming for markings of any kind.”

Still, the gap between the labor required on the front end and the back end can make such choices seem baffling and even–a curious charge to make when discussing archives–short-sighted. Libraries, for their part, must also allocate limited resources, and they do so based on a calculus of demand, precedent, and prestige. This means the big winners are historical authors (in the 1980s, Melville’s copy of Milton sold at an auction for $100,000) and those who fit into a library’s targeted specialties. “We tend to focus on Harvard-educated authors,” Morris says. “The Houghton Library is pretty much full and has been for the last 10 years.”

In David Markson’s case, the easiest explanation for why his books ended up at The Strand is that he wanted them to. Markson, who lived near the bookstore, would stop by three or four times a week. The Strand, in turn, hosted his book signings and maintained a table of his books, and Markson’s daughter, Johanna, says he frequently told her in his final years to take his books to The Strand. “He said they’d take good care of us,” she says.

And so, after Johanna and her brother saved some books that were important to them–“I want my children to see what kind of reader their grandfather was,” Johanna says–a truck from The Strand picked up the rest, 63 boxes in all. Fred Bass, The Strand’s owner, says he had to break Markson’s library apart because of the size of his operation. “We do it with most personal libraries,” Bass says. “We don’t have room to set up special collections.”

Markson had sold books to The Strand before. In fact, over the years, he sold off his most valuable books and even small batches of his literary correspondence simply to make ends meet. Markson recalled in one interview that, when he asked Jack Kerouac to sign a book for him, Kerouac was so drunk he stabbed the pen through the front page. Bass says he personally looked through Markson’s books hoping to find items like this. “But David had picked it pretty clean.”

Selling his literary past became a way for Markson to sustain his literary future. In “Wittgenstein’s Mistress” and the four novels that followed, Markson abandoned characters and plots in favor of meticulously ordered allusions and historical anecdotes–a style he called “seminonfictional semifiction.” That style, along with the skill with which he prosecuted it, explains both the size and the passion of Markson’s audience.

Markson’s late style also explains the special relevance of his library, and it’s a wonderful twist that these elements all came together in the campaign to crowdsource it. Through a Facebook group and an informal collection of blog posts, Markson’s fans have put together a representative sample of his books. The results won’t satisfy the scholarly completist, but they reveal the range of Markson’s reading–not just fiction and poetry, but classical literature, philosophy, literary criticism, and art history. They also illuminate aspects of Markson’s life (one fan got the textbooks Markson used while a graduate student) and his art (another got his copy of “Foxe’s Book of Martyrs,” where Markson had underlined passages that resurface in his later novels). Most of all, they capture Markson’s mind as it plays across the page. In his copy of “Agape Agape,” the final novel from postmodern wizard William Gaddis, Markson wrote: “Monotonous. Tedious. Repetitious. One note, all the way through. Theme inordinately stale + old hat. Alas, Willie.”

Markson’s letters to and from Gaddis were one of the things he sold off–they’re now in the Gaddis collection at Washington University–but Johanna Markson says he left some papers behind. “He always told us, ‘When I die, that’s when I’ll be famous,’” she says, and she’s saving eight large bins full of Markson’s edited manuscripts, the note cards he used to write his late novels, and his remaining correspondence. A library like Ohio State’s, which specializes in contemporary fiction, seems like a good match. In fact, Geoffrey Smith, head of Ohio State’s Rare Books and Manuscripts Library, says he would have liked to look at Markson’s library, in addition to his papers. “We would have been interested, to say the least,” Smith says.

But if Markson’s library–and a potential scholarly foothold–has been lost, other things have been gained. A dead man’s wishes have been honored. A few fans have been blessed. And an author has found a new reader. “I’m glad I got that book,” Annecy Liddell says. “I really wouldn’t know who Markson is if I hadn’t found that. I haven’t finished ‘White Noise’ yet but I’m almost done with ‘Wittgenstein’s Mistress’–it’s weird and great and way more fun to read.”

Craig Fehrman is working on a book about presidents and their books.



The me-sized universe

Some parts of the cosmos are right within our grasp

If you happen to think about the universe during the course of your day, you will likely be overwhelmed.

The universe seems vast, distant, and unknowable. It is, for example, unimaginably large and old: The number of stars in our galaxy alone exceeds 100 billion, and the Earth is 4.5 billion years old. In the eyes of the universe, we’re nothing. We humans are tiny and brief. And much of the physics that drives the universe occurs on the other end of the scale, almost inconceivably small and fast. Chemical changes can occur faster than the blink of an eye, and atoms make the head of a pin seem like a mountain (really more like three Mount Everests).

Clearly, our brains are not built to handle numbers on this astronomical scale. While we are certainly a part of the cosmos, we are unable to grasp its physical truths. To call a number astronomical is to say that it is extreme, but also, in some sense, unknowable. We may recognize our relative insignificance, but leave dwelling on it to those equipped with scientific notation.

However, there actually are properties of the cosmos that can be expressed at the scale of the everyday. We can hold the tail of this beast of a universe–even if only for a moment. Shall we try?

Let’s begin at the human scale of time: It turns out that there is one supernova, a cataclysmic explosion of a star that marks the end of its life, about every 50 years in the Milky Way. The frequency of these stellar explosions fully fits within the life span of a single person, and not even a particularly long-lived one. So throughout human history, each person has likely been around for one or two of these bursts that can briefly burn brighter than an entire galaxy.

On the other hand, while new stars are formed in our galaxy at a faster rate, it is still nice and manageable, with about seven new stars in the Milky Way each year. So, over the course of an average American lifetime, each of us will have gone about our business while nearly 550 new stars were born.

But stars are always incomprehensibly large, right? Well, not always. Sometimes, near the end of a star’s life, it doesn’t explode. Instead, it collapses in on itself. Some of these are massive enough to become black holes, where space and time become all loopy. But just short of that, some stars collapse and become massive objects known as neutron stars. While these stars have incredible gravitational fields and can be detected from very far away, they are actually not very large. They are often only about 12 miles in diameter, which is about the distance from MIT to Wellesley College. While a neutron star’s mass is 500,000 times that of the Earth, it is actually very easy to picture, at least in terms of size.

Moving to the other end of the size spectrum, hydrogen atoms are unbelievably small: You would need to line up over 10 billion of them in a row to reach the average adult arm span. However, the wavelength of the energy a neutral hydrogen atom releases is right in our comfort zone: about 21 centimeters (or 8 inches). This is only about one-eighth the average height of a human being. This fact was even encoded pictorially on the plaques on the Pioneer probes, in order to show human height to any extraterrestrials that might eventually find these probes now hurtling out of the solar system, and who might be interested in how big we are.

And let’s not forget energy, though it might seem hard to find energetic examples on the human scale. For example, the sun, a fairly unimpressive star, releases over 300 yottajoules of energy each second, where yotta- is the highest prefix created in the metric system and is a one followed by 24 zeroes. Nonetheless, there are energy quantities we can handle. The most energetic cosmic rays–highly energetic particles of mysterious origin that come from somewhere deep in space–have about the same amount of energy as a pitcher throwing a baseball at 60 miles per hour. This is the low end of the speed of a knuckleball, which is one of the slowest pitches in baseball. While the fact that a tiny subatomic particle has that much energy is truly astounding, it’s no Josh Beckett fastball.

While these examples might seem few and far between, there is good news: The universe is actually becoming less impersonal. Through science and technology, we are getting better at bringing cosmic quantities to the human scale. For example, the number of stars in our Milky Way galaxy is less than half the total number of bits that can be stored on a Blu-ray disc. The everyday is slowly but surely inching towards the cosmic.

Yes, the universe is big and we are small. But we must treasure the exceptions, and see a little bit of the human in the cosmic, even if only for a moment.

Samuel Arbesman is a postdoctoral fellow in the Department of Health Care Policy at Harvard Medical School and is affiliated with the Institute for Quantitative Social Science at Harvard University. He is a regular contributor to Ideas.



Boxing Lessons

I offer training in both philosophy and boxing. Over the years, some of my colleagues have groused that my work is a contradiction, building minds and cultivating rational discourse while teaching violence and helping to remove brain cells. Truth be told, I think philosophers with this gripe should give some thought to what really counts as violence.  I would rather take a punch in the nose any day than be subjected to some of the attacks that I have witnessed in philosophy colloquia.  However, I have a more positive case for including boxing in my curriculum for sentimental education. 

Western philosophy, even before Descartes’ influential case for a mind-body dualism, has been dismissive of the body. Plato — even though he competed as a wrestler — and most of the sages who followed him taught us to think of our arms and legs as nothing but a poor carriage for the mind. In “Phaedo,” Plato presents his teacher Socrates on his deathbed as a sort of Mr. Spock yearning to be free from the shackles of the flesh so he can really begin thinking seriously. In this account, the body gives rise to desires that will not listen to reason and that becloud our ability to think clearly.
In much of Eastern philosophy, in contrast, the search for wisdom is more holistic. The body is considered inseparable from the mind, and is regarded as a vehicle, rather than an impediment, to enlightenment. The unmindful attitude towards the body so prevalent in the West blinkers us to profound truths that the skin, muscles and breath can deliver like a punch.

While different physical practices may open us to different truths, there is a lot of wisdom to be gained in the ring. Socrates, of course, maintained that the unexamined life was not worth living, that self-knowledge is of supreme importance. One thing is certain: boxing can compel a person to take a quick self-inventory and gut check about what he or she is willing to endure and risk. As Joyce Carol Oates observes in her minor classic, “On Boxing”:

Boxers are there to establish an absolute experience, a public accounting of the outermost limits of their beings; they will know, as few of us can know of ourselves, what physical and psychic power they possess — of how much, or how little, they are capable.

Though the German idealist philosopher G.W.F. Hegel (1770-1831) never slipped on the gloves, I think he would have at least supported the study of the sweet science. In his famous Lord and Bondsman allegory,[1] Hegel suggests that it is in mortal combat with the other, and ultimately in our willingness to give up our lives, that we rise to a higher level of freedom and consciousness. If Hegel is correct, the lofty image that the warrior holds in our society has something to do with the fact that in her willingness to sacrifice her own life, she has escaped the otherwise universal choke hold of death anxiety. Boxing can be seen as a stylized version of Hegel’s proverbial trial by battle and as such affords new possibilities of freedom and selfhood.

Viewed purely psychologically, practice in what used to be termed the “manly art” makes people feel more at home in themselves, and so less defensive and perhaps less aggressive. The way we cope with the elemental feelings of anger and fear determines to no small extent what kind of person we will become. Enlisting Aristotle, I shall have more to say about fear in a moment, but I don’t think it takes a Freud to recognize that many people are mired in their own bottled up anger. In our society, expressions of anger are more taboo than libidinal impulses. Yet, as our entertainment industry so powerfully bears out, there is plenty of fury to go around. I have trained boxers, often women, who find it extremely liberating to learn that they can strike out, throw a punch, express some rage, and that no one is going to die as a result.

And let’s be clear, life is filled with blows. It requires toughness and resiliency. There are few better places than the squared circle to receive concentrated lessons in the dire need to be able to absorb punishment and carry on, “to get off the canvas” and “roll with the punches.” It is little wonder that boxing, more than any other sport, has functioned as a metaphor for life. Aside from the possibilities for self-fulfillment, boxing can also contribute to our moral lives.

In his “Nicomachean Ethics,” Aristotle argues that the final end for human beings is eudaimonia — the good life, or as it is most often translated, happiness. In an immortal sentence Aristotle announces, “The Good of man (eudaimonia) is the active exercise of his soul’s faculties in conformity with excellence or virtue, or if there be several human excellences or virtues, in conformity with the best and most perfect among them.”[2]

A few pages later, Aristotle acknowledges that there are in fact two kinds of virtue or excellence, namely, intellectual and moral.[3] Intellectual excellence is simple book learning, or theoretical smarts. Unlike his teacher Plato and his teacher’s teacher, Socrates, Aristotle recognized that a person could know a great deal about the Good and not lead a good life. “With regard to excellence,” says Aristotle, “it is not enough to know, but we must try to have and use it.”[4]

Aristotle offers a table of the moral virtues that includes, among other qualities, temperance, justice, pride, friendliness and truthfulness. Each semester when I teach ethics, I press my students to generate their own list of the moral virtues. “What,” I ask, “are the traits that you connect with having character?” Tolerance, kindness, self-respect, and creativity always make it onto the board, but it is usually only with prodding that courage gets a nod. And yet, courage seems absolutely essential to leading a moral life. After all, if you do not have mettle, you will not be able to abide by your moral judgments. Doing the right thing often demands going down the wrong side of the road of our immediate and long-range self-interests. It frequently involves sacrifices that we do not much care for, sometimes of friendships, or jobs; sometimes, as in the case with Socrates, even of our lives. Making these sacrifices is impossible without courage.

According to Aristotle, courage is a mean between rashness and cowardliness;[5] that is, between having too little trepidation and too much. Aristotle reckoned that in order to be able to hit the mean, we need practice in dealing with the emotions and choices corresponding to that virtue.  So far as developing grit is concerned, it helps to get some swings at dealing with manageable doses of fear. And yet, even in our approach to education, many of us tend to think of anything that causes a shiver as traumatic.  Consider, for example, the demise of dodge ball in public schools. It was banned because of the terror that the flying red balls caused in some children and of the damage to self-esteem that might come with always being the first one knocked out of the game. But how are we supposed to learn to stand up to our fears if we never have any supervised practice in dealing with the jitters? Of course, our young people are very familiar with aggressive and often gruesome video games that simulate physical harm and self-defense, but without, of course, any of the consequences and risks that might come with putting on the gloves.

With the right, attentive supervision, boxing provides practice with fear in quite manageable increments. In their first sparring session, boxers usually erupt in “fight or flight” mode. When the bell rings, novices forget everything they have learned and simply flail away. If they stick with it for a few months, their fears diminish; they can begin to see things in the ring that their emotions blinded them to before. More importantly, they become more at home with feeling afraid. Fear is painful, but it can be faced, and in time a boxer learns not to panic about the blows that will be coming his way.

While Aristotle is able to define courage, the study and practice of boxing can enable us to not only comprehend courage, but “to have and use” it. By getting into the ring with our fears, we will be less likely to succumb to trepidation when doing the right thing demands taking a hit. To be sure, there is an important difference between physical and moral courage. After all, the world has seen many a brave monster. The willingness to endure physical risks is not enough to guarantee uprightness; nevertheless, it can, I think, contribute in powerful ways to the development of moral virtue.


[1] G.W.F. Hegel, “Phenomenology of Spirit,” Chapter 4.
[2] Aristotle, “Nicomachean Ethics,” Book I, Chapter 7.
[3] Ibid., Book I, Chapter 13.
[4] Ibid., Book X, Chapter 9.
[5] Ibid., Book III, Chapter 7.

Gordon Marino is an active boxing trainer and professor of philosophy at St. Olaf College. He covers boxing for the Wall Street Journal, is the editor of “Ethics: The Essential Writings” (Modern Library Classics, 2010) and is at work on a book about boxing and philosophy.



State Security, Post-Soviet Style

Closing down independent political life, branding critics as ‘extremists.’

In Soviet days, every corner of the KGB was under the tight control of the Communist Party. In Vladimir Putin’s Russia, the FSB—the KGB’s main successor—is largely unsupervised by anyone. Mr. Putin, briefly the FSB’s boss in the late 1990s, gave the secret-police agency free rein after taking over as Russia’s president from the ailing Boris Yeltsin in 2000. The FSB’s license has continued under the Putin-steered presidency of Dmitry Medvedev. The agency’s autonomy has been a catastrophe for Russia and should be a source of grave concern for the West.

Mr. Yeltsin encouraged competition between Russia’s spooks, but—as Andrei Soldatov and Irina Borogan make clear in “The New Nobility,” a disturbing portrait of the agency—Mr. Putin has given the FSB (from its Russian acronym Federalnaya Sluzhba Bezopasnosti, or Federal Security Service) a near monopoly. Originally just a domestic security service, it has become a sprawling empire, with capabilities ranging from electronic intelligence-gathering to control of Russia’s borders and operations beyond them. “According to even cautious estimates, FSB personnel total more than 200,000,” the authors write. The FSB’s instincts are xenophobic and authoritarian, its practices predatory and incompetent.

Critics of Russia see the FSB as the epitome of the country’s lawlessness and corruption. But those inside the agency see themselves as the ultimate guardians of Russia’s national security, thoroughly deserving of the rich rewards they reap. Nikolai Patrushev, who succeeded Mr. Putin as the agency’s director in 2000 and who is now secretary of Russia’s Security Council, calls his FSB colleagues a “new nobility.” Mr. Soldatov and Ms. Borogan see a different parallel: They liken the FSB to the Mukhabarat, the secret police found in Saudi Arabia and other Arab countries: impenetrable, corrupt and ruthless.

Few people are better placed than Mr. Soldatov and Ms. Borogan to write with authority on this subject. They run the website Agentura.Ru, a magpie’s nest of news and analysis that presents a well-informed view of the inner workings of this secret state. Given the fates that have befallen other investigative journalists in Russia in recent years, some might fear for the authors’ safety. But the publication of “The New Nobility” in English is welcome; it should be essential reading for those who hold naïve hopes about Russia’s development or who pooh-pooh the fears of its neighbors.

The book provides a detailed history of the FSB’s ascendancy over the past decade. It describes how Mr. Putin turned to the agency to consolidate his power. (The authors do not share the notion, held by some Russia-watchers, that it was the FSB—in those days a demoralized and chaotic outfit—that actually put Mr. Putin into the top job.) We’re told that Mr. Putin gave the agency a seat at Russia’s “head table,” but “trough,” rather than table, might be more accurate.

The authors recount how the Russian government has made outright land grants in much sought-after areas to high-ranking FSB officials, who then build gaudy mansions down the road from their oligarch neighbors. “Whether in the form of valuable land, luxury cars, or merit awards, the perks afforded FSB employees (especially those in particularly good standing) offer significant means of personal advancement. Russia’s new security services are more than simply servants of the state—they are landed property owners and powerful players.”

Mr. Soldatov and Ms. Borogan also present a chilling account of how the FSB, along with the prosecutor’s office and the interior ministry, has closed down independent political life in Russia, intimidating bloggers and trade unionists, infiltrating and disrupting opposition parties, and tarring all critics of the regime as “extremists.”

The authors give skimpy treatment to the FSB’s downgraded but still important rivals within the Russian bureaucracy: the GRU military-intelligence service and the SVR, which retains the main responsibility for foreign espionage (including the maintenance of an extensive network of “sleeper” agents, such as those unmasked in the U.S. over the summer). “The New Nobility” is unbeatable for its depiction of today’s FSB, but the book might have paid more attention to the long-term debilitating effects of the agency’s corruption and nepotism: Those may contain the seeds of the FSB’s ultimate destruction.

Mr. Soldatov and Ms. Borogan rightly highlight the grim results of FSB power in Russia. Its counterterrorism efforts have been a fiasco. Russia faces a terrorist threat from alienated and brutalized Muslims in the North Caucasus that is far worse than it was in the Yeltsin years.

Greed, rather than selfless patriotism, has been the hallmark of Mr. Patrushev’s “new nobility.” The FSB may indeed be in some respects as dreadful as the indolent, spendthrift and brutal Russian aristocracy toppled in the Bolshevik revolution. But that is presumably not the parallel that the grand-duke of spookdom had in mind.

Mr. Lucas is the international editor of the Economist and the author of “The New Cold War: Putin’s Russia and the Threat to the West.”



The case against aid

The world’s humanitarian aid organizations may do more harm than good, argues Linda Polman

In 1859, a Swiss businessman named Henry Dunant took a business trip to Italy, where he happened upon the aftermath of a particularly bloody battle in the Austro-Sardinian War. Tens of thousands of soldiers were left dead or wounded on the battlefield, with no medical attention. He was so shaken by the experience that he went on to found what is known today as the International Committee of the Red Cross.

Today, in the vocabulary of war, the ICRC and other aid organizations like it are known as the good guys in a world full of bad guys. They swarm into refugee camps all over the world with tents, potable water, flour, and medicine, providing relief and disregarding politics.

But what if those relief efforts ultimately help fighters regain their strength and return to battle, prolonging a terrible war? What if such aid projects are hijacked by genocidal despots to swell their own coffers? What if cynical leaders have learned how to manufacture humanitarian disasters just to attract aid money? And what if the aid groups know all this, but turn a blind eye so that they can compete for a slice of a $160 billion industry?

“The Crisis Caravan,” a new book by journalist Linda Polman, joins a long tradition of exposés written by aid skeptics, many of whom are insiders to the business. Polman was not privy to the inner circle of any aid group, so she often relies on anecdotes told by unnamed sources to make her case. Nevertheless, she gives some powerful examples of unconscionable assistance: how the international community fed Hutu fighters who had committed genocide in Rwanda, and who then continued their violent campaigns from the UN-funded refugee camps; how the Ethiopian government manufactured a famine, and then used aid groups to lure people away from their homes toward a life of forced labor. In Polman’s world, these are not exceptions, but the rule in a world where aid workers have become enablers of the very atrocities they seek to relieve.

Polman, who is based in Amsterdam, spoke to Ideas by telephone from France, and later by telephone from Norway.

IDEAS: What made you so disillusioned by aid work?

POLMAN: I was living in Sierra Leone in West Africa in 2000, 2001, when the peace agreement was signed between the government and the RUF….I was a correspondent for a Dutch newspaper and Dutch radio, covering the war and the UN operations that [were] trying to lure the country out of the hands of the rebels. All the time I was there, the country was in total darkness. There was no electricity. There were no radios. With the peace accord, the aid budget was released for Sierra Leone, and with the release of the aid budget, the caravan of aid was released….In a very short time, over 200 NGOs moved into the country.

Everything changed. For the first couple of days, I was happy with that. I thought the country was going to be rescued. But because I knew the country quite well, I saw it was the people I considered the bad guys — the political elites who were responsible for the war — they were the ones who had access to the aid. I thought, this can’t be right. That’s when I started to research what happens in other countries. It is always what happens. It is always the elites and the strongmen who profit.

IDEAS: Your book says that food aid is always used as a weapon of war by the very fighters that create humanitarian disasters in the first place. Is aid always bad? Would the world be a better place without it?

POLMAN: I believe that aid could be given in a much more efficient and less dangerous way….After every humanitarian intervention, the aid organizations analyze what went well and what went wrong. Every analysis…says the weakest point is that the aid organizations are not cooperating well enough, which makes them vulnerable to abuse of the aid.

IDEAS: You don’t cite any examples of good aid projects. Is anybody out there doing it right?

POLMAN: I know of an orphanage in Haiti that has been there for the past 35 years. It has proved over the past 35 years that it is doing a good job. But if the humanitarian world decides en masse to move into one war zone where the bad guys are waiting for them with open arms, they should expect many problems and many instances of abuse.

IDEAS: You talk in your book about how Florence Nightingale eventually developed a philosophy that we should just let wars be as terrible as possible, so that people would stop having them. Would there be less war without aid?

POLMAN: We don’t know, because we never tried to stop aid and then count the amount of wars, or count the amount of days that wars go on. But the thoughts of Florence Nightingale make sense to me. The cost…of the war should be left in the hands of the people who want the war. She thought that if you make it easier for warmongers to have their wars, then you prolong them and make them more severe.

IDEAS: A central tenet of aid workers is political neutrality. In the book, you write that this is often a farce. Should aid take sides in a war? Would it be more effective if it did?

POLMAN: The reality is that aid is not being given a choice. Aid is being used by parties that are at war with each other. Even if aid wants to be neutral, the choice is made for them….If an aid organization cannot decide itself how to distribute aid, when to distribute aid, to whom to distribute aid, if the aid organization doesn’t have the power to make decisions about its own aid, you can do two things. You can say, “Well, that is just reality.” Or you can say, “We will not deliver the aid.”…Medecins Sans Frontieres [Doctors Without Borders] does it sometimes. Sometimes they make the moral stance, and sometimes they don’t.

IDEAS: What is the worst example of abuse of aid that you saw?

POLMAN: In Sierra Leone, I realized that the rebel soldiers who had been hacking off people’s hands and feet, they actually could explain to me how to manipulate the aid system….They explained to me that for 10 years, all those years they were fighting and the West didn’t want to hear about their war. It was only after they started to amputate people, more people and more people, that the international community was taking notice of their war. Those simple rebel soldiers in Africa could explain to me how that aid system works. That alarmed me….

A Security Council report this year concluded that up to half of the World Food Program money — $485 million per year — for Somalia is diverted from the people who actually need it, to a web of corrupt contractors, Islamic militants, and local UN staff members who are also involved in this scheme. We can shrug our shoulders about $245 million a year, but in Somalia, this is a lot of money and it is fueling conflict, and it is fueling the wrong people.

Farah Stockman, foreign affairs reporter for the Boston Globe, also runs an educational program for street children in Kenya.



‘Delusions of Gender’ argues that faulty science is furthering sexism

Delusions of Gender: How Our Minds, Society, and Neurosexism Create Difference

By Cordelia Fine

About halfway through this irreverent and important book, cognitive psychologist Cordelia Fine offers a fairly technical explanation of the fMRI, a common kind of brain scan. By now, everyone is familiar with these head-shaped images, with their splashes of red and orange and green and blue. But far fewer know what those colors really mean or where they come from.

It’s not as if these machines are taking color videos of the human brain in action — not even close. In fact, these high-tech scanners are gathering data several steps removed from brain activity and even further from behavior. They are measuring the magnetic quality of hemoglobin, as a proxy for the blood oxygen being consumed in particular regions of the brain. If the measurement is different from what one would expect, scientists slap some color on that region of the map: hot, vibrant shades such as red if it’s more than expected; cool, subdued tones if it’s less.

Fine calls this “blobology”: the science — or art — of creating images and then interpreting them as if they have something to do with human behavior. Her detailed explanation of brain-scanning technology is essential to her argument, as it conveys a sense of just how difficult it is to interpret such raw data. She isn’t opposed to neuroscience or brain imaging; quite the opposite. But she is ardently against making authoritative interpretations of ambiguous data. And she’s especially intolerant of any intellectual leap from analyzing iffy brain data to justifying a society stratified by gender. Hence her title, “Delusions of Gender,” which can be read as an intentional slur on the scientific minds perpetrating this deceit.

Fine gives these scientists no quarter, and her beef isn’t just with brain scanners. Consider her critique of a widely cited study of babies’ gazes, conducted when the infants were just a day and a half old. The study found that baby girls were much more likely to gaze at the experimenter’s face, while baby boys preferred to look at a mobile. The scientists took these results as evidence that girls are more empathic than boys, who are more analytic than girls — even without socialization. The problem, not to put too fine a point on it, is that it’s a lousy experiment. Fine spends several pages systematically discrediting the study, detailing flaw after flaw in its design. Again, it’s a somewhat technical, methodological discussion, but an important one, especially since this study has become a cornerstone of the argument that boys and girls have a fundamental difference in brain wiring.

By now, you should be getting a feeling for the tone and texture of this book. Fine offers no original research on the brain or gender; instead, her mission is to demolish the sloppy science being used today to justify gender stereotypes — which she labels “neurosexism.” She is no less merciless in attacking “brain scams,” her derisive term for the many popular versions of the idea that sex hormones shape the brain, which then shapes behavior and intellectual ability, from mathematics to nurturance.

Two of her favorite targets are John Gray, author of the “Men Are From Mars, Women Are From Venus” books, and Louann Brizendine, author of “The Female Brain” and “The Male Brain.” Fine’s preferred illustration of Gray’s “neurononsense” is his discussion of the brain’s inferior parietal lobe, or IPL. The left IPL is more developed in men, the right IPL in women, which for Gray illuminates a lot: He says this anatomical difference explains why men become impatient when women talk too long and why women are better able to respond to a baby crying at night. Fine dismisses such conclusions as nothing more than “sexism disguised in neuroscientific finery.”

Gray lacks scientific credentials. Brizendine has no such excuse, having been trained in science and medicine at Harvard, Berkeley and Yale. And Fine saves her big guns — and her deepest contempt — for her. For the purposes of this critique, Fine fact-checked every single citation in “The Female Brain,” examining every study that Brizendine used to document her argument that male and female brains are fundamentally different. Brizendine cited hundreds of academic articles, making the text appear authoritative to the unwary reader. Yet on closer inspection, according to Fine, the articles are either deliberately misrepresented or simply irrelevant.

“Neurosexism” is hardly new. Fine traces its roots to the mid-19th century, when the “evidence” for inequality included everything from snout elongation to “cephalic index” (ratio of head length to head breadth) to brain weight and neuron delicacy. Back then, the motives for this pseudoscience were transparently political: restricting access to higher education and, especially, the right to vote. In a 1915 New York Times commentary on women’s suffrage, neurologist Charles Dana, perhaps the most illustrious brain scientist of his time, catalogued several differences between men’s and women’s brains and nervous systems, including the upper half of the spinal cord. These differences, he claimed, proved that women lack the intellect for politics and governance.

None of this was true, of course. Not one of Dana’s brain differences withstood the rigors of scientific investigation over time. And that is really the main point that Fine wants to leave the reader pondering: The crude technologies of Victorian brain scientists may have been replaced by powerful brain scanners such as the fMRI, but time and future science may judge imaging data just as harshly. Don’t forget, she warns us, that wrapping a tape measure around the head was once considered modern and scientifically sophisticated. Those seductive blobs of color could end up on the same intellectual scrap heap.

Wray Herbert’s book “On Second Thought: Outsmarting Your Mind’s Hard-Wired Habits” has just been published.



Five myths about prostitution

Last weekend, Craigslist, the popular provider of Internet classified advertising, halted publication of its “adult services” section. The move followed criticism from law enforcement officials across the country who have accused the site of facilitating prostitution on a massive scale. Of course, selling sex is an old business — most say the oldest. But as the Craigslist controversy proves, it’s also one of the fastest changing. And as a result, most people’s perceptions of the sex trade are wildly out of date.

1. Prostitution is an alleyway business.

It once was, of course. In the late 1800s, as Northern cities boomed, the sex trade in America became synonymous with the seedy side of town. Men who wanted to find prostitutes combed alleys behind bars, dimly lit parks and industrial corridors. But today, only a few big cities, such as Los Angeles and Miami, still have a thriving outdoor street market for sex. New York has cleaned up Times Square, Chicago’s South Loop has long since gentrified, and even San Francisco’s infamous Tenderloin isn’t what it used to be.

These red-light districts waned in part because the Internet became the preferred place to pick up a prostitute. Even the most down-and-out sex worker now advertises on Craigslist (or did until recently), as well as on dating sites and in online chat forums. As a result, pimps’ role in the sex economy has been diminished. In addition, the online trade has helped bring the sex business indoors, with johns and prostitutes increasingly meeting up in bars, in hotels, in their own homes or in apartments rented by groups of sex workers. All this doesn’t mean a john can’t get what he’s looking for in the park, but he had better be prepared to search awhile.

Although putting numbers on these trends is difficult, the transition from the streets to the Internet seems to have been very rapid. In my own research on sex workers in New York, women who in 1999 worked mostly outdoors said that by 2004, demand on the streets had decreased by half.

2. Men visit sex workers for sex.

Often, they pay them to talk. I’ve been studying high-end sex workers (by which I mean those who earn more than $250 per “session”) in New York, Chicago and Paris for more than a decade, and one of my most startling findings is that many men pay women to not have sex. Well, they pay for sex, but end up chatting or having dinner and never get around to physical contact. Approximately 40 percent of high-end sex worker transactions end up being sex-free. Even at the lower end of the market, about 20 percent of transactions don’t ultimately involve sex.

Figuring out why men pay for sex they don’t have could sustain New York’s therapists for a long time. But the observations of one Big Apple-based sex worker are typical: “Men like it when you listen. . . . I learned this a long time ago. They pay you to listen — and to tell them how great they are.” Indeed, the high-end sex workers I have studied routinely see themselves as acting the part of a counselor or a marriage therapist. They say their job is to feed a man’s need for judgment-free friendship and, at times, to help him repair his broken partnership. Little wonder, then, that so many describe themselves to me as members of the “wellness” industry.

3. Most prostitutes are addicted to drugs or were abused as children.

This was once the case, as a host of research on prostitution long ago confirmed. But the population of women choosing sex work has changed dramatically over the past decade. High-end prostitutes of the sort Eliot Spitzer frequented account for a greater share of the sex business than they once did. And as Barnard College’s Elizabeth Bernstein has shown, sex workers today tend to make a conscious decision to enter the trade — not as a reaction to suffering but to earn some quick cash. Among these women, Bernstein’s research suggests, prostitution is viewed as a part-time job, one that grants autonomy and flexibility.

These women have little in common with the shrinking number of sex workers who still work on the streets. In a 2001 study of British prostitutes, Stephanie Church of Glasgow University found that those working outdoors “were younger, involved in prostitution at an earlier age, reported more illegal drug use, and experienced significantly more violence from their clients than those working indoors.”

4. Prostitutes and police are enemies.

When it comes to the sex trade, police officers have in recent decades functioned as quasi-social workers. Peter Moskos’s recent book, “Cop in the Hood: My Year Policing Baltimore’s Eastern District,” describes how police often play counselor to sex workers, drug dealers and a host of other illegal moneymakers. In my own work, I’ve found that cops are among the most empathetic and helpful people sex workers meet on the job. They typically hand out phone numbers for shelters, soup kitchens and emergency rooms, and they tend to demonstrate a great deal of sympathy for women who have been abused. Instead of arresting an abused sex worker, police officers will usually let her off with a warning and turn their attention to finding her abusive client.

Unfortunately, officers say it is becoming more difficult to help such women; as they move indoors, it is simply more difficult to locate them. Of course, many big-city mayors embrace this same turn of events, since the rate of prostitution-related arrests drops precipitously when cops can’t find anyone to nab. But for police officers, it makes day-to-day work quite challenging.

Officers in Chicago and New York who once took pride in helping women exit the sex trade have told me about their frustration. Abusive men can more easily rob or hurt a sex worker in a building than on the street, they say. And while cops may receive a call about an overheard disturbance, the vague report to 911 is usually not enough to pinpoint the correct apartment or hotel room. There are few things more dispiriting, they say, than hearing of a woman’s cries for help and being unable to find her.

5. Closing Craigslist’s “adult services” section will significantly affect the sex trade.

Although Craigslist offered customers an important means to connect with sellers of sexual services, its significance has probably been exaggerated.

Even before the site’s “adult services” section was shut down, it was falling out of favor among many users. Adolescent pranksters were placing ads as hoaxes. And because sex workers knew that cops were spending a lot of time responding to ads, they were increasingly hesitant to answer solicitations. I found that 80 percent of the men who contacted women via Craigslist in New York never consummated their exchange with a meeting.

How the sex trade will evolve from here is anyone’s guess, but the Internet is vast, and already we are seeing increasing numbers of sex workers use Twitter and Facebook to advertise their services. Apparently, the desire to reveal is sometimes greater than the desire to conceal.

Sudhir Venkatesh is a professor of sociology at Columbia University and the author of “Gang Leader for a Day: A Rogue Sociologist Takes to the Streets.”



Salvation in Small Steps

With the collapse of various ideologies and totalizing nostrums, human rights became ever more important in world affairs. Brendan Simms reviews Mr. Moyn’s “The Last Utopia: Human Rights in History.”

In their classic essay collection, “The Invention of Tradition” (1983), the historians Eric Hobsbawm and Terence Ranger showed how many features of British society that seem to be rooted in time immemorial, such as public-school rituals and royal ceremonials, are actually of recent provenance. Similarly, in “The Last Utopia,” Samuel Moyn challenges the notion that something now so well-established as the idea of human rights—foundational rights that individuals possess against enslavement, religious oppression, political imprisonment and other brutalities of arbitrary governments—had its origins in the remote past. This “celebratory” approach, he charges, uses history to “confirm the inevitable rise” of human rights “rather than register the choices that were made and the accidents that happened.” The truth, Mr. Moyn shows, is that human rights, as we understand them today, are a “recent and contingent” development.

Mr. Moyn quickly disposes of the idea that human rights originated with the Greeks, who after all kept slaves, or even with the French revolutionaries of the late 18th century, whose “Rights of Man” led to the Terror. More controversially, Mr. Moyn denies that the experience of World War II and the Holocaust produced a decisive shift in our understanding of how to guard against systematic assaults on human life and dignity. Admittedly, the United Nations did issue the Universal Declaration of Human Rights in 1948, but this document led only to a cul-de-sac; it had few practical effects. Nor did the concept come riding in on the back of the anticolonialism sweeping the world in the 1950s and 1960s, which was focused on self-determination, not individual rights.

The breakthrough, Mr. Moyn argues, came only in the 1970s. This decade saw the Jackson-Vanik amendment of 1974, which tied U.S. trade with the Soviet Union to the right of Soviet citizens to emigrate. It was followed in 1975 by the Helsinki Accords, which required that the signatories, including the Soviet Union, respect “freedom of thought, conscience, religion [and] belief,” to quote the accord itself.

Such principles were soon used by Eastern European and Soviet dissidents to challenge the logic of the Soviet empire itself. The charisma of various figures—Natan Sharansky, Andrei Sakharov, Václav Havel, Adam Michnik—gave human rights the aspect of an international “cause,” and in 1977 Amnesty International—whose work on behalf of political prisoners epitomized the new focus on individual rights—was awarded the Nobel Peace Prize. Soon after, the administration of Jimmy Carter made human rights an integral part of official American policy, insisting that they be respected not only by the hostile Soviet Union but also by allied powers such as South Korea, though Mr. Carter was much softer on the shah’s Iran. In this way, as Mr. Moyn puts it, “human rights were forced to move not only from morality to politics, but also from charisma to bureaucracy.”

The reasons for this shift were numerous. Human rights had always been a part of the West’s Cold War policies, but their force had been blunted by the continued existence of European empires and, later, by the U.S. presence in Vietnam, where the brutality of war made it hard for America to serve as a moral arbiter. After decolonization and the withdrawal from Indochina, however, the battle was rejoined to devastating effect. The Soviet Union, a virtual police state, had nowhere to hide. Meanwhile, the experience of a decade or more of African and Asian independence had hardly been an advertisement for the moral purity of newly “free” states, where rights could be newly violated. A political consensus began to form that crossed party divides. “We’ll be against the dictators you don’t like the most,” Sen. Daniel Patrick Moynihan told a rival, “if you’ll be against the dictators we don’t like the most.”

Most important of all, however, was the intellectual and emotional effect of the collapse of alternative ideologies. Over the course of 70 years or so communism, anticolonialism and even the grandiose designs of the West’s expanded welfare states had failed to deliver on their bright promises. Human rights, Mr. Moyn claims, were thus “the last utopia.” Unlike the totalizing nostrums of the past, they offered salvation in small, manageable steps—”saving the world one individual at a time,” as one activist put it.

The arguments in “The Last Utopia” are persuasive, but the book is not without its problems. It is true that Mr. Moyn’s past rights-champions did not advocate the utopian program of the 1970s in every respect, but they were less far off than he concedes. The Cold Warriors behind the European Convention on Human Rights in 1950, for example, were surely close to Mr. Carter in the late 1970s in their insistence on political and civil rights rather than the broad spectrum of so-called social and economic rights demanded by the political left.

Mr. Moyn exposes the political motivations behind much of human-rights history—the supporters of the “humanitarian” interventions of the 1990s, for instance, cited human rights as a pedigree for their preferred policies. But his own views occasionally surface. We are never told why it is “disturbing” that the Reagan era saw an “assimilation of human rights to the inadequately developed program of ‘democracy promotion’”; after all, the administration’s support for dissident groups in Eastern Europe throughout the 1980s did much to undermine Soviet autocracy there. Nor is it obvious that neoconservative arguments about the universality of human rights have had “many tragic consequences.” No matter. The triumph of “The Last Utopia” is that it restores historical nuance, skepticism and context to a concept that, in the past 30 years, has played a large role in world affairs.

Mr. Simms, a professor of international relations at Cambridge University, is the author of “Three Victories and a Defeat: The Rise and Fall of the First British Empire.”



Politics and the Cult of Sentimentality

Wilde said that sentimentality is the desire to have the luxury of emotion without paying for it.

When, as in my case, you have identified what you think is a social trend—the increasing sentimentality of public discourse, which brings with it disastrous practical consequences—you begin to see examples of it everywhere.

On Thursday of last week, for example, I happened to be reading an article in Le Monde while waiting for a plane at Charles de Gaulle airport. The article took up a whole page and was titled “Las Vegas Inferno.” “Inferno” was written in letters an inch tall.

I hold no particular brief for Las Vegas. I would like to see it, but only in the sense that I wanted to see North Korea (and did): One should experience all that one can of the world, and Las Vegas is surely unique.

The inferno of the article was that of the homeless of the city, 300-500 of whom live in the concrete-and-steel tunnels built in the 1970s as drains for the torrential rains that often afflict Nevada. The article says of the people who live in them that they are “the poorest of the poor, poverty-stricken rejects in the entrails of the gilded city.”

Poverty-stricken rejects in the entrails of the gilded city: The words suggest a terrible and cruel injustice done to them. But who, exactly, has rejected them, and thereby forced them into the entrails? This way of putting it inevitably turns them into victims of a cruel world.

Three cases are mentioned—those of Craig, David and Medina. Craig has lived in the tunnels for five years, and his belongings have been washed away three times in the past few months. His food is paid for with food stamps; he gathers money left behind in the one-armed bandits in the casinos above-ground to buy cannabis—”my only drug,” he says. No further details are offered as to why he resorted to living in the tunnels in the first place.

David, whose alcohol-ravaged face bears a long scar, came to Las Vegas attracted by “the eldorado of greenbacks and the promise of endless job opportunities.” Then, in the words of the article, he knew “that slow decline when gambling debts become insurmountable and drugs replace friends.”

On this view of things, the gambling debts and the drugs that replaced friends had an existence independent of his behavior. They had agency in his life, unlike him. The debts came and took his money away and the drugs arrived and forced him to take them, contrary to the wishes of his friends. David is therefore a victim, and nothing but a victim.

Medina, aged 36, is an Indian woman, and she has recently escaped the tunnels. Her beauty has been destroyed by “abuse and maltreatment.” She has five children, whom she hardly knows. I hope I shall not be accused of cultural insensitivity when I write that she must nevertheless have known where they came from.

It was her lover, Manny, “who first dragged me down there into the tunnels.” She thought at first that he was going to kill her, but she went nonetheless, and they stayed there a year. New building works in the tunnels rendered their situation untenable (though previously in the article we have learned that there are 300 kilometers of such tunnels to choose from) and “then I believed that I wanted to see my children again.”

What is startling about all this is that the author of the article evinces no curiosity about how the three came to be in the situation he describes. Why not? The questions to ask are so obvious that one must wonder why he did not ask them.

Part of the problem is that he sees Las Vegas as a manifestation of “the American Dream,” though actually it is a perversion of that dream, and he wants to demonstrate the badness or cruelty of that dream. No doubt the authentic dream—that of individuals endlessly free to reinvent and advance themselves—also has a dark side, as American literature records. But the tunnels under Las Vegas are not it.

The main reason that the author does not ask the obvious questions is that to have done so would have been to reduce the sentimental reaction that he wanted to evoke in his readers. And a little reflection shows that this reaction depended on a rather cruel premise: that if people are to any considerable extent the authors of their own misfortunes, we should exclude them from our pity. Instead, we turn them into the passive victims of circumstance.

Does it matter that we do this? I think that it does. Sentimentality allows us to congratulate ourselves on our own warmth and generosity of heart. Oscar Wilde said that sentimentality is the desire to have the luxury of emotion without paying for it. It turns the people on whom it is bestowed into objects. It attempts, often successfully, to disguise from them their own part in their downfall. It suggests solutions to problems that do not, because they cannot, work. Sentimentality is the ally of ever-expanding bureaucracy, for the more a solution doesn’t work, the more of it is needed.

Theodore Dalrymple is the pen name of Anthony Daniels. His latest book is “Spoilt Rotten: The Toxic Cult of Sentimentality” (Gibson Square, 2010).



Forget What You Know About Good Study Habits

Every September, millions of parents try a kind of psychological witchcraft, to transform their summer-glazed campers into fall students, their video-bugs into bookworms. Advice is cheap and all too familiar: Clear a quiet work space. Stick to a homework schedule. Set goals. Set boundaries. Do not bribe (except in emergencies).

And check out the classroom. Does Junior’s learning style match the new teacher’s approach? Or the school’s philosophy? Maybe the child isn’t “a good fit” for the school.

Such theories have developed in part because of sketchy education research that doesn’t offer clear guidance. Student traits and teaching styles surely interact; so do personalities and at-home rules. The trouble is, no one can predict how.

Yet there are effective approaches to learning, at least for those who are motivated. In recent years, cognitive scientists have shown that a few simple techniques can reliably improve what matters most: how much a student learns from studying.

The findings can help anyone, from a fourth grader doing long division to a retiree taking on a new language. But they directly contradict much of the common wisdom about good study habits, and they have not caught on.

For instance, instead of sticking to one study location, simply alternating the room where a person studies improves retention. So does studying distinct but related skills or concepts in one sitting, rather than focusing intensely on a single thing.

“We have known these principles for some time, and it’s intriguing that schools don’t pick them up, or that people don’t learn them by trial and error,” said Robert A. Bjork, a psychologist at the University of California, Los Angeles. “Instead, we walk around with all sorts of unexamined beliefs about what works that are mistaken.”

Take the notion that children have specific learning styles, that some are “visual learners” and others are auditory; some are “left-brain” students, others “right-brain.” In a recent review of the relevant research, published in the journal Psychological Science in the Public Interest, a team of psychologists found almost zero support for such ideas. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the researchers concluded.

Ditto for teaching styles, researchers say. Some excellent instructors caper in front of the blackboard like summer-theater Falstaffs; others are reserved to the point of shyness. “We have yet to identify the common threads between teachers who create a constructive learning atmosphere,” said Daniel T. Willingham, a psychologist at the University of Virginia and author of the book “Why Don’t Students Like School?”

But individual learning is another matter, and psychologists have discovered that some of the most hallowed advice on study habits is flat wrong. For instance, many study skills courses insist that students find a specific place, a study room or a quiet corner of the library, to take their work. The research finds just the opposite. In one classic 1978 experiment, psychologists found that college students who studied a list of 40 vocabulary words in two different rooms — one windowless and cluttered, the other modern, with a view of a courtyard — did far better on a test than students who studied the words twice, in the same room. Later studies have confirmed the finding, for a variety of topics.

The brain makes subtle associations between what it is studying and the background sensations it has at the time, the authors say, regardless of whether those perceptions are conscious. It colors the terms of the Versailles Treaty with the wasted fluorescent glow of the dorm study room, say; or the elements of the Marshall Plan with the jade-curtain shade of the willow tree in the backyard. Forcing the brain to make multiple associations with the same material may, in effect, give that information more neural scaffolding.

“What we think is happening here is that, when the outside context is varied, the information is enriched, and this slows down forgetting,” said Dr. Bjork, the senior author of the two-room experiment.

Varying the type of material studied in a single sitting — alternating, for example, among vocabulary, reading and speaking in a new language — seems to leave a deeper impression on the brain than does concentrating on just one skill at a time. Musicians have known this for years, and their practice sessions often include a mix of scales, musical pieces and rhythmic work. Many athletes, too, routinely mix their workouts with strength, speed and skill drills.

The advantages of this approach to studying can be striking, in some topic areas. In a study recently posted online by the journal Applied Cognitive Psychology, Doug Rohrer and Kelli Taylor of the University of South Florida taught a group of fourth graders four equations, each to calculate a different dimension of a prism. Half of the children learned by studying repeated examples of one equation, say, calculating the number of prism faces when given the number of sides at the base, then moving on to the next type of calculation, studying repeated examples of that. The other half studied mixed problem sets, which included examples of all four types of calculation grouped together. Both groups solved sample problems along the way, as they studied.

A day later, the researchers gave all of the students a test on the material, presenting new problems of the same type. The children who had studied mixed sets did twice as well as the others, outscoring them 77 percent to 38 percent. The researchers have found the same in experiments involving adults and younger children.

“When students see a list of problems, all of the same kind, they know the strategy to use before they even read the problem,” said Dr. Rohrer. “That’s like riding a bike with training wheels.” With mixed practice, he added, “each problem is different from the last one, which means kids must learn how to choose the appropriate procedure — just like they had to do on the test.”

These findings extend well beyond math, even to aesthetic intuitive learning. In an experiment published last month in the journal Psychology and Aging, researchers found that college students and adults of retirement age were better able to distinguish the painting styles of 12 unfamiliar artists after viewing mixed collections (assortments, including works from all 12) than after viewing a dozen works from one artist, all together, then moving on to the next painter.

The finding undermines the common assumption that intensive immersion is the best way to really master a particular genre, or type of creative work, said Nate Kornell, a psychologist at Williams College and the lead author of the study. “What seems to be happening in this case is that the brain is picking up deeper patterns when seeing assortments of paintings; it’s picking up what’s similar and what’s different about them,” often subconsciously.

Cognitive scientists do not deny that honest-to-goodness cramming can lead to a better grade on a given exam. But hurriedly jam-packing a brain is akin to speed-packing a cheap suitcase, as most students quickly learn — it holds its new load for a while, then most everything falls out.

“With many students, it’s not like they can’t remember the material” when they move to a more advanced class, said Henry L. Roediger III, a psychologist at Washington University in St. Louis. “It’s like they’ve never seen it before.”

When the neural suitcase is packed carefully and gradually, it holds its contents for far, far longer. An hour of study tonight, an hour on the weekend, another session a week from now: such so-called spacing improves later recall, without requiring students to put in more overall study effort or pay more attention, dozens of studies have found.

No one knows for sure why. It may be that the brain, when it revisits material at a later time, has to relearn some of what it has absorbed before adding new stuff — and that that process is itself self-reinforcing.

“The idea is that forgetting is the friend of learning,” said Dr. Kornell. “When you forget something, it allows you to relearn, and do so effectively, the next time you see it.”

That’s one reason cognitive scientists see testing itself — or practice tests and quizzes — as a powerful tool of learning, rather than merely assessment. The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.

Dr. Roediger uses the analogy of the Heisenberg uncertainty principle in physics, which holds that the act of measuring a property of a particle alters that property: “Testing not only measures knowledge but changes it,” he says — and, happily, in the direction of more certainty, not less.

In one of his own experiments, Dr. Roediger and Jeffrey Karpicke, also of Washington University, had college students study science passages from a reading comprehension test, in short study periods. When students studied the same material twice, in back-to-back sessions, they did very well on a test given immediately afterward, then began to forget the material.

But if they studied the passage just once and did a practice test in the second session, they did very well on one test two days later, and another given a week later.

“Testing has such bad connotation; people think of standardized testing or teaching to the test,” Dr. Roediger said. “Maybe we need to call it something else, but this is one of the most powerful learning tools we have.”

Of course, one reason the thought of testing tightens people’s stomachs is that tests are so often hard. Paradoxically, it is just this difficulty that makes them such effective study tools, research suggests. The harder it is to remember something, the harder it is to later forget. This effect, which researchers call “desirable difficulty,” is evident in daily life. The name of the actor who played Linc in “The Mod Squad”? Francie’s brother in “A Tree Grows in Brooklyn”? The name of the co-discoverer, with Newton, of calculus?

The more mental sweat it takes to dig it out, the more securely it will be subsequently anchored.

None of which is to suggest that these techniques — alternating study environments, mixing content, spacing study sessions, self-testing or all the above — will turn a grade-A slacker into a grade-A student. Motivation matters. So do impressing friends, making the hockey team and finding the nerve to text the cute student in social studies.

“In lab experiments, you’re able to control for all factors except the one you’re studying,” said Dr. Willingham. “Not true in the classroom, in real life. All of these things are interacting at the same time.”

But at the very least, the cognitive techniques give parents and students, young and old, something many did not have before: a study plan based on evidence, not schoolyard folk wisdom, or empty theorizing.

Benedict Carey, New York Times



Ephemera in Full

The Sage of Baltimore was not always so sagacious

H.L. Mencken (1880-1956) is a revered figure in the history of American letters, and understandably so. But after enduring the heavy weather of these two Library of America volumes—a gathering of Mencken essays and journalism originally published between 1919 and 1927 in a series of books called “Prejudices”—I am beginning to have my doubts. I remember Tom Wolfe once telling me that Mencken was one of the greatest stylists of the English language, alongside Malcolm Muggeridge and George Orwell. Again, it is hard to disagree. But I strongly suggest that Mr. Wolfe purge the “Prejudices” from his library.


H.L. Mencken at his writing desk in the mid-1940s.

Some of the pieces in the “Prejudices” series—there were six volumes in all—are very good, as I had remembered. In particular, Mencken’s essay on William Jennings Bryan, the prairie populist and endless presidential candidate, remains a classic and well worth re-reading. But the vast majority of the pieces in “Prejudices” are tedious and ephemeral, even terrible at times.

Anyone seeking the reasons for Mencken’s high reputation would do better by turning to Huntington Cairns’s “The American Scene” (1965), an anthology that judiciously selects from Mencken’s autobiographical works, his writings on the American language and his various superb efforts at reportage, including his famous account of the 1925 Scopes Trial, in which fundamentalist religion famously butted heads with evolutionary theory.

Cairns, it is true, included some flatulent “Prejudices” essays in his anthology, but with explanations of their origin—either from Mencken or from Cairns himself—along with the dates of the essays’ original publication. There are no dates included in the Library of America volumes, and no contextual introductions to the pieces are offered. Much of the time we have no idea what Mencken is shouting about. He comes off as a gasbag.

The appendix to the first Library of America volume includes a selection from Mencken’s posthumous “My Life as Author and Editor” in which he comments on the “Prejudices” series. He tells us, quoting himself, that the first series, published in 1919, was “a stinkpot designed to ‘keep the animals perturbed.’ ” But he confesses that the collection contained “light stuff, chiefly rewritten from the Smart Set,” the magazine that Mencken edited with George Jean Nathan from 1914 to 1923. “The real artillery fire,” Mencken wrote, “will begin a bit later.” When it did begin, it was often off the mark—for instance, not 200,000 soldiers dead in the Civil War, as he says, but 621,000.

Mencken admits that the pieces in “Prejudices: Second Series” (1920) are not original. He was still larding up what he considered important essays with “surplus material left out of the 1922 revision of In Defense of Women” and other writings, including, as he put it, “reworkings of my Smart Set reviews and my contributions to ‘Répétition Générale.’ ”

The “Répétition Générale” that Mencken mentions was a running Smart Set feature offering facetious definitions of trends and types and brief editorial comments. To take an example not included in the Library of America volumes: “The Bald-Headed Man: The man with a bald head, however eminent his position, always feels slightly ill at ease in the presence of a man whose dome is still well thatched.” Clearly much of the material in the Smart Set was not of great weight.

Mencken continued such rewrites and regurgitations for an additional four “Prejudices.” He is at his worst when he writes on what he considers important topics: the South, farmers, the national letters, the American character.

It is always amusing to call a farmer “a prehensile moron.” Or to compare a politician to “an honest burglar.” But often Mencken simply falls into a gimmick. He strings together absurd similes, preposterous comparisons and long lists, and there is an enormous amount of repetition. After a while, it all becomes tiresome.

H.L. Mencken: Prejudices


In the essay “On Being an American,” he writes that a man who has to make a living in the U.S. must keep in mind that “the Republic has never got half enough bond salesmen, quack doctors, ward leaders, phrenologists, Methodist evangelists, circus clowns, magicians, soldiers, farmers, popular song writers, moonshine distillers, forgers of gin labels, mine guards, detectives, spies, snoopers, and agents provocateurs.” One gets the point quickly, and yet he goes on and on.

Later, after running a sentence for 17 lines, he ends by referring to “thousands [of Americans] who put the Hon. Warren Gamaliel Harding beside Friedrich Barbarossa and Charlemagne, and hold the Supreme Court to be directly inspired by the Holy Spirit, and belong ardently to every Rotary Club, Ku Klux Klan, and anti-Saloon League, and choke with emotion when the band plays ‘The Star-Spangled Banner,’ and believe with the faith of little children that one of Our Boys, taken at random, could dispose in a fair fight of ten Englishmen, twenty Germans, thirty Frogs, forty Wops, fifty Japs, or a hundred Bolsheviki.” There is a lot of padding here.

In the same essay he says that the American belief in the good life or progress or happy landings or something “is not shared by most reflective foreigners, as anyone may find out by looking into such a book as Ferdinand Kürnberger’s ‘Der Amerikamüde,’ Sholom Asche’s ‘America,’ Ernest von Wolzogen’s ‘Ein Dichter in Dollarica,’ W.L. George’s ‘Hail, Columbia!’, Annalise Schmidt’s ‘Der Amerikanische Mensch’ or Sienkiewicz’s ‘After Bread,’ or by hearkening unto the confidences, if obtainable, of such returned immigrants as Georges Clemenceau, Knut Hamsun, George Santayana, Clemens von Pirquet, John Masefield, and Maxim Gorky and, via the ouija board, Antonin Dvorak, Frank Wedekind and Edwin Klebs.” Such strings of slightly ominous names could be seen as part of the “artillery fire” Mencken referred to in his posthumous reflections.

As I say, Mencken was a superb reporter, and when he stuck to reporting he was an original. In “Prejudices: Fifth Series,” he was running out of steam, but then comes his incomparable “In Memoriam: W.J.B.” It begins: “Has it been duly marked by historians that the late William Jennings Bryan’s last secular act on this globe of sin was to catch flies?”

Mencken takes us to the Dayton, Tenn., monkey trial, reporting on Bryan’s confrontation with Clarence Darrow, and his eyes are wide open: Bryan “liked people who sweated freely, and were not debauched by the refinements of the toilet.” Bryan makes “progress up and down the Main Street of little Dayton, surrounded by gaping primates from the uplands. . . . There stood the man who had been thrice a candidate for the Presidency of the Republic—there he stood in the glare of the world, uttering stuff that a boy of eight would laugh at! The artful Darrow led him on.”

The next essay in the collection is even better. In “The Hills of Zion,” Mencken actually attends a meeting of locals who speak in tongues and sweat a lot. “The heap of mourners was directly before us. They bounced into us as they cavorted. The smell that they radiated, flooding there in that obscene heat, half-suffocated us. Not all of them, of course, did the thing in the grand manner. Some merely moaned and rolled their eyes.” It is all here, even Mencken’s speculations of a lewd nature.

Mencken was the first celebrity intellectual. Mass communication was in place, and he was present to take advantage of it. He was a brilliant stylist, and when he stuck to reporting, Tom Wolfe had him right. But not in these pieces, and not in his crank diatribes. He flourished in the first quarter of the century, but I doubt there would be room in America for him now. His prose style aside, he was an independent mind. There are only two camps today, and he would be in neither.

Mr. Tyrrell, a syndicated columnist, is editor in chief of The American Spectator. His current book is “After the Hangover: The Conservatives’ Road to Recovery,” published by Thomas Nelson.



Mark Pilkington’s top 10 books about UFOs

Down to earth accounts … Alien Parking sign in Roswell, New Mexico.

Mark Pilkington is a writer with a fascination for the further shores of culture, science and belief. He also publishes books as Strange Attractor Press. In Mirage Men, after delving into the subject’s history and meeting former air force and intelligence insiders, Pilkington concludes that instead of covering up tales of UFO crashes and alien visitors, the US military and intelligence services have been promoting them all along as part of their cold war counter-intelligence operations.

“The UFO arena acts as a kind of vivarium for a range of psychological, sociological and anthropological experiences, beliefs, conditions and behaviours. They remind us that the Unknown and the Other are still very much at large in our modern world, and provide us with a fascinating glimpse of folklore in action. A tiny few UFO reports also still present us with genuine mysteries.

“The first book about UFOs as we know them was The Flying Saucer, a 1948 novel by British former spy Bernard Newman. I’m not sure how many UFO books have been written since then, but I’d guess that it’s well over 1000. Here, in chronological order, are 10 that I can recommend as either informative, entertaining, puzzling or all three at once.”

1. The Report on Unidentified Flying Objects by Edward J Ruppelt

An insider’s account of the crucial, early days of the UFO story, by the man who headed the US Air Force’s official UFO investigation from 1951 to 1953. Ruppelt documents shifting Air Force attitudes to the phenomenon, which ranged from aggressive denial to apparent endorsement of alien visitation in an infamous 1952 Life magazine article. In a revised edition, published in 1960, Ruppelt was more dismissive of the subject. He died the same year, aged 37.

2. Flying Saucer Pilgrimage by Bryant and Helen Reeve

A charming glimpse into the early days of the UFO culture, when the lines between spiritualism, occultism and ufology were largely indistinguishable. The Reeves travelled the US in search of “the Saucerers”, meeting many key figures of the time before making contact with real Space People via the wonders of Outer Space Communication (OSC) and a portable tape recorder. Many important questions are answered: How do we look to the space people? Do they believe in Jesus Christ? Is this civilisation ending?

3. Flying Saucers: A Modern Myth of Things Seen in the Sky by Carl Jung

It was only natural that the Swiss mystic and philosopher-shrink, fascinated by anomalous experiences, should turn his attention to the UFO mystery. Considering UFOs as a “visionary rumour” and a manifestation of the mythic unconscious, Jung compares the perfect circle of the flying disc to the mandala, notes the dreamlike impossibility of many reports and presciently recognises the deep spiritual pull that the UFO would exert over the next half century.

4. The UFO Experience By J Allen Hynek

Astronomer Hynek was an air force consultant on UFOs for much of his life, and over time transformed from something of a Doubting Thomas to a St Paul. He’s regarded as a saint in UFO circles, largely for this book, a sober yet sympathetic overview of the UFO problem that excoriates the US Air Force for their failure to treat the phenomenon seriously. Hynek devised the “Close Encounters” system for categorising UFO sightings, and has a cameo during the cosmic disco climax of Spielberg’s blockbusting film (that’s him with the pipe looking like Colonel Sanders).

5. The Mothman Prophecies by John Keel

Merging unconscious deceptions with deliberate fictions, many of the wilder UFO books would have even the most intrepid postmodernists cowering behind the sofa. Keel, however, was a two-fisted trickster who knew exactly what he was doing, and this reads like Thomas Pynchon crossed with Philip K Dick channelling HP Lovecraft. In the late 1960s, Point Pleasant, West Virginia, was plagued by bizarre entities, UFO sightings and robotic, jelly-fixated Men in Black; Keel investigated, only to find himself in too deep and the town doomed to real-life disaster.

6. Messengers of Deception by Jacques Vallée

An intriguing, disconcerting book from one of the field’s most progressive thinkers. Vallée, a French astronomer and computer scientist who worked with J Allen Hynek, became entangled in bizarre mind games while investigating UFO cults in the 1970s. Amongst others, Vallée encountered HIM (Human Individual Metamorphosis), led by “Bo and Peep” who would steer the Heaven’s Gate group to their collective death two decades later.

7. Report on Communion by Ed Conroy

Whitley Strieber’s Communion is one of the 20th century’s great literary mysteries and Conroy’s spinoff is just as curious. A hard-nosed investigative journalist, Conroy examined Strieber’s alleged alien abduction experiences and odd life story while also researching the history of UFOs and its parallels in folkloric encounter narratives. In a testament to the power of UFOria and the allure of the Other, by the end of the book he’s being buzzed by shape-shifting helicopters and wondering whether he too has had contact with the Visitors.

8. Remarkable Luminous Phenomena in Nature by William Corliss

One of at least 18 hardback volumes of anomalies collected by this modern-day Charles Fort. Ball lightning (miniature, giant, black, object-penetrating and ordinary), bead lightning, lightning from clear skies, pillars of light, glowing owls, luminous bubbles, oceanic light wheels, earthquake lights, marsh gas, unusual auroras, glowing fogs. And that’s just for starters. I love this book.

9. The Trickster and the Paranormal by George Hansen

Hansen, a former professional laboratory parapsychologist, provides illumination, insight and perspective on the wider paranormal research field, UFOs included. Drawing on folklore, anthropology, literary theory and sociology, Hansen points out the integral, destabilising role of Trickster archetypes in human society. While dwelling predominantly amongst its esoteric fringes, the Trickster can also be seen lurking in the corridors of political, military and corporate power.

10. Out of the Shadows by David Clarke and Andy Roberts

A rock-solid history of the UFO phenomenon in Britain by two of our most reliable and indefatigable researchers. Clarke and Roberts work from interviews and official documentation detailing everything from genuine aerial mysteries during the second world war (investigated for the RAF by the Goon Show’s Michael Bentine) to the cold war follies of the 1980 Rendlesham Forest incident. Serious UFO research as it should be done.



The pursuit of evil

A complicated man, obsessed by his search for justice

Driven by memory

Simon Wiesenthal: The Life and Legends. By Tom Segev. Doubleday; 482 pages; $35. Jonathan Cape; £25.

AMONG the 300,000 pieces of paper in Simon Wiesenthal’s private archive is a letter from a Holocaust survivor explaining why he had ceased to believe in God. In Tom Segev’s description: “God had allowed SS troops to snatch a baby from his mother and then use it as a football. When it was a torn lump of flesh they tossed it to their dogs. The mother was forced to watch. Then they ripped off her blouse and made her use it to clean the blood off their boots.”

What made a man who survived three concentration camps cancel plans after the war to move first to America, then Israel, and instead devote his life in Vienna to amassing and immersing himself in memories that most survivors spent the rest of their days trying to forget? The famed “Nazi hunter” tended to guard his emotions by wrapping them in anecdotes for public consumption: he would talk of a girl he had seen being marched towards a mass grave whose desperate look seemed to say “Don’t forget us”. In his need to protect sources and conceal his work with government agencies, including Israel’s Mossad, such anecdotes spun out of control, multiplying into many different versions in his books and interviews.

Mr Segev, justly celebrated for his histories of formative moments of the state of Israel, is as careful a biographer as he is an historian, and he excels at teasing apart these conflicting tales. The picture that emerges is often unflattering. Wiesenthal comes across as a self-important busybody, obsessed with titles and recognition, squabbling with rivals. “Contrary to the myth he spun around himself”, Mr Segev writes, “he never operated a worldwide dragnet” but ran a virtually one-man show out of a cramped office.

Yet Mr Segev also comes to his subject’s defence when warranted. In the bitterly contested matter of the capture of Adolf Eichmann, for instance, he finds that Wiesenthal does deserve much credit for bringing the Nazi war criminal’s hideout in Argentina to the attention of the Israeli, German and American authorities, who took years to act—though he also describes Wiesenthal’s subtle manoeuvres to increase his share of the glory afterwards. As to Wiesenthal’s defence of Kurt Waldheim, the Austrian president who turned out to have lied about his war record—a controversy that cost Wiesenthal the Nobel peace prize—Mr Segev dissects his behaviour and finds it explicable, if not excusable.

The ultimate judgment is a compassionate one. Wiesenthal was driven, Mr Segev concludes, by guilt at not only having survived, but having had an easier time than most European Jews during much of the war. His mythomania may have been a way to burnish his image. But it also served in his pursuit of justice. His reputation, as well as a superb memory and a knack for networking, made him a magnet for countless scraps of information about suspected war criminals which he passed on to the authorities, badgering them relentlessly to make arrests. He battled official indifference, anti-Semitic attacks and, for many years, a chronic lack of funds.

How many Nazis he really helped to jail is impossible to say. More important was what he did to bring the Holocaust’s victims and their horrendous memories to worldwide attention. Gripping yet sober, this meticulous portrait of a complicated man is unlikely to be bettered.



The Labor of Living

Founders of company towns ranged from dreamy idealists to fast-buck Freddies.

When Tennessee Ernie Ford gave the full weight of his bass-baritone to “Sixteen Tons” and boomed that he owed his soul to the company store, the phrase evoked images of stooped miners living in tar-paper shacks under what Hardy Green calls the “super-exploitative conditions of life in a coal-mining company town.” But Mr. Green shows, in “The Company Town,” that such communities have also been social experiments, alternative forms of capitalist enterprise that encompassed everything from prophet-blaring to profit-sharing.

Typically in a company town, “one business exerts a Big Brother-like grip over the population,” providing employment, domicile, entertainment and even governance. Founders of such settlements have ranged from dreamy idealists to fast-buck Freddies; the fruits of their labor stretch from Hershey, Pa., to Gary, Ind.

Taking in textile, coal, oil, lumber and appliance-manufacturing towns, Mr. Green’s survey is a useful one, though the early utopian ventures he profiles are far more interesting than his pallid examples from the postwar era. Classic company towns could not withstand automobiles and suburbanization. No one owes his soul to an industrial park or a corporate campus.

Francis Cabot Lowell, eponym of Mr. Green’s first subject, the textile mill town in Massachusetts, had been appalled by an 1811 visit to Manchester, England, which he found air-blackened and overrun with “beggars and thieves.” His mills, he resolved, would improve their workers (as well as his bottom line). So Lowell employed young women drawn from the surrounding farms, who lived in boarding houses run by stern matrons “who made sure the girls were in by 10 p.m.” With libraries, lectures by the likes of Ralph Waldo Emerson and even a literary journal, Lowell had “a lively intellectual and cultural scene.”

Yet there was a trade-off. Once serenaded by birdsong, the mill girls now woke, worked and worried to the peal of factory bells. This was no life for young women who styled themselves “daughters of freemen,” so turnover in Lowell was high. Paternalism along the Merrimack waned once this work force of native milkmaids was replaced by Irish immigrants. The boarding houses were sold off and deteriorated into tenements, and strikes replaced strophes.

The textile industry shifted southward to places such as Kannapolis, N.C., where Charlie Cannon of Cannon Mills ran what one janitor called “a one-man town, but he’s a good man.” Cannon Mills, the “largest manufacturer of towels in the world,” built thousands of “tidy clapboard” houses for its workers in a town whose trash, police and fire departments were all under Cannon’s purview. Mr. Charlie, as he was known, hated the New Deal, though he did not object to state intervention in the form of National Guardsmen busting strikes.

The prince of “The Company Town” is Milton Hershey, the chocolatier who dreamed of a city with “no poverty, no nuisances, no evil.” Hershey, Pa., located far from the candy-craving crowd but surrounded by dairy farms, clean water and “industrious folk,” was informed by the Mennonite values of Milton’s childhood. Milton frowned on drunkenness and immorality on Chocolate Avenue. He despised the uniformity of other planned communities, so his streets, he decided, would exhibit more variety than his candy bars. Hershey provided a zoo, a library, a golf course, free schools, a model orphanage and a “cornucopia of benefits” for workers. “Employees seemed to find contentment in the well-appointed town of Hershey,” writes Mr. Green. He also singles out Maytag (Newton, Iowa), bane of appliance repairmen, and Hormel (Austin, Minn.), bane of gourmets, as companies that, in their heyday, cultivated a sense of community.

The idealistic company towns shared “an initial vision and daily existence articulated and elaborated by a capitalist father figure,” Mr. Green says, but when that figure died so, usually, did the dream. Idealism is seldom heritable. The Indiana steel city named after Elbert H. Gary—the chairman of U.S. Steel when it essentially built the city in the early 1900s—seems no longer modeled on its founder’s “Sunday school principles.” The least nurturing company towns were based on industries that relied on immigrants or the “cracker proletariat,” especially mining. Contra John Denver, West Virginia, at least for miners, was not almost Heaven.

Although Mr. Green ignores Washington, D.C., the company town that never recedes, he shines a harsh light on Oak Ridge, Tenn., created by the feds via “arguably the United States’ most astounding and disruptive exercise of eminent domain.” Thrown up in 1943 as a laboratory for the Manhattan Project and populated by 80,000 newcomers, Oak Ridge was a case of government gone wild, as thousands of Tennesseans—including families who had farmed the land for generations—were booted out by Uncle Sam with two weeks’ notice. Oak Ridge was a monument to statism, with informants, armed guards, “federally financed schools” and cheaply made public housing. Workers ate “mediocre food served in grim cafeterias” and toiled without “any sense of participation in the larger win-the-war effort,” since the Manhattan Project was conducted in the strictest secrecy.

The sootiest coal camp sounds like paradise by contrast. As Mr. Green writes, company towns, whether run by “utopian paternalist or exploitative despot,” were constrained to some degree by the market. Oak Ridge, wholly a creature of the federal government, was beyond any such discipline.

But if Uncle Sam’s company towns were gray and regimented, the company towns overseen by Milton Hershey, Francis Cabot Lowell and even Charlie Cannon were communities enlivened by quirks and passions and idiosyncratic visions. Edens? Hardly. But they had soul, and you can neither buy nor sell that at the company store.

Mr. Kauffman’s books include the recently published “Bye Bye, Miss American Empire” (Chelsea Green).




If Irma S. Rombauer hadn’t used the phrase more than 70 years ago, the ideal title for this engaging little volume—half cookbook, half culinary sermon—might have been “The Joy of Cooking.” Ken Albala (a writer and history professor specializing in culinary matters) and Rosanna Nafziger (a Mennonite farm girl turned chef and editor) share a genuine love for the act of cooking itself, as well as the results. “It’s time to take back the kitchen,” they declare, “time to unlock the pantry, to venture once again into our cellars and storehouses, and break free from the golden shackles of convenient, ready-made, industrial food.”

Time is the key word. Almost all the recipes on offer here, from pickles to pastry, are doable in the humblest of kitchens, but they require extra time. To take two extreme examples: the three-to-four-week fermentation period for homemade sauerkraut and the seven-day start-up process for sourdough bread. But what you spend in time you may save on equipment. Two or three good knives, some cast-iron skillets and a few other items—leave the Cuisinart unplugged—are all it takes to turn out most of the “traditional food” recipes contained in this winningly contrarian volume. One of the most appealing: a medieval pork pie that is “much more interesting” than its latter-day incarnations. Dried fruit—“prunes, figs, or even apricots chopped into small pieces”—adds “immeasurable depth and texture to the filling,” the authors write.

A question looms over this book, though. How many of us are really interested in baking our daily bread daily? “The Lost Art of Real Cooking” is best for that rainy weekend or vacation lull when lengthy meal-preparation and a brief but satisfying dinner-table pay-off actually sound like fun.

Aram Bakshian Jr., Wall Street Journal




Leo Strauss, Back and Better Than Ever

When President George W. Bush ordered the invasion of Iraq in 2003, conspiracy theorists suspected that a puppet master was behind him. No, not Dick Cheney. The alleged puppeteer was the late Leo Strauss.

The famous professor of political philosophy, who died in 1973, had many disciples in the Bush administration, and journalists had frequently misquoted Strauss as arguing that “one must make the whole globe democratic.” Opponents of the war who were looking for a more sinister scapegoat than faulty intelligence about Saddam Hussein’s weapons of mass destruction put two and two together: Strauss had given his pupils an imperialist itch, and now that they were in power, they were scratching it.

Thanks to the Leo Strauss Center at the University of Chicago, where Strauss taught from 1949 to 1967, this myth will soon face stricter scrutiny. The center is uploading to its website transcripts and audio recordings of Strauss’s lectures, many made by graduate students in the 1950s and ’60s. Eventually, students world-wide will be able to take courses by Strauss, free of charge.

“[Our project] vastly increases the amount of material available,” says the center’s director, Nathan Tarcov, who adds that records of Strauss’s lectures are two to three times as long as his published works. As it remasters and updates the original material, the center is discovering forgotten jewels. “There’s one course on [Thomas] Hobbes where… the tape is twice as long as the transcript because so much had been omitted,” Mr. Tarcov says.

Greater familiarity with Strauss’s lectures may demolish this myth of him as a neoconservative Svengali. Instead, people may come to recognize him as, among other things, an engaging teacher.

Students loved Strauss because he rebelled against his profession’s norms, especially historicism—the belief that all thought is the product of its time and place. Aristotle, historicism contends, believed the Greek city-state was the best regime because he lived in one. His insights are inapplicable to a modern liberal democracy.

This tenet still infects political science today, causing students excruciating boredom in their (typically, required) classes on political theory. Why should students care about Plato if they’re taught that his philosophy is obsolete?

Listening to the tapes, you hear Strauss’s different approach. He believes that thought—at least by great minds—can transcend its time and place. In other words, he believes there is such a thing as truth.

Instead of cataloging philosophers for rows of classroom note takers, he throws students into an ongoing argument: How should we live? He forces students not merely to study political philosophy but to engage in it.

For example, in one class he asks whether a leader should have guiding principles or should judge each situation independently.

On the tape, we hear Mr. Levy, a student, ask cheekily, “Did Montgomery have to know anything about Aristotle to win the battle of El Alamein?”

“That is an entirely different question,” Strauss replies—referring to Aristotle’s written works—”whether rules means rules to be found in this or that book.”

“I was just using that as an example,” Mr. Levy fires back.

“There was one thing I believe which was quite clear in the case of Montgomery,” Strauss responds, “that he had to win it…. [I]n the case of politics as distinguished from generalship, the end is somewhat more complicated…the political good consists of a number of ingredients which cannot be reduced to the simple formula, victory.”

Not that Strauss’s relationship with his students was antagonistic. In fact, he spent so much time answering students’ questions that his class often ran past its allotted time. “At times a course went on for so long that Mrs. Strauss had to come in and stop it,” says Werner Dannhauser, a former student of Mr. Strauss.

The reason for Strauss’s energetic exchanges was that he took students seriously. “He said, ‘When you’re teaching always assume there is a silent student in the class who knows more than you do,'” remembers Roger Masters, another former student.

Once the Leo Strauss Center at Chicago finishes its work, today’s students will profit from these exchanges—that is, if it finishes its work. Although the center has paid for the remastered tapes with grants and donations, it still needs about $500,000 to complete editing the transcripts and digitizing Strauss’s papers. Mr. Tarcov plans to begin a fundraising campaign in September.

Here’s hoping he’s successful. Political scientists who refuse to bend to their field’s reigning ideology need a standard-bearer. And what a quizzical standard-bearer Strauss was: a chubby, balding little man with a thick German accent, a squeaky voice and a constant cigarette in his hand.

“You would not think that this man either in his appearance or in his speech would be a Pied Piper to students,” says Jenny Strauss Clay, his daughter. “It wasn’t for reasons of style or eloquence; it was for something else.”

It was for his love of political philosophy, which—despite critics’ objections—he believed to be more than an academic exercise. For him, it was a way of life. Soon, it will be so for hundreds more.

Mr. Bolduc, a former Robert L. Bartley fellow at the Journal, is a William F. Buckley fellow at the National Review Institute.



Morality Check: When Fad Science Is Bad Science

Harvard University announced last Friday that its Standing Committee on Professional Conduct had found Marc Hauser, one of the school’s most prominent scholars, guilty of multiple counts of “scientific misconduct.” The revelation came after a three-year inquiry into allegations that the professor had fudged data in his research on monkey cognition. Since the studies were funded, in part, by government grants, the university has sent the evidence to the Feds.

The professor has not admitted wrongdoing, but he did issue a statement apologizing for making “significant mistakes.” And beyond the immediate damage to his own career, Mr. Hauser’s predicament spells trouble for one of the trendiest fields in academia—evolutionary psychology.

Mr. Hauser has been at the forefront of a movement to show that our morals are survival instincts evolved over the millennia. When Mr. Hauser’s 2006 book “Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong” was published, evolutionary psychologist and linguist Steven Pinker proclaimed that his Harvard colleague was engaged in “one of the hottest new topics in intellectual life: The psychology and biology of morals.”


Not so long ago, the bloom was already off evolutionary psychology. The field earned a bad name by appearing to justify all sorts of nasty, rapacious behaviors, including rape, as successful strategies for Darwinian competition. But the second wave of the discipline solved that PR problem by discovering that evolution favored those with a more progressive outlook. Mr. Hauser has been among those positing that our ancestors survived not by being ruthlessly selfish, but by cooperating, a legacy ingrained in our moral intuitions.

This progressive sort of evolutionary psychology is often in the news. NPR offered an example this week with a story titled “Teary-Eyed Evolution: Crying Serves a Purpose.” According to NPR, “Scientists who study evolution say crying probably conferred some benefit and did something to advance our species.”

What that “something” “probably” is, no one seems to know, but that doesn’t dent the enthusiasm for trendy speculation. Crying signals empathy, one academic suggested. And as NPR explained, “our early ancestors who were most empathic probably thrived because it helped them build strong communities, which in turn gave them protection and support.” Note the word “probably,” which means the claim is nothing but a guess.

Christopher Ryan is co-author of the recent book “Sex at Dawn,” itself an exercise in plumbing our prehistoric survival strategies for explanations of the modern human condition. But he is well aware of the limits of evolutionary psychology. “Many of the most prominent voices in the field are less scientists than political philosophers,” he cautioned last summer at the website of the magazine Psychology Today.

Evolutionary psychologists tell elaborate stories explaining modern life based on the conditions and circumstances of our prehistoric ancestors—even though we know very little about those factors. “Often, the fact that their story seems to make sense is the only evidence they offer,” Mr. Ryan wrote. “For them, it may be enough, but it isn’t enough if you’re aspiring to be taken seriously as a science.”

That’s where Mr. Hauser’s work comes in. We may not be able to access the minds or proto-societies of Homo habilis, but we can look at how the minds of modern apes and monkeys work, and extrapolate. Unlike the speculative tales that had become the hallmark of evolutionary psychology, primate research has promised to deliver hard science, the testing of hypotheses through experiments.

Mr. Hauser’s particular specialty has been in studying the cognitive abilities of New World monkeys such as the cotton-top tamarins of South America. He has cranked out a prodigious body of work, and bragged that his field enjoyed “exciting new discoveries uncovered every month, and rich prospects on the horizon.” He and his colleagues, Mr. Hauser proclaimed, were developing a new “science of morality.” Now his science is suspect.

As rumors swirled that Harvard was about to ding Mr. Hauser for scientific misconduct, prominent researchers in the field worried they would be tarnished by association. The science magazine Nature asked Frans de Waal—a primatologist at Emory University and author, most recently, of the widely read book “The Age of Empathy: Nature’s Lessons for a Kinder Society”—about what Mr. Hauser’s predicament meant for his discipline. He was blunt: “It is disastrous.”

Mr. Hauser had boldly declared that through his application of science, not only could morality be stripped of any religious hocus-pocus, but philosophy would have to step aside as well: “Inquiry into our moral nature will no longer be the proprietary province of the humanities and social sciences,” he wrote. Would it be such a bad thing if Hausergate resulted in some intellectual humility among the new scientists of morality?

It’s important to note that the Hauser affair also represents the best in science. When lowly graduate students suspected their famous boss was cooking his data, they risked their careers and reputations to blow the whistle on him. They are the scientists to celebrate.

Though there is no doubt plenty to learn from the evolutionary psychologists, when an intellectual fashion becomes a full-blown fad, it’s time to give it the gimlet eye.

Eric Felten, Wall Street Journal



New Law to Stop Companies from Checking Facebook Pages in Germany

Potential bosses will no longer be allowed to look at job applicants’ Facebook pages if a new law comes into force in Germany.

Good news for jobseekers who like to brag about their drinking exploits on Facebook: A new law in Germany will stop bosses from checking out potential hires on social networking sites. They will, however, still be allowed to google applicants.

Lying about qualifications. Alcohol and drug use. Racist comments. These are just some of the reasons why potential bosses reject job applicants after looking at their Facebook profiles.

According to a 2009 survey commissioned by the website CareerBuilder, some 45 percent of employers use social networking sites to research job candidates. And some 35 percent of those employers had rejected candidates based on what they found there, such as inappropriate photos, insulting comments about previous employers or boasts about their drug use.

But those Facebook users hoping to apply for a job in Germany should pause for a moment before they hit the “deactivate account” button. The government has drafted a new law which will prevent employers from looking at a job applicant’s pages on social networking sites during the hiring process.

According to reports in the Monday editions of the Die Welt and Süddeutsche Zeitung newspapers, Interior Minister Thomas de Maizière has drafted a new law on data privacy for employees which will radically restrict the information bosses can legally collect. The draft law, which is the result of months of negotiations between the different parties in Germany’s coalition government, is set to be approved by the German cabinet on Wednesday, according to the Süddeutsche Zeitung.

Although the new law will reportedly prevent potential bosses from checking out a candidate’s Facebook page, it will allow them to look at sites that are expressly intended to help people sell themselves to future employers, such as the business-oriented social networking site LinkedIn. Information about the candidate that is generally available on the Internet is also fair game. In other words, employers are allowed to google potential hires. Companies may not be allowed to use information if it is too old or if the candidate has no control over it, however.

Toilets to Be Off-Limits

The draft legislation also covers the issue of companies spying on employees. According to Die Welt, the law will expressly forbid firms from video surveillance of workers in “personal” locations such as bathrooms, changing rooms and break rooms. Video cameras will only be permitted in certain places where they are justified, such as entrance areas, and staff will have to be made aware of their presence.

Similarly, companies will only be able to monitor employees’ telephone calls and e-mails under certain conditions, and firms will be obliged to inform their staff about such eavesdropping.

The new law is partially a reaction to a number of recent scandals in Germany involving management spying on staff. In 2008, it was revealed that the discount retail chain Lidl had spied on employees in the toilet and had collected information on their private lives. National railway Deutsche Bahn and telecommunications giant Deutsche Telekom were also involved in cases relating to surveillance of workers.

Online data privacy is increasingly becoming a hot-button issue in Germany. The government is currently also working on legislation to deal with issues relating to Google’s Street View service, which is highly controversial in the country because of concerns it could violate individuals’ privacy.



Too good to live

People hate generosity as much as they hate mean-spiritedness

SELFISHNESS is not a good way to win friends and influence people. But selflessness, too, is repellent. That, at least, is the conclusion of a study by Craig Parks of Washington State University and Asako Stone of the Desert Research Institute in Nevada. Dr Parks and Dr Stone describe, in the latest edition of the Journal of Personality and Social Psychology, how and why the goody two-shoes of this world annoy everyone else to distraction.

In the first of their experiments they asked the participants—undergraduate psychology students—to play a game over a computer network with four other students. In fact, these others (identified only by colours, in a manner reminiscent of the original version of the film, “The Taking of Pelham 123”) were actually played by a computer program.

Participants, both real and virtual, were given ten points in each round of the game. They could keep these, or put some or all of them into a kitty. The incentive for putting points in the kitty was that their value was thus doubled. The participant was then allowed to withdraw up to a quarter of the points contributed by the other four into his own personal bank, while the other four “players” did likewise. The incentive to take less than a quarter was that when the game ended, after a number of rounds unknown to the participants, a bonus would be given to each player if the kitty exceeded a certain threshold, also unspecified. When the game was over the points were converted into lottery tickets for meals.

The trick was that three of the four fake players contributed reasonably to the kitty and took only a fair share, while the fourth did not. Sometimes this maverick behaved greedily, because the experiment had been designed to study the expected ostracism of cheats. As a control, though, the maverick was sometimes programmed to behave in an unusually generous way.

After the game was over, the participants were asked which of the other players they would be willing to have another round with. As the researchers expected, they were unwilling to play again with the selfish. Dr Parks and Dr Stone did not, however, expect the other result—that participants were equally unwilling to carry on with the selfless.

Follow-up versions of the study showed that this antipathy was not because of a sense that the selfless person was incompetent or unpredictable—two kinds of people psychologists know are disliked in this sort of game. So the researchers organised a fourth experiment. This time, once the game was over, they asked the participants a series of questions designed to elucidate their attitudes to the selfless “player”.

Most of the responses fell into two categories: “If you give a lot, you should use a lot,” and “He makes us all look bad.” In other words, people were valuing their own reputations in the eyes of the other players as much as the practical gain from the game, and felt that in comparison with the selfless individual they were being found wanting. Too much virtue was thus seen as a vice. Perhaps that explains why so many saints end up as martyrs. They are simply too irritating.



Red menace

How the ‘strange and horrible’ tomato conquered Italy, and America

August in New England is the height of tomato season, when fat red beefsteaks, purple and green heirlooms, and tiny, sweet Sungolds beckon at the farmers market. They’re wonderful in crisp salads, as refreshing gazpachos, and all on their own. Perhaps most of all, tomatoes are synonymous with Italian cuisine.

Without the tomato, pizza would be bread and cheese, spaghetti would seem naked. The North End without red sauce? Impossible. But the tomato’s role in Italian food is fairly recent, according to David Gentilcore, a professor of early modern history at the University of Leicester in the United Kingdom.

In his new book, “Pomodoro! A History of the Tomato in Italy,” Gentilcore traces the tomato from its origins in the New World, where it was domesticated by the Maya, then cultivated by the Aztecs. It likely entered Europe via Spain, after conquistador Hernan Cortes’s conquest of Mexico. When it arrived on the scene in Italy, it was strictly a curiosity for those who studied plants — not something anyone faint of heart would consider eating. In 1628, Paduan physician Giovanni Domenico Sala called tomatoes “strange and horrible things” in a discussion that included the consumption of locusts, crickets, and worms. When people ate tomatoes, it was as a novelty. “People were curious about new foods, the way gourmets are today with new combinations and new uses of high technology in preparation,” Gentilcore said. Yesterday’s tomato is today’s molecular gastronomy.

As tends to happen with food trends, tomatoes caught on. They gradually entered the diet, bringing color to pizza and pasta. They became a major industry — today worth $2.2 billion in Italy, with thousands of acres of land devoted to Roma, San Marzano, and other varieties. They crossed the Atlantic with immigrants, and recolonized the New World in the form of Sunday gravy.

Gentilcore may now know more about the tomato’s long journey through Italy than anyone else, and he has a particular eye for the details that really surprise. “I like provoking,” he says. “There are historians who write textbooks to propound myths about history. Then there are historians who like to break down myths and preconceptions. I’m not saying I’m doing that. It’s too grandiose. But I can’t resist the odd thing.”

Gentilcore spoke with Ideas by phone from Arezzo, Italy, where he is currently doing research.

IDEAS: When did the tomato become an integral part of Italy’s cuisine?

GENTILCORE: You can’t imagine Italian food without it. And yet most of these dishes, such as pasta al pomodoro, are fairly recent — from the 1870s or ’80s. Italian immigrants arriving in New York City or Boston were the first generation to eat these dishes as daily things. Making a rich meat sauce with maybe the addition of tomato paste, that Sunday gravy style, is something that happens only in the 20th century.

IDEAS: Why was the tomato initially regarded with such horror?

GENTILCORE: The tomato was associated with the eggplant, which was regarded with suspicion. It’s a vine. Anything that grows along the ground was seen as a plant of low status, something you only give to peasants. And the tomato was thought to hinder digestion because it was cold and watery. When ideas about digestion changed, something like a tomato was not harmful anymore.

IDEAS: It’s been called the “love apple.” Was it seen as an aphrodisiac?

GENTILCORE: Francisco Hernandez, a personal physician to King Philip II of Spain, was sent to the New World to write a huge compendium on animals and plants. He was dismayed and disgusted by the appearance of the tomatillo, which was considered the same thing. He compared it to female genitalia. If we look at what constitutes an aphrodisiac in that period, there’s no way the tomato could’ve made it. Foods then were classified by qualities — “cold,” “wet,” “hot,” “dry.” Aphrodisiac food is usually classified as hot and moist and nourishing. Tomatoes were viewed as cold and moist.

IDEAS: Would an aphrodisiac at the time have been considered a good thing or a bad thing?

GENTILCORE: The medical advice was to stay away from these things. In some cases, it made them all the more attractive. Truffles, for example. For the elites who could afford them, that was part of their attraction. Practically all fruits and vegetables were considered harmful. Melons in particular were really dangerous. The only way to eat something cold and moist like melon was to wrap it in prosciutto or ham, which is hot and dry. It was a way of balancing the food.

IDEAS: What explains the tomato’s rise?

GENTILCORE: Tomatoes took off in Italy because they became an industry, mostly for export. Italians were too poor to buy such things. Most of the country’s processed tomatoes are exported. In Italy, up until the 1950s, there was a large part of the country, even where they produce tomatoes, where they wouldn’t eat the stuff.

IDEAS: What about ketchup? It’s such an American foodstuff.

GENTILCORE: When they started making ketchup in Italy, they didn’t call it that. It was the time of Fascism, so they couldn’t use anything with American overtones. They called it “salsa rubra.” They tried to give it an Italian-sounding name. On a worldwide level, most tomato paste these days probably ends up in ketchup.

IDEAS: Last year a tomato blight hit the United States. What if that happened in Italy?

GENTILCORE: I wouldn’t say it would be a national disaster. So much of what goes into cans in Italy anyway is imported tomatoes. You can have tomatoes from, say, Turkey, imported in an unfinished state, half-processed. This is supposed to be done away with, with new labeling legislation, but I don’t know if it has been.

IDEAS: So the great Italian tomato industry is in some sense really the great Italian canning industry?

GENTILCORE: The fact is that Italy exports more tomatoes than it produces. It’s got to be coming from somewhere. The sums don’t add up. China is also a source of imports, or at least it was. Nowadays the can will specify “sourced from Italian tomatoes,” and you have to trust it. Italians are very keen to buy Italian. They don’t want to buy from Turkey, much less China.

IDEAS: Thus San Marzano tomatoes aren’t just delicious, they’re a matter of national pride.

GENTILCORE: It’s seen as one of these great local southern Italian varieties. But this variety was developed for export. The San Marzano variety was essentially created for the British market, but today to say something like that is tantamount to heresy.

IDEAS: Next thing you’ll be telling me Italians are eating those horrible, pale, golf ball tomatoes we get in the United States in the winter.

GENTILCORE: We are starting to get those in Italy. There’s a demand to eat tomatoes year round. These make money. In July, August, and September, the problem is tomatoes are a cutthroat business. If it weren’t for subsidies, I don’t know what farmers would do. In winter, it’s more of a big business. The Mafia has infiltrated the distribution, especially in the shipping or trucking.

IDEAS: Where do the winter tomatoes come from?

GENTILCORE: The Netherlands. They’re the ones who masterminded this whole technology for growing tomatoes under glass. These tunnels stretch for miles. The vines stretch for 15 meters. They’re absolutely enormous. There’s even a special breed of bee that pollinates these plants which doesn’t exist in the wild. It’s a strange, futuristic world being harvested mechanically. If we really wanted traditional tomatoes with very rich flavors, we’d have to be prepared to pay a lot more than we do.



The Spirit Level: how ‘ideas wreckers’ turned book into political punchbag

Bestseller with cross-party support arguing that equality is better for all comes under attack from thinktanks

Richard Wilkinson and Kate Pickett argue that all levels of society, not just the poorest, benefit from more equality.

It was an idea that seemed to unite the political classes. Everyone from David Cameron to the Labour leadership candidates Ed and David Miliband has embraced The Spirit Level, a book by a pair of low-profile North Yorkshire social scientists.

Their 274-page book, a mix of “eureka!” insights and statistical analysis, makes the arresting claim that income inequality is the root of pretty much every social ill – murder, obesity, teenage pregnancy, depression. Inequality even limits life expectancy itself, they said.

The killer line for politicians seeking to attract swing voters was that greater equality is not just better for the poor but for the middle classes and the rich too.

Its authors, Richard Wilkinson and Kate Pickett, proclaimed their work a new kind of “evidence-based politics” and it has sold 36,000 copies in the UK, more than Barack Obama’s Change We Can Believe In.

Cameron quoted the book in a pre-election address envisioning the “big society”, the former Labour foreign secretary Jack Straw took it on holiday and Michael Gove, the education secretary, said it was “a fantastic analysis”.

For a book which concludes that either taxes must rise on the rich or their incomes must fall to increase equality, it was an astonishing level of cross-party support.

But this summer, something has snapped and if The Spirit Level were a punchbag, the stuffing would be coming out at the seams. A posse of rightwing institutes has laid into the work with a wave of brutal attacks.

Professor Wilkinson has admitted that an idea he hoped would escape the “leftwing ghetto” to transcend party politics and make Britain a happier, less-divided, more sociable, healthier and safer place has been made unpalatable for Conservatives by “wreckers” from the right.

Following George Osborne’s June budget, which warned of spending cuts so deep most observers are resigned to growing income inequality, a pair of the Conservative party’s favourite thinktanks took aim.

With the success of the cuts programme so important to the government’s credibility, The Spirit Level’s argument that any increase in inequality means more crime, poorer education, more disease and violence was a dangerous idea to let stand.

So on 7 July, the Taxpayers Alliance, a campaign group for lower taxes and lower spending which is also bankrolled by wealthy Conservative donors, branded the book “flimsy” and issued a damning report.

“On almost no measure does the central claim of the Spirit Level, that income inequality decreases life expectancy, stand up to scrutiny,” said Matthew Sinclair, TPA research director. “In some area the authors appear to be promoting utterly absurd ideas.”

Just 24 hours later Policy Exchange, often described as Cameron’s favourite thinktank, weighed in with its own 123-page assault called Beware False Prophets.

Its author, sociologist Peter Saunders, said The Spirit Level could “contaminate an important area of political debate with wonky statistics and spurious correlations … Very little of Wilkinson and Pickett’s statistical evidence actually stands up, and their causal argument is full of holes”.

Wilkinson, an experienced academic with professorships at the universities of Nottingham, London and York, branded Saunders’s attack “a hatchet job” and his analysis of the effect of ethnicity “racist”, a charge denied by Saunders.

Right wing columnists weighed in too. This week Toby Young called it “junk food for the brain” in the Spectator. Ed West, in the Daily Telegraph, said “the real agenda is massive government expansion”.

Wilkinson was shocked by what he believes is part of a worrying trend in political discourse, also happening in the US, where a few people, often attached to right wing institutes, have set themselves up as professional wreckers of ideas.

“Do they even believe what they are saying?” he said today. “I suppose it doesn’t matter if their claims are right or wrong; it is about sowing doubt in people’s minds.”

The authors fear the attacks have scuppered any chance of removing the inequality debate “from the left wing ghetto”.

Wilkinson said: “It is now something for the left and we would rather have avoided that. People on the right will feel relieved knowing they don’t have to treat this seriously and will be happy to know it has been rubbished.”

The Taxpayers Alliance said it knew about the imminent Policy Exchange report, but denied acting in concert with its fellow thinktank. Taken together with the 170-page Spirit Level Delusion, published in May by the writer Christopher Snowdon with the Democracy Institute, a rightwing thinktank in Washington DC, the two reports left Wilkinson and Pickett on the ropes.

Snowdon said he spent six months drafting his attack on the Spirit Level after he “realised it was influential and informing debate” and because he believes it is fundamentally flawed.

He does not accept The Spirit Level’s claim that the psychological effects of income inequality on society are great enough to cause widespread social ills. “I don’t think people outside the intelligentsia worry about inequality,” Snowdon said. “The working class don’t worry about how much Wayne Rooney is earning.”

The attacks challenge The Spirit Level’s interpretation and selection of statistics in support of a causal link between inequality and social ills, and they dispute Wilkinson and Pickett’s dismissal of other factors, including race and culture, as possible explanations for the relationship.

As Labour enters the autumn conference season searching for a big idea, as well as a leader to unite around, Wilkinson retains hope that his idea could still shape the Labour leadership campaign. Gordon Brown cancelled two invitations for Wilkinson and Pickett to explain their findings to the Cabinet at the end of last year and again in January, but David Miliband, the favourite to become the Labour leader, is a fan.

“The moral case against unjustified inequalities has always been strong, and motivated me and millions of others around the world,” Miliband said. “What is arresting about Richard Wilkinson’s work is his concern with a different argument – the self interest argument. It is in some ways counterintuitive. But it has profound implications.”

Level headed

What the book says

The authors, Richard Wilkinson and Kate Pickett, argue that most of the important health and social problems of the rich world are more common in unequal societies. Using data from 23 rich countries and 50 US states, they found problems are anything from three to 10 times as common in more unequal societies. Again and again, the Scandinavian countries and Japan are at one end of the scale as the most equal, while the US, UK and Australia are at the other.

A key explanation is the psychological impact of inequality which, they say, causes stress and anxiety. For example, maths and literacy scores are lower in more unequal countries, dragged down by problems of health, anxiety and depression and the drug and alcohol use that follows. The way parents react to relative poverty also affects how they treat their children, which in turn affects education.

Violence rises in more unequal societies too. Following psychological studies that say men have an incentive to achieve as high a status as possible because their sexual competitiveness depends on it, the authors explain that men use violence when their status is threatened, and more so when there is little status to defend. “The association between inequality and violence is strong and consistent. The evolutionary importance of shame and humiliation provides a plausible explanation of why more unequal societies suffer more violence.” Suicide is the only social ill that increases in more equal societies, they say.

Crucially, the authors argue that the evidence shows that all levels of society benefit from more equality, not just the poorest. On health, “at almost any level of income, it’s better to live in a more equal place”. Whether rich or poor, inequality causes stress, which causes biological reactions that put pressure on the body and increase illness.

Arguably the most profound conclusion is that economic growth among rich countries has finished its work because it is no longer increasing life expectancy and the only way to do that is to better share the wealth we have.

In its most simple terms, the book yearns for society to celebrate humankind’s ability to co-operate and support one another. Are we fighters – which increases inequality? Or are we lovers? The authors say we don’t have to see society, as the philosopher Hobbes saw it, as naturally in conflict – “every man against every man” – owing to rivalry for scarce resources.

Instead, “human beings have a unique potential to be each other’s best source of co-operation, learning, love and assistance of every kind”.



Breaking Up Is Hard to Do

False confessions, graphic testimony, framed spouses and ‘unknown blondes’: a history of the difficulty in getting divorced, and how it could now change

In 1961, as cheap, fast Mexican divorces became popular, Marilyn Monroe traveled to Ciudad Juarez to file for divorce from playwright Arthur Miller (above, in happier days).

Unhappy couples in New York have long gone to extremes to throw off the shackles of matrimony—in the worst cases, framing their spouses, producing graphic testimony about affairs, or even confessing to crimes they did not commit. All this will fade into the past if, as expected, Gov. David Paterson signs a bill making New York the last state in the country to adopt unilateral no-fault divorce.

Their counterparts in other states have had it much easier. California adopted the first no-fault divorce bill in 1970; by 1985, every other state in the nation—but one—had passed similar laws. In New York, the miserably married must still charge each other with cruel and inhuman treatment, adultery or abandonment—or wait one year after a mutually agreed legal separation—in order to divorce.

New York’s first divorce law was passed in 1787, at the initiative of a cuckold named Isaac Gouverneur, who had the good fortune of securing Alexander Hamilton as his counsel. From then until the Divorce Reform Law of 1966, adultery was considered the only grounds sufficient for divorce. The woman whose husband fled West; the wife who was physically abused; even a man who discovered on his wedding night that his bride was of “doubtful sex” did not meet the criteria for a full divorce. If they were lucky, they might obtain a legal separation—or after 1829, an annulment.

The legal situation put many distressed couples in a quandary. Some devised adulterous situations. Those with money went out of the state to divorce—to places like Indiana in the 1800s, Nevada in the 1900s, or Mexico in the 1960s. (The cheap, fast Mexican divorce drew many celebrities too, including Marilyn Monroe during her split from Arthur Miller.) Still others remained bound to spouses they could not stand.

In the early 20th century, a number of young women hired themselves out as “correspondents” in divorce cases—essentially bait for philandering husbands. In 1934, the New York Mirror published an article titled, “I Was the ‘Unknown Blonde’ in 100 New York Divorces!”—featuring one Dorothy Jarvis, who earned as much as $100 a job. Ms. Jarvis had several tactics, beyond taking her date to a hotel room and awaiting ambush. There was the “push and raid” (where she would push herself into a man’s room, dressed only in a fur coat, then whip off her outer garment), as well as the “shadow and shanghai” and the “dance and dope.”

There never was a shortage of juicy testimony. In the case of Cock v. Cock of 1818, an eyewitness testified that when Mr. Cock was away, he came to the house before sunrise to find Mrs. Cock in bed with another man, “she being undressed and he having his breeches unbuttoned and down about his feet.” Likewise, in the case against Aaron Burr, the infamous founding father, a servant deposed that she had seen “Jane McManus with her clothes all up & Coln Burr with his hands under them and his pantaloons down.” (The divorce was granted the day Mr. Burr died.)

Then there were those who were desperate enough to fight the law, most without success. The extraordinary case of Eunice Chapman, which drew national attention, was a rare instance of triumph. When she met her husband in Durham, N.Y., in 1802, Eunice Hawley was a 24-year-old beauty headed toward spinsterhood, thanks to her family’s financial failings. She was initially put off by the advances of James Chapman, a widower 15 years her senior, as she later wrote. However, he had a good business and promised her security, and after two years of his dogged pursuit, Ms. Hawley accepted his hand in matrimony.

Eight years and three children later, the marriage fell apart—according to her, because of his abuse, alcoholism and infidelity; according to him, because of her “abusive tongue.” Finally James left Eunice and their children, ages 2, 5 and 6, with no plans to return.

This might have been the end of the story, had James not encountered a religious society called the Shakers. Now famed for their spare, modern-looking designs, the Shakers were a radical sect that, in following the teachings of their English-born leader, Ann Lee, required their followers to renounce their sexuality, all private property and personal family bonds for a larger spiritual union. To James, the Shakers’ spiritual inspiration and orderly lifestyle were just what his family needed, and he hoped that his wife would agree. Eunice did not.


When Love Fades Away

Actress Mary Pickford brought early attention to getting a divorce in Nevada, where she split from actor Owen Moore in 1920.

Real-estate heir Leonard Kip Rhinelander eloped with Alice Jones in October 1924—and then New York society discovered she was a descendant of West Indians. The following month, Mr. Rhinelander filed for an annulment, claiming she had deceived him about her race. After a well-publicized trial, the annulment was denied; in 1929, the couple finally agreed to a Nevada divorce.

From there, the Chapmans became locked in a battle for the children who were, by the law and culture of the times, considered the rightful property of their father. Eventually James exercised his paternal prerogative: While Eunice was out, he seized the children and brought them to the Shakers near Albany, N.Y. Later, when Eunice came after them, he took the children into hiding among the Shakers in New Hampshire.

Eunice was now an abandoned woman, with no access to her children, and no secure way to rid herself of her husband—a situation made critical by the legal status of married women at the time. From the moment she wed, a woman like Eunice was considered “civilly dead” by law. She could not own property, earn her own wages, sue or be sued, make a will or sign any other contract by herself. She would remain in this state until her husband died or she managed to obtain a full divorce.

Years later, Eunice’s opponents would complain that Eunice had plenty of recourse in the existing divorce laws, since she could charge her husband with adultery. Not only did she claim to have had “ocular demonstration” of his cheating, but she had an eyewitness who could testify that he had seen James lying in a back-room bed with another woman. The problem was that even with such proof there was no guarantee that a court would take Eunice’s side. What scholar Hendrik Hartog has called a “guilty mind” was required, and Eunice would be hard pressed to present her husband as an incorrigible adulterer, in need of punishment, when he had joined a celibate sect.

So what was Eunice to do? An adultery trial would be expensive, as well as risky. And she would have to find James first, which could take years, if she could find him at all. One other legal option remained: She could petition the legislature for a special act of relief that would grant her a divorce as an exception to the existing laws. To do so, she would have to find a legislator willing to push her petition through the Capitol—no small task, since it would have to win favor not only with both houses, but also with the Council of Revision, which had veto power over the legislature. Legislative divorces were actually common practice in other states. But then, as now, New York was unusually conservative and had never issued one.

Going against the odds—and all expectation that she remain at home and accept the actions of her husband—Eunice fought a dazzling battle. She courted politicians, published tell-alls about Shaker “captivity” (which she distributed to legislators, and peddled everywhere), and made the most of what would now be called her phenomenal sex appeal. Her case drew crowds and even attracted the attention of Thomas Jefferson (who was outraged on behalf of the Shakers, not Eunice).

Some lawmakers argued that as badly as Mr. Chapman had treated his wife, the couple should not be allowed a divorce, since an end to the marriage would deprive the bad husband of the possibility of reform. Other legislators warned that permitting this divorce would ruin womankind. In an 1818 speech before the Assembly, Nathan Williams said: “By passing this bill we shall give boldness to the female character. Those who are now apparently amiable, encouraged by the success of Eunice Chapman, would become emboldened….They, like Eunice Chapman, would leave their retirement, and by familiarity with gentlemen would soon…be haunting the members—for divorces!”

Other arguments were not so different from those in circulation today. A main strike against Eunice’s case was that relaxing the divorce laws would prevent couples from working things out, leading to more divorces. Some contemporary activists would agree: The spokesman for the New York State Catholic Conference, Dennis Poust, recently suggested that the proposed changes would make it “easier to get out of marriage than it is to get out of a cell phone contract.”

After three years’ battle, in 1818, Eunice Chapman clinched unprecedented rights to custody, as well as a legislative divorce. Her triumph did not secure her children’s return—for that, mob action, a final face-off with James in New Hampshire, and yet another kidnapping would be required—but it paved the way. Others have not been so fortunate. Eunice’s case goes down as the only divorce in New York history that was granted as a direct act of legislature.

Petitioners did lobby the legislature, and at least one got partway through. Jacob Scramling sought relief after his wife, who had been presumed drowned, was found and then refused to come home. He won approval in the Assembly in 1845, but lost in the Senate. One year later, legislative divorce was abolished, leaving divorcing New Yorkers with no option but to charge each other with adultery. Only in 1966, with the passage of the Divorce Reform Law, did New York catch up with other states by admitting additional grounds such as abandonment and cruelty.

It was only three years later, however, that no-fault divorce legislation passed in California. And for 25 years, New York has stood alone in its approach to no-fault divorce. On the cusp of a historic rewriting of the laws, some critics complain that the current no-fault divorce bill is bad for women. Others champion it as a step forward. Many other couples throughout history would surely have welcomed it.

Ilyon Woo is author of “The Great Divorce.”



Riders on a swarm

Mimicking the behaviour of ants, bees and birds started as a poor man’s version of artificial intelligence. It may, though, be the key to the real thing

ONE of the bugaboos that authors of science fiction sometimes use to scare their human readers is the idea that ants may develop intelligence and take over the Earth. The purposeful collective activity of ants and other social insects does, indeed, look intelligent on the surface. An illusion, presumably. But it might be a good enough illusion for computer scientists to exploit. The search for artificial intelligence modelled on human brains has been a dismal failure. AI based on ant behaviour, though, is having some success.

Ants first captured the attention of software engineers in the early 1990s. A single ant cannot do much on its own, but the colony as a whole solves complex problems such as building a sophisticated nest, maintaining it and filling it with food. That rang a bell with people like Marco Dorigo, who is now a researcher at the Free University of Brussels and was one of the founders of a field that has become known as swarm intelligence.

In particular, Dr Dorigo was interested to learn that ants are good at choosing the shortest possible route between a food source and their nest. This is reminiscent of a classic computational conundrum, the travelling-salesman problem. Given a list of cities and their distances apart, the salesman must find the shortest route needed to visit each city once. As the number of cities grows, the problem gets more complicated. A computer trying to solve it will take longer and longer, and suck in more and more processing power. The reason the travelling-salesman problem is so interesting is that many other complex problems, including designing silicon chips and assembling DNA sequences, ultimately come down to a modified version of it.

Ants solve their own version using chemical signals called pheromones. When an ant finds food, she takes it back to the nest, leaving behind a pheromone trail that will attract others. The more ants that follow the trail, the stronger it becomes. The pheromones evaporate quickly, however, so once all the food has been collected, the trail soon goes cold. Moreover, this rapid evaporation means long trails are less attractive than short ones, all else being equal. Pheromones thus amplify the limited intelligence of the individual ants into something more powerful.


In 1992 Dr Dorigo and his group began developing Ant Colony Optimisation (ACO), an algorithm that looks for solutions to a problem by simulating a group of ants wandering over an area and laying down pheromones. ACO proved good at solving travelling-salesman-type problems. Since then it has grown into a whole family of algorithms, which have been applied to many practical questions.
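The mechanics Dr Dorigo borrowed can be sketched in a few dozen lines. The following is a minimal, illustrative version of an ACO-style solver for the travelling-salesman problem, not AntOptima's or Dr Dorigo's actual code: artificial ants build tours edge by edge, choosing the next city with probability weighted by pheromone and inverse distance; pheromone then evaporates everywhere and is deposited along each tour in proportion to its quality, so short tours reinforce themselves just as short trails do in real colonies. All parameter names and values here are illustrative assumptions.

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=100, evap=0.5, alpha=1.0, beta=2.0, seed=0):
    """Minimal Ant Colony Optimisation sketch for the travelling-salesman problem.

    dist: symmetric matrix of city-to-city distances.
    Pheromone on an edge rises when ants use it and evaporates each
    iteration, so shorter tours accumulate stronger trails.
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels on edges
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                # choose next city with probability ~ pheromone^alpha * (1/distance)^beta
                weights = [(c, (tau[cur][c] ** alpha) * ((1.0 / dist[cur][c]) ** beta))
                           for c in unvisited]
                total = sum(w for _, w in weights)
                r, acc = rng.random() * total, 0.0
                for c, w in weights:
                    acc += w
                    if acc >= r:
                        break
                tour.append(c)
                unvisited.remove(c)
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation everywhere, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - evap)
        for tour, length in tours:
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

The evaporation step is what keeps long trails from dominating: an edge that stops being used loses its pheromone quickly, just as the article describes for real ants.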

Its most successful application is in logistics. Migros, a Swiss supermarket chain, and Barilla, Italy’s leading pasta-maker, both manage their daily deliveries from central warehouses to local retailers using AntRoute. This is a piece of software developed by AntOptima, a spin-off from the Dalle Molle Institute for Artificial Intelligence in Lugano (IDSIA), one of Europe’s leading centres for swarm intelligence. Every morning the software’s “ants” calculate the best routes and delivery sequences, depending on the quantity of cargo, its destinations, delivery windows and available lorries. According to Luca Gambardella, the director of both IDSIA and AntOptima, it takes 15 minutes to produce a delivery plan for 1,200 trucks, even though the plan changes almost every day.

Ant-like algorithms have also been applied to the problem of routing information through communication networks. Dr Dorigo and Gianni Di Caro, another researcher at IDSIA, have developed AntNet, a routing protocol in which packets of information hop from node to node, leaving a trace that signals the “quality” of their trip as they do so. Other packets sniff the trails thus created and choose accordingly. In computer simulations and tests on small-scale networks, AntNet has been shown to outperform existing routing protocols. It is better able to adapt to changed conditions (for example, increased traffic) and has a more robust resistance to node failures. According to Dr Di Caro, many large companies in the routing business have shown interest in AntNet, but using it would require the replacement of existing hardware, at huge cost. Ant routing looks promising, however, for ad hoc mobile networks like those used by the armed forces and civil-protection agencies.

Routing, of both bytes and lorries, is what mathematicians call a discrete problem, with a fixed, albeit large, number of solutions. For continuous problems, with a potentially infinite number of solutions—such as finding the best shape for an aeroplane wing—another type of swarm intelligence works better. Particle swarm optimisation (PSO), which was invented by James Kennedy and Russell Eberhart in the mid 1990s, is inspired more by birds than by insects. When you place a bird feeder on your balcony, it may take some time for the first bird to find it, but from that moment many others will soon flock around. PSO algorithms try to recreate this effect. Artificial birds fly around randomly, but keep an eye on the others and always follow the one that is closest to “food”. There are now about 650 tested applications of PSO, ranging from image and video analysis to antenna design, from diagnostic systems in medicine to fault detection in industrial machines.
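The bird-feeder dynamic translates into code roughly as follows. This is an illustrative sketch of standard PSO, not Kennedy and Eberhart's original implementation, and the parameter names and values are assumptions: each particle's velocity blends inertia with a pull toward its own best-known position and a pull toward the best position found by any member of the flock.

```python
import random

def pso_minimise(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimisation sketch for a continuous function f.

    Each 'bird' remembers its own best position and is also drawn toward
    the swarm's best position -- the digital analogue of flocking toward
    whichever bird is closest to the food.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Because the search space is continuous rather than a fixed set of routes, nothing like a pheromone table is needed; the shared "best position" plays the role of the flock converging on the feeder.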

Digital ants and birds, then, are good at thinking up solutions to problems, but Dr Dorigo is now working on something that can act as well as think: robots. A swarm of small, cheap robots can achieve through co-operation the same results as individual big, expensive robots—and with more flexibility and robustness; if one robot goes down, the swarm keeps going. Later this summer, he will be ready to demonstrate his “Swarmanoid” project. This is based on three sorts of small, simple robot, each with a different function, that co-operate in exploring an environment. Eye-bots take a look around and locate interesting objects. Foot-bots then give hand-bots a ride to places identified by the eye-bots. The hand-bots pick up the objects of interest. And they all run home.

All this is done without any pre-existing plan or central co-ordination. It relies on interactions between individual robots. According to Dr Dorigo, bot-swarms like this could be used for surveillance and rescue—for example, locating survivors and retrieving valuable goods during a fire.


Swarmanoid robots may not much resemble the creatures that originally inspired the field, but insects continue to give programmers ideas. Dr Dorigo’s group has, for instance, developed a system to allow robots to detect when a swarm member is malfunctioning. This was inspired by the way some fireflies synchronise their light emissions so that entire trees flash on and off. The robots do the same, and if one light goes out of synch because of a malfunction the other bots can react quickly, either isolating the maverick so that it cannot cause trouble, or calling back to base to have it withdrawn.

All of which is encouraging. But anyone who is really interested in the question of artificial intelligence cannot help but go back to the human mind and wonder what is going on there—and there are those who think that, far from being an illusion of intelligence, what Dr Dorigo and his fellows have stumbled across may be a good analogue of the process that underlies the real thing.

For example, according to Vito Trianni of the Institute of Cognitive Sciences and Technologies, in Rome, the way bees select nesting sites is strikingly like what happens in the brain. Scout bees explore an area in search of suitable sites. When they discover a good location, they return to the nest and perform a waggle dance (similar to the one used to indicate patches of nectar-rich flowers) to recruit other scouts. The higher the perceived quality of the site, the longer the dance and the stronger the recruitment, until enough scouts have been recruited and the rest of the swarm follows. Substitute nerve cells for bees and electric activity for waggle dances, and you have a good description of what happens when a stimulus produces a response in the brain.

Proponents of so-called swarm cognition, like Dr Trianni, think the brain might work like a swarm of nerve cells, with no top-down co-ordination. Even complex cognitive functions, such as abstract reasoning and consciousness, they suggest, might simply emerge from local interactions of nerve cells doing their waggle dances. Those who speak of intellectual buzz, then, might be using a metaphor which is more apt than they realise.



The Real-Life Murderesses’ Row

Before Velma and Roxie in ‘Chicago,’ there were Beulah and Belva and a bevy of other good-looking inmates

A gorgeous young woman in 1920s Chicago takes a short-cut out of an extramarital affair: She guns down her lover—and convinces her husband to pay for her defense. In prison, she joins Murderesses’ Row, a veritable chorus line of gals accused of knocking off their men for one flimsy reason or another. Will our Jazz Age heroine beat the rap? Just watch her lawyer give ’em the old razzle-dazzle!

Belva “Belle” Brown in her cabaret-performing days before an ill-fated marriage to William Gaertner in 1917

Before director Rob Marshall made the story a box-office success with the 2002 movie musical “Chicago,” before Bob Fosse turned it into a Broadway hit in the 1970s (a 1990s revival inspired the movie), before the tale was dramatized for the stage in 1926 by former Chicago Tribune reporter Maurine Watkins, there was the real-life arrest and prosecution of Beulah Annan for murder. Watkins covered the trial for the Tribune.

Now comes “The Girls of Murder City,” Douglas Perry’s colorful account of how it happened in 1924 that the Cook County jail came to be packed with young women accused of murder. Their stories riveted the city. Their crimes were viewed as symptoms of a troubled era. Prohibition was backfiring spectacularly. Everyone, women included, seemed to be drinking. Everyone, women included, seemed to be packing guns. Firewater and firearms—not a good combination.

“That was what Prohibition did—it pulled everyone down into the pit,” writes Mr. Perry. Chicago newspapers hired a few of their own beauties to interview and write about the jail’s so-called “sob sisters.” Among the young reporters was Maurine Watkins, a minister’s daughter from Crawfordsville, Ind., who thrilled at the chance to write about the lurid lives of these wayward women.

Watkins helped turn the sob sisters into stars. She described Belva Gaertner, widely considered the most stylish woman on the cell block, as “a handsome divorcée of numerous experiences with divorce publicity,” and Beulah Annan—young, slender and possessed of a soft Southern accent—as “the prettiest woman ever charged with murder in Chicago.” Another of the accused, Kitty Malm, was not so attractive—Watkins and other reporters referred to her as “Wolf Woman.” But even Malm tried to look her best in court: All-male juries in Cook County at the time proved incapable of convicting pretty women of murder, so these accused killers worked hard getting dolled up while awaiting trial. The city’s crime reporters, feeding readers’ appetites, worked hard to help the inmates boost their sultry images.

Maurine Watkins, who covered Belva’s 1924 murder trial for the Chicago Tribune.

The prisoners—more than a dozen in all—were known to cut each other’s hair in the latest styles and trade tips on how to apply cosmetics. An obliging reporter told readers that when one prisoner had a trial approaching, “Belva gave her some really good ideas on costuming, coiffure and general chic.”

When Kitty Malm was finally convicted of murdering a guard during a botched robbery and sentenced to life in prison, experts at the time attributed the result in part to her looks. Mr. Perry quotes a phrenologist telling reporters that murderous women “all have broad heads. You can put it down as a basic principle that broad-headed animals eat the narrow-headed ones.” Such women are “show me” people, he said, “who have to experience to understand, and the jails are full of this type. Food and sexual interests make a strong appeal to them.”

Chicago in the 1920s did seem like Murder City. Machine guns popped on Michigan Avenue. Cops were in the pockets of gangsters. Murderers were often glamorized, seldom convicted. It was easy enough for a man like gangster Al Capone or a woman like Beulah Annan, who vamped for photographers even in her cell, to come to believe that criminals could also be stars.

Ultimately, “The Girls of Murder City,” like “Chicago,” is less a crime story than one about celebrity. But in the stage and screen versions, the accused killers are the stars; in Mr. Perry’s book, newspaper reporters are at the heart of the action.

Maurine Watkins happily joined the pack in glamorizing these deadly women—Belva Gaertner was acquitted, though she almost certainly gunned down her beau—but the reporter clearly didn’t want her work to help set them free. In dry, sardonic language, she attacked her subjects, stripping away the lies and conceits that they wore like so much makeup. “What counts with a jury when a woman is on trial for murder?” she asked in one story after Beulah Annan had stunned the public by declaring (falsely) she was pregnant. “Youth? Beauty? And if to these she adds approaching motherhood?”

Another reporter, H.H. Robertson, writing for the Atlanta Constitution, was clearly smitten by the defendant Beulah. “Shaking her Titian hair and relaxing in a dimple smile,” Robertson wrote, “Mrs. Annan gazed coyly at her husband and other relatives and said she had learned a lesson.”

Perhaps not surprisingly, Watkins soon became fed up with the newspaper business as it was practiced in Chicago in the Roaring ’20s. In 1926, two years after the Gaertner trial ended, Watkins quit the Tribune and lit out for the East Coast. She studied playwriting at Yale University. For a class assignment she wrote “The Brave Little Woman,” later renamed “Chicago,” with the murderer-turned-celebrity Roxie Hart based on Beulah Annan and nightclub-singer-turned-murderer Velma Kelly inspired by Belva Gaertner. In the original version of the play Velma played a small role. But she had enough lines that when Gaertner attended the show’s Chicago premiere, she told a reporter: “Gee, this play’s sure got our number, ain’t it. Sure, that’s me.”

The lust for publicity—to provide it, to be its object, or to consume it—in these early days of the mass media was a new phenomenon. Watkins hoped to expose how the nascent celebrity culture had corrupted the newspaper business and the legal system. Lucky for Watkins, she managed to vent her frustrations without losing her sense of humor. “Oh God . . . God . . . Don’t let ’em hang me—don’t,” she has Roxie say after her confession. “Why, I’d . . . die!”

The play scored laughs with its audience even as it excoriated the Midwestern Murder City. “Chicago” opened on Broadway in late 1926—a solid hit, one that produced a road show and inspired at least two movies, a silent film in 1927 and “Roxie Hart” (1942) starring Ginger Rogers.

Over the years, Watkins refused requests to turn “Chicago” into a musical. But after her death in 1969, Watkins’s family sold the rights. The acclaimed choreographer Bob Fosse directed and choreographed the show, and a story taken from long-faded headlines was given new life.


Watkins in the 1920s had taken a familiar news story and successfully turned it into entertainment. Mr. Perry faced essentially the opposite task: dragging the popular “Chicago”-style tale back to the land of fact. His biggest challenge was one he set for himself by putting Maurine Watkins at center stage. We care about her, but let’s face it: She was a reporter, not a woman on trial for her life. In “The Girls of Murder City” the courtroom scenes are dramatic and well written, but they serve only as sideshows. It’s Watkins we care about, but she faces no great danger, no perilous challenge, and her personal life is tidy.

Mr. Perry overcomes this hurdle the old-fashioned way: with vivid storytelling. “Maurine had never seen so many men in one place,” he writes of the Tribune newsroom as Watkins encountered it. “They barked into telephones, leaped up, slammed hats on their heads, and strode from the room. They whooped and hollered and smoked cigarettes.” Later: “Young men and women arrived in Chicago from across the world and promptly lost their identities—or reforged them into tougher, more vital versions of themselves.”

Sometimes Mr. Perry goes too far, telling us what his characters thought or felt: “She was wide-eyed drunk, scared into some semblance of clarity”; “her heart rate spiked”; “the shame showed on her: It lit her up, coloring her cheeks a deep, invigorating pink, flushing away her guilt.”

But such misdemeanors are easily pardoned, given the book’s merits. “The Girls of Murder City” spans several categories—true-crime, comedy, social history. It turns out that behind “Chicago” there was a sexy, swaggering, historical tale in no need of a soundtrack. Liked the movie. Loved the book.

Mr. Eig, a former Wall Street Journal staff writer, is the author, most recently, of “Get Capone: The Secret Plot That Captured America’s Most Wanted Gangster.”



See also:

‘The Girls of Murder City’


Thursday, April 24, 1924

The most beautiful women in the city were murderers.

The radio said so. The newspapers, when they arrived, would surely say worse. Beulah Annan peered through the bars of cell 657 in the women’s quarters of the Cook County Jail. She liked being called beautiful for the entire city to hear. She’d greedily consumed every word said and written about her, cut out and saved the best pictures. She took pride in the coverage.

But that was when she was the undisputed “prettiest murderess” in all of Cook County. Now everything had changed. She knew that today, for the first time since her arrest almost three weeks earlier, there wouldn’t be a picture of her in any of the newspapers. There was a new girl gunner on the scene, a gorgeous Polish girl named Wanda Stopa.

Depressed, Beulah chanced getting undressed. It was the middle of the day, but the stiff prison uniform made her skin itch, and the reporters weren’t going to come for interviews now. They were all out chasing the new girl. Beulah sat on her bunk and listened. The cellblock was quiet, stagnant. On a normal day, the rest of the inmates would have gone to the recreation room after lunch to sing hymns. Beulah never joined them; she preferred to retreat to a solitary spot with the jail radio, which she’d claimed as her own. She listened to fox-trots. She liked to do as she pleased.

It was Belva Gaertner, “the most stylish” woman on the block, who had begun the daily hymn-singing ritual. That was back in March, the day after she staggered into jail, dead-eyed and elephant-tongued, too drunk—or so she claimed—to remember shooting her boyfriend in the head. None of the girls could fathom that stumblebum Belva now. On the bloody night of her arrival, it had taken the society divorcée only a few hours of sleep to regain her composure. The next day, she sat sidesaddle against the cell wall, one leg slung imperiously over the other, heavy-lidded eyes offering a strange, exuberant glint. Reporters crowded in on her, eager to hear what she had to say. This was the woman who, at her divorce trial four years before, had publicly admitted to using a horsewhip on her wealthy elderly husband during lovemaking. Had she hoped to make herself a widow before he could divorce her? Now you had to wonder.

“I’m feeling very well,” Belva told the reporters. “Naturally I should prefer to receive you all in my own apartment; jails are such horrid places. But”—she looked around and emitted a small laugh—“one must make the best of such things.”

And so one did. Belva’s rehabilitation began right there, and it continued unabated to this day. Faith would see her through this ordeal, she told any reporter who passed by her cell. This terrible, unfortunate experience made her appreciate all the more the life she once had with her wonderful ex-husband—solid, reliable William Gaertner, the millionaire scientist and businessman who had provided her with lawyers and was determined to marry her again, despite her newly proven skill with a revolver. He believed Belva had changed.

Maybe she had, but either way, she was still quite different from the other girls at the jail. She came from better stock and made sure they all knew it. Even an inmate as ferocious as Katherine Malm—the “Wolf Woman”—deferred to Belva. Class was a powerful thing; it triggered an instinctive obeisance from women accustomed to coming through the service entrance—or, in this lot’s case, through the smashed-in window. Belva, it seemed, had just the right measure of contempt in her face to cow anybody, including unrepentant murderesses. She was not beautiful like perfect, young Beulah Annan. Her face was a sad, ill-conceived thing, all the features slightly out of proper proportion. But arrogant eyes shined out from it, and there was that full, passionate mouth, a mouth that could inspire a reckless hunger in the most happily married man. She’d proved that many times over. When Belva woke from her blackout on the morning of March 12, new to the jail, still wearing her blood-spattered slip, she’d wanly asked for food. The Wolf Woman, supposedly the tough girl of the women’s quarters, hurried to bring her a currant bun.

“Here, Mrs. Gaertner,” she’d said with a welcoming smile, eyes crinkled in understanding, “eat this and pretend it’s chicken. . . . It makes it easy to swallow.” With that, Katherine Malm set the tone. By the end of the week, the other girls were vying for the privilege of making Belva’s bed and washing her clothes.

To her credit, Belva adapted easily to her new surroundings. The lack of privacy didn’t seem to bother her. The women’s section of the jail, an L-shaped nook on the fourth floor of a massive, rotting, rat-infested facility downtown, was crowded even before her arrival, and not just because of the presence of Mrs. Anna Piculine. “Big Anna,” the press said, was the largest woman ever jailed on a murder charge. She’d killed her husband when he said he’d prefer a slimmer woman. Then there was Mrs. Elizabeth Unkafer, charged with murdering her lover after her cuckolded husband collapsed in grief at learning of her infidelity. And Mary Wezenak—”Moonshine Mary”—the first woman to be tried in Cook County for selling poisonous whiskey. Nearly a dozen others also bunked on what was now being called “Murderess’ Row,” and more were sure to come. Women in the city seemed to have gone mad. They’d become dangerous, especially to their husbands and boyfriends. After the police had trundled Beulah into jail, the director of the Chicago Crime Commission felt compelled to publicly dismiss the recent rash of killings by women. The ladies of Cook County, he said, were “just bunching their hits at this time.” He insisted there was nothing to worry about.

The newspapers certainly weren’t worried; they celebrated the crowded conditions on Murderess’ Row. Everyone in the city wanted to read about the fairest killers in the land. These women embodied the city’s wild, rebellious side, a side that appeared to be on the verge of overwhelming everything else. Chicago in the spring of 1924 was something new, a city for the future. It thrived like nowhere else. Evidence of the postwar depression of 1920–21 couldn’t be found anywhere. The city pulsed with industrial development. Factories operated twenty-four hours a day. Empty lots turned into whole neighborhoods almost overnight. Motor cars were so plentiful that Michigan Avenue traffic backed up daily more than half a mile to the Chicago River. And yet this exciting, prosperous city terrified many observers. Chicago took its cultural obsessions to extremes, from jazz to politics to architecture. Most of all, in the midst of Prohibition, the city reveled in its contempt for the law. The newly elected reform mayor, witnessing a mobster funeral attended by thousands of fascinated citizens, would exclaim later that year: “I am staggered by this state of affairs. Are we living by the code of the Dark Ages or is Chicago part of an American Commonwealth?”

It truly was difficult to tell. Gangsterism, celebrity, sex, art, music—anything dodgy or gauche or modern boomed in the city. That included feminism. Women in Chicago experienced unmatched freedoms, not won gradually—as was the case for the suffragettes—but achieved in short order, on the sly. Respectable saloons before Prohibition didn’t admit women; speakeasies welcomed them. Skirts appeared to be higher here than anywhere else. Even Oak Park high school girls brazenly petted with boys, forcing the wealthy suburb’s police superintendent to threaten to arrest the parents of “baby vamps.” Religious leaders—and newspapers—drew a connection between the new freedoms and the increasing numbers of inmates in Cook County Jail’s women’s section.



What a Shame That Guilt Got a Bad Name

Studies show that humiliation is not an effective motivator, but its good twin, guilt, can encourage us to behave well.

Authorities in China recently made a surprising announcement: They want to see an end to public shaming of miscreants by the police.

It’s a step in the right direction that shame is falling out of favor as an official punishment in China. Thankfully, here, too, it remains the exception rather than the rule. Most of us have little appetite for bringing back the town stocks, and “perp walks” can end up parading an innocent suspect. The ugliness of shame makes us want to avert our eyes wherever we find it.

Yet in rejecting the cruelty of public humiliation, it’s important that we not make the mistake of tossing aside guilt as well. Despite the bad reputation it has acquired since perhaps Freud, few emotions are more socially productive or personally beneficial. Let’s not hold it against guilt that many people can’t distinguish it from its evil twin, shame.

What’s the difference between the two? Among psychologists, perhaps the most widely held view is that guilt consists of bad feelings about something you have done, whereas shame involves bad feelings about something that you are. The two may go together, of course; the guilt I may feel over stealing might lead to shame when I come to regard myself as a crook.

Nonetheless, there are good grounds for maintaining a sharp distinction, for while shame is bad for us, guilt is surprisingly useful. With its focus on behavior and responsibility, guilt promotes self-control across the board, and empathy as well. Shame, by contrast, appears almost wholly destructive—inspiring sufferers to lash out not just at others but at themselves. Shame is a well-known trigger for suicide, and studies have linked it with substance abuse.

The practice of public shaming, used extensively during China’s brutal Cultural Revolution, may sound creepy to Americans, yet we are no strangers to it in other forms. The scarlet letter imposed on Hester Prynne, after all, was not meant to signify her love for the Anaheim Angels. Even now, in some U.S. jurisdictions, authorities publish the names of prostitution customers, and some states require convicted drunk drivers to use special license plates that tell the world of their transgression.

Psychologists June Tangney and Ronda Dearing, in their book “Shame and Guilt,” tell us that, “Guilt about specific behaviors appears to steer people in a moral direction—fostering constructive, responsible behavior in many critical domains. Shame, in contrast, does little to inhibit immoral action. Instead, painful feelings of shame seem to promote self-destructive behaviors (hard drug use, suicide) that can be viewed as misguided attempts to dampen or escape this most punitive moral emotion.”

To the extent shame can deter behavior, it does so only by means of fear rather than the development of a reliable moral compass. Guilt, on the other hand, depends on no one else; the guilt-stricken torment themselves because they know they’ve done something wrong.

In a fascinating long-term study, Ms. Tangney and Ms. Dearing assessed shame-proneness in 380 fifth-graders and then followed up years later. “No apparent benefit was derived from the pain of shame,” they report. “There was no evidence that shame inhibits problematic behaviors. Shame does not deter young people from engaging in criminal activities; it does not deter them from unsafe sex practices; it does not foster responsible driving habits; and in fact it seems to inhibit constructive involvement in community service.”

But it was a different story with guilt, which was assessed in the same study. Guilt-prone fifth-graders grew into teenagers who were more likely to apply to college, less likely to try heroin or suicide, and less likely to drive under the influence of drugs or alcohol. They were also less likely to be arrested, had fewer sex partners, and were more likely to use birth control. “People who have the capacity to feel guilt about specific behaviors,” the authors write, “are less likely than their non-guilt-prone peers to engage in destructive, impulsive, and/or criminal activities.”

This is not to say that shame has no value. Politicians and institutions can be shamed into changing bad behavior, and individuals can use the threat of shame’s domesticated cousin—embarrassment—to alter their own behavior. Psychologists have found, for instance, that you are likelier to carry out your New Year’s resolutions if you tell everyone about them in advance. The fear of embarrassment can be a good motivator.

But so can guilt, which is why its bad name is so unfortunate. For a long time now, parents have been indicted by their offspring for inculcating guilt, and therapists consulted to get over it. Unfortunately, we had the wrong suspect all along, just like one of those perps splashed all over the news and later, quietly, exonerated.

Mr. Akst is a writer in Tivoli, N.Y.



A healthy relationship

The mere presence of women seems to bring health benefits to men

FOR hormone-addled teenagers, finding a date can often seem to be a matter of life and death. As it turns out, that may not be so far from the truth. In a paper in the August issue of Demography, a team of researchers led by Nicholas Christakis of Harvard University reports that men who reach sexual maturity in an environment with few available women are at risk of dying sooner than their luckier confrères. The team points out that this finding may have important implications for public health in countries such as India and China, where sex ratios are skewed against women.

The idea that a dearth of available women hurts male longevity has been around for some time. There are several reasons why such a hypothesis makes sense. It is now well established that marriage has a beneficial effect on health and survival. Since women are traditionally the caregivers, these benefits accrue especially to men. If there are fewer potential mates around, men may delay marriage or forgo it entirely, losing out on these nuptial niceties. In addition, with more men and fewer single women, the intense competition for a mate is likely to be stressful. Such early-life stress is known to have effects on health that can last for years.

As reasonable as it all sounds, the hypothesis that a skewed sex ratio leads to shorter male lifespan has never been confirmed in humans. To put it to the test, Dr Christakis and his team made use of two unusual sets of demographic data. The first, known as the Wisconsin Longitudinal Study, consists of a third of all those who graduated from high school in the state of Wisconsin in 1957—about 10,000 people. The male-to-female ratio in each person’s graduating class is known, and provides an indicator of the ratio during the sexually formative years of the study’s participants. The second set of data consists of 7½m white men who were enrolled in America’s Medicare programme in 1993. The researchers found the year and state in which each participant’s Social Security number was issued, which typically happened between his 15th and 25th birthdays. The sex ratio of his contemporaries was then calculated from state-level census data.

In the Wisconsin sample, Dr Christakis looked at those who had died before their 65th birthday. For the women, there was no significant relationship between their school’s sex ratio and their age of death. For the men, however, a significant relationship did emerge. A percentage-point increase in the male-to-female ratio of a man’s graduation class led to a percentage-point increase in his likelihood of dying before the age of 65. The Social Security data, moreover, suggest that a lack of women during men’s teenage years still haunts their health decades later.

The average white American male who was 65 in 1993 could expect to live another 15 years. Dr Christakis found, however, that those who had come of age around the most available women had a life-expectancy three months longer than that of the least favoured. Three months may not seem a huge difference, but according to Dr Christakis it is comparable to the benefit an elderly person can expect from exercising or losing some surplus weight.

In an American context, these results are, perhaps, no more than an interesting curiosity: at the age of 15, boys outnumber girls by about 4% and the ratio shrinks towards equality thereafter. In China, however, it is estimated that there are now 20% more men of marriageable age than women—the result of selective abortion and infanticide consequent upon the country’s “one-child” policy. That bodes ill for the future health of China’s menfolk.



From Gutenberg to Zoobert

The final chapters in the history of the printed book have yet to be written.

In the hit 1998 film “You’ve Got Mail,” Meg Ryan’s independent bookstore couldn’t compete with the big chain-store competitor. Underdog-rooting moviegoers couldn’t have known how lucky the independent stores were, having enjoyed so many decades of being the only booksellers. The megastores, which became dominant in the 1980s, have been undermined by technology in less than a generation.

Last week, Barnes & Noble, whose more than 700 stores make it the largest bricks-and-mortar book chain, put itself up for sale. Its market capitalization is less than $1 billion, compared with Amazon’s $55 billion. This reflects both the better economics of Web sales of print books and the increasingly uncertain future of print books in an e-book world.

The creative destruction in the book business has led even Andy Ross to have some sympathy for Barnes & Noble. Mr. Ross was the owner of Cody’s Books, a well-known independent bookstore located near the Berkeley campus of the University of California. Starting in the 1980s, Mr. Ross instigated numerous antitrust and other lawsuits against Barnes & Noble. He owned Cody’s Books for some 30 years before competition from the big stores closed it down in 2008.


“The only thing anyone is talking about in the book business is e-books,” Mr. Ross told me last week. “I see it as being similar to the music industry. There is going to be a tipping point where e-books become the dominant medium, thus ending 500 years of the Gutenberg Age.” Mr. Ross points out that “the future of physical bookstores is pretty bleak, both for chains and independents.”

Technology has made the physical scale of Barnes & Noble a liability. Amazon, now the world’s largest bookseller, launched the Kindle e-reader less than three years ago and already sells more Kindle editions than hardbacks. Amazon projects it will sell more Kindle books than paperbacks within a year. Apple’s iPad, launched just a few months ago, is already a big seller. Google plans its own eBook store.

As in other industries, consumers benefit from technological tumult. They get lower prices, greater choice and one-click buying. Amazon can charge less for printed books because it doesn’t have retail outlets, inventory, returns, printing or shipping costs.

Now, the iPad is pointing the way to a new kind of book. With color and Web access, e-books on the iPad are a new genre. These are called enhanced, multimedia or “transmedia” versions of books, with video, audio and interactivity.

In my family, which includes two young boys, the most popular iPad application is Zoobert, a narrated, interactive cartoon about a happy monster who lives in a sock drawer. Kids are instructed to “shake shake shake” the iPad to help Zoobert decide what to do next. The iPad version of the Dr. Seuss classic “Green Eggs and Ham” includes clever tools for spelling and reading.

Textbook publishers offer e-books with video, interactive testing and built-in research links. Travel publishers such as Lonely Planet have created e-editions that are more convenient and have more information than printed versions.

To those of us who see the technology glass as half full, it’s important to note the costs. Lower sales of print books pressure publishers, which usually get lower profits on e-books. This could mean fewer opportunities for aspiring authors until new business models emerge. Just as technology undermined the economics of local newspapers with online alternatives to classified advertising and upended the music industry by de-bundling physical albums into digital songs, it will take time for book publishers and authors to find new revenues.

It’s also worth noting the role independent bookstores played in the free flow of information. Cody’s Books was the scene in 1989 of an early act of Islamist terrorism in the U.S. A firebomb was thrown through its store window, in which Salman Rushdie’s “The Satanic Verses” was displayed. This happened a month after Iran issued a fatwa calling for Mr. Rushdie’s murder. Mr. Ross and his staff unanimously voted to keep the book on display even as many chains, including Barnes & Noble, withdrew it from their shelves.

Still, Mr. Ross, now a literary agent, is optimistic. He points to “new competitive pressure among e-book companies to get better deals for authors.” The multimedia e-book, he says, “means a lot of potential for creativity,” changing what it means to be a book.

At a time when distracting digital technologies threaten to reduce people’s attention span, it may take an evolution in the art form of a book to retain our interest in long-form story telling. Books that combine text with other media could be more informative and perhaps lead to a new kind of literature.

It’s ideas that count, not how they’re transmitted. Independent bookstores gave way to chains, which are fast giving way to Web-based retailers. At least for now, the printed book will live alongside the e-book. These are new pages in the history of the book, whose final chapters are yet to be written.



What, Me Study?

Why so many colleges are education-free zones.

If you have a child in college, or are planning to send one there soon, Craig Brandon has a message for you: Be afraid. Be very afraid.

“The Five-Year Party” provides the most vivid portrait of college life since Tom Wolfe’s 2004 novel, “I Am Charlotte Simmons.” The difference is that it isn’t fiction. The alcohol-soaked, sex-saturated, drug-infested campuses that Mr. Brandon writes about are real. His book is a roadmap for parents on how to steer clear of the worst of them.

Many of the schools Mr. Brandon describes are education-free zones, where students’ eternal obligations—do the assigned reading, participate in class, hand in assignments—no longer apply. The book’s title refers to the fact that only 30% of students enrolled in liberal-arts colleges graduate in four years. Roughly 60% take at least six years to get their degrees. That may be fine with many schools, whose administrators see dollar signs in those extra semesters.

In an effort to win applicants, Mr. Brandon says, colleges dumb down the curriculum and inflate grades, prod students to take out loans they cannot afford, and cover up date rape and other undergraduate crime. The members of the faculty go along with the administration’s insistence on lowering standards out of fear of losing their jobs.

As a former education reporter and a former writing instructor at Keene State College in New Hampshire, Mr. Brandon has both an insider’s and an outsider’s perspective on college life. While his focus is on the 10% of America’s 4,431 liberal-arts colleges that he categorizes as “party schools,” he applies many of his criticisms more widely—even to the nation’s top-tier universities.

Mr. Brandon is especially bothered by colleges’ obsession with secrecy and by what he sees as their misuse of the Federal Educational Rights and Privacy Act, which Congress passed in 1974. Ferpa made student grade reports off-limits to parents. But many colleges have adopted an expansive view of Ferpa, claiming that the law applies to all student records. Schools are reluctant to give parents any information about their children, even when it concerns academic, disciplinary and health matters that might help mom and dad nip a problem in the bud.

Such policies can have tragic consequences, as was the case with a University of Kansas student who died of alcohol poisoning in 2009 and a Massachusetts Institute of Technology student who committed suicide in 2000. In both instances there were warning signs, but the parents were not notified. Ferpa’s most notorious failure was Seung-Hui Cho, the mentally ill Virginia Tech student who murdered 32 people and wounded 25 others during a daylong rampage in 2007. Cho’s high school did not alert Virginia Tech to Cho’s violent behavior, professors were barred from conferring with one another about Cho, and the university did not inform Cho’s parents about their son’s troubles—all on the basis of an excessively expansive interpretation of Ferpa, Mr. Brandon says. He recommends that parents have their child sign a Ferpa release form before heading off to college.

There are several omissions in “The Five-Year Party.” One is the role of college trustees, who share the blame for the failure of the institutions over which they have oversight. Mr. Brandon also gives the faculty a pass. It is hard to believe that professors are as powerless or as cowed as they are portrayed here. The book’s chief villains are a new breed of college administrators, who Mr. Brandon says have more in common with Gordon Gekko than Aristotle.

Oddest of all is Mr. Brandon’s failure to demand that students take responsibility for their conduct. He depicts them as victims of schools that either coddle them or take advantage of them and of a culture that discourages them from growing up. Mr. Brandon estimates that only 10% of the students at party schools are interested in learning. If that is right, colleges will have little incentive to shape up until their customers—students and parents—demand better.

No one who has been following the deterioration of higher education in recent years will be surprised by the portrait of campus life in “The Five-Year Party.” The author’s contribution is to compile news reports and scholarly studies into one volume, along with original reporting on campuses across the country. The galley proofs that went out to reviewers included an appendix listing 400-plus “party schools,” including many well-known private and state institutions. For whatever reason, that appendix does not appear in the book’s final version. Mr. Brandon does, however, point the finger at many schools in specific examples in the text.

“The Five-Year Party” is a useful handbook for parents to pack when they take their teenager on a college tour, and its list of suggested questions is smart. My favorite: How many of the school’s professors send their own children there? More broadly, Mr. Brandon urges parents not to assume that their child is college material and to consider community colleges and vocational schools, whose curriculums tend to focus on teaching specific job skills.

Mr. Brandon’s ideas for policy reform are uneven. A proposal for legislation that caps tuition increases to the rate of inflation may be unconstitutional if applied to private institutions and is a bad idea in any case; Washington shouldn’t be dictating what schools can charge. Requiring students to pass a test administered by the College Board in order to get a diploma is another bad idea. It would be expensive and subject to ideological abuse. Repealing Ferpa might be the best place to start: The adults who pay the bills need to know what is happening to their kids on campus.

Ms. Kirkpatrick is a former deputy editor of the Journal’s editorial page.



Op-Classic, 1982: A Soviet Fable

Comrades: As is well known, I, Leonid Brezhnev, unlike President Reagan, have no obligation to report annually on the state of the union of the Soviet Socialist Republic. This shows the wisdom of Marxist-Leninist thought, for the state of our union is not quite as good as we would like it to be.

We have trouble on our eastern border with China. We have trouble on our western border with Poland. We are not doing as well as expected in Afghanistan. In the Mediterranean, the so-called Communist Party in Italy has not been excessively loyal and the Pope of Rome, meddling in Poland, has not been helpful. But we have a cover for every pot.

Our relations with the United States and other imperialist warmongers are at a sensitive point, for they do not appreciate in Washington our peaceful intentions and have lately been taking the ridiculous view that their military power should match our military power. This is obviously intolerable.

Our efforts to support national liberation movements in Africa, Southeast Asia and Latin America have made some progress but they are expensive. Fidel Castro has been useful in Angola, the Horn of Africa, El Salvador and the rest of Central America, but comrades I must tell you his speeches seem to go on longer than his influence or his results.

At home, our industrial production has matched our agricultural production. The results are not satisfactory, but if we have a few troubles, it should not be forgotten that we have more experience in handling troubles than any other nation on earth, and if we are faithful to our old Russian proverbs, I assure you all will be well.

* To the people of the Soviet Union, I say: Counting other people’s money will never make you rich…. With a good wife and enough cabbage soup, don’t look for more…. Where there’s honey, there will be flies…. The future is his who knows how to wait….

* To the Polish people, I say: Since when does the fiddle pick the tune? Do not slander us. Slander, like coal, will either dirty your hand or burn it…. Once in the pack, you may not have to bark, but you must at least wag your tail….

* To Lech Walesa: Do not defy us or you’ll be sent to count the birches in Siberia.

* To those who grieve for Poland, I say: If you’re tired of a friend, lend him money…. Debt and misery live on the same road…. When you live close to the graveyard, you can’t weep for everyone….

Comrades: We must count our blessings. Our enemies were strong when they were our allies and we were working together in adversity. Prosperity is now their problem, and they don’t quite know it. They concentrate on their mistakes, battering themselves with their failures, whereas we concentrate on our opportunities, minimizing our troubles that make us invincible.

There are many hopeful signs. As Lenin predicted, our adversaries – I should not call them “enemies” – are divided. They think they can protect their separate national capitalistic interests, rather than defending their common civilization. If they do the first, they are not likely to achieve the second. This is our opportunity.

Also, Washington has chosen to challenge us on military grounds, where we are strong, in geographical areas close to our borders where they are weak. This has divided the Western alliance which, of course, is the main objective of our policy.

We see in the rebellion of the rising young generation in Western Europe a great opportunity. It has no memory of the two world wars. It is naturally alarmed by the power of nuclear weapons, as our own young people are, and is protesting more against Washington’s missiles than our missiles. This is encouraging isolationism and even pacifism both in America and in Western Europe, and if it succeeds, it will either bring about arms control or do our work for us.

So we are not without hope. President Reagan is a puzzle. He threatens us, but he lifts the grain embargo and sends us the bread we need. The bread of strangers can be very hard, but while the bells of Moscow often ring, sometimes they don’t ring for dinner.

As is well known, the imperialists are determined to destroy the glorious Soviet revolution, which is the hope of the world, but they recognize that co-existence is better than no existence, and they keep talking at Geneva and elsewhere, which is mildly hopeful.

We are in the Soviet Union a rich and powerful nation. We have the resources Western Europe needs. We have gas. We have gold, and as was said in Russia long ago: “A gold hammer will break down an iron door.” So I do not despair.

Yet we will not be bullied. Not everyone who snores is sleeping. But I have said to Mr. Reagan: “Wag your tongue as much as you please, but don’t wag your gun…. A bad compromise is better than a good battle. Better for all of us to turn back than lose our way…. Life is unbearable, but death is not so pleasant either….”

Comrades: I remind you that all that trembles does not fall…. If you can tickle yourself, you can laugh when you please.

James Reston, New York Times



Five Best Books on Doctors’ Lives

Abraham Verghese prescribes these books on doctors’ lives

1. The Life of Sir William Osler

By Harvey Cushing

Oxford, 1925

This two-volume work tops my list not just because William Osler is endlessly fascinating but because his biographer was the pioneering neurosurgeon Harvey Cushing, himself the subject of more than one biography. Cushing won the 1926 Pulitzer Prize for this meaty but immensely readable work. It captures not only the character of the charismatic physician and teacher who shaped American academic medicine but also a late 19th-century era when Europe and America were waking to germ theory and antisepsis. Osler went from naughty Canadian schoolboy to Regius Professor at Oxford, his last position. He was brilliant, inspiring and kind but also a practical joker: Under the pseudonym of Edgerton York Davis of Caughnawaga, Quebec, he once submitted a case report of “penis captivus,” claiming that an amorous couple was unable to disengage. It astonished Osler no end that a medical editor published the piece.

2. Mortal Lessons

By Richard Selzer

Simon & Schuster, 1976

I read “Mortal Lessons” as a medical student and was astonished by the prose, the introspection, the lyricism of this practicing surgeon. Richard Selzer is the model “physician-writer,” if there is such a thing, in that he does so much more than cater to readers’ sometimes prurient interest in things medical; his language is baroque and musical, his epiphanies profound and personal. Here he is writing about the stomach: “Yet, interrupt for a time the care and feeding of this sack of appetite, do it insult with no matter how imagined a slight, then turns the worm to serpent that poisons the intellect for thought, the soul for poetry, the heart for love.”

3. The Puzzle People

By Thomas Starzl

University of Pittsburgh, 1992

From humble beginnings in Le Mars, Iowa, where he was born in 1926, Thomas Starzl became one of the most recognizable names in American medicine, truly the father of modern transplantation, the liver transplant in particular. As he recalls in this engrossing memoir—which is essentially a history of transplantation itself—his first few liver transplants were failures, and he was vilified by the media as engaging in human experimentation. Had Starzl given up at that point, hundreds of patients now living with a new liver wouldn’t be. He perfected the technically complex operation to remove the damaged liver and put in the new, but he also advanced our understanding of rejection and how to overcome it.

4. Adventures in Two Worlds

By A.J. Cronin

McGraw-Hill, 1952

Doctors often speak of a book that “called” them to medicine. The novels of A.J. Cronin, such as “The Keys to the Kingdom” and “The Citadel,” had that effect on many budding doctors of earlier generations. Even better is Cronin’s “Adventures in Two Worlds,” a memoir by this gifted writer and doctor. As a young physician in the 1920s, he worked in a gritty Welsh mining town and became a medical inspector of mines. The hard lives of the coal miners sharpened his sense of injustice. But we also learn that he was concerned with matters of faith and temptation. Retiring from medicine in 1926 due to ill health, he began writing novels—work with themes that were also the themes of his life.

5. Henry Kaplan and the Story of Hodgkin’s Disease

By Charlotte Jacobs

Stanford, 2010

Charlotte Jacobs, an oncologist and biographer, tells the story of the man who was instrumental in making Hodgkin’s lymphoma, a cancer of the lymph glands, a curable condition. In Dr. Jacobs’s capable hands we experience the thrill of clinical research and the hard slogging of clinical trials, which are the only way to tell if treatment is beneficial. We also meet the maverick doctors—Kaplan’s colleagues and rivals—who helped bring about the cure’s discovery. Most people know about Jonas Salk and the polio cure, but Kaplan and the Hodgkin’s-disease tale is even more compelling—and wonderfully told in these pages. A budding Kaplan out there, one hopes, might read this book (or one of the others on this list) and be “called” to medicine. It’s a great journey, and I’d do it all over again in a heartbeat.

Dr. Verghese is a professor of medicine at Stanford University. His books include the novel “Cutting for Stone” and the memoir “My Own Country.”



Five Best Books about Appeasement

Bruce Bawer selects powerful books about appeasement

1. Guilty Men

By “Cato”
Frederick A. Stokes, 1940

This brief, impassioned j’accuse, written under the pseudonym Cato by British journalists Michael Foot, Peter Howard and Frank Owen, was churned out and published at lightning speed in July 1940, a month after the British escape at Dunkirk from the German army advancing through France. It was a fateful moment, as Foot recalled in a 1988 preface, when the “shameful” era of Prime Minister Neville Chamberlain’s feckless leadership had ended and “English people could look into each other’s eyes with recovered pride and courage.” To read “Guilty Men” now is to feel Englishmen’s shock at the might of the Nazi war machine and to share the authors’ rage at the obtuseness of the appeasers (Chamberlain and 14 others are listed) who sweet-talked Hitler in Munich, agreeing in 1938 to let him annex part of Czechoslovakia, and mocked Winston Churchill for assailing conciliation and urging rearmament. This urgent piece of journalism made appeasement and Chamberlain’s infamous claim, upon returning from Munich, of having secured “peace in our time” synonymous with naïveté and cowardice.

2. Munich, 1938

By David Faber
Simon & Schuster, 2009

Many fine histories of the fruitless attempts to avoid war by appeasing Hitler in the 1930s have been written, but none is more riveting—and more packed with revealing detail—than David Faber’s meticulously researched “Munich, 1938.” In one vividly reconstructed episode after another, Faber brings to life the fatuity of trying to placate a bellicose dictator. Meeting Hitler, British cabinet member Lord Halifax obsequiously extols his “achievements” and comes away praising his charm—while Hitler comes away reassured that he’s dealing with craven fools who won’t thwart his plans. Echoes of the present abound: In an episode that recalls the 2005 Danish cartoon crisis, British authorities, informed that Hitler doesn’t like the caricatures of him in the Evening Standard newspaper, order the offending cartoonist to stop. Rich in illuminating portraits and dramatic confrontations—and effective in its alternating narratives of Downing Street shilly-shallying and ruthless plotting at Berchtesgaden—this book makes it impossible to buy the revisionist argument (served up most famously in A.J.P. Taylor’s 1961 “Origins of the Second World War”) that appeasement was actually a smart policy.

3. Chamberlain and Appeasement

By R.A.C. Parker
Macmillan, 1993

Among the more readable academic works on the appeasement of Hitler and perhaps the most sensible of those that take revisionist arguments seriously, this study by an Oxford historian rejects the standard view that Chamberlain was a coward or fool, arguing that “in 1938 and after Chamberlain was probably wrong and Churchill probably right; but Chamberlain had good reasons for his moves into disaster.” Though ultimately unpersuasive, Parker does a skillful job of putting the best possible face on Chamberlain’s actions and, in doing so, offers a useful window on revisionist thinking. Not that Parker, in the end, is a true revisionist: As he concludes, “Chamberlain’s powerful, obstinate personality and his skill in debate probably stifled serious chances of preventing the Second World War.”

4. I Saw Poland Betrayed

By Arthur Bliss Lane
Bobbs-Merrill, 1948

Arthur Bliss Lane writes in “I Saw Poland Betrayed” that, just as Chamberlain imagined that the notoriously deceitful Hitler respected him and would never lie to him, so Franklin Roosevelt thought that his personal charm “was particularly effective on Stalin” and that FDR could therefore trust him to keep his word. Lane originally shared Roosevelt’s credulity; this candid, absorbing memoir recounts his stint as U.S. ambassador to Poland in 1944-47, during which he gradually saw that there was “no difference between Hitler’s and Stalin’s aims [and] methods.” He concluded that the Teheran, Yalta and Potsdam agreements—and America’s failure to take a tougher stand against the establishment in Eastern Europe of police states run by Soviet puppets—amounted to appeasement of the same kind that Chamberlain had practiced. Lane quit the Foreign Service to write this book and dedicated the rest of his life (he died in 1956) to spreading the grim truth about life behind the Iron Curtain.

5. The Tyranny of Guilt

By Pascal Bruckner
Princeton, 2010

It’s clear why democracies appeased Hitler and Stalin—they preferred making concessions to waging war. But why do current European leaders kowtow to tinhorn tyrants abroad and to the bullies who run European Muslim communities, none of whom wield the kind of power that those dictators did? In this eloquent book—virtually every line of which is an aphorism worth quoting—French intellectual Pascal Bruckner finds the answer to today’s appeasement largely in yesterday’s: remorse over Europe’s failure to prevent world war, the Shoah and the Gulag (not to mention remorse over colonialism) has led Europeans to view their civilization as intrinsically destructive and thus not worth defending. But by choosing guilt over responsibility, Bruckner argues, they’re only repeating past errors. The lesson of the 20th century, he says, isn’t that peace is worth any price; it’s that “democracies have to be powerfully armed in order not to be defeated by the forces of tyranny.”

Mr. Bawer is the author of “Surrender: Appeasing Islam, Sacrificing Freedom,” recently published in paperback.



Five Best Books on Statesmanship

1. Hands Off

By Dexter Perkins

Little, Brown, 1941

All presidents since George Washington have dealt with diplomatic problems and conducted foreign policy. Washington, in his farewell address (“it is our true policy to steer clear of permanent alliances”), sought to advise successors who would have to deal with Napoleon, Barbary pirates and armed conflict with Britain, but it was James Monroe who was the first chief executive to establish a working principle of U.S. foreign policy designed for indefinite use. Dexter Perkins’s “Hands Off” is a graceful account of the origins and later history of the Monroe Doctrine—which, in 1823, declared the Western Hemisphere off-limits to European colonial expansion. The doctrine did more than survive Monroe’s two terms; it became a permanent feature of American policy down to the present day—an astonishing achievement for a parenthetical section of an annual message to Congress.

2. The Diary of James K. Polk

Edited by Allan Nevins

Longmans, 1929

It is a genuine curiosity that few presidents have kept diaries. The best that we have—the daily reflections of James Knox Polk—is a window into the workings of the 19th-century presidency as well as into the soul of a deliberately enigmatic man. The Tennessean Polk was an austere, detached, deeply jaundiced chronicler, a cold-blooded analyst of men and events, so it is a paradoxical delight to find him pouring out his nightly frustrations in plain language: “Mr. Buchanan is an able man, but is in small matters without judgment and sometimes acts like an old maid.” The president also records, with satisfaction, his frequent triumphs. The Polk presidency dealt with diplomatic matters—e.g., settling the U.S.-Canada boundary with Britain—and the state of relations with our troublous southern neighbor. Here, in a candid presidential account, is the annexation of Texas, the war with Mexico, and the addition of Arizona, New Mexico, and California to the United States.

3. Theodore Roosevelt and the Rise of America to World Power

By Howard K. Beale

Johns Hopkins, 1956

In the mid-1950s, when historians were beginning to contend with the origins of Pax Americana and Theodore Roosevelt’s reputation was at its lowest ebb, historian Howard K. Beale delivered a series of lectures—collected in this book— designed to answer two questions. First, to what extent is history influenced by individuals such as presidents; second, how did Roosevelt, in his embrace of global power, arrive at his convictions and become the first commander in chief actively to project American power around the globe? Beale’s account is comprehensive, accessible and fair-minded. But it is also a product of its times, the mid-1950s. In its sense of the “tragedy” of TR’s engagement with the world, leading by stages to the Cold War, the book is a template for academic dissent from American policy.

4. The Intimate Papers of Colonel House

Edited by Charles Seymour

Houghton Mifflin, 1926

Edward Mandell House was a wealthy, Cornell-educated Texas banker and businessman and an early backer of Woodrow Wilson. House declined a cabinet appointment, preferring to remain accessible to Wilson and “serve wherever and whenever possible.” As a trusted presidential adviser and troubleshooter—to an unprecedented degree—House negotiated for peace in Europe during the early phases of World War I, later serving as America’s liaison with the Allies in London. House helped draft Wilson’s Fourteen Points plan to end the war and worked closely with the president on the Versailles Treaty and the Covenant of the League of Nations. But then he was abruptly dismissed from the prickly Wilson’s inner circle. “The Intimate Papers of Colonel House” offers the most exhaustive, authoritative account we have of President Wilson’s thoughts and actions.

5. For the Survival of Democracy

By Alonzo L. Hamby

Free Press, 2004

No single president is more important to American foreign policy in our time than Franklin D. Roosevelt, and there is no better account of FDR’s stout defense of democracy against the twin dangers of Nazism and Soviet Communism than Alonzo L. Hamby’s great work. His thesis concerns the extent to which the Depression blighted the modern world by scuttling post-World War I prosperity, weakening democratic capitalism in Europe and America at its moment of greatest peril, and rendering a second military cataclysm inevitable. FDR had to contend with the Depression while awakening his countrymen to Hitler’s menace and stiffening the spines of European democrats. The consensus on the New Deal remains unsettled; but Roosevelt’s global leadership in the late 1930s and throughout World War II made America the superpower it remains.

Mr. Terzian, literary editor of the Weekly Standard, is the author of “Architects of Power: Roosevelt, Eisenhower, and the American Century.”



Why Dictators Hate to See Us Moved by Music

Iran’s ultimate supremo, the Ayatollah Ali Khamenei, gave a remarkable endorsement to music this week, declaring it “not compatible with the highest values of the sacred regime of the Islamic Republic.” He didn’t exactly mean to praise song, but if music is a threat to the sort of murderous theocracy over which Mr. Khamenei presides, well then here’s to music.

The ayatollah didn’t just denounce Western music (which he has done before) but music-making of any and every sort. Instead of wasting time practicing scales, he declared, “It’s better that our dear youth spend their valuable time in learning science and essential and useful skills.” The kind of enrichment he’s interested in isn’t the sort you get in a concert hall.


Songs made me spend.

Though the lament is framed as a concern about youngsters frittering away their precious time, I suspect Mr. Khamenei’s real complaint is rooted in the same disquiet authoritarians have long felt about music—that it affects people profoundly and can’t be controlled.

There’s no doubt that music affects us—perhaps in surprising ways. A new study published in a journal called The Arts in Psychotherapy tested which was more effective in treating mild depression, reclining on a sofa talking to a psychiatrist or simply listening to Bach, Corelli and Mozart. Few of the patients involved (who were drawn from a clinic in Oaxaca, Mexico) were initially interested in spending any time with old Wolfgang Amadeus, but in the end they not only came to enjoy the experience, but benefited from it. The overwhelming majority of the patients listening to classical music reported feeling better two months into the therapy. Not even half of those getting traditional tell-me-about-that psychotherapy saw any improvement.

In an effort to explain these results, the researchers surmised that listening, at least to music with structure and challenging complexity, facilitates “brain development and/or plasticity.” It might have helped, in treating depression, that the patients were given a cheerful earful: Bach’s “Italian Concerto,” as opposed to, say, Samuel Barber’s “Adagio for Strings” or the doleful “Dido’s Lament” by Henry Purcell.

Just how directly can music manipulate our emotions, and even our actions? The website of the science magazine Miller-McCune has been gathering up recent research attempting to demonstrate that music is instrumental. There is a French study just out from the journal Psychology of Music showing that jeunes femmes are more willing to give out their phone numbers after listening to romantic tunes. The young ladies exposed to “neutral” compositions were less open to propositions.

This follows on a study done last year showing that men buy more roses when a florist pipes in love songs. Also from France comes a study published in the International Journal of Hospitality Management suggesting that if a restaurant plays an empathetic soundtrack stacked with “pro-social” ditties customers tend to tip better.

These represent a modern behavioral-science version of John Dryden’s bold declaration, “What passion cannot Music raise and quell!” All of which suggests we would be wise to be wary of music, sucker bait that it is.

But if music is such an effective tool for manipulating people, why isn’t it wholeheartedly embraced by the tyrants in Tehran?

Where would the manipulative arts be without incidental music to give the consumer a shove, whether it’s the anxious undercurrent reality TV uses in the tense build-up to sending someone home; the ominous serial-killer chords employed by slasher flicks and political attack-ads; or the achingly mournful indie-rock that “Grey’s Anatomy” relies on every week to make its viewers reach for the third Kleenex. What propagandist would pass up such power?

The rub is that, though music may be controlling, it isn’t controllable. Even the honest artist looking only to achieve an immediate effect in his audience can’t know just what it is that any one person is feeling.

Composer Paul Hindemith thought that the best his brethren could do was evoke in listeners the “memories of feelings.” He argued that an audience couldn’t be expected to actually “feel” all of the various emotions of a symphony that jumps around from mood to mood.

What is it, anyway, to feel a response to music? We all have the experience, but it remains something of a mystery. In the new book “The Music Instinct: How Music Works and Why We Can’t Do Without It,” Philip Ball notes: “Very often, we may recognize music as having a particular emotional quality without it actually awakening that emotion in us.”

If we do have some visceral, emotive response to a song, it may not have that much to do with the music itself. The emotions triggered by hearing the song you danced to at your wedding are likely to depend on whether your marriage is still intact. This poses problems for the propagandist, who can’t count on his patriotic anthem not to curdle in the ears of abused people trying to divorce themselves from the state.

For all the problems with music in our time—cookie-cut karaoke pop, sterile atonal highbrow modernity, singing competitions, Justin Bieber, Auto-Tune—the art still somehow manages to make despots nervous. That calls for a fanfare.

Eric Felten, Wall Street Journal



Hard to find

Why it’s increasingly difficult to make discoveries – and other insights from the science of science

If you look back on history, you get the sense that scientific discoveries used to be easy. Galileo rolled objects down slopes. Robert Hooke played with a spring to learn about elasticity; Isaac Newton poked around his own eye with a darning needle to understand color perception. It took creativity and knowledge to ask the right questions, but the experiments themselves could be almost trivial.

Today, if you want to make a discovery in physics, it helps to be part of a 10,000-member team that runs a multibillion dollar atom smasher. It takes ever more money, more effort, and more people to find out new things.

But until recently, no one actually tried to measure the increasing difficulty of discovery. It certainly seems to be getting harder, but how much harder? How fast does it change?

This type of research, studying the science of science, is in fact a field of science itself, and is known as scientometrics. Scientometrics may sound self-absorbed, a kind of inside baseball for scientists, but it matters: We spend billions of dollars annually on research, and count on science to do such things as cure cancer and master space travel, so it’s good to know what really works.

From its early days of charting the number of yearly articles published in physics, scientometrics has broadened to yield all sorts of insights about how we generate knowledge. A study of the age at which scientists receive grants from the National Institutes of Health found that over the past decades, older scientists have become far more likely to receive grants than younger ones, suggesting that perhaps younger scientists are being given fewer chances to be innovative. In another study, researchers at Northwestern University found that high-impact research results are more likely to come from collaborative teams — often spanning multiple universities — rather than from a single scientist. In other words, the days of the lone hero scientist are vanishing, and you can measure it. Scientometrics has even given bragging tools to scientists, such as the “h-index” that measures the impact of your papers on other researchers.
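The h-index mentioned above has a compact definition: a researcher has index h if h of his or her papers have each been cited at least h times. A minimal sketch of the computation (illustrative only; the citation counts in the example are invented):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have h or more citations each."""
    # Sort citation counts in descending order, then find the last
    # position where the count is still >= its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```

The metric rewards sustained impact rather than one blockbuster paper: a single article with a thousand citations still yields an h-index of only 1.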

With a scientometric frame of mind, I approached the question of quantifying how discovery gets harder over time. I looked at three specific areas of science, ignoring areas in which there is truly nothing left to discover — in human anatomy, for example, the hunt for major internal organs ended in 1880, when Ivar Sandström discovered the parathyroid gland, the last of them. I looked at discoveries of species of mammals, minor planets (that is, asteroids), and chemical elements. Assuming that size is a good proxy for how easy it is to discover something, I plotted the average size of discovery over time. (The smaller a creature or an asteroid is, the harder it is to discover; in chemistry, the reverse is true, and the largest elements are the hardest to create and detect.)

What I found, using this simple proxy for difficulty, in each field — biology, astronomy, chemistry — was a curve with the same basic shape. In every case, the ease of discovery went down, and in every case it was a curve called an exponential decay.

What this means is that the ease of discovery doesn’t drop by the same amount every year — it declines by the same fraction each year. For example, the discovered asteroids get 2.5 percent smaller each year. So while the ease of discovery drops off quickly as early researchers pick the low-hanging fruit, it can continue to “decay” a long time, becoming slightly harder without ever quite becoming impossible. Think about Zeno’s Paradox, where the runner keeps on getting halfway closer to the finish line of the race, and thus never quite makes it to the end.
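The decay described above is simple to model: each year multiplies the typical discovery size by the same factor, so the curve shrinks geometrically without ever reaching zero. A toy sketch using the article's 2.5 percent asteroid figure (the starting size of 100 units is an arbitrary assumption for illustration):

```python
def size_after(initial_size, annual_decline, years):
    """Exponential decay: each year the typical discovery size shrinks
    by the same fraction, not by the same absolute amount."""
    return initial_size * (1 - annual_decline) ** years

start = 100.0  # arbitrary units; hypothetical starting size
for years in (0, 10, 50, 100):
    print(years, round(size_after(start, 0.025, years), 1))
```

At a 2.5 percent annual decline the "half-life" is roughly 27 years, which captures the Zeno-like quality of the curve: discoveries keep getting harder, but the supply is never formally exhausted.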

The fact that discovery can become extremely hard does not mean that it stops, of course. All three of these fields have continued to be steadily productive. But it does tell us what kind of resources we may need to continue discovering things. To counter an exponential decay and maintain discovery at the current pace, you need to meet it with a scientific effort that obeys an exponential increase. To find a slightly smaller mammal, or a slightly heavier chemical element, you can’t just expend a bit more effort. Sometimes you have to expend orders of magnitude more.

Sometimes, as with particle physics, this requires throwing more money at the problem — lots more money. We can also see this with some diseases, which are requiring billions of dollars to make progress toward cures. But another way to increase effort is to have more scientists working on a problem. And scientometrics has something to say about this, too. One of the first quantities to be studied in the field of scientometrics was the number of scientists over time. The first PhDs in the United States were granted by Yale University in 1861. Since that time, the number of scientists in the United States and throughout the world has increased rapidly, even exponentially in some cases, and the rate of growth has actually been faster than the growth of the general population. In fact, if you uttered the statement “90 percent of all the scientists who have ever lived are alive today” nearly any time in the past 300 years, you’d be right. Of course, growth like this is not sustainable — a long exponential increase in the number of scientists means that at some point the number of scientists would need to exceed the number of humans on earth. It doesn’t take scientometrics to tell us we shouldn’t hold our breath for that.
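The "90 percent" claim is a consequence of exponential growth by itself: when head counts grow by a fixed percentage every year, the most recent cohorts always dominate the cumulative total. A back-of-the-envelope check, in which the growth rate, career length, and time span are all illustrative assumptions rather than figures from the article:

```python
def fraction_alive(annual_growth, career_years, total_years):
    """With the number of new scientists growing by `annual_growth` per
    year, return the fraction of all cohorts ever started that began
    within the last `career_years` years (i.e., are still working)."""
    g = 1 + annual_growth
    # Cohort sizes form a geometric series: 1, g, g**2, ...
    total = sum(g ** t for t in range(total_years))
    recent = sum(g ** t for t in range(total_years - career_years, total_years))
    return recent / total

# Assumed for illustration: 7% annual growth, 40-year careers, 300 years.
print(round(fraction_alive(0.07, 40, 300), 3))
```

With those assumptions more than nine in ten of all scientists who ever lived fall in the last 40 years' cohorts, and the result barely depends on the total time span, which is why the statement stays true in nearly any era of sustained growth.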

But that doesn’t mean discovery will inevitably slow down. Just as science grows exponentially more difficult in some cases, affordable technology can also proceed along a similar curve, and sometimes make science a lot easier. An exponential increase in computer processing power means that problems once considered hard, like visualizing fractals, proving certain mathematical theorems, or simulating entire populations, can now be done quite easily. And sometimes discoveries can be done by being clever and more innovative, without much money. When Stanley Milgram did his famous “six degrees of separation” experiment, the one that showed everyone on earth was much more closely linked than we imagine, he did it by using not much more than postcards. And scientometrics was there, showing us all how influential that research has been.

Samuel Arbesman is a postdoctoral fellow in the Department of Health Care Policy at Harvard Medical School and is affiliated with the Institute for Quantitative Social Science at Harvard University. He is a regular contributor to Ideas.  



Free money

Here’s an idea for foreign aid: Just hand over the cash

Maria Nilza, 36, a mother of four in Brazil, prepared a meal using ingredients bought with her “Bolsa Familia” social plan card. “Bolsa Familia” was created by Brazilian President Luiz Inacio Lula da Silva to help poor people with a monthly stipend ranging from 50 to 95 Brazilian reais (23 to 40 US dollars).

There are all sorts of things very poor people living in poor countries don’t have. They lack secondary-school educations, usually, and good medical care. They lack steady work and life insurance, bank accounts and competent legal representation, adequate fertilizer for their crops, adequate protein in their diets, reliable electricity, clean water, indoor plumbing, low-interest loans, incubators for their premature babies, vaccinations and good schools for their children.

But the central thing they lack is money. That is what makes them, by definition, poor: International aid organizations define the “very poor” as those who live on less than a dollar a day. Despite this, the global fight that governments and nongovernmental organizations have waged against poverty in the developing world has focused almost entirely on changing the conditions in which the poor live, through dams and bridges and other massive infrastructure projects to bring commerce and electricity to the countryside, or the construction and staffing of schools and clinics, or subsidizing fertilizer and medicine, or giving away mosquito nets or cheap portable water filters.

In the last decade, however, the governments of the nations where most of the world’s poorest actually live have begun to turn to an idea that seems radical in its simplicity: Solve poverty and spur development by simply giving out money. In Brazil and Mexico, India, China, South Africa, and dozens of other nations, hundreds of millions of poor people are now receiving billions of dollars in cash grants. The programs vary widely, but typically the money — disbursed through banks, post offices, state lottery offices, and even, in rural Africa, ranging armored cars with ATMs on them — goes directly to the poor, rather than being spent on particular projects by government or international aid officials.

The regular infusions of cash augment the paltry budgets of poor households, alleviating the pinch of deprivation, but proponents also see them as a long-term path out of poverty, and even a catalyst for economic growth. Research has credited cash transfers with improving the health and education of poor children, and there is also evidence that cash transfers nurture microenterprises, improve crop yields, and allow the poor to begin to save and invest. On a broader scale, some development experts argue that giving the poor more money to spend expands consumption and markets, and can boost local and national economies. Cash transfers don’t just lift people out of poverty, in other words, they lift entire countries as well. In the process, they may render superfluous large swaths of today’s aid industry.

“Cash transfers are a major success story in development in the last 10 to 15 years,” says Francisco Ferreira, a World Bank economist who helped design Brazil’s transfer program, Bolsa Familia. “They’ve spread incredibly fast from relatively modest beginnings in Bangladesh and Brazil.”

Still, many economists and development experts emphasize that cash transfers are only a limited part of what developing nations need to do to actually develop — giving a family money it can spend on fertilizer or school books or medication can only do so much if the schools and clinics are understaffed and the roads to get crops to markets are impassable.

More fundamentally, cash transfers have triggered a discussion about the extent to which the poor can be relied upon to wisely spend money — theirs and the government’s. The oldest and most closely studied of the cash-transfer programs, in Mexico and Brazil, attach requirements to the money: Eligibility is conditioned on family members going for regular preventive health checkups, or enrolling their children in school. Other programs, however, like South Africa and Bolivia’s pension and child-support grants, simply give out the money with no strings attached.

The debate over whether to attach conditions, or whether to give the money at all, is taking place even as images of the devastation in Haiti — and the chronic and extreme poverty that exacerbated it — are still fresh in the minds of the American public. At the same time, economists and other development experts are beginning to examine just how the world’s poorest people actually spend their money. What they’re finding is that, because the stakes are so high, the very poor are often quite financially savvy. For the staunchest supporters of cash transfers, it’s more evidence that just giving the poor money is, dollar for dollar, among the best uses for aid.

“We’re arguing, basically, that poor people are poor because they don’t have money. It’s not that they’re stupid or need education. They actually know what to do with the money,” says Joseph Hanlon, a development expert at England’s Open University and coauthor of a new book on cash transfers entitled “Just Give Money to the Poor.” “You can’t pull yourself up by your bootstraps if you don’t have boots, and cash transfers are providing boots.”

Cash transfers are new in the context of international aid, but in certain forms they are as old as the modern state. Pensions, after all, are a cash transfer. And while we’re accustomed to thinking of pensions as relative luxuries that wealthy nations can provide for their citizens, economic historians argue that, in many cases, they have served as precursors of economic growth and development. The sociologist Samuel Valenzuela, in work Hanlon and his coauthors cite in their book, compares Chile and Sweden, two countries that at the beginning of the 20th century were identical in terms of population, natural resources, and development. Sweden instituted pensions and universal health care, Chile did not, and as a result, Valenzuela argues, the growth in Sweden’s gross domestic product per capita since then has far outpaced Chile’s.

One of the most prominent proponents of direct cash transfers as a poverty-fighting measure was none other than Milton Friedman, father of the neoclassical Chicago school of economics. In 1962, Friedman proposed replacing all government welfare programs in the United States with what he called a “negative income tax,” a cash handout indexed to income. (Today’s earned income tax credit grows out of the idea, though it is conditional, only going to people with jobs.)

The turn toward cash transfer programs in poorer nations builds off this history, but it also grows out of frustration at decades’ worth of failed development strategies. In the 1980s and 1990s, for example, organizations like the World Bank pressured developing countries to trim social benefits to help balance their budgets. Whether these policies brought growth and stability remains a deeply contested question, but they certainly did little to improve the lot of those nations’ poorest. At the same time, economists were finding that the billions of dollars being spent on aid — whether it was to build clinics or fly in food — were having little appreciable effect on poverty, even when the money wasn’t pocketed by corrupt government officials.

“There was a real discouragement with many of the things we had tried that didn’t seem to work, and a real appetite for experimentation,” says Norbert Schady, an economist at the Inter-American Development Bank and coauthor of a recent World Bank book on cash transfers.

The cash transfer programs spawned by that frustration were instituted not by first-world development agencies but by developing nation governments themselves. Brazil and Mexico started their programs in the late 1990s. Mexico’s, called Progresa, initially targeted 300,000 families; renamed Oportunidades by President Vicente Fox in 2002, it now reaches nearly a quarter of the country’s population. The average cash grant is $38 per household per month, more than a quarter of the average household income of the rural poor, and the total cost of the program runs to 0.3 percent of gross domestic product. There are no restrictions on how the money can be spent, but eligibility is contingent on a set of conditions: children’s enrollment in school, mothers’ regular visits to health clinics with their children, attending talks on health and nutrition, and participating in communal labor projects.

Bolsa Familia in Brazil also reaches roughly a quarter of the country’s population, some 50 million people. Eligibility is linked to income, with poor households receiving a family grant of just over $30 a month, with additional grants for each child. As in Mexico, families are required to take their children to health clinics and enroll them in school, though failing to do so doesn’t result in disqualification as it does in Mexico, but instead triggers a visit from a social worker.

In countries like South Africa and Bolivia, the grant takes a different form: a noncontributory pension, given with no strings attached (South Africa also has an unconditional child support grant). The pensions bring a measure of security to the lives of the nations’ elderly citizens, and in countries where three or even four generations often live under one roof, it also gives a significant boost to a household’s pot of money. Pensions have been found, for example, to lead to an increase in school attendance — with more money, households can forgo the income of a working child and instead send him or her to school. In Bolivia, farmers using their pension to buy fertilizer were able to increase household consumption by an amount double that of the pension.

There is a growing consensus that cash transfer programs can effect dramatic short-term changes at the level of the household. Progresa/Oportunidades and Bolsa Familia in particular have been carefully studied: The households receiving the grants were compared with otherwise identical households without them as the programs were rolled out. In both cases, the findings have been dramatic: Children in households receiving the grants were better fed, healthier, even taller (a sign of better nutrition). They were more likely to attend school and more likely to stay in school up through high school. Studies of cash grants in Ecuador and Nicaragua that Schady of the Inter-American Development Bank did with other economists found that the grants modestly boosted children’s cognitive development as well. Kids in grant households had a larger vocabulary, better short- and long-term memory, and were better behaved.

The grants also seem to bring a measure of gender equality to previously male-dominated households. In most countries, the grants are given to women — they are the primary guardians of the children, and grant officials see them as more likely to spend the money on household necessities — giving them control of a family’s purse strings.

The broader question, though, is whether the programs can truly break intergenerational cycles of poverty, not only mitigating the lives of the poor, but lifting them out of poverty altogether. The focus on children in many transfer programs is an attempt to do this, by narrowing the gaps in health and education that make climbing into the middle class so much harder for poor children.

There is evidence that the grant money is being put to long-term use. Studies have found that, while the grants are mostly spent on better food and child-related expenses, many households do manage to set aside a portion to save or invest or seed a micro-enterprise. Researchers looking at Oportunidades found that recipients tended to regularly put some of the cash into assets like working animals to improve the productivity of their farms, fabric to make textiles to sell, or tools or machinery for carpentry or repair businesses. Having a reliable, regular source of extra income unleashed the inner entrepreneur in many of the recipients.

Proponents like Hanlon, and his coauthors Armando Barrientos and David Hulme of England’s University of Manchester, argue that cash transfers can also grow economies. They point to a 2009 study of Bolsa Familia that found that the program stimulated the national economy. By putting money in the pockets of millions of the poor, it juiced the demand for goods and services and created jobs nationwide.

Simone Cecchini, an economist at the United Nations Economic Commission for Latin America and the Caribbean, makes a similar point. “In some rural areas, where there was not much circulation of cash, we see a sort of multiplier effect, with small businesses benefiting from the money and generating more income.”

But even among the architects of cash transfer programs, there is caution about claiming too much for the programs. A pilot program in New York City modeled on Oportunidades was shut down earlier this year when the results were disappointing. Even the oldest of the programs are barely more than a dozen years old, not enough time to see whether they can effect intergenerational change. And some of the research on longer-term effects has been inconclusive: More children are in school thanks to cash transfer programs, but it’s been hard to show that the students are benefiting from the additional schooling.

“There’s less evidence of impact on the final objectives: Do [the children] learn more in schools, is their health care really better over time, that sort of thing,” says Ferreira.

And while a great part of the appeal of cash transfer programs is their simplicity and resulting lack of bureaucracy — simply handing out money is much easier than allocating the right medications to the ill or targeting subsidies to the particular farmers who need them — it’s also true that significantly supplementing the income of a nation’s poor can get expensive. Dean Karlan, a Yale University development economist, suggests that there may be even simpler, cheaper ways to achieve some of the individual ends of cash transfer programs. For example, work by the economists Michael Kremer and Edward Miguel suggests that, in rural areas in poor countries, deworming children is the most cost-effective way to improve school attendance — the children miss fewer days out sick.

Regardless of the final verdict on these macro claims, however, the success of these programs has shown development economists something they didn’t necessarily expect: that very poor people are, by and large, careful shepherds of money, and giving them more of it is not a recipe for indolence, debauchery, and waste. And though the biggest cash transfer programs are conditional ones, among development experts today there is widespread agreement that the conditions themselves are not the key to the programs’ success.

Poor people, like rich people, do fall victim to biases, both inborn (like privileging today’s spending over tomorrow’s) and cultural (undervaluing, in some parts of the world, the future payoff of educating daughters). Cash-transfer programs have shown, however, that most of the time poor people freely choose to do the very things with their money that the architects of conditional cash transfer programs are trying to pressure them to do. Hanlon points to the fact that school attendance among poor households went up more in South Africa, where the cash transfer was unconditional, than in Mexico, where school attendance was a condition of the grant.

Jonathan Morduch, an economist at New York University, is coauthor of the book “Portfolios of the Poor,” an in-depth look at how the world’s poorest actually manage their financial lives. “By and large they’re doing smart things with their money, they’re thinking hard about how to best spend it, whether, for example, to keep a kid in school or put the money into their business, that sort of thing,” he says.

These are very difficult decisions, of course, where small sums can have life-altering ramifications. That difficulty helps explain the care with which those choices are usually made — and the dramatic effects that providing a little financial breathing room can provide.

Drake Bennett is the staff writer for Ideas.

Veiled Threats?

In Spain earlier this month, the Catalonian assembly narrowly rejected a proposed ban on the Muslim burqa in all public places — reversing a vote the week before in the country’s upper house of parliament supporting a ban. Similar proposals may soon become national law in France and Belgium.  Even the headscarf often causes trouble.  In France, girls may not wear it in school.  In Germany (as in parts of Belgium and the Netherlands) some regions forbid public school teachers to wear it on the job, although nuns and priests are permitted to teach in full habit.  What does political philosophy have to say about these developments?   As it turns out, a long philosophical and legal tradition has reflected about similar matters.

Let’s start with an assumption that is widely shared: that all human beings are equal bearers of human dignity.  It is widely agreed that government must treat that dignity with equal respect.   But what is it to treat people with equal respect in areas touching on religious belief and observance?

We now add a further premise: that the faculty with which people search for life’s ultimate meaning — frequently called “conscience” — is a very important part of people, closely related to their dignity.  And we add one further premise, which we might call the vulnerability premise: this faculty can be seriously damaged by bad worldly conditions.  It can be stopped from becoming active, and it can even be violated or damaged within.  (The first sort of damage, which the 17th-century American philosopher Roger Williams compared to imprisonment, happens when people are prevented from outward observances required by their beliefs.  The second sort, which Williams called “soul rape,” occurs when people are forced to affirm convictions that they may not hold, or to give assent to orthodoxies they don’t support.)

The vulnerability premise shows us that giving equal respect to conscience requires tailoring worldly conditions so as to protect both freedom of belief and freedom of expression and practice.  Thus the framers of the United States Constitution concluded that protecting equal rights of conscience requires “free exercise” for all on a basis of equality.  What does that really mean, and what limits might reasonably be placed upon religious activities in a pluralistic society?  The philosophical architects of our legal tradition could easily see that when peace and safety are at stake, or the equal rights of others, some reasonable limits might be imposed on what people do in the name of religion.  But they grasped after a deeper and more principled rationale for these limits and protections.

Here the philosophical tradition splits.  One strand, associated with another 17th-century English philosopher, John Locke, holds that protecting equal liberty of conscience requires only two things: laws that do not penalize religious belief, and laws that are non-discriminatory about practices, applying the same laws to all in matters touching on religious activities.  An example of a discriminatory law, said Locke, would be one making it illegal to speak Latin in a Church, but not restricting the use of Latin in schools.  Obviously, the point of such a law would be to persecute Roman Catholics.  But if a law is not persecutory in this way, it may stand, even though it may incidentally impose burdens on some religious activities more than on others.  If people find that their conscience will not permit them to obey a certain law (regarding military service, say, or work days), they had better follow their conscience, says Locke, but they will have to pay the legal penalty.  A modern Lockean case, decided by the U. S. Supreme Court in 1993, concerned an ordinance passed by the city of Hialeah, Fla., which made “ritual animal sacrifice” illegal, but permitted the usual ways of killing animals for food.  The Court, invalidating the law, reasoned that it was a deliberate form of persecution directed at Santeria worshippers.

Another tradition, associated with Roger Williams, the founder of the colony of Rhode Island and the author of copious writings on religious freedom, holds that protection for conscience must be stronger than this.  This tradition reasons that laws in a democracy are always made by majorities and will naturally embody majority ideas of convenience.  Even if such laws are not persecutory in intent, they may turn out to be very unfair to minorities.  In cases in which such laws burden liberty of conscience — for example by requiring people to testify in court on their holy day, or to perform military service that their religion forbids, or to abstain from the use of a drug required in their sacred ceremony — this tradition held that a special exemption, called an “accommodation,” should be given to the minority believer.

On the whole, the accommodationist position has been dominant in U. S. law and public culture — ever since George Washington wrote a famous letter to the Quakers explaining that he would not require them to serve in the military because the “conscientious scruples of all men” deserve the greatest “delicacy and tenderness.”  For a time, modern constitutional law in the U. S. applied an accommodationist standard, holding that government may not impose a “substantial burden” on a person’s “free exercise of religion” without a “compelling state interest” (of which peace and safety are obvious examples, though not the only ones).  The landmark case articulating this principle concerned a woman, Adell Sherbert, who was a Seventh-Day Adventist and whose workplace introduced a sixth workday, Saturday.  Fired because she refused to work on that day, she sought unemployment compensation from the state of South Carolina and was denied on the grounds that she had refused “suitable work.”  The U. S. Supreme Court ruled in her favor, arguing that the denial of benefits was like fining Mrs. Sherbert for her nonstandard practices: it was thus a denial of her equal freedom to worship in her own way.  There was nothing wrong in principle with choosing Sunday as the day of rest, but there was something wrong with not accommodating Mrs. Sherbert’s special religious needs.

I believe that the accommodationist principle is more adequate than Locke’s principle, because it reaches subtle forms of discrimination that are ubiquitous in majoritarian democratic life.  It has its problems, however.  One (emphasized by Justice Scalia, when he turned our constitutional jurisprudence toward the Lockean standard in 1990) is that it is difficult for judges to administer.  Creating exemptions to general laws on a case by case basis struck Scalia as too chaotic, and beyond the competence of the judiciary.  The other problem is that the accommodationist position has typically favored religion and disfavored other reasons people may have for seeking an exemption to general laws.  This is a thorny issue that requires lengthy discussion, for which there is no room here.  But we don’t need it, because the recent European cases all involve discriminatory laws that fail to pass even the weaker Lockean test.  Let’s focus on the burqa; arguments made there can be adapted to other cases.

Five arguments are commonly made in favor of proposed bans.  Let’s see whether they treat all citizens with equal respect.  First, it is argued that security requires people to show their faces when appearing in public places.  A second, closely related, argument says that the kind of transparency and reciprocity proper to relations between citizens is impeded by covering part of the face.

What is wrong with both of these arguments is that they are applied inconsistently.   It gets very cold in Chicago – as, indeed, in many parts of Europe.  Along the streets we walk, hats pulled down over ears and brows, scarves wound tightly around noses and mouths.  No problem of either transparency or security is thought to exist, nor are we forbidden to enter public buildings so insulated.  Moreover, many beloved and trusted professionals cover their faces all year round: surgeons, dentists, (American) football players, skiers and skaters. What inspires fear and mistrust in Europe, clearly, is not covering per se, but Muslim covering.

A reasonable demand might be that a Muslim woman have a full face photo on her driver’s license or passport.  With suitable protections for modesty during the photographic session, such a photo might possibly be required.  However, we know by now that the face is a very bad identifier.  At immigration checkpoints, eye-recognition and fingerprinting technologies have already replaced the photo.  When these superior technologies spread to police on patrol and airport security lines, we can do away with the photo, hence with what remains of the first and second arguments.

A third argument, very prominent today, is that the burqa is a symbol of male domination and of the objectification of women (their being seen as mere objects).  A Catalonian legislator recently called the burqa a “degrading prison.”  The first thing we should say about this argument is that the people who make it typically don’t know much about Islam and would have a hard time saying what symbolizes what in that religion.  But the more glaring flaw in the argument is that society is suffused with symbols of male supremacy that treat women as objects.  Sex magazines, nude photos, tight jeans — all of these products, arguably, treat women as objects, as do so many aspects of our media culture.  And what about the “degrading prison” of plastic surgery?  Every time I undress in the locker room of my gym, I see women bearing the scars of liposuction, tummy tucks, breast implants.  Isn’t much of this done in order to conform to a male norm of female beauty that casts women as sex objects?  Proponents of the burqa ban do not propose to ban all these objectifying practices.  Indeed, they often participate in them.  And banning all such practices on a basis of equality would be an intolerable invasion of liberty.  Once again, then, the opponents of the burqa are utterly inconsistent, betraying a fear of the different that is discriminatory and unworthy of a liberal democracy.  The way to deal with sexism, in this case as in all, is by persuasion and example, not by removing liberty.

Once again, there is a reasonable point to be made in this connection.  When Turkey banned the veil long ago, there was a good reason in that specific context: because women who went unveiled were being subjected to harassment and violence.  The ban protected a space for the choice to be unveiled, and was legitimate so long as women did not have that choice.  We might think of this as a “substantial burden” justified (temporarily) by a “compelling state interest.”  But in today’s Europe women can dress more or less as they please; there is no reason for the burden to religious liberty that the ban involves.

A fourth argument holds that women wear the burqa only because they are coerced.  This is a rather implausible argument to make across the board, and it is typically made by people who have no idea what the circumstances of this or that individual woman are.  We should reply that of course all forms of violence and physical coercion in the home are illegal already, and laws against domestic violence and abuse should be enforced much more zealously than they are.  Do the arguers really believe that domestic violence is a peculiarly Muslim problem?  If they do, they are dead wrong.  According to the U. S. Bureau of Justice Statistics, intimate partner violence made up 20 percent of all nonfatal violent crime experienced by women in 2001.  The National Violence Against Women Survey, cited on the B.J.S. Web site, reports that 52 percent of surveyed women said they were physically assaulted as a child by an adult caretaker and/or as an adult by any type of perpetrator.  There is no evidence that Muslim families have a disproportionate amount of such violence.  Indeed, given the strong association between domestic violence and the abuse of alcohol, it seems at least plausible that observant Muslim families will turn out to have less of it.

Suppose there were evidence that the burqa was strongly associated, statistically, with violence against women.  Could government legitimately ban it on those grounds?  The U. S. Supreme Court has held that nude dancing may be banned on account of its contingent association with crime, including crimes against women, but it is not clear that this holding was correct.  College fraternities are very strongly associated with violence against women, and some universities have banned all or some fraternities as a result.  But private institutions are entitled to make such regulations; a total governmental ban on the male drinking club (or on other places where men get drunk, such as soccer matches) would certainly be a bizarre restriction of associational liberty.  What is most important, however, is that anyone proposing to ban the burqa must consider it together with these other cases, weigh the evidence, and take the consequences for their own cherished hobbies.

Societies are certainly entitled to insist that all women have a decent education and employment opportunities that give them exit options from any home situation they may dislike.  If people think that women only wear the burqa because of coercive pressure, let them create ample opportunities for them, at the same time enforce laws making primary and secondary education compulsory, and then see what women actually do.

Finally, I’ve heard the argument that the burqa is per se unhealthy, because it is hot and uncomfortable.  (Not surprisingly, this argument is made in Spain.)  This is perhaps the silliest of the arguments.  Clothing that covers the body can be comfortable or uncomfortable, depending on the fabric.   In India I typically wear a full salwaar kameez of cotton, because it is superbly comfortable, and full covering keeps dust off one’s limbs and at least diminishes the risk of skin cancer.  It is surely far from clear that the amount of skin displayed in typical Spanish female dress would meet with a dermatologist’s approval.  But more pointedly, would the arguer really seek to ban all uncomfortable and possibly unhealthy female clothing?  Wouldn’t we have to begin with high heels, delicious as they are?  But no, high heels are associated with majority norms (and are a major Spanish export), so they draw no ire.

All five arguments are discriminatory.  We don’t even need to reach the delicate issue of religiously grounded accommodation to see that they are utterly unacceptable in a society committed to equal liberty.  Equal respect for conscience requires us to reject them.

Martha Nussbaum teaches law, philosophy, and divinity at The University of Chicago. She is the author of several books, including “Liberty of Conscience: In Defense of America’s Tradition of Religious Equality” (2008) and “Not for Profit: Why Democracy Needs the Humanities” (2010).

How facts backfire

Researchers discover a surprising threat to democracy: our brains

It’s one of the great assumptions underlying modern democracy that an informed citizenry is preferable to an uninformed one. “Whenever the people are well-informed, they can be trusted with their own government,” Thomas Jefferson wrote in 1789. This notion, carried down through the years, underlies everything from humble political pamphlets to presidential debates to the very notion of a free press. Mankind may be crooked timber, as Kant put it, uniquely susceptible to ignorance and misinformation, but it’s an article of faith that knowledge is the best remedy. If people are furnished with the facts, they will be clearer thinkers and better citizens. If they are ignorant, facts will enlighten them. If they are mistaken, facts will set them straight.

In the end, truth will out. Won’t it?

Maybe not. Recently, a few political scientists have begun to discover a human tendency deeply discouraging to anyone with faith in the power of information. It’s this: Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

This bodes ill for a democracy, because most voters — the people making decisions about how the country runs — aren’t blank slates. They already have beliefs, and a set of facts lodged in their minds. The problem is that sometimes the things they think they know are objectively, provably false. And in the presence of the correct information, such people react very, very differently than the merely uninformed. Instead of changing their minds to reflect the correct information, they can entrench themselves even deeper.

“The general idea is that it’s absolutely threatening to admit you’re wrong,” says political scientist Brendan Nyhan, the lead researcher on the Michigan study. The phenomenon — known as “backfire” — is “a natural defense mechanism to avoid that cognitive dissonance.”

These findings open a long-running argument about the political ignorance of American citizens to broader questions about the interplay between the nature of human intelligence and our democratic ideals. Most of us like to believe that our opinions have been formed over time by careful, rational consideration of facts and ideas, and that the decisions based on those opinions, therefore, have the ring of soundness and intelligence. In reality, we often base our opinions on our beliefs, which can have an uneasy relationship with facts. And rather than facts driving beliefs, our beliefs can dictate the facts we choose to accept. They can cause us to twist facts so they fit better with our preconceived notions. Worst of all, they can lead us to uncritically accept bad information just because it reinforces our beliefs. This reinforcement makes us more confident we’re right, and even less likely to listen to any new information. And then we vote.

This effect is only heightened by the information glut, which offers — alongside an unprecedented amount of good information — endless rumors, misinformation, and questionable variations on the truth. In other words, it’s never been easier for people to be wrong, and at the same time feel more certain that they’re right.

“Area Man Passionate Defender Of What He Imagines Constitution To Be,” read a recent Onion headline. Like the best satire, this nasty little gem elicits a laugh, which is then promptly muffled by the queasy feeling of recognition. The last five decades of political science have definitively established that most modern-day Americans lack even a basic understanding of how their country works. In 1996, Princeton University’s Larry M. Bartels argued, “the political ignorance of the American voter is one of the best documented data in political science.”

On its own, this might not be a problem: People ignorant of the facts could simply choose not to vote. But instead, it appears that misinformed people often have some of the strongest political opinions. A striking recent example is an influential study led in 2000 by James Kuklinski of the University of Illinois at Urbana-Champaign, in which more than 1,000 Illinois residents were asked questions about welfare — the percentage of the federal budget spent on welfare, the number of people enrolled in the program, the percentage of enrollees who are black, and the average payout. More than half indicated that they were confident that their answers were correct — but in fact only 3 percent of the people got more than half of the questions right. Perhaps more disturbingly, the ones who were the most confident they were right were by and large the ones who knew the least about the topic. (Most of these participants expressed views that suggested a strong antiwelfare bias.)

Studies by other researchers have observed similar phenomena when addressing education, health care reform, immigration, affirmative action, gun control, and other issues that tend to attract strong partisan opinion. Kuklinski calls this sort of response the “I know I’m right” syndrome, and considers it a “potentially formidable problem” in a democratic system. “It implies not only that most people will resist correcting their factual beliefs,” he wrote, “but also that the very people who most need to correct them will be least likely to do so.”

What’s going on? How can we have things so wrong, and be so sure that we’re right? Part of the answer lies in the way our brains are wired. Generally, people tend to seek consistency. There is a substantial body of psychological research showing that people tend to interpret information with an eye toward reinforcing their preexisting views. If we believe something about the world, we are more likely to passively accept as truth any information that confirms our beliefs, and actively dismiss information that doesn’t. This is known as “motivated reasoning.” Whether or not the consistent information is accurate, we might accept it as fact, as confirmation of our beliefs. This makes us more confident in said beliefs, and even less likely to entertain facts that contradict them.

New research, published in the journal Political Behavior last month, suggests that once those facts — or “facts” — are internalized, they are very difficult to budge. In 2005, amid the strident calls for better media fact-checking in the wake of the Iraq war, Michigan’s Nyhan and a colleague devised an experiment in which participants were given mock news stories, each of which contained a provably false, though nonetheless widespread, claim made by a political figure: that there were WMDs found in Iraq (there weren’t), that the Bush tax cuts increased government revenues (revenues actually fell), and that the Bush administration imposed a total ban on stem cell research (only certain federal funding was restricted). Nyhan inserted a clear, direct correction after each piece of misinformation, and then measured the study participants to see if the correction took.

For the most part, it didn’t. The participants who self-identified as conservative believed the misinformation on WMD and taxes even more strongly after being given the correction. With those two issues, the more strongly the participant cared about the topic — a factor known as salience — the stronger the backfire. The effect was slightly different on self-identified liberals: When they read corrected stories about stem cells, the corrections didn’t backfire, but the readers did still ignore the inconvenient fact that the Bush administration’s restrictions weren’t total.

It’s unclear what is driving the behavior — it could range from simple defensiveness, to people working harder to defend their initial beliefs — but as Nyhan dryly put it, “It’s hard to be optimistic about the effectiveness of fact-checking.”

It would be reassuring to think that political scientists and psychologists have come up with a way to counter this problem, but that would be getting ahead of ourselves. The persistence of political misperceptions remains a young field of inquiry. “It’s very much up in the air,” says Nyhan.

But researchers are working on it. One avenue may involve self-esteem. Nyhan worked on one study in which he showed that people who were given a self-affirmation exercise were more likely to consider new information than people who had not. In other words, if you feel good about yourself, you’ll listen — and if you feel insecure or threatened, you won’t. This would also explain why demagogues benefit from keeping people agitated. The more threatened people feel, the less likely they are to listen to dissenting opinions, and the more easily controlled they are.

There are also some cases where directness works. Kuklinski’s welfare study suggested that people will actually update their beliefs if you hit them “between the eyes” with bluntly presented, objective facts that contradict their preconceived ideas. He asked one group of participants what percentage of its budget they believed the federal government spent on welfare, and what percentage they believed the government should spend. Another group was given the same questions, but the second group was immediately told the correct percentage the government spends on welfare (1 percent). They were then asked, with that in mind, what the government should spend. Regardless of how wrong they had been before receiving the information, the second group indeed adjusted their answer to reflect the correct fact.

Kuklinski’s study, however, involved people getting information directly from researchers in a highly interactive way. When Nyhan attempted to deliver the correction in a more real-world fashion, via a news article, it backfired. Even if people do accept the new information, it might not stick over the long term, or it may just have no effect on their opinions. In 2007 John Sides of George Washington University and Jack Citrin of the University of California at Berkeley studied whether providing misled people with correct information about the proportion of immigrants in the US population would affect their views on immigration. It did not.

And if you harbor the notion — popular on both sides of the aisle — that the solution is more education and a higher level of political sophistication in voters overall, well, that’s a start, but not the solution. A 2006 study by Charles Taber and Milton Lodge at Stony Brook University showed that politically sophisticated thinkers were even less open to new information than less sophisticated types. These people may be factually right about 90 percent of things, but their confidence makes it nearly impossible to correct the 10 percent on which they’re totally wrong. Taber and Lodge found this alarming, because engaged, sophisticated thinkers are “the very folks on whom democratic theory relies most heavily.”

In an ideal world, citizens would be able to maintain constant vigilance, monitoring both the information they receive and the way their brains are processing it. But keeping atop the news takes time and effort. And relentless self-questioning, as centuries of philosophers have shown, can be exhausting. Our brains are designed to create cognitive shortcuts — inference, intuition, and so forth — to avoid precisely that sort of discomfort while coping with the rush of information we receive on a daily basis. Without those shortcuts, few things would ever get done. Unfortunately, with them, we’re easily suckered by political falsehoods.

Nyhan ultimately recommends a supply-side approach. Instead of focusing on citizens and consumers of misinformation, he suggests looking at the sources. If you increase the “reputational costs” of peddling bad info, he suggests, you might discourage people from doing it so often. “So if you go on ‘Meet the Press’ and you get hammered for saying something misleading,” he says, “you’d think twice before you go and do it again.”

Unfortunately, this shame-based solution may be as implausible as it is sensible. Fast-talking political pundits have ascended to the realm of highly lucrative popular entertainment, while professional fact-checking operations languish in the dungeons of wonkery. Getting a politician or pundit to argue straight-faced that George W. Bush ordered 9/11, or that Barack Obama is the culmination of a five-decade plot by the government of Kenya to destroy the United States — that’s easy. Getting him to register shame? That isn’t.

Joe Keohane is a writer in New York.



Thoughts on a Declaration

In advance of the July 4 holiday, the editors asked contributors to The Stone, “What is the philosophical theme, or themes, in the Declaration of Independence that should be recalled in today’s America?” Responses from Arthur C. Danto, Todd May and J.M. Bernstein are below.

The Pursuit of Happiness, Then and Now
By Arthur C. Danto

Philosophers are especially sensitive to the way that Thomas Jefferson cuts and pastes the words of previous philosophers to make their meanings come out somewhat differently in the Declaration of Independence. This is particularly true of the trio of fundamental human rights famously identified by John Locke. Locke specified that humans enjoyed three basic rights: life, liberty, and property. Jefferson replaces property with “the pursuit of happiness,” a borrowing from Aristotle’s ethical writings. Aristotle takes it for granted that humans in general aspire to happiness, but does not consider it a right. The shift from property to happiness seems crucial to a Declaration of Independence, since it is the pattern of thwarting the pursuit of happiness that goes against our humanity, and brings into play the right to revolution. The Americans were not concerned with revolution in the sense of overthrowing the British monarchy, but with the right “to throw off such Government, and to provide new guards for their future security.”

The term happiness in current usage does not go nearly as deep as Jefferson’s Aristotelian usage. There would be something frivolous in getting rid of a government on the grounds that it makes us unhappy. In a two-party system, it must generally be true that there will be an unhappy minority. The remedy is to vote the ruling party out, since their power explains our unhappiness. But the Greek word for happiness is eudaimonia, which refers to what is fitting for us as humans — it rests on our essential qualities. The list of injuries Jefferson establishes rests upon a claim that the pattern of conduct laid at the feet of the monarch amounts to violations of our humanity.

It is this then that validates the Declaration. July 4 radically changes the nature of the conflict. England had been at war with the American colonies for over a year by that point. Until then it was not a war of independence. It was a revolt against the ruling power, which might end in amnesty, leaving the colonial status intact. But changing the war into a fight for independence required a philosophical transformation of its character. It would sound in today’s terms ridiculous to say that the Americans were fighting for happiness. But they were fighting for philosophical recognition of what it meant to be treated as human. They were fighting for human dignity.

So Jefferson’s emendation was fundamental to the moral character of his cause. Violating property rights would in effect have meant robbing them of the fruits of their labor, in Locke’s view. Putting aside the concept of property enabled Jefferson to table the problem of slavery. The classical tradition gave Jefferson a different basis, mainly because it allowed him to stress the philosophical character of being human. Today the pursuit of happiness sounds poetic; it gives us license to take up painting and the like. It is a lesser right than it was in Jefferson’s time, and is no longer the battle cry that it was to the classically trained.

Arthur C. Danto is Johnsonian Professor Emeritus of Philosophy at Columbia University, and was the art critic for The Nation from 1984 to 2009. He is the author of several books on analytical philosophy and the philosophy of art, and winner of the National Book Critics Circle Prize for Criticism in 1990, as well as Le Prix Philosophie for “The Madonna of the Future.”

Extending Equality
By Todd May

What might it mean to say, in our time, that “all men are created equal”? For many living during the late 1700s and 1800s, it meant that all white males possessed certain natural rights, although the content of those rights was subject to some dispute. Not in dispute, however, were two assumptions: the limited subject of those rights and their natural character, the latter of which was marked in the Declaration by the phrase “endowed by their Creator.”

For those of us in the early 21st century, the limitation on the subject of those rights has been expanded: in particular, women and people of color are treated as more nearly (although not entirely) equal. In addition, doubt has been cast on the naturalness of what were considered natural rights. Most philosophers now agree that the rights we have are not rooted in nature or in a divine being but in our social practices, our ways of living together.

However, there is one group in particular that, here in the United States, seems to remain markedly less equal than others: undocumented workers. (There is also the situation of gays and lesbians, which, fortunately, seems to be improving.)

When I say that they are treated as markedly less than equal, I do not mean simply that they are refused the rights of citizens. What rights they should have is something I would like to address another time. What I mean is that they are often treated as less than fully human.

The public picture routinely painted of undocumented workers is not one of people who have left their country in search of employment. It is instead one of criminals or even monsters intent on gaming the system and terrorizing the population. Accusations of free-riding, although of questionable accuracy (consider, for instance, that an undocumented worker with false papers will pay Social Security taxes but never receive Social Security), are accepted without debate. Other, more heinous insinuations of stealing, rape, and other crimes are part of the daily fare of immigration discussion.

In response to this picture, legislation is being proposed that treats undocumented workers (and worse, their children) as beneath the reach of basic human rights. Denial of non-emergency public health care and education are either enacted or on the table in several state legislatures (not to mention the draconian laws recently passed in Arizona). There may be vigorous debate regarding the rights of undocumented workers to vote or run for office. But when we say that they cannot receive public health care or have their children educated in our schools because it is a waste of taxpayer money, it is hard to argue that we really believe that all people are created equal.

On this July 4, in particular, we could do worse than to reflect on the most commonly quoted phrase in the Declaration. We could do worse than ask what it might mean for us and for our attitudes. After all, by learning to treat others as equal to us, do we not in turn elevate our own humanity?

Todd May is a professor of philosophy at Clemson University. He is the author of 10 books, including “The Philosophy of Foucault” and “Death,” and is at work on a book about friendship in the contemporary period.

Song of Freedom
By J.M. Bernstein

When Janis Joplin achingly sang that “Freedom’s just another word for nothing left to lose,” she (or the song’s composer, Kris Kristofferson) was critiquing a widely held ideal of independence: namely, the aspiration toward maximum liberty from all binding attachments and obligations. Isn’t it obvious, the argument goes, that each promise, and each unbreakable emotional bond, entails a loss of true freedom, an abrogation of true independence? Joplin’s refutation is simple and elegant: in actuality, absolute freedom is a picture of perfect emptiness, since if you have nothing left to lose, you have nothing.

However much the ideal of unencumbered freedom has become associated with the Declaration of Independence, freedom from binding attachments is no part of its philosophical underpinnings. In protesting against British tyranny, the American colonists were not proclaiming an ideal of individual freedom from government. On the contrary, they were pleading the cause for a vital conception of political community.

No words are more redolent of this ambition than the concluding sentence of the Declaration: “And for the support of this declaration, with a firm reliance on the protection of Divine Providence, we mutually pledge to each other our lives, our fortunes and our sacred honor.” What stands behind “The Declaration,” providing it with all the support it can possibly have, is the “mutual pledge” of its signatories. Their pledging to one another everything — not just their fortunes and honor as individuals, but their very lives — is the ethical substance of the document. It is how the American “we” steps onto the world stage.

Too often in the reading of “The Declaration” its background assumptions — the resounding words of its preamble — are unduly privileged. What we take to be self-evident, that all men are equal and endowed with unalienable rights, is intended to be explanatory about why we have systems of government and what they are meant to do — protect those rights.

However, it is neither the rights themselves nor their self-evidence that the preamble is emphasizing — they were commonplace notions of the time; and, even if they were not, a list of self-evident moral truths would still be idle in practice if no one paid attention to them.

As a posse of philosophers has argued, following the lead of Hannah Arendt’s “On Revolution,” the ground note of the preamble is Jefferson’s “incongruous phrase” “We hold,” with its implication that the self-evident truths that follow were somehow lacking in authority despite their divine sanction. It is that “we” taking those truths as definitive of the human condition that made them the very “we” that founded this nation. Holding, pledging, and binding themselves to those truths gave them a political identity, a political “we,” and gave those truths political authority and significance.

Ever since Lincoln revived the Declaration to provide a corrective to the Constitution, it has been easy to forget what a work of collective self-making the Declaration is. And while the words of the preamble were indeed fateful in the overthrow of slavery, the remainder of the document does not mention individual liberty or individual rights; rather, it is concerned with who “we” Americans already are as a political community, and how the British king and Parliament have committed “repeated injuries and usurpations” that violently attack the integrity of our political community.

At present, we hear much talk of how government is failing, how it, that “thing,” the government, is betraying the people, as if there were some absolute divide between the people and government, as if there were some notion of absolute freedom that was compromised by its attachments to political community. There is, finally, no “people” apart from the government, and no government apart from the people; there is no “I” without this “we,” and no “we” without each “I.” When the founders pledged their lives, fortunes, and sacred honor to each other, thus creating the “we” of America, they understood that such a pledge was the condition under which life, liberty, and happiness could be pursued; without that pledge, there would be nothing left to lose. Janis and the founders are here in profound agreement.

J.M. Bernstein is University Distinguished Professor of Philosophy at the New School for Social Research and the author of five books. He is now completing a book entitled “Torture and Dignity.”



Philosophy App

In his “Hitchhiker’s Guide to the Galaxy,” the science fiction writer Douglas Adams introduces Deep Thought — a computer the size of a small city, designed millions of years ago by a race of hyperintelligent pan-dimensional beings searching for the meaning of life. The supercomputer is described as “so amazingly intelligent that even before the data banks had been connected up it had started from ‘I think therefore I am’ and got as far as the existence of rice pudding and income tax before anyone managed to turn it off …”

We’re a little way off from a handheld Deep Thought, but since life and meaning continue to perplex, a new philosophy application for smart phones might be the next best thing. AskPhilosophers.org — a popular online resource for questions philosophical — has launched an app — AskPhil — for iPhones, iPods and Android phones.

Alexander George, a professor of philosophy at Amherst College, launched AskPhilosophers.org in 2005 (he discusses the site in his post for The Stone, “The Difficulty of Philosophy”). He describes the AskPhil app in an Amherst press release: “When philosophical questions occur to people away from their desks or computer screens they’ll now have the opportunity through their mobile devices to see quickly whether other people have already asked that question and whether it’s received interesting responses.” The site deploys a panel of over 30 professional philosophers to tackle the questions that have vexed mankind for generations, including problems of logic, love and ethics.

Unlike Deep Thought, AskPhil does not deliver, or purport to deliver, definitive answers. Rather the panelists respond with thoughtful clarifications; they introduce concepts and sometimes suggest useful further reading. They address the questions posed as opposed to answering them.

And they do so relatively quickly. Adams’ hyperintelligent beings asked Deep Thought for the Ultimate Answer to Life, the Universe and Everything. Deep Thought took but a brief seven and a half million years to respond. Its definitive answer: 42.

As the supercomputer kindly pointed out, the Ultimate Answer is baffling because no one actually knew the Ultimate Question of Life, the Universe and Everything that it was a response to. At least there’s now an iPhone app to help with that.

But is this a good thing? Does this sort of merging of handy technology with deep thought (the lowercase, human kind, that is) enrich philosophical activity or does it fragment and devalue it?

Natasha Lennard, New York Times



Lost in the Clouds?

Those in the ivory tower might think themselves enlightened; those on the ground find them irrelevant.
— Mahmood, Richmond

Philosophers are little men in little offices who write unreadable papers about symbolic logic or metaethics. That’s all.
— Ace-K

These sentiments — posted by readers in response to “What Is a Philosopher?” by Simon Critchley — touch on a common complaint: that the concerns of philosophers are far removed from the daily lives of most people. Here we offer two more views on the matter: one from Alexander George, a professor of philosophy at Amherst College who runs AskPhilosophers.org; and another from Frieda Klotz, an editor of a forthcoming book on Plutarch.

The Difficulty of Philosophy

By Alexander George

One often hears the lament: Why has philosophy become so remote, why has it lost contact with people?

The complaint must be as old as philosophy itself.  In Aristophanes’ “Clouds,” we meet Socrates as he is being lowered to the stage in a basket.  His first words are impatient and distant: “Why do you summon me, o creature of a day?”  He goes on to explain pompously what he was doing before he was interrupted: “I tread the air and scrutinize the sun.”  Already in Ancient Greece, philosophy had a reputation for being troublesomely distant from the concerns that launch it.

Is the complaint justified, however?  On the face of it, it would seem not to be.  I run AskPhilosophers.org, a Web site that features questions from the general public and responses by a panel of professional philosophers.  The questions are sent by people at all stages of life: from the elderly wondering when to forgo medical intervention to successful professionals asking why they should care about life at all, from teenagers inquiring whether it is irrational to fear aging to 10-year-olds wanting to know what the opposite of a lion is.  The responses from philosophers have been humorous, kind, clear, and at the same time sophisticated, penetrating, and informed by the riches of the philosophical traditions in which they were trained.  The site has evidently struck a chord: we have by now posted thousands of entries, and the questions continue to arrive daily from around the world.  Clearly, philosophers can — and do — respond to philosophical questions in intelligible and helpful ways.

But admittedly, this is casual stuff.  And at the source of the lament is the perception that philosophers, when left to their own devices, produce writings and teach classes that are either unhappily narrow or impenetrably abstruse.  Full-throttle philosophical thought often appears far removed from, and so much more difficult than, the questions that provoke it.

It certainly doesn’t help that philosophy is rarely taught or read in schools.  Despite the fact that children have an intense interest in philosophical issues, and that a training in philosophy sharpens one’s analytical abilities, with few exceptions our schools are de-philosophized zones.  One knock-on effect is that students entering college shy away from philosophy courses.  Bookstores — those that remain — boast philosophy sections cluttered with self-help guides.  It is no wonder that the educated public shows no interest in, or perhaps even finds alien, the fully ripened fruits of philosophy.

While all this surely contributes to the felt remoteness of philosophy, it is also a product of it: for one reason why philosophy is not taught in schools is that it is judged irrelevant.  And so we return to the questions of why philosophy appears so removed and whether this is something to lament.

This situation seems particular to philosophy.  We do not find physicists reproached in the same fashion.  People are not typically frustrated when their questions about the trajectory of soccer balls get answered by appeal to Newton’s Laws and differential calculus.

The difference persists in part because to wonder about philosophical issues is an occupational hazard of being human in a way in which wondering about falling balls is not.  Philosophical questions can present themselves to us with an immediacy, even an urgency, that can seem to demand a correspondingly accessible answer.  High philosophy usually fails to deliver such accessibility — and so the dismay that borders on a sense of betrayal.

Must it be so?  To some degree, yes.  Philosophy may begin in wonder, as Plato suggested in the “Theaetetus,” but it doesn’t end there.  Philosophers will never be content merely to catalog wonders, but will want to illuminate them — and whatever kind of work that involves will surely strike some as air treading.

But how high into the air must one travel?  How theoretical, or difficult, need philosophy be?  Philosophers disagree about this and the history of philosophy has thrown up many competing conceptions of what philosophy should be.  The dominant conception today, at least in the United States, looks to the sciences for a model of rigor and explanation.  Many philosophers now conceive of themselves as more like discovery-seeking scientists than anything else, and they view the great figures in the history of philosophy as likewise “scientists in search of an organized conception of reality,” as W.V. Quine, the leading American philosopher of the 20th Century, once put it.  For many, science not only provides us with information that might be pertinent to answering philosophical questions, but also with exemplars of what successful answers look like.

Because philosophers today are often trained to think of philosophy as continuous with science, they are inclined to be impatient with expectations of greater accessibility.  Yes, philosophy does begin in wonder, such philosophers will agree.  But if one is not content to be a wonder-monger, if one seeks illumination, then one must uncover abstract, general principles through the development of a theoretical framework.

This search for underlying, unifying principles may lead into unfamiliar, even alien, landscapes.  But such philosophers will be undaunted, convinced that the correct philosophical account will often depend on an unobvious discovery visible only from a certain level of abstraction.  This view is actually akin to the conception advanced by Aristophanes’ Socrates when he defends his airborne inquiries: “If I had been on the ground and from down there contemplated what’s up here, I would have made no discoveries at all.”  The resounding success of modern science has strengthened the attraction of an approach to explanation that has always had a deep hold on philosophers.

But the history of philosophy offers other conceptions of illumination.  Some philosophers will not accept that insight demands the discovery of unsuspected general principles.  They are instead sympathetic to David Hume’s dismissal, over 250 years ago, of remote speculations in ethics: “New discoveries are not to be expected in these matters,” he said.  Ludwig Wittgenstein took this approach across the board when he urged that “The problems [in philosophy] are solved, not by giving new information, but by arranging what we have always known.”  He was interested in philosophy as an inquiry into “what is possible before all new discoveries and inventions,” and insisted that “If one tried to advance theses in philosophy, it would never be possible to debate them, because everyone would agree to them.”  Insight is to be achieved not by digging below the surface, but rather by organizing what is before us in an illuminatingly perspicuous manner.

The approach that involves the search for “new discoveries” of a theoretical nature is now ascendant.  Since the fruits of this kind of work, even when conveyed in the clearest of terms, can well be remote and difficult, we have here another ingredient of the sense that philosophy spends too much time scrutinizing the sun.

Which is the correct conception of philosophical inquiry?  Philosophy is the only activity such that to pursue questions about the nature of that activity is to engage in it.  We can certainly ask what we are about when doing mathematics or biology or history — but to ask those questions is no longer to do mathematics or biology or history.  One cannot, however, reflect on the nature of philosophy without doing philosophy.  Indeed, the question of what we ought to be doing when engaged in this strange activity is one that has been wrestled with by many great philosophers throughout philosophy’s long history.

Questions, therefore, about philosophy’s remove cannot really be addressed without doing philosophy.  In particular, the question of how difficult philosophy ought to be, or the kind of difficulty it ought to have, is itself a philosophical question.  In order to answer it, we need to philosophize — even though the nature of that activity is precisely what puzzles us.

And that, of course, is another way in which philosophy can be difficult.

Alexander George is professor of philosophy at Amherst College. A new book drawn from AskPhilosophers.org, “What Should I Do?: Philosophers on the Good, the Bad, and the Puzzling,” is forthcoming from Oxford University Press.

The Philosophical Dinner Party

By Frieda Klotz

What is the meaning of life? Is there a god? Does the human race have a future? The standard perception of philosophy is that it poses questions that are often esoteric and almost always daunting. So another pertinent question, and one implicitly raised by Mr. George’s discussion, is can philosophy ever be fun?

Philosophy was a way of life for ancient philosophers, as much as a theoretical study — from Diogenes the Cynic, masturbating in public (“I wish I could cure my hunger as easily,” he replied, when challenged) to Marcus Aurelius, obsessively transcribing and annotating his thoughts — and its practitioners didn’t mind amusing people or causing public outrage to bring attention to their message. Divisions between academic and practical philosophy have long existed, for sure, but even Plato, who was prolific on theoretical matters, may have tried to translate philosophy into action: ancient rumor has it that he traveled to Sicily to tutor first Dionysios I, king of Syracuse, and later his son (each ruler fell out with Plato and unceremoniously sent him home).

For at least one ancient philosopher, the love of wisdom was not only meant to be practical, but also to combine “fun with serious effort.” This is the definition of Plutarch, a Greek who lived in the post-Classical age of the second century A.D., a time when philosophy tended to focus on ethics and morals. Plutarch is better known as a biographer than a philosopher. A priest, politician and Middle Platonist who lived in Greece under Roman rule, he wrote parallel lives of Greeks and Romans, from which Shakespeare borrowed liberally and which Emerson rapturously described as “a bible for heroes.” At the start and end of each “life” he composed a brief moral essay, comparing the faults and virtues of his subjects. Although they are artfully written, the “Lives” are really little more than brilliant realizations of Plutarch’s own very practical take on philosophy, aimed at teaching readers how to live.

Plutarch thought philosophy should be taught at dinner parties. It should be taught through literature, or written in letters giving advice to friends. Good philosophy does not occur in isolation; it is about friendship, inherently social and shared. The philosopher should engage in politics, and he should be busy, for he knows, as Plutarch sternly puts it, that idleness is no remedy for distress.

Many of Plutarch’s works are concerned with showing readers how to deal better with their day-to-day circumstances. In Plutarch’s eyes, the philosopher is a man who sprinkles seriousness into a silly conversation; he gives advice and offers counsel, but prefers a discussion to a conversation-hogging monologue. He likes to exchange ideas but does not enjoy aggressive arguments. And if someone at his dinner-table seems timid or reserved, he’s more than happy to add some extra wine to the shy guest’s cup.

He outlined this benign doctrine over the course of more than 80 moral essays (far less often read than the “Lives”). Several of his texts offer two interpretive tiers — advice on philosophical behavior for less educated readers, and a call to further learning, for those who would want more. It’s intriguing to see that the guidance he came up with has much in common with what we now call cognitive behavioral therapy. Writing on the subject of contentment, he tells his public: Change your attitudes! Think positive non-gloomy thoughts! If you don’t get a raise or a promotion, remember that means you’ll have less work to do. He points out that “There are storm winds that vex both the rich and the poor, both married and single.”

In one treatise, aptly called “Discussions Over Drinks,” Plutarch gives an account of the dinner-parties he attended with his friends during his lifetime. Over innumerable jugs of wine they grapple with 95 topics, covering science, medicine, social etiquette, women, alcohol, food and literature: When is the best time to have sex? Did Alexander the Great really drink too much? Should a host seat his guests or allow them to seat themselves? Why are old men very fond of strong wine? And, rather obscurely: Why do women not eat the heart of lettuce? (This last, sadly, is fragmentary and thus unanswered.) Some of the questions point to broader issues, but there is plenty of gossip and philosophical loose talk.

Plutarch begins “Discussions” by asking his own philosophical question — is philosophy a suitable topic of conversation at a dinner party? The answer is yes, not just because Plato’s “Symposium” is a central philosophic text (symposium being Greek for “drinking party”); it’s because philosophy is about conducting oneself in a certain way — the philosopher knows that men “practice philosophy when they are silent, when they jest, even, by Zeus! when they are the butt of jokes and when they make fun of others.”

Precisely because of its eclecticism and the practical nature of his treatises, Plutarch’s work is often looked down on in the academic world, and even Emerson said he was “without any supreme intellectual gifts,” adding, “He is not a profound mind … not a metaphysician like Parmenides, Plato or Aristotle.” When we think of the lives of ancient philosophers, we’re far more likely to think of Socrates, condemned to death by the Athenians and drinking hemlock, than of Plutarch, a Greek living happily with Roman rule, quaffing wine with his friends.

Yet in our own time-poor age, with anxieties shifting from economic meltdowns to oil spills to daily stress, we need philosophy of the everyday sort now more than ever. In the Plutarchan sense, friendship, parties and even wine are not trivial; and while philosophy may indeed be difficult, we shouldn’t forget that it should be fun.

Frieda Klotz is a freelance journalist living in Brooklyn. She is co-editing a book on Plutarch’s “Discussions over Drinks” for Oxford University Press.


Henry James Walked Here


The 13th-century basilica dedicated to St. Francis as seen from the fortress above Assisi.

IT was love at first sight. Henry James was 26 when he crossed the border from Switzerland and made his way, on foot, down into Italy — “warm & living & palpable,” as he wrote ecstatically to his sister on Aug. 31, 1869. The romance kindled that day lasted nearly 40 years, and played a significant part in his career; he set some of his greatest works in Italy, including “Daisy Miller,” “The Aspern Papers” and “The Wings of the Dove.” 

All three are excellent traveling companions, particularly if you’re en route to Rome and Venice — but a more direct (though of course inescapably Jamesian, and therefore at times convoluted) expression of his contagious passion for what he declared to be the “most beautiful country in the world” can be found in his travel writing. 

Henry James as tour guide? He won’t lead you step by step, waving a pennant so you don’t get lost, but he does show the way. His fine, reverberating consciousness sets off a corresponding reverberation in the sympathetic reader, who can’t help but admire the way Italy liberates an appetite for sensual experience in this most cerebral of authors. 

If you’re thinking of visiting Umbria and Tuscany, James has even thoughtfully planned out your route: in 1874, when his Italian romance was in its infancy (and the Kingdom of Italy was a newborn nation, having achieved unification only in 1861), James wrote for The Atlantic Monthly a travel essay called “A Chain of Cities,” in which he describes his springtime wanderings in Assisi, Perugia, Cortona and Arezzo, ancient hill towns well stocked with artistic treasures and expansive views — all neatly arranged within easy distance of one another. James, traveling by train, lounges and loafs along the way, examining and judging an artist’s work, or sitting on a sunny bench beneath the ramparts of a ruined fortress, or strolling aimlessly, merely savoring the flavor of “adorable Italy.” A 21st-century traveler whose schedule is fixed by the tyranny of airline reservations may be tempted to pick up the pace (certainly a possibility if you’ve rented a car), but accident and adventure, the kind of chance encounter that loitering invites, are just as important, in the search for the essence of a place, as methodical contemplation. 

James’s principal interests are scenery and art, though he occasionally casts his eye — while holding his nose — on the unwashed populace (the Puritan in him was shocked by the Italian peasant’s indifference to soap). All four towns are perched high and blessed with stunning views, but of course the views were even more gorgeous in the 19th century, before the valleys were streaked with highways, dotted with factories and warehouses and veiled by smog. 

In Assisi, James looks out over “the teeming softness of the great vale of Umbria,” and watches “the beautiful plain mellow into the tones of twilight.” Today the plain is still “teeming” (though with human activity rather than nature’s bounty), and the mellow haze in the distance looks suspiciously chemical. But if the views are less pristine, the art and the architectural monuments are far more accessible, preserved and curated with care and intelligence. Each of these towns is home to more masterpieces than you can comfortably absorb in one visit; this is an itinerary overflowing with artistic riches. 

If James insists on a measured tempo (in Perugia he warns that a visitor’s “first care must be to ignore the very dream of haste, walking everywhere very slowly and very much at random”), at least part of the reason is that in these towns there’s little choice. Most of the streets, especially in Assisi, Perugia and Cortona, are steep, narrow and crooked; haste would soon leave you panting. Arezzo is gentler, but there, too, James is right: even if you’re fit enough to race along, a leisurely stroll is infinitely more rewarding when nearly every building has half a millennium of history attached to it. 

In Assisi, James counsels, the visitor’s “first errand” is with the 13th-century basilica dedicated to St. Francis. The church, which houses the saint’s tomb — “one of the very sacred places of Italy” — is a magnet for religious pilgrims. James hits on a suggestive metaphor for the basilica’s astonishing structure: it consists of two churches, one piled on top of the other, and he imagines that they were perhaps intended as “an architectural image of the relation between heart and head.” The lower church, built in the Romanesque style, is somber, cave-like and complex, whereas the upper church, a fine example of Italian Gothic, is bright, spacious, rational. (Though he often favored head over heart, reason over emotion, James was a master at turning the tables.) Both churches are famously decorated with frescoes hugely important to the history of art, most of them traditionally ascribed to Giotto (c. 1267-1337). Studying them closely, James pays tribute to the artist’s expressive power: “Meager, primitive, undeveloped, he is yet immeasurably strong” — a judgment still valid today. 

Having trained his eager eye on these masterpieces, James saunters off to explore a palpably ancient town (“very resignedly but very persistently old”) that no longer exists. Assisi in the 21st century is pleasant and pretty, but fixed up and licked clean — after the damage from a 1997 earthquake — and wholly focused on the business of accommodating tourists and pilgrims (which seems mostly to involve selling them ice cream and religious knickknacks). James especially likes the ruined castle perched above the town, which is happily unrenovated. But he came along too soon to have glimpsed the curious monument erected, so to speak, on the steep road up to the fortress: a wire fence lovingly decorated with the discarded chewing gum of countless bored kids on their school trip to Assisi. It resembles a modernist sculpture, an abstract, folk-art Giacometti stretched along the path. I like to think of James pondering the meaning of this bizarre masticated tribute to modern adolescence. 

IN Perugia, James admires the extravagant view (“a wondrous mixture of blooming plain and gleaming river and wavily-multitudinous mountain”), and picks a fight with the city’s leading artist, Perugino (1446-1524), whose paintings are graced with serene figures that seem to James just a little too serene and neat — too “mechanically” produced. 

Although his report on this “accomplished little city” is lively and evocative, it’s possible that his preoccupation with the artist and the creative process distracted him from his travel writer’s duty to give the reader a distinct taste of a particular spot, and somewhat distorted his judgment. He may have overplayed his delight with the view, saying that he preferred it to “any other visible fruit of position or claimed empire of the eye” (strained phrases that smack of hyperbole). He pits the painter against the prospect, then pronounces his verdict: “I spent a week in the place, and when it was gone, I had had enough of Perugino, but hadn’t had enough of the View.” 

The trick, of course, is not to spend an entire week in Perugia. It’s a fascinating place, defined by the contrast between the broad, elegant Corso that runs through the center of the town like a super-wide catwalk, purpose-built for people-watching, and the tortuously cramped streets that roller-coaster around it in an exhausting topographical tangle. 

A map is essential here, but you’ll get lost anyway, defeated by the twisting, the turning, the dipping and the climbing. James’s description is peppered with adjectives that paint a grimmer picture than one sees today (he writes of “black houses … the color of buried things”), but even in this tidied-up era the medieval sections of Perugia retain their “antique queerness.” 

Two days in Perugia is plenty, unless you disagree with James about Perugino, in which case a third day might be necessary, if only to visit the church of San Pietro, an oasis of tranquillity just below the city walls where hidden away in the sacristy there are five tiny Peruginos well worth the detour — and to revisit both the Galleria Nazionale dell’Umbria, where Perugino plays a starring role, and the Collegio del Cambio, which he decorated. The latter is the moneychangers’ guildhall, and it can safely be said that no financial institution has ever bequeathed a more pleasing monument to posterity than the room Perugino created with his wonderfully calm and graceful frescoes. 

Cortona, which James calls “the most sturdily ancient of Italian towns,” is even more narrowly up-and-down than Perugia. A small town with a comically higgledy-piggledy central piazza, it’s like Assisi these days: in danger of seeming quaint or cute instead of beautiful or picturesque. But it’s calm, quiet and dignified (at least during the off-season), and if you set off for a ramble in any direction, you’ll pass several charming churches before you’ve reached the town’s well-preserved ramparts and registered the welcome shock of yet another panoramic view. 

Arriving on a festival day, James saw neither the interior of Cortona’s churches nor its museums. He expresses mild, passing regret (“the smaller and obscurer the town the more I like the museum”), before turning to the serious business of loafing. Had he known what he was missing, he might have extended his stay. The town’s artistic treasures, now stored in the Museo Diocesano, include a handful of muscular and disturbingly odd paintings by a brilliant native son, Luca Signorelli (c. 1445-1523), and a glorious Annunciation by Fra Angelico (c. 1395-1455), one of the most delicately enchanting paintings of the early Italian Renaissance. In the valley below, you’ll see the dome of a perfectly proportioned 15th-century church, Santa Maria delle Grazie al Calcinaio. 

By the time he reaches Arezzo, James has surrendered entirely to the charm of Tuscany. He mentions the museum, the “stately” duomo, and the “quaint” colonnades on the facade of Santa Maria della Pieve, but only in passing, in an apologetic aside, as if he knew that in the neighborhood there were monuments and artworks of importance to be studied, but, really, he’d rather just lounge around near the ruined castle that sits at the top of the town, just as he did in Assisi and Cortona, and sop up the “cheerful Tuscan mildness.” 

No one who has visited Arezzo on a warm day in late spring can blame him — the settled, unforced, somehow inevitable beauty of the place demands unhurried, disinterested appreciation — though some would prefer to while away the hours in the lovely Piazza Grande, a sloping, comfortably enclosed space not unlike Siena’s famous Piazza del Campo, only more intimate. 

The spectacle of Henry James morphing into a lazy, contented, “uninvestigating” tourist — especially after his strenuous intellectual engagement with Giotto’s frescoes in Assisi and Perugino’s in Perugia — gives “A Chain of Cities” a very satisfactory narrative arc. But as is so often the case, a pretty shape comes at a price: James leaves out any mention of Arezzo’s most famous work of art, “The Legend of the True Cross,” a cycle of frescoes in the Basilica di San Francesco by Piero della Francesca (c. 1415-1492), which today’s guidebooks insist is the town’s principal attraction. 

There’s no evidence that James ever saw “The Legend of the True Cross” or formed an opinion of the artist’s work (Piero’s name doesn’t crop up in James’s oeuvre, or in his correspondence), but having come this far in his company, it seems only appropriate to fill in the blanks — to add another link to the chain — and imagine James’s reaction to this rich pageant. 

In Assisi, the result of his communion with Giotto was “a great and even amused charity,” a gentle mood of indiscriminate benevolence. Here, in the hushed choir of San Francesco, he would have recognized a great artist’s bold technical advances (Piero was pioneering in his use of light and perspective), and marveled at the sleepwalker’s trance that gives Piero’s figures an ethereal spirituality even in the heat of battle. He would have envied the scope of the achievement, the variety of the scenes and the harmony of the overall composition. And he would have stumbled out into the handsome streets of placid Arezzo with his own artistic ambitions inflamed. 

ADAM BEGLEY, the former books editor of The New York Observer, is at work on a biography of John Updike. 

Out of line

An insubordinate general. A soccer mutiny. Why hierarchy matters, even in an egalitarian world.

It’s been a bad week for the chain of command. First, international soccer fans witnessed the petulant meltdown of the French World Cup team: Star player Nicolas Anelka was kicked off the team for profanely insulting the head coach in the locker room midgame, and his teammates protested his dismissal by staging a mutiny — refusing to practice last Sunday, taking the team bus back to their hotel, and leaving the abandoned coaching staff to find their own ride. The fractiously underperforming team, full of top-flight talent, didn’t make it out of the tournament’s first round.

Then, on Tuesday, General Stanley McChrystal, commander of the US-led NATO security mission in Afghanistan, was summoned to Washington to answer for derisive and arguably insubordinate comments he and his aides made to a Rolling Stone reporter about several of the senior members of the White House national security team — and about President Obama himself, the man who, the Constitution specifies, was McChrystal’s ultimate boss. Upon his arrival in Washington, McChrystal was relieved of command.

The two events were not, of course, equal in global import. One was a drama on a sports team, the other may alter the course of a war. But both caught the attention of the world as they unfolded. And for all the distinctive political and cultural strands that each separately touched on, they both triggered an immediate and visceral sense that certain widely understood rules of appropriate behavior had been violated. Notably, in all of the commentary that swirled up around the two scandals, it was virtually impossible to find voices rooting for the rebellious underdogs, for the “runaway general” or the soccer players who turned on their coach.

What was at stake in each was a very basic idea: deference to the social hierarchy. Where people stand on the social ladder is a fact that governs all sorts of daily interactions, as well as how we build organizations, police one another’s behavior, and understand our own identity. It’s also something that social scientists are taking an increasing interest in. Talk of hierarchy or social rank may sound antiquated, especially in countries like America and France that each had its own revolution two centuries ago to overthrow an aristocratic political and social order. If all men are created equal, then thinking and talking about rank seems pernicious, a recipe for inflated egos on the one hand or crippled self-esteem on the other.

But psychologists who study status and power in social settings — and a growing number are — have found that human beings, in surprising ways, actually seem to thrive on a sense of social hierarchy, and rely on it. In certain settings, having a clear hierarchy makes us more comfortable, more productive, and happier, even when our own place in it is an inferior one. In one intriguing finding, NBA basketball teams on which large salary differentials separate the stars from the utility players actually play better and more selflessly than their more egalitarian rivals.

“Status is such an important regulating force on people’s behavior, hierarchy solves so many problems of conflict and coordination in groups,” says Adam Galinsky, a psychologist at Northwestern University’s Kellogg School of Management who did the research on social hierarchies on basketball teams. “In order to perform effectively, you often need to have some pattern of deference.”

None of this means that unquestioned obedience and institutionally mandated inequality are the building blocks of the ideal society. But research into social hierarchy does suggest that a taste for rank is a key part of the bundle of traits that make human beings such a successfully social species. Even the most equable among us have this inborn human understanding, psychologists say, and a sense of when its codes have been broken. That applies not only in situations with strictly delineated chains of command like the military or a pro sports team, but in any social situation. Knowing what’s right and wrong is often just a matter of knowing who’s the boss.

The French soccer rebellion and the loose words of McChrystal have both been harshly judged in the court of public opinion. In France, a nation where everyone from firefighters to doctors routinely goes on strike, the World Cup walkout was roundly condemned, with the nation’s newspapers, its former soccer stars, its minister of sport, its finance minister, and even President Nicolas Sarkozy expressing outrage. Here in the United States, McChrystal, a hugely accomplished soldier in a country ferociously proud of its military, was criticized across the political spectrum for his words and the way he allowed them to become public.

In announcing that he had accepted McChrystal’s resignation, Obama said his decision had been a necessary one, brought on by the fact that McChrystal’s conduct “undermines the civilian control of the military that is at the core of our democratic system.” Civilian control of the military is spelled out in the country’s Constitution to prevent the military from taking over — or even unduly influencing — the elected government. But in reasserting his authority, Obama was also addressing a more basic human need to know who is in charge.

Human beings are social animals, a fact that is central to how we as a species see the world. And like other social animals, whether wolves or chickens or chimpanzees, we sort ourselves into rankings. These rankings aren’t static, they can change over time, but they impose order on social interaction: In the wild, they create a framework for dividing up vital tasks among a group, and because they clearly codify differences in power or strength or ability, they prevent every interaction from disintegrating into an outright fight over mates or resources — someone’s rank tells you how likely she is to beat you in a fight, and you’re less likely to bother her if you already know.

To the extent that explicit social hierarchies are still with us, in the popularity pecking order of high school or the restrictive membership policies of certain country clubs, they’re seen as the unfortunate vestiges of an earlier era, or the ugly outgrowth of social insecurity. Yet psychologists are finding that our tendency toward social hierarchy is at once a more deep-seated and complex impulse than we thought.

For one thing, it turns out that people are ruthlessly clear-eyed judges of their own place in the social hierarchy. This is notable because they tend to be poor judges of just about everything else about themselves. Study after study has shown that people are incorrigible self-inflaters, wildly overestimating their own intelligence, sexual attractiveness, driving skills, income rank, and the like. But not social status; about that, they turn out to be coldly impartial.

For example, a team of social psychologists led by Cameron Anderson of the University of California, Berkeley, ran a study in which strangers were put into groups that met once a week and were tasked with solving various collaborative problems. After each meeting, the participants rated their own status in the group and that of their teammates. By and large, people’s self-evaluations matched up with how their peers rated them.

The explanation for this, the researchers argue, is that the costs of error are so high: Those few people who thought they ranked higher than they actually did were strongly disliked by their teammates. Overestimating one’s own intelligence or sex appeal may be simply annoying, but overestimating one’s social position can be a ticket to ostracism, and up until relatively recently in the timescale of human evolution, ostracism could have serious consequences, even death.

Other research has shown the unexpected dividends that having a clearly delineated hierarchy can pay even if it enshrines great status disparities. Studies show a host of physiological benefits to having high status, whether you’re a senior partner at a bank or the alpha male in a baboon troop. But while that may come as no surprise, there are also findings that suggest people derive psychic benefits from being low-status, as long as there’s no question about where they stand.

In a 2003 study by Larissa Tiedens and Alison Fragale, both then at Stanford University, subjects who displayed submissive body language were found to feel more comfortable around others who displayed dominant body language than around those who also displayed submissive body language — and to like those with more dominant posture better, as well. People, it seems, prefer having their evaluation of social hierarchy confirmed, even when they see themselves at the bottom of it.

These two linked findings — that people derive comfort from an established hierarchy and that they react particularly strongly to those who buck it — may help explain why McChrystal’s insubordinate comments and the French soccer mutiny were so compelling as public dramas: They were conflicts over who is in charge, and over what punishment the loser would suffer.

Perhaps the strongest, if the most surprising, evidence for the importance of clearly delineated social hierarchies is work that suggests that more inequality can make for better teams. While Celtics fans in particular have grown used to extolling the virtues of teams without superstars, where any player can be the hero on any given night, there’s some evidence that more rigid talent caste systems can actually create more teamwork. Galinsky and his fellow researchers found that NBA teams with greater pay disparities not only won more, but ranked higher in categories like assists and rebounding, suggesting a higher degree of cooperation. The clearer the status imbalance, the researchers argued, the less question there is about where one stands.

Good teamwork, in other words, requires a general acceptance of disparity. Everyone knows his job and does it, even if he’d rather have someone else’s. This is what the military is built on, and successful teams of all kinds. And that seems to be what General McChrystal and Les Bleus forgot.

Drake Bennett is the staff writer for Ideas.



Because K2 Is There—And Harder

The peak that elite climbers consider tougher than Everest is also more likely to kill


Climbers on an ill-fated K2 expedition in August 2008 use fixed ropes to ascend a dangerously steep gully called the Bottleneck.

On Aug. 1, 2008, after waiting weeks for good weather, 30 climbers from around the world set off before dawn from their high camp on the mountain called K2 straddling the border between China and Pakistan. The climbers were headed for the summit, at 28,251 feet. The idea was to get back to the camp at 25,800 feet before nightfall. In the early afternoon, on their way to the summit, two men fell to their deaths. Other climbers, for a variety of reasons, turned back. But 18 climbers pushed on and late in the day reached the top of the second-highest mountain in the world. Only half of those would make it down alive.

Mount Everest is the world’s highest mountain, but to climbers K2 is the real trophy. Almost any idiot, willing to spend enough money, can climb Everest. The mountain is high but not technically difficult. The Everest basecamp is a well-appointed village that has become a tourist destination itself. Professional guides eliminate most of what has traditionally been the essence of mountaineering: uncertainty. An army of Sherpas does the hardest work: hauling loads and fixing lines (ropes anchored to the mountain). Climbers, if the term can still be used, need only show up, wait in relative comfort and, when the weather is good, attach their harness to a fixed line and put one foot in front of the other. With fixed lines you don’t have to think where you’re going—or worry about falling, since an “ascender” clipped to the rope will lock up and hold you in place.

K2 is a much more serious challenge. For starters, K2 is aesthetically striking: Unlike Everest, which is the highest bump on a jumble of mountains, K2 stands alone. The mountain is an enormous pyramid of rock and ice with a complicated architecture of ridges and couloirs, or gullies, many of which are menaced by seracs, or unstable hanging glaciers. Seracs can break loose without warning and create enormous ice avalanches. Another danger: K2 is farther north than Everest, hence colder, and prone to more severe spells of bad weather.

To have climbed K2, then, has long been the preserve of only the most elite mountaineers. And mistakes often prove fatal. In the summer of 1986, for example, 13 people perished on the mountain. Ed Viesturs and David Roberts—in “K2: Life and Death on the World’s Most Dangerous Mountain” (2009), a broad survey of the carnage that is the mountain’s climbing history—calculate that one in every four climbers to summit the peak dies, compared with one in 19 on Everest. And getting up is only half the battle. Descending can often be harder—and more dangerous. You’re exhausted, and if you run out of daylight the going can be truly treacherous.

Aug. 1, 2008, as it happened, was the deadliest day in the mountain’s history. The climbing disaster was front-page news in the New York Times, although the paper published a picture of the wrong mountain (Gasherbrum IV, in Pakistan). Now Graham Bowley, a Times reporter, has written a detailed reconstruction of what happened—or might have happened—on K2 during those terrible hours.

Mr. Bowley conducted hundreds of interviews with most, but not all, of the survivors and others on the peak that day. “No Way Down: Life and Death on K2” is the result. The book’s title is curious, given that half of those who made it to the summit that day did, indeed, find a way down. Nevertheless, the book is brisk and engrossing. It starts with Dren Mandic, a Serb climber, early on the summit day, unclipping from a fixed line, planning to help another climber with a rope and instead falling to his death.

A high-altitude porter trying to help with the recovery of Mr. Mandic’s body then also slips to his death. Late in the day, while most of the summit climbers have yet to descend, a massive serac avalanche rips away hundreds of feet of fixed ropes that had been safeguarding a steep and exposed couloir called the Bottleneck. One climber is killed in the icefall; and most of those returning from the summit are trapped on the wrong side of the couloir. They must either climb it on their own in the dark or spend an awful night out in the open and wait for daylight. Some make it, some don’t. A few survive but are horribly maimed by frostbite.

Mr. Bowley reveals a deep sympathy for his characters and their quest—which he admits took some time for him to understand. “My immediate reaction was, Why should we care?” he writes. Eventually, though, he was inspired by their passion and commitment. “They had broken out of comfortable lives to venture to a place few of us dare go in our lives. They had confronted their mortality, immediately and up close. Some had even come back to K2 after serious injury in earlier years, attracted like flies to the light to some deeper meaning about themselves, human experience, and human achievement.”

“No Way Down” is high on drama but low on analysis. We see Rolf Bae swept away in an avalanche almost in front of his wife, Cecilie Skog, as the Norwegian couple descends the mountain. We hold our breath while Chhiring Dorje ties himself to a fellow Sherpa who has lost his ice ax; they make a nerve-racking climb down the Bottleneck. And then there are the two porters who venture back into the still-active avalanche zone to rescue a third. Only one of the three returns.

But Mr. Bowley is no climber, and it shows. Ice axes have picks, not blades; shafts, not handles. Ice screws aren’t “jammed” into ice, but, well, screwed. More seriously, he neglects to examine whether the climbers’ reliance on fixed ropes and high-altitude porters (standard practice on Everest but a recent development on K2) gave them a false sense of security. In fact, too many climbers too dependent on fixed lines will remind many readers of the similar disaster on Everest in 1996, chronicled the following year in Jon Krakauer’s “Into Thin Air.”

Why did so many people think topping out on K2 near nightfall was a good idea? And why weren’t the climbers better prepared to deal with a mountain infamous for its high death toll? Many of the climbers carried satellite phones to call friends from the summit. Apparently none of them carried stoves that could melt snow to make water that could save their lives or ward off frostbite. Reasonable people could have different answers to these questions, but Mr. Bowley doesn’t even raise them.

In climbing, style is everything. Climbing without bottled oxygen, Sherpa support or fixed lines is a bigger achievement than getting to the top by any means. In writing, too, style matters. Mr. Bowley gives a moving account of high-altitude porter Jumik Bhote’s last minutes of life with two injured Korean climbers. “Bhote wanted to reassure them,” Mr. Bowley writes, “but the cold crept over him and soon he found he had no strength to speak.”

Unfortunately, Mr. Bowley doesn’t know what those moments were actually like. He says that he extrapolated the scene from interviews with others, none of whom were there either. Such re-created scenes, however deftly described, mar “No Way Down.” Some will call this fiction and feel cheated. And unless you carefully flip back and forth between the chapters and footnotes, it’s hard to tell where fact stops and speculation starts.

In their account of the 2008 disaster in their own K2 book, Messrs. Viesturs and Roberts, both accomplished mountaineers, write that the tangled and contradictory accounts from survivors on the mountain that August day make it impossible to really know what happened. “Nonclimbing journalists,” they note, referring to mainstream media accounts of mountaineering in general, “can never seem to get our stories right.”

Mr. Bowley deserves credit for diligently trying to piece together what transpired high on K2 that fateful day. It’s a pity the result isn’t as informative as it is entertaining.

Mr. Ybarra is the Journal’s extreme-sports correspondent.





‘No Way Down’


Friday, August 1, 2008, 2 a.m.

Eric Meyer uncurled his tired body from the Americans’ tent into the jolt of the minus-20-degrees morning.

He was decked out in a red down suit and his mouth and nose were covered by his sponsor’s cold weather altitude mask. A few yards in front of him stood the Swede, Fredrik Strang, Meyer’s colleague in the American team, his six-foot, two-inch frame bulbous in a purple climbing suit, and his backpack weighed down by his thirteen-pound Sony video camera.

It was pitch black. There was no moon. Meyer put on his crampons and whispered a prayer. Keep me safe. “Let’s do our best,” he said out loud to Strang.


The two men nodded to each other, then kicked their boots into the tracks in the firm snow. The tracks led up the mountain, where they could see the headlamps of the twenty-nine climbers from the eight teams, bright spots on the steadily rising Shoulder.

“Don’t let your guard down,” said Strang. He tossed his ice axe in the air and caught it, just to make sure he was awake.

For nearly two months, they had waited for this moment. Now they were ready.

More than two thousand feet above them, the summit was still hidden in the night, which was probably a good thing. Soon the sun would rise over China. As the two men filed out onto the line above Camp Four, at about 26,000 feet the final camp before the summit, their breath rasping in the low-pressure air, the winds of the past days had vanished, just as their forecasters had promised. It was going to be a perfect morning on K2, and Meyer, a forty-four-year-old anesthesiologist from Steamboat Springs, Colorado, possessed confident hope that his skills in high-altitude sickness and injury would not be needed. Meyer’s team was one of eight international expeditions that were setting off on the final day of their ascent of K2, at 28,251 feet the second-tallest mountain on earth. K2 was nearly 800 feet shorter than Everest, the world’s highest peak, but it was considered much more difficult, and more deadly.

It was steeper, its faces and ridges tumbling precipitously on all sides to glaciers miles below. It was eight degrees latitude or 552 miles farther north than Everest, its bulk straddling the border between Pakistan to the southwest and China to the northeast, and, far from the ocean’s warming air, its weather was colder and notoriously more unpredictable. Over the decades, it had led dozens of mystified climbers astray into crevasses or simply swept them without warning off its flanks during sudden storms.

Yet K2’s deadliness was part of the attraction. For a serious climber with ambition, K2 was the ultimate prize. Everest had been overrun by a circus of commercial expeditions, by people who paid to be hoisted up the slopes, but K2 had retained an aura of mystery and danger and remained the mountaineer’s mountain. The statistics bore this out. Only 278 people had ever stood on K2’s summit, in contrast to the thousands who had made it to the top of Everest. For every ten climbers who made it up, one did not survive the ordeal. In total, K2 had killed at least sixty-six climbers who were trying to scale its flanks, a much higher death rate than for Everest. And of those who had presumed to touch the snows of its summit, only 254 had made it back down with their lives.

Waiting in his tent at Camp Four the previous night, Meyer had experienced a few dark hours of disquiet when the Sherpas cried out that the other teams had forgotten equipment they had promised to bring; he could hear them hunting through backpacks for extra ropes, ice screws, and carabiners. Although ropes had been laid on the mountain from the base to Camp Four, the expeditions still had to fix the lines up through the most important section, a gully of snow, ice, and rock called the Bottleneck. The Sherpas had only just discovered that one of the best Pakistani high-altitude porters (HAPs), who was to lead the advance rope-fixing team, had coughed up blood at Camp Two and had already gone back down.

Eventually the Sherpas quieted down, and Meyer assumed they must have found what they needed. By now, everyone was waking up. In the surrounding tents, alarms were beeping, there was the sound of coughing, stretching, zipping of suits, ice screws jingling, headlamps snapping on. The panic was over.

Yet when the advance team eventually left, it seemed to Meyer, listening to the swish of boots over snow outside his tent, that they were already late, and time was the last thing they wanted to waste on the mountain.

It was past 5 a.m. as Meyer and Strang pushed ahead together up the Shoulder, a steadily rising ridge of thick snow about a mile long. They prodded the snow with their ice axes to test the way. The snow, hard-packed, didn’t crack. They skirted the crevasses spotlighted in the arcs cast by their headlamps, some of the crevasses a few feet wide. Several yards off in the dark was a row of bamboo wands topped with ribbons of red cloth. The poles had been set out to guide climbers back to Camp Four later that night. But there was only a handful. The two men didn’t say much, but every few minutes Meyer made a point of calling out to Strang, checking for warning signs of high-altitude effects: a trip, or a mumbled answer.

“How you doing?”

“I’m fine!” said Strang loudly.

After half an hour, they came to the start of the ropes laid by the advance team. The two men were surprised to find them placed so low in the route. Weird. The Bottleneck was still a long way off and these slopes were not dangerous for an experienced climber. The ropes had obviously been put there to guide the climbers on the way back down. The lead group must have calculated they would still have enough rope to reach where it was truly needed.

Meyer was carrying his own quiver of bamboo wands, which he had intended to plant at intervals for the return journey. Strang had brought three thousand feet of fluorescent Spectra fish line to attach between the poles. But now they left the equipment in their backpacks.

Not required.

Exchanging shrugs, Meyer and Strang walked on. At 6:30 a.m., the sun rose, revealing the Bottleneck. It was the first time either of them had seen the gully up close. It was awesome, more frightening than they could have anticipated. About nine hundred feet ahead of them, its base reared up from the Shoulder, rising another few hundred feet later to an angle of 40 or 50 degrees and narrowing between stairs of dirty, broken brown rocks on both the right and left sides.

It was, Meyer could see, an unreliable mix of rock, ice, and snow. Another five hundred feet on, it turned up to the left toward a horizontal section called the Traverse, a steep ice face stretching a couple of hundred feet around the mountain, and exposed to a drop of thousands of feet below.

Directly above the Bottleneck was the serac—the blunt overhanging end of a hanging glacier—a shimmering, tottering wave frozen as it crashed over the mountainside, a suspended ice mountain six hundred feet tall, as high as a Manhattan apartment building and about half a mile long. It was smooth in places but large parts of it were pitted with cracks and crevasses.

This was the way to the summit, and for the whole of the Bottleneck and most of the Traverse, the mountaineers had to climb beneath the serac. There were other ways to the summit of K2—via the north side from China, for example, or on a legendary, nearly impossible route on the south face called the Magic Line—but the path up the Bottleneck and beneath the serac was the most established route, the easiest, and possibly the safest, as long as the serac remained stable.

The glacier moved forward slowly year by year. When it reached a critical point, parts of the ice face collapsed, hurling chunks down the Bottleneck. No climber liked to imagine what would happen if they got in the way. In past decades, there had been many reports of icefalls from the glacier, but in recent years the Great Serac on K2 had been quiet. The strengthening daylight revealed the changing shapes and textures of the glacier, transforming its colors from gray to blue to white as the cold shadows receded. It revealed to Meyer and Strang the serac’s true nature, something the earlier climbers would have missed because they had entered the Bottleneck in darkness. It looked to Meyer like giant ice cubes stacked on top of each other, and the ice had pronounced fissures running down it.

“Man, that’s broken up!” Meyer said in awe.

They had studied photographs in Base Camp taken a month earlier, which had shown cracking, but this was far worse.

As the outline of the mountain emerged from the dawn, Meyer could also make out clearly for the first time the snaking line of climbers up ahead. He had expected to find an orderly procession of bodies moving up the gully and already crossing the Traverse. Instead he was met with a sight that stopped him short: an ugly traffic jam of people still in the lower sections of the Bottleneck.

Only one climber seemed to have made good progress. He was sitting near the top of the Bottleneck in a red jacket, waiting for the muddle to resolve itself below.

What had caused the delay? As Meyer and Strang approached, there were distant calls from above for more rope.

“The rope is finished!”

Eventually it became clear: The advance group had not yet managed to fix rope to the top of the gully, and the climbers following behind had already caught up to them.

During the previous two weeks, the expeditions had convened cooperation meetings down in the tents at Base Camp. They had made an agreement detailing the sequence of who would climb when. The crack lead group of about half a dozen of the best Sherpas, HAPs, and climbers from each team would fix ropes up through the Bottleneck and the rest of the expeditions would follow rapidly through the gully without delay. The arrangement was meant to avoid overcrowding in the Bottleneck. They knew it was critical to get out from under the serac as fast as possible.

Well, thought Meyer, so much for that.

Everyone seemed to be staring at one another and wondering what to do next. After a few minutes some of the climbers at the bottom began bending to cut the ropes and pass them higher. Soon the wait was over and the line was edging on up again, though still slowly. Until that moment, Meyer had not appreciated the sheer number of people trying to climb the mountain: one of the highest concentrations of climbers to attempt a summit of K2 together on a single day.

A few were already turning back, because they were feeling cold or sick or today was not to be their lucky day. Probably about twenty-seven or so were still heading for the summit. It looked like being another busy day, like the ones in the 2004 or 2007 seasons when dozens reached the top. Meyer imagined the conditions up there. Everyone getting in the way. Koreans, Dutch, French, Serbs, and a string of other nationalities. Few speaking the same language. And they were probably so intent on avoiding one another that they were not focusing on how late it was nor were they looking up at the glacier to study it properly. Damn. They were not seeing how dangerous it looked.

Meyer watched the line of climbers struggling higher and had an uneasy feeling in his stomach. Beside him Strang said it out loud.

“S***, it’s late.”

They took off their backpacks and sat in the snow, staring up at the serac and, below it, the Bottleneck.

“There’s no way around that crowd,” said Meyer. “We’re going to get stuck behind them.”

They made a calculation. At the expeditions’ current speeds, they would reach the summit in the afternoon, perhaps early evening. Sunset. The climbers would be coming back down through the Bottleneck in the dark.

As far as Meyer was concerned, that multiplied the risk a thousandfold. It was already the most dangerous climb in the world. Descending in darkness through the Bottleneck was a no-no. He knew that everyone up there had a deadline for reaching the summit no later than three or four o’clock in the afternoon. What were they thinking?

He felt his courage drain away.

Yet turning back was hard, so bitter after the weeks of toil on the mountain. Like everyone else, he had invested thousands of dollars and nearly a year of his life preparing to come to Pakistan.

He might be able to return to Camp Four and try again the following day. But in reality, climbing up to these altitudes sucked so much out of a person, exposed a body to such pain, that they would have to descend to lower camps to recover before trying again. But the climbing season was ending. They had already pushed back this summit attempt because of bad weather. There was no time left. It was probably going to be the only shot Meyer had. If they failed today, they would have to wait another year. And who knew if he could ever return? Together, he and Strang went through all the scenarios. They had made it to the Shoulder of K2. Were they throwing away a lifetime’s chance to climb the mountain of their dreams?

Strang unpacked his camera and started to film the serac and the climbers beneath it. Meyer took some snapshots. The climbers in the Bottleneck were still barely moving.

They remembered the rain that had fallen in their first week in Base Camp, an odd event for K2 in June. Then there had been the weeks of winds and the overcast sky and snow piling up. And today the sun had risen into a clear blue heaven. It would be baking hot up there soon. If the serac was going to crack, it was because gravity was pulling it lower. The ice was also susceptible to the differences between the heat of day and the cold of night, which could cause the ice to expand and then contract, making an avalanche more likely. They didn’t trust the serac.

They packed up and slogged one and a half hours back down through the snows of the Shoulder to their tent at Camp Four. It was around 10 a.m. The day was perfect. Around them, hundreds of mountains stretched away in all directions, white and shining in the sun.

The camp was still and quiet. It perched on a flat part of the Shoulder and, relatively speaking, there was enough room for all the tents here, more than in some of the lower camps, where space was rationed and tents hung on ledges or were reinforced against the winds by ropes and poles.

They had expected to find a dozen or so climbers milling around the colored domes of nylon tents, taking in the rays, sharpening crampons, waiting around. Down at Base Camp, some mountaineers had said they thought the good weather was going to hold and so they had planned to climb up to Camp Four a day later than everyone else to avoid the main crowd and try for the summit on August 2. The second Korean team would be coming up soon, along with two Australians: one of Meyer’s colleagues, and another from the Dutch expedition who had been left out of the first summit ascent by his expedition’s leader.

But the other climbers were either inside their tents or still grappling with the slopes up from Camp Three. Meyer and Strang saw only one other person, an Italian. He had turned back earlier because of altitude sickness. Now he stuck his head out of his tent, next to theirs. His climbing jacket was plastered with “Fila” and other sponsors’ logos. He waved and then closed his tent.

Meyer could not help but peer back at the Bottleneck over his shoulder. Nearly a mile away, the climbers were distant dots, filing upward. They were higher on the gully now, about two-thirds of the way up. They were still crowded together dangerously. From this distance, they seemed to be not moving at all. Surely they would turn around soon. Did they have a death wish?

The two men ducked inside their tent. It was only four feet high, with no room to stand. They peeled off their down suits, the linings damp from sweat. They took out the radio, as big as a large cup of coffee, and crashed on top of the two sleeping bags that were spread parallel on the floor. They gulped at a bottle of melted water. It was hot in the tent. They didn’t feel like talking much. Soon they would have to start thinking about descending. It would take them a full day to get down.

About twenty minutes later, they were resting when they heard a faint cry outside. It came from far away. Strang thought he heard it again.

They went out of the tent to check the mountain but nothing had changed from when they had last looked. The line of climbers was still stuck in the Bottleneck. The radio was quiet. Then the Italian stumbled over. His name was Roberto Manni. “I see!” he said, pointing at the mountain, his face red. “I see!” Half a mile away, at the base of the Bottleneck, about six hundred feet below the main chain of climbers, a body was tumbling down the ice. A climber had fallen.

The small black figure slowed down and came to a stop just beneath some rocks. Meyer and Strang ran a few yards and stared up intently at the Bottleneck. The figure lay with its head pointing down the slope. Immediately, excited chatter started up on the radio.

“Very bad fall!” Meyer heard someone say. “He is alive. He is still moving. It is one of the Serbs.”



The Very Angry Tea Party

Sometimes it is hard to know where politics ends and metaphysics begins: when, that is, the stakes of a political dispute concern not simply a clash of competing ideas and values but a clash about what is real and what is not, what can be said to exist on its own and what owes its existence to an other. 

The seething anger that seems to be an indigenous aspect of the Tea Party movement arises, I think, at the very place where politics and metaphysics meet, where metaphysical sentiment becomes political belief.  More than their political ideas, it is the anger of Tea Party members that is already reshaping our political landscape.  As Jeff Zeleny reported last Monday in The Times, the vast majority of House Democrats are now avoiding holding town-hall-style forums — just as you might sidestep an enraged, jilted lover on a subway platform — out of fear of confronting the incubus of Tea Party rage that routed last summer’s meetings.  This fear-driven avoidance is, Zeleny stated, bringing the time-honored tradition of the political meeting to the brink of extinction.

It would be comforting if a clear political diagnosis of the Tea Party movement were available — if we knew precisely what political events had inspired the fierce anger that pervades its meetings and rallies, what policy proposals its backers advocate, and, most obviously, what political ideals and values are orienting its members.

Of course, some things can be said, and have been said by commentators, under each of these headings.  The bailout of Wall Street, the provision of government assistance to homeowners who cannot afford to pay their mortgages, the pursuit of health care reform and, as a cumulative sign of untoward government expansion, the mounting budget deficit are all routinely cited as precipitating events.  I leave aside the election of a — “foreign-born” — African-American to the presidency.

When it comes to the Tea Party’s concrete policy proposals, things get fuzzier and more contradictory: keep the government out of health care, but leave Medicare alone; balance the budget, but don’t raise taxes; let individuals take care of themselves, but leave Social Security alone; and, of course, the paradoxical demand not to support Wall Street, to let the hard-working producers of wealth get on with it without regulation and government stimulus, but also to make sure the banks can lend to small businesses and responsible homeowners in a stable but growing economy. 

There is a fierce logic to these views, as I will explain.   But first, a word about political ideals.

In a bracing and astringent essay in The New York Review of Books, pointedly titled “The Tea Party Jacobins,” Mark Lilla argued that the hodge-podge list of animosities Tea Party supporters mention fails to cohere into a body of political grievances in the conventional sense: they lack the connecting thread of achieving political power.  It is not for the sake of acquiring political power that Tea Party activists demonstrate, rally and organize; rather, Lilla argues, the appeal is to “individual opinion, individual autonomy, and individual choice, all in the service of neutralizing, not using, political power.”  He calls Tea Party activists a “libertarian mob” since they proclaim the belief “that they can do everything themselves if they are only left alone.”  Lilla cites as examples the growth in home schooling, and, amidst a mounting distrust in doctors and conventional medicine, growing numbers of parents refusing to have their children vaccinated, not to mention our resurgent passion for self-diagnosis, self-medication and home therapies.

What Lilla cannot account for, and what no other commentator I have read can explain, is the passionate anger of the Tea Party movement, or, the flip-side of that anger, the ease with which it succumbs to the most egregious of fear-mongering falsehoods.  What has gripped everyone’s attention is the exorbitant character of the anger Tea Party members express.  Where do such anger and such passionate attachment to wildly fantastic beliefs come from?

My hypothesis is that what all the events precipitating the Tea Party movement share is that they demonstrated, emphatically and unconditionally, the depths of the absolute dependence of us all on government action, and in so doing they undermined the deeply held fiction of individual autonomy and self-sufficiency that are intrinsic parts of Americans’ collective self-understanding. 

The implicit bargain that many Americans struck with the state institutions supporting modern life is that they would be politically acceptable only to the degree to which they remained invisible, and that for all intents and purposes each citizen could continue to believe that she was sovereign over her life; she would, of course, pay taxes, use the roads and schools, receive Medicare and Social Security, but only so long as these could be perceived not as radical dependencies, but simply as the conditions for leading an autonomous and self-sufficient life.  Recent events have left that bargain in tatters.

But even this way of expressing the issue of dependence is too weak, too merely political; after all, although recent events have revealed the breadth and depths of our dependencies on institutions and practices over which we have little or no control, not all of us have responded with such galvanizing anger and rage.  Tea Party anger is, at bottom, metaphysical, not political: what has been undone by the economic crisis is the belief that each individual is metaphysically self-sufficient, that one’s very standing and being as a rational agent owes nothing to other individuals or institutions.  The opposing metaphysical claim, the one I take to be true, is that the very idea of the autonomous subject is an institution, an artifact created by the practices of modern life: the intimate family, the market economy, the liberal state.  Each of these social arrangements articulates and expresses the value and the authority of the individual; they give to the individual a standing she would not have without them.

Rather than participating in arranged marriages, as modern subjects we follow our hearts, choose our beloved, decide for ourselves who may or may not have access to our bodies, and freely take vows promising fidelity and loyalty until death (or divorce) do us part.  There are lots of ways property can be held and distributed — as hysterical Tea Party incriminations of creeping socialism and communism remind us; we moderns have opted for a system of private ownership in which we can acquire, use and dispose of property as we see fit, and even workers are presumed to be self-owning, selling their labor time and labor power to whom they wish (when they can).  And as modern citizens we presume the government is answerable to us, governs only with our consent, our dependence on it a matter of detached, reflective endorsement; and further, that we intrinsically possess a battery of moral rights that say we can be bound to no institution unless we possess the rights of  “voice and exit.” 

If stated in enough detail, all these institutions and practices should be seen as together manufacturing, and even inventing, the idea of a sovereign individual who becomes, through them and by virtue of them, the ultimate source of authority.  The American version of these practices has, from the earliest days of the republic, made individuality autochthonous while suppressing to the point of disappearance the manifold ways that individuality is beholden to a complex and uniquely modern form of life.

Of course, if you are a libertarian or even a certain kind of liberal, you will object that these practices do not manufacture anything; they simply give individuality its due.  The issue here is a central one in modern philosophy: is individual autonomy an irreducible metaphysical given or a social creation?  Descartes famously argued that the self or subject, the “I think,” was metaphysically basic, while Hegel argued that we only become self-determining agents through being recognized as such by others whom we recognize in turn. It is by recognizing one another as autonomous subjects through the institutions of family, civil society and the state that we become such subjects; those practices are how we recognize and so bestow on one another the title and powers of being free individuals.

All the heavy lifting in Hegel’s account turns on revealing how human subjectivity only emerges through intersubjective relations, and hence how practices of independence, of freedom and autonomy, are held in place and made possible by complementary structures of dependence.   At one point in his “Philosophy of Right,” Hegel suggests love or friendship as models of freedom through recognition.  In love I regard you as of such value and importance that I spontaneously set aside my egoistic desires and interests and align them with yours: your ends are my desires, I desire that you flourish, and when you flourish I do, too.  In love, I experience you not as a limit or restriction on my freedom, but as what makes it possible: I can only be truly free and so truly independent in being harmoniously joined with you; we each recognize the other as endowing our life with meaning and value, with living freedom. Hegel’s phrase for this felicitous state is “to be with oneself in the other.”

Hegel’s thesis is that all social life is structurally akin to the conditions of love and friendship; we are all bound to one another as firmly as lovers are, with the terrible reminder that the ways of love are harsh, unpredictable and changeable.  And here is the source of the great anger: because you are the source of my being, when our love goes bad I am suddenly, absolutely dependent on someone for whom I no longer count and who I no longer know how to count; I am exposed, vulnerable, needy, unanchored and without resource.  In fury, I lash out, I deny that you are my end and my satisfaction, in rage I claim that I can manage without you, that I can be a full person, free and self-moving, without you.  I am everything and you are nothing.

This is the rage and anger I hear in the Tea Party movement; it is the sound of jilted lovers furious that the other — the anonymous blob called simply “government” — has suddenly let them down, suddenly made clear that they are dependent and limited beings, suddenly revealed them as vulnerable.  And just as in love, the one-sided reminder of dependence is experienced as an injury.  All the rhetoric of self-sufficiency, all the grand talk of wanting to be left alone is just the hollow insistence of the bereft lover that she can and will survive without her beloved.  However, in political life, unlike love, there are no second marriages; we have only the one partner, and although we can rework our relationship, nothing can remove the actuality of dependence.  That is permanent.

In politics, the idea of divorce is the idea of revolution.  The Tea Party rhetoric of taking back the country is no accident: since they repudiate the conditions of dependency that have made their and our lives possible, they can only imagine freedom as a new beginning, starting from scratch.  About this imaginary, Mark Lilla was right: it corresponds to no political vision, no political reality.  The great and inspiring metaphysical fantasy of independence and freedom is simply a fantasy of destruction. 

In truth, there is nothing that the Tea Party movement wants; terrifyingly, it wants nothing.  Lilla calls the Tea Party “Jacobins”; I would urge that they are nihilists.  To date, the Tea Party has committed only the minor, almost atmospheric violences of propagating falsehoods, calumny and the disruption of the occasions for political speech — the last already to great and distorting effect.  But if their nihilistic rage is deprived of interrupting political meetings as an outlet, where might it now go? With such rage driving the Tea Party, might we anticipate this atmospheric violence becoming actual violence, becoming what Hegel called, referring to the original Jacobins’ fantasy of total freedom, “a fury of destruction”? There is indeed something not just disturbing, but  frightening, in the anger of the Tea Party.


J.M. Bernstein is University Distinguished Professor of Philosophy at the New School for Social Research and the author of five books. He is now completing a book entitled “Torture and Dignity.”



Should This Be the Last Generation?

Have you ever thought about whether to have a child? If so, what factors entered into your decision? Was it whether having children would be good for you, your partner and others close to the possible child, such as children you may already have, or perhaps your parents? For most people contemplating reproduction, those are the dominant questions. Some may also think about the desirability of adding to the strain that the nearly seven billion people already here are putting on our planet’s environment. But very few ask whether coming into existence is a good thing for the child itself. Most of those who consider that question probably do so because they have some reason to fear that the child’s life would be especially difficult — for example, if they have a family history of a devastating illness, physical or mental, that cannot yet be detected prenatally.

All this suggests that we think it is wrong to bring into the world a child whose prospects for a happy, healthy life are poor, but we don’t usually think the fact that a child is likely to have a happy, healthy life is a reason for bringing the child into existence. This has come to be known among philosophers as “the asymmetry” and it is not easy to justify. But rather than go into the explanations usually proffered — and why they fail — I want to raise a related problem. How good does life have to be, to make it reasonable to bring a child into the world? Is the standard of life experienced by most people in developed nations today good enough to make this decision unproblematic, in the absence of specific knowledge that the child will have a severe genetic disease or other problem?

The 19th-century German philosopher Arthur Schopenhauer held that even the best life possible for humans is one in which we strive for ends that, once achieved, bring only fleeting satisfaction. New desires then lead us on to further futile struggle and the cycle repeats itself.

Schopenhauer’s pessimism has had few defenders over the past two centuries, but one has recently emerged, in the South African philosopher David Benatar, author of a fine book with an arresting title: “Better Never to Have Been: The Harm of Coming into Existence.” One of Benatar’s arguments trades on something like the asymmetry noted earlier. To bring into existence someone who will suffer is, Benatar argues, to harm that person, but to bring into existence someone who will have a good life is not to benefit him or her. Few of us would think it right to inflict severe suffering on an innocent child, even if that were the only way in which we could bring many other children into the world. Yet everyone will suffer to some extent, and if our species continues to reproduce, we can be sure that some future children will suffer severely. Hence continued reproduction will harm some children severely, and benefit none.

Benatar also argues that human lives are, in general, much less good than we think they are. We spend most of our lives with unfulfilled desires, and the occasional satisfactions that are all most of us can achieve are insufficient to outweigh these prolonged negative states. If we think that this is a tolerable state of affairs it is because we are, in Benatar’s view, victims of the illusion of pollyannaism. This illusion may have evolved because it helped our ancestors survive, but it is an illusion nonetheless. If we could see our lives objectively, we would see that they are not something we should inflict on anyone.

Here is a thought experiment to test our attitudes to this view. Most thoughtful people are extremely concerned about climate change. Some stop eating meat, or flying abroad on vacation, in order to reduce their carbon footprint. But the people who will be most severely harmed by climate change have not yet been conceived. If there were to be no future generations, there would be much less for us to feel guilty about.

So why don’t we make ourselves the last generation on earth? If we would all agree to have ourselves sterilized then no sacrifices would be required — we could party our way into extinction!

Of course, it would be impossible to get agreement on universal sterilization, but just imagine that we could. Then is there anything wrong with this scenario? Even if we take a less pessimistic view of human existence than Benatar, we could still defend it, because it makes us better off — for one thing, we can get rid of all that guilt about what we are doing to future generations — and it doesn’t make anyone worse off, because there won’t be anyone else to be worse off.

Is a world with people in it better than one without? Put aside what we do to other species — that’s a different issue. Let’s assume that the choice is between a world like ours and one with no sentient beings in it at all. And assume, too — here we have to get fictitious, as philosophers often do — that if we choose to bring about the world with no sentient beings at all, everyone will agree to do that. No one’s rights will be violated — at least, not the rights of any existing people. Can non-existent people have a right to come into existence?

I do think it would be wrong to choose the non-sentient universe. In my judgment, for most people, life is worth living. Even if that is not yet the case, I am enough of an optimist to believe that, should humans survive for another century or two, we will learn from our past mistakes and bring about a world in which there is far less suffering than there is now. But justifying that choice forces us to reconsider the deep issues with which I began. Is life worth living? Are the interests of a future child a reason for bringing that child into existence? And is the continuance of our species justifiable in the face of our knowledge that it will certainly bring suffering to innocent future human beings?

What do you think?

Readers are invited to respond to the following questions in the comment section below:

If a child is likely to have a life full of pain and suffering is that a reason against bringing the child into existence?

If a child is likely to have a happy, healthy life, is that a reason for bringing the child into existence?

Is life worth living, for most people in developed nations today?

Is a world with people in it better than a world with no sentient beings at all?

Would it be wrong for us all to agree not to have children, so that we would be the last generation on Earth?


Peter Singer is Professor of Bioethics at Princeton University and Laureate Professor at the University of Melbourne. His most recent book is “The Life You Can Save.”



‘Last Generation?’: A Response

The role of philosophers — and I take it, of The Stone — is to stimulate people to think about questions that they might not think about otherwise.  That so many people were roused to comment on my piece, “Should This Be the Last Generation?” — despite comments being closed at one point — suggests that it achieved this aim.  That said, it would have been good if some of those commenting had read the piece with a little more care, and all the way through.

The comments show two common misunderstandings of what I was trying to do.  As an example of the first, Rob Cook of New York writes that “this piece frames overpopulation and environmental destruction in the wrong terms. It assumes that developed countries cannot cut down on consumption.”  But “Last Generation?” isn’t a piece about overpopulation or environmental destruction, and it makes no assumptions about whether developed countries can cut down on consumption.  It just isn’t about that question at all.  It asks a deeper question, one not dependent on the environmental constraints on our planet, nor on the number of people now on Earth.  Even if there were only 1 million humans on this planet, all living at the level of an average citizen of, say, Switzerland, we could still ask whether it would be a good thing to have children in order to continue the species.  

Perhaps I muddied the waters by mentioning climate change.  I used the example of an imaginary response to the problem of climate change because those who believe that we have no reason to bring children into existence just because they will live good lives have — at least in theory — an easy solution to the problems our greenhouse gas emissions are causing to future generations:  if we could make sure, without killing or coercing anyone, that there would  be no future generations, we could happily continue with our polluting ways.

This was not, of course, intended to be a realistic solution to the problem of climate change.  It was, as I said, a thought experiment to test our attitudes to one of the views I had been discussing.  I expected that most people would reject this as an appallingly selfish way of allowing us to emit all the greenhouse gases without constraint, and that this would show that most of us do think that it is a good thing to bring more human beings into the world.  That, in turn, would suggest that most of us do think life is worth living, and it leads us back to the problem of the asymmetry with which I began — why many people think it is wrong to bring a child who will live a miserable life into the world, but do not think that the fact that a child will live a good life is reason enough for bringing a child into the world.  (For the world as it is today, any suggestion that it is desirable to bring a child into the world is likely to be met by a reference to the environmental problems that the existing population is causing; but in order to focus on the philosophical issue rather than the practical one, the question can also be considered in the hypothetical situation, in which our planet has only 1 million people on it.)

This brings me to the second common misunderstanding found in the comments.  I was surprised by how many readers assumed that my answer to the title question of my essay was “Yes.”  Perhaps they stopped reading before they got to the last paragraph, in which I say that I think that life is, for most people, worth living, and that a world with people in it is better than one without any sentient beings in it. 

So yes, Jon Norsteg of Pocatello, ID, I share your view that “life is not a desert of misery with rare good patches.”  To answer Gabi of London, UK, I’m glad that I have children — and grandchildren — and as far as I can see they are all leading worthwhile lives.  Perhaps you and other readers assumed that because I gave a sympathetic account of the views defended by David Benatar in his book, “Better Never to Have Been,” I was endorsing his position.  But philosophers frequently set out views that are opposed to their own, and seek to present them in their strongest possible form, in order to see if their own views can stand up to the best counter-arguments that can be put.  Philosophy is not politics, and we do our best, within our all-too-human limitations, to seek the truth, not to score points against opponents.  There is little satisfaction in gaining an easy triumph over a weak opponent while ignoring better arguments against your views.

By the end of my essay, it should have been clear that although I think Benatar puts up a better case for his conclusions than many people would imagine could be made, I do not think that he is right.  Nevertheless, I hope those with a serious interest in these issues will  read Benatar’s book.  They may end up disagreeing with him, but I doubt that they will think his position absurd.

The claims made by some readers that my essay reveals philosophers to be gloomy, depressed people are therefore wide of the mark.   Even further astray, however, are the suggestions that those who believe that life is not worth living are somehow committed by this position to end their own lives. Mmrader of Maryland, for instance, asks:  “If you think life is so pointless and painful, with most pleasure a fleeting illusion, why are you still here?”  I don’t, of course, think life is so pointless and painful, but someone who did might still decide to stick around — might indeed think that it would be wrong not to stick around — because he had the ability to reduce the amount of pain that others experience.

I also want to assure the many readers who pointed out that humans are not the only sentient beings that the author of “Animal Liberation” has not suddenly forgotten this important fact.  He just wanted to focus on the issue under discussion, and to avoid mixing it with the separate issues of whether nonhuman animals would be better off in a world without human beings in it, and if so, whether the gains for nonhuman animals would be sufficient to outweigh the losses to human beings.  Hence the suggestion that we compare a world with humans in it to one that has no sentient beings at all.

The sheer number of comments received is one sign that the essay raised an issue that people find perplexing.  Another sign of this is that a number of readers thought that Benatar’s position is absurd, whereas others thought it self-evident.  Since we do not often discuss such fundamental questions — D. Lee of New York City is right to suggest that there is something of a taboo on questioning whether life is worth living — I thought it would be interesting to take an online poll of readers’ opinions on some of the key questions.  But for technical reasons that proved impossible, and hence the questions appeared at the end of the essay, with an invitation to readers to answer them.  Only a few did, but many more expressed a general attitude, for or against the idea that existence is generally bad and we should not bring more children into the world.

On the negative side, I thought MClass (location: “gallivanting around Europe”) expressed it well when he or she wrote:

My life’s low points are nowhere near as severe as many other people’s; but that’s actually not the point. My own conclusion was “Why take the risk?” especially with my own kids — if the natural instinct of parents is to prevent the harm and promote the good of their offspring, why would I even contemplate bringing my kids into a rickety planet to be raised by imperfect parents?

I love my kids so much that I didn’t have them.

On the positive side, many readers expressed satisfaction with the lives that their children are leading, and saw this as an indication that their decision to have children was justified.  Suzanne from Wisconsin offered the distinct idea that there is objective value in some human activities, when she writes that “it would be tragic if no one was left to love/make art, music literature, philosophy, etc.” 

I wanted to know how those commenting were split on this fundamental issue of whether life is on the whole good or bad and whether we should have children, so I asked Julie Kheyfets, a Princeton senior, to go through the 1040 comments that were posted through midnight on June 9 and classify them on the basis of the attitudes they expressed.   She found that 152 of them did not address that issue at all, and another 283 addressed it but were undecided or neutral.  Of the remainder, 145 claimed that existence is generally bad and that we should not bring more children into the world, whereas 460 held that existence is generally good and that we should bring more children into the world.  In percentage terms, excluding those who did not address the issue at all, 52 percent of respondents held a positive attitude toward human existence; 16 percent held a negative attitude and 32 percent were neutral or undecided.

Since readers have the opportunity to recommend comments by other readers, I also asked Ms. Kheyfets to count the recommendations.  This yielded a higher score for those with negative views of existence, in part because fewer readers recommended comments that were neutral or undecided.  Excluding once again those comments that did not address the issue, 1870 readers recommended comments expressing negative views of our existence or opposing bringing children into the world, which was 29 percent of all recommendations, while 3109, or 48 percent, recommended comments taking a positive view of our existence or favoring bringing children into the world, with the remaining 23 percent of recommendations going to comments that were neutral or undecided.
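The attitude percentages reported above follow directly from the stated counts; for readers who want to verify the tally, here is a minimal check (the variable names are mine, and the figures are exactly those given in the essay):

```python
# Counts reported in the essay: of 1,040 comments, 152 did not
# address the question of whether existence is good or bad.
total_comments = 1040
no_opinion = 152   # did not address the issue at all
neutral = 283      # addressed it but undecided or neutral
negative = 145     # existence is generally bad
positive = 460     # existence is generally good

# Sanity check: the four categories account for every comment.
assert no_opinion + neutral + negative + positive == total_comments

# Percentages are taken over the 888 comments that addressed the issue.
addressed = total_comments - no_opinion

def pct(n):
    return round(100 * n / addressed)

print(pct(positive), pct(negative), pct(neutral))  # 52 16 32
```

The rounded shares (52 percent positive, 16 percent negative, 32 percent neutral) match the figures Singer reports.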

None of this allows us to draw any conclusions about the attitudes of people other than those who chose to comment, or recommend comments, but at least among that group, there is more support for a negative view of human existence and against having children than one might have expected.   (I put this forward purely as an interesting fact about this group of people; I am not suggesting that it has any bearing on whether that view is sound.)

Alas, to respond adequately to the many readers who understood exactly what I was attempting to do, and had serious points to make about it, would take more time than I have available.  Several readers suggested that my essay somehow ignored evolution, but my topic was what we ought to do, and as all the best writers on evolution — from Charles Darwin to Richard Dawkins — make clear, our knowledge of evolution does not tell us what we ought to do.

Among the many significant philosophical challenges in these comments I will only mention the one that came from those who, like Abe of Boston, asked whether happiness is the only metric by which to measure life.  That’s the view of the classic utilitarian tradition that traces its roots to the work of Jeremy Bentham, but many contemporary utilitarians, including myself, do not accept that view.  I have argued for a broader view that takes into account the preferences of all sentient beings, and seeks to satisfy them to the greatest extent possible.  But other utilitarians — or more broadly, consequentialists — take a more pluralistic view, including other values like justice, love, knowledge and creativity in their account of what is intrinsically good.  And of course many philosophers are not consequentialists at all.  If I spoke mostly of happiness and suffering in the essay, that is because most people do take these as important, and in a short essay it is impossible to discuss every aspect of these large questions.

These issues do matter.  Even if relatively few people engage in ethical thinking before deciding whether to reproduce, the decisions are important for those who do.  And since public policies affect the birthrate, we ought to be giving some thought to whether it is, other things being equal, good for there to be fewer people.  Of course, in our present environmental crisis other things are not equal, but the underlying question of the value of bringing human beings into the world should still play a role in decisions that affect the size of future generations.

If you find these questions interesting, there is a lot more to read.  The current philosophical debate owes most to Derek Parfit.  Part IV of his “Reasons and Persons” presents the issues about bringing people into existence in all their baffling complexity.  And for those who asked “What is a good life, anyway?” don’t miss the brief Appendix I (that’s the letter “I”, not a roman numeral).

There are also two relevant articles in the online Stanford Encyclopedia of Philosophy. I suggest you go first to Lukas Meyer’s article on “Intergenerational Justice,” and then, for a discussion of a more specific problem related to this issue, to the article on “The Repugnant Conclusion” by Jesper Ryberg, Torbjörn Tännsjö, and Gustaf Arrhenius.

Peter Singer is Professor of Bioethics at Princeton University and Laureate Professor at the University of Melbourne. His most recent book is “The Life You Can Save.”



The Long Flight From Tyranny

Firsthand accounts of Burma’s refugees, living difficult half-lives on a dangerous borderland

Reading about modern Burma can be an ordeal—like a journey into the abyss. The situation in this godforsaken country is so dire—and the result of such dunderheaded thuggery—that you wonder why you do it to yourself. On the upside, at least you’re not living there.

The many refugees who live along the Thai-Burma border—would-be escapees from the military-socialist regime that has ruled in Burma since 1962—aren’t quite living there either. But given their seemingly endless state of near statelessness, they may as well be. The refugees are victims of the Burmese government’s war on political opponents and on ethnic groups within the country, notably the Karen, who have been fighting for autonomy ever since Burma was granted independence from Britain in 1948.

In June 2009, civilians take shelter along the Moei River separating Thailand and Burma as they try to escape the fighting between the Burmese army and Karen guerrillas.

The Karen conflict has waxed and waned in intensity over the years, but the past couple of decades have been especially grim. The Burmese military has sought to purge the country of Karen using every debased tool at their disposal, from burning down villages to committing systematic rape.

As Mac McClelland writes in “For Us Surrender Is Out of the Question” (Soft Skull Press, 388 pages, $15.95), a sophistic argument continues over whether the government’s purge constitutes genocide. “Or as my father put it when I tried to impress upon him the seriousness of the situation in Burma, ‘but how does it compare to Sudan?’ ” She makes a convincing case that Burma and Sudan are not so far apart on the horror scale.

But Ms. McClelland has done more than write another broad catalog of misery. She has a tale to tell. She arrived in a Burmese refugee camp in northern Thailand in 2006 to teach English for a few weeks. A profane young bisexual from Ohio, she finds herself living with a group of prim, trim Karen men who spend their days monitoring Burmese atrocities and their nights competing in push-up contests. Quickly she discovers why the Far Eastern Economic Review dubbed the Karen “the world’s most pleasant and civilized guerrilla group.”

It is a fantastic clash of cultures, which Ms. McClelland describes with saucy relish. The men are initially resistant to her exuberance and warmth and then fascinated by it. She is in turn fascinated by their combination of naïveté and experience. They may not know about French kissing—preferring a form of kissing that involves a rub of the nose followed by a sharp sniff—but they can navigate their way through a jungle to evade the murderous Burmese army. Her writing is so vivid that you can almost smell the frying pork, the cigarettes and, alas, the overflowing latrines.

Ms. McClelland weaves into her tale a detailed, irreverent modern history of Burma that scythes through many of the arguments dictating the policy of other countries toward the Burmese government. Sanctions, she writes, may be well-intentioned, but they produce all kinds of unintended effects, such as forcing Burmese textile workers out of their jobs and into the sex industry. And while the West bleats, Asian countries—notably China, Singapore and South Korea—are more than happy to do business with Burma. As long as there is money to be made in Burma, she says, “there’s unlikely to be a cohesive or constructive policy of international financial disengagement.”

Ms. McClelland credits Condoleezza Rice who, as secretary of state, in 2006 opened the door for more Karen to leave the squalid camps in Thailand and emigrate to the U.S. They are grateful for the chance at a new life, Ms. McClelland notes; but they are also struggling to accept that their dream of returning home to an independent Karen state is fading.

Zoya Phan is a Karen who was born in a jungle village in 1980 but fled as a child to the Thai border camps. Her mother was a guerrilla soldier and her father a pro-democracy activist who was murdered at his home in Thailand in 2008, allegedly by the Burmese government. Ms. Phan was fortunate to receive an Open Society Institute scholarship that saved her from the refugee camps and allowed her to study in Bangkok and later England, where she now lives.

“Undaunted” (Free Press, 284 pages, $26) is an unremittingly wretched memoir of how Ms. Phan’s family was chased from its home by the Burmese army into the refugee camps, where thousands of people have spent years with no way out, prey to the weather, violent guards and the constant fear of Burmese reprisals. If you have ever doubted the value of Western aid to such refugees, this book will change your mind. When everything was darkest for Ms. Phan, it was help from the West that gave her hope. As she writes: “I am one of the lucky ones. I am lucky I am still alive. I am lucky I haven’t been raped. I am lucky that I am not still in a refugee camp with no work, no freedom. . . . I don’t want you to feel sorry for me, I want you to feel angry, and I want you to do something about it.”

Emma Larkin’s “Everything Is Broken” (Penguin Press, 271 pages, $25.95) follows her 2005 book, “Finding George Orwell in Burma,” an account of her journey through modern Burma searching for traces of Orwell’s time there as a policeman in the 1920s. It’s hard to find any shafts of light in this one. An American journalist who writes under a pseudonym, she returned to Burma in 2008 after the cyclone Nargis had killed nearly 140,000 people and devastated swaths of the country. She entered Burma as a tourist but managed to move from village to village meeting victims of the cyclone and of the government’s inept response. Given how difficult it was for foreign governments and aid groups to penetrate Burma at the time, Ms. Larkin pulls off a formidable piece of reporting. She also does a good job of decoding the generals who run Burma, who seem driven by paranoia, mysticism and a firm belief in the jackboot as a cure-all.

Karen Connelly’s “Burmese Lessons” (Nan A. Talese/Doubleday, 382 pages, $27.95) is a memoir of Ms. Connelly’s affair with a Burmese resistance fighter whom she meets on a reporting assignment to the Thai border in the mid-1990s. “Burmese Lessons” (which follows a superb novel by Ms. Connelly called “The Lizard Cage,” about Burma’s political prisoners) is a polished, literary memoir that includes, along the way, an account of Burma’s turbulent history. The book has a bit too much of the conscience-stricken Westerner swooning over the dark-skinned rebel, but Ms. Connelly is a hugely engaging writer. Burma itself—as Ms. Connelly well knows—is rather more complicated than one difficult love affair.

Mr. Delves Broughton is the author of “Ahead of the Curve: Two Years at Harvard Business School.”



Postulates Of the Pitch

Here’s a categorical imperative: Put the ball in the net.

In a blissfully funny, vintage Monty Python sketch, there is a soccer game between Germany and Greece in which the players are leading philosophers. The always formidable Germany, captained by “Nobby” Hegel, boasts the world-class attackers Nietzsche, Heidegger and Wittgenstein, while the wily Greeks, captained by Socrates, field a dream team with Plato in goal, Aristotle on defense and—a surprise inclusion—the mathematician Archimedes.

Toward the end of the keenly fought game, during which nothing much appears to happen except a lot of thinking, the canny Socrates scores a bitterly disputed match winner. Mayhem ensues! The enraged Hegel argues in vain with the referee, Confucius, that the reality of Socrates’ goal is merely an a priori adjunct of non-naturalistic ethics, while Kant holds that, ontologically, the goal existed only in the imagination via the categorical imperative, and Karl Marx—who otherwise had a quiet game—protests that Socrates was offside.

And there, in a philosophical nutshell, we have the inspired essence of the delightfully instructive “Soccer and Philosophy,” a surprising collection of essays on the Beautiful Game, written by soccer-loving loonies who are real-life philosophers, whose number includes the book’s editor, Ted Richards. Soccer purists, incidentally, who were born in England (like myself) prefer not to refer to soccer as soccer. It is football—as cricket is cricket. Even so, there is something for everyone in this witty and scholarly book.

For those of you who remain bewildered by the mysterious global appeal of the world’s most popular sport, for example, I can guarantee that this book will bewilder you even more—but in a good way! Attend to the enduring dictum of the working-class Sophocles of England, the legendary former manager of Liverpool Football Club, Bill Shankly. One of the book’s essays quotes his line: “Some people think football is a matter of life and death. I am very disappointed in that attitude. I can assure them it is much more serious than that.”

For those worried by dubious behavior on Wall Street, see the splendid essay “How to Appreciate the Fingertip Save,” in which Edward Winters quotes the guiding principle of Albert Camus—the existential novelist who played goalkeeper as a young man in Algeria: “All that I know of morality I learnt from football.”

Or, for those who believe that the irresistible universality of the game will be breaking through in America any day now, see the essay “The Hand of God and Other Soccer . . . Miracles?” in which Kirk McDermid cites St. Thomas Aquinas’ identification of the crucial elements that make an event truly miraculous.

Robert Northcott discusses Kierkegaard’s concept of anxiety in relation to penalty shots, but right now the Danish philosopher’s thinking is best applied to England’s dark, neurotic fear of what would be a thoroughly deserved national disgrace should the United States beat England in the teams’ opening World Cup match on Saturday.

Is there, perhaps, one too many high-flown footballing philosophies in the book? There is. (And there isn’t.) The claim that Nietzsche would have been an enthusiastic supporter of the London club, Arsenal, is curiously speculative when everyone knows that he would have rooted for the steamrollers of Europe, Inter Milan. Another of the essayists—raving about the artistic skills of one of the greatest footballers of our time, Cristiano Ronaldo—calls on the aesthetics of Plato and Aristotle to ponder: “Is Ronaldo a Modern Picasso?” To which we might be tempted to respond: “Maybe so. But could Picasso bend it like Beckham?”

And then where would we be? The answer to that is exactly where the authors of “Soccer and Philosophy” want us to be: thinking in fresh and intriguing ways about the Beautiful Game we thought we knew. “The Loneliness of the Referee,” Jonathan Crowe’s wonderful essay, is particularly appealing to all who, like myself, yell irrational abuse at that ultimate despot and strutting God of the stadium, the ref. But only when his unbelievably blind decisions go against us. The referee, in other words, is to blame for everything.

Mr. Crowe first reminds us that the existential philosopher Jean-Paul Sartre was an avid student of football—see his “Critique of Dialectical Reason,” where he remarks with undeniable wisdom: “In a football match, everything is complicated by the presence of the other team.”

But it is to Sartre’s earlier works, “Being and Nothingness” and “Existentialism and Humanism,” that Mr. Crowe appeals, revealing the loneliness of the referee in a new and sympathetic light. The referee’s ordeal is that he alone bears responsibility for his decisions and therefore the mortal fate of the game. Yet the referee who errs badly is within the rules of the game, because the rules of the game allow him to err badly. His irreversible blunders are final.

Think, if you will, of the fatal decision of poor Jim Joyce, who last week made the worst umpiring call in baseball history and ruined Armando Galarraga’s perfect game. But, unlike the forgiving, sweet baseball fans of the Detroit Tigers (and the guilt-ridden, tearful Mr. Joyce), the football fan is so passionately committed to the game—the only true game—that he never forgives or forgets (and the lonely referee never explains).

Ergo, the referee’s rationale: I whistle, therefore I am.

That does not help me much, actually. It helps the referee. It helps us understand his confident, fallible power. But from the fan’s point of view, the secular religion of football is all about mad, obsessive love and awesome bias, it is about irresistible skill and glory and, yes, a certain divine, beautiful transcendence. All the rest, according to the rewarding “Soccer and Philosophy,” is thinking aloud enthusiastically. Or, put it this way:


Mr. Heilpern is the author of “John Osborne: The Many Lives of the Angry Young Man.”



The Gulf Spill and the Limits of Science

TV has fueled unrealistic expectations of a quick fix.

My old editor used to say that no matter what people’s views might otherwise be, when the aliens land, everyone will call for the scientists. The continuing oil spill in the Gulf is a case in point. As the world watches the crude gushing relentlessly, everyone seems to be asking: Where are the scientists? And why can’t they fix this?

Perhaps we have become victims of our own fascination with the quick fixes that television science has offered. Since the days of Scotty on through Data and Geordi and the holographic doctor in the various Star Trek incarnations, pop culture has convinced us that engineers can save the day at the last minute—and literally in minutes—when presented with the challenge of staving off a warp core breach or curing a disease.

Alas, in the real world it doesn’t work that way. The progress of science more often occurs by baby steps than giant leaps. The road from basic knowledge to successful technology is a long and winding one, usually taking decades, not weeks or months.

Moreover, engineers don’t work well with vague suppositions and hypotheses. They need to have solid data and well-tested theories in order to design workable solutions to real-world problems. If the fundamental physics isn’t well understood, it is generally impossible to build a better toaster. I tried to explain this to NASA when a group of optimistic engineers suggested devoting several million dollars to funding engineering proposals to build technologies like a warp drive.

As the stakes get bigger—from the potential for devastating and catastrophic environmental disasters due to deep well oil and gas exploration to the unpredictable impacts of rapidly emerging new biotechnologies—we have to be willing to do something that goes against the grain. We must plan for the future and understand that rare events are not just possible, they are generally guaranteed. When the head of BP said we want to make certain that a disaster like Deepwater Horizon never happens again, he meant well. But never is a long time.

Only by openly and properly exposing the state of current knowledge for dealing with potential disasters can the public and politicians do reasonable risk assessment in deciding costs versus benefits. The job of scientists and engineers is not to make policy but to ensure that the state of current knowledge is known to all those involved in decision-making, including the public at large.

This means recognizing in advance not only that the Gulf disaster was possible but that something like it was inevitable. Only with this mind-set can we be prepared to do the sensible thing: Conduct the necessary research on the physics and geology of deep ocean processes in advance so that engineers will not be flying blind if and when a disaster eventually occurs.

In science it is natural to attempt to consider all possibilities as we attempt to predict the long-term future. Some of us get paid to think not just about the next decade or century, but literally billions or hundreds of billions of years ahead. This isn’t because the issues are practical, but because we might learn something about our current place in the cosmos by doing so.

Moreover, sometimes the purely hypothetical becomes practical. We are now in a position, for example, to track the orbits of all astrophysical objects in the solar system that might one day present a threat of catastrophic collision with the Earth. With enough advance notice we might even do something about it.

So it should be with the technologies that drive our current economy. With millions or billions riding on the outcomes and a populace hungry for energy, it may have been hard to argue that we need to fund basic research that might help stave off a future disaster before forging ahead with the technologies. But without the necessary knowledge base in advance we are guaranteed a repeat performance.

The economic costs associated with the next Gulf disaster, or New Orleans flood, or nuclear terrorist event will far outweigh the investment that might be made in advance to learn how to minimize the risks and to quickly and appropriately respond to problems.

There is a reason that “CSI” is on television, while in real life DNA evidence may take 20 years to be used, after the fact, to exonerate wrongfully convicted individuals. As I often tell people when I speak to a room with more Klingons than people: Star Trek is science fiction. The operative word here is fiction. The real world is far more complicated and far less well understood. That’s why we need good science now more than ever.

Mr. Krauss, a cosmologist, is director of the Origins Project at Arizona State University. His newest book, “Quantum Man: Richard Feynman’s Life in Science,” is forthcoming from W.W. Norton.



The Future of Our Illusions

Sometimes, the reason you don’t discuss the gorilla in the room is that you never notice it’s there. That, literally, is what cognitive neuroscientists Christopher Chabris and Daniel Simons discovered at Harvard a decade ago, using an ingeniously simple approach.

First, they created a short film of students passing a basketball to one another. The clip was largely unremarkable except for the fact that, about halfway through, an actor in a gorilla suit sauntered through the group of basketball-tossers, pounded his chest and then continued walking. Total screen time: nine seconds.

The mini-movie was then shown to experiment-subjects, who were told to keep track of the total number of passes that they observed during the minute-long film. Distracted by their task, about half the viewers reported never seeing the gorilla. They were shocked to learn of its existence.

In “The Invisible Gorilla,” Messrs. Chabris and Simons argue that the illusion of attention (as they categorize the gorilla demonstration) is but one of many “everyday illusions” that obscure our perceptions and cause us to place undeserved trust in our instincts and intuition.

The illusion of memory is another everyday problem. It shows itself in vivid but embellished recollections of events, based only loosely on reality. This illusion turns out to be especially common in the case of emotionally charged events, so-called flashbulb memories, such as 9/11 or the Challenger explosion. While we clearly remember more about such terrible days than the days that preceded them, the memories are much less accurate than we suppose—our recollection just isn’t that good and often includes details that are plausible but inaccurate.

We are also beset by the illusion of knowledge—we know less than we think—and the illusion of cause, where we mistake correlation for causation. Messrs. Chabris and Simons note, for example, that the appearance of autism symptoms soon after childhood vaccinations has widely—and mistakenly—been interpreted to suggest that vaccines are responsible for autism. Although scientifically discredited, this firmly held belief has led to many skipped vaccinations—and has left many children vulnerable to preventable disease.

While these illusions (and there are others, including the illusion of potential) would presumably be damaging enough, what really does us in is the illusion of confidence: We profoundly underestimate our capacity to be fooled. This susceptibility seems both pernicious and pervasive, exacerbating our other failings while remaining a criterion for professional success.

You don’t have to travel to Lake Wobegon (where all the children are above average) to find overconfidence. As Messrs. Chabris and Simons show—citing studies and statistics with admirable concision—most of us tend to overestimate our intelligence, attractiveness, sense of humor and even our driving skills. In all these areas, data suggest, most of us believe that we are above average.

Our overestimation of our abilities has especially profound consequences, the authors argue, when it causes us to lose sight of our limitations and forget how fragile our perceptions may be. Our recollection of events may be more flawed than we know, but it is overconfidence in our memories that can lead us to send an innocent man to jail based on eyewitness testimony. Similarly, our knowledge of financial systems might be less robust than we recognize, but it is overconfidence in our spreadsheet models that can lead us to ruin.

Equally troubling, Messrs. Chabris and Simons say, is our tendency to overvalue self-assurance and confuse confidence with competence. Not only do we seek confident leaders, doctors, executives, advisers and workers but we believe their confidence reflects their ability and knowledge. On the one hand, this makes sense: We tend to speak more confidently about things we know best, so when people speak with confidence we assume that outward self-assurance is evidence of an underlying capability.

The trouble, as Messrs. Chabris and Simons explain, is that confidence is a trait, a “consistent quality that varies from one person to the next, but has relatively little to do with one’s underlying knowledge or mental ability.” An intrinsically confident person might exude confidence even when he knows very little, while someone less confident might appear hesitant even when he knows a lot. Our trouble recognizing this distinction can lead us to trust the wrong people and to underestimate the aptitude of those who are most self-aware.

Because the authors are university professors (Mr. Chabris teaches at Union College, Mr. Simons at the University of Illinois), it is perhaps only natural that “The Invisible Gorilla” is organized in the manner of a survey course, taking the reader through successive illusions, each introduced through an accessible example and supported with experiments from the scientific literature.

As a thoughtful introduction to a captivating discipline, the book succeeds wonderfully. And there may be something usefully chastening about its message: that we cannot always trust what seems most certain to us, especially when our judgments are aimed at ourselves. As the authors suggest, a state of illusion seems to be part of our neurological make-up. It might be a good idea—one might think—for us to cultivate a routine skepticism toward all sorts of supposed certainties.

But perhaps we should pause before taking such life lessons to heart. It is true that readers who heed the admonitions of Messrs. Chabris and Simons may be rewarded with a clearer view of the world. But I doubt that they’d be happier, and I’m not convinced that the world would be a better place. The authors seem to recognize the frequency of illusions but may underestimate their positive value and importance in our daily lives.

It is nice to remember milestone events like your wedding or your daughter riding a bike for the first time in exquisite (if embellished) detail—such granularity helps sustain the memory, enabling us to return easily to the moment and re-experience the original joy. And what’s the harm in allowing ourselves to go through most days believing that we have a basic understanding of the world around us or in seeing our potential as greater than it is?

Unsparing awareness of our limitations might be paralyzing and would certainly make for a rather dismal life. I also wonder how many great achievements would have been possible without an entrepreneur’s excessive confidence, an artist’s grand plans, a researcher’s immodest ambitions.

The authors effectively note how illusion-filled narratives can be misleading, those we tell ourselves and those we so often read. Messrs. Chabris and Simons call out specific magazine essays, newspaper articles and the entire genre of CEO hagiography. Writers, they show, often craft their stories by assuming unproved causal relations and by also assuming that they have access to underlying truths (say, about a subject’s motives) that are, in fact, unknowable.

Yet narratives, for better or worse, are how we understand life. By deliberately rejecting so many narrative conceits, Messrs. Chabris and Simons have inadvertently robbed “The Invisible Gorilla” itself of an overarching storyline that might have made their analysis even more compelling and might have ensured that their insights, like that gorilla, would not get overlooked. But that’s just my intuition—or illusion.

Dr. Shaywitz is director of strategic and commercial planning at a drug development company in South San Francisco and an adjunct scholar at the American Enterprise Institute.




See also:

‘The Invisible Gorilla’

Chapter 1

‘I think I would have seen that’

Around two o’clock on the cold, overcast morning of January 25, 1995, a group of four black men left the scene of a shooting at a hamburger restaurant in the Grove Hall section of Boston. As they drove away in a gold Lexus, the police radio erroneously announced that the victim was a cop, leading officers from several districts to join in a ten-mile high-speed chase. In the fifteen to twenty minutes of mayhem that ensued, one police car veered off the road and crashed into a parked van. Eventually the Lexus skidded to a stop in a cul-de-sac on Woodruff Way in the Mattapan neighborhood. The suspects fled the car and ran in different directions.

One suspect, Robert “Smut” Brown III, age twenty-four, wearing a dark leather jacket, exited the back passenger side of the car and sprinted toward a chain-link fence on the side of the cul-de-sac. The first car in pursuit, an unmarked police vehicle, stopped to the left of the Lexus. Michael Cox, a decorated officer from the police antigang unit who’d grown up in the nearby Roxbury area, got out of the passenger seat and took off after Brown. Cox, who also is black, was in plainclothes that night; he wore jeans, a black hoodie, and a parka.


Cox got to the fence just after Smut Brown. As Brown scrambled over the top, his jacket got stuck on the metal. Cox reached for Brown and tried to pull him back, but Brown managed to fall to the other side. Cox prepared to scale the fence in pursuit, but just as he was starting to climb, his head was struck from behind by a blunt object, perhaps a baton or a flashlight. He fell to the ground. Another police officer had mistaken him for a suspect, and several officers then beat up Cox, kicking him in the head, back, face, and mouth. After a few moments, someone yelled, “Stop, stop, he’s a cop, he’s a cop.” At that point, the officers fled, leaving Cox lying unconscious on the ground with facial wounds, a concussion, and kidney damage.

Meanwhile, the pursuit of the suspects continued as more cops arrived. Early on the scene was Kenny Conley, a large, athletic man from South Boston who had joined the police force four years earlier, not long after graduating from high school. Conley’s cruiser came to a stop about forty feet away from the gold Lexus. Conley saw Smut Brown scale the fence, drop to the other side, and run. Conley followed Brown over the fence, chased him on foot for about a mile, and eventually captured him at gunpoint and handcuffed him in a parking lot on River Street. Conley wasn’t involved in the assault on Officer Cox, but he began his pursuit of Brown right as Cox was being pulled from the fence, and he scaled the fence right next to where the beating was happening.

Although the other murder suspects were caught and that case was considered solved, the assault on Officer Cox remained wide open. For the next two years, internal police investigators and a grand jury sought answers about what happened at the cul-de-sac. Which cops beat Cox? Why did they beat him? Did they simply mistake their black colleague for one of the black suspects? If so, why did they flee rather than seek medical help? Little headway was made, and in 1997, the local prosecutors handed the matter over to federal authorities so they could investigate possible civil rights violations.

Cox named three officers whom he said had attacked him that night, but all of them denied knowing anything about the assault. Initial police reports said that Cox sustained his injuries when he slipped on a patch of ice and fell against the back of one of the police cars. Although many of the nearly sixty cops who were on the scene must have known what happened to Cox, none admitted knowing anything about the beating. Here, for example, is what Kenny Conley, who apprehended Smut Brown, said under oath:

Q: So your testimony is that you went over the fence within seconds of seeing him go over the fence?

A: Yeah.

Q: And in that time, you did not see any black plainclothes police officer chasing him?

A: No, I did not.

Q: In fact, no black plainclothes officer was chasing him, according to your testimony?

A: I did not see any black plainclothes officer chasing him.

Q: And if he was chasing him, you would have seen it?

A: I should have.

Q: And if he was holding the suspect as the suspect was at the top of the fence, he was lunging at him, you would have seen that, too?

A: I should have.

When asked directly if he would have seen Cox trying to pull Smut Brown from the fence, he responded, “I think I would have seen that.” Conley’s terse replies suggested a reluctant witness who had been advised by lawyers to stick to yes or no answers and not volunteer information. Since he was the cop who had taken up the chase, he was in an ideal position to know what happened. His persistent refusal to admit to having seen Cox effectively blocked the federal prosecutors’ attempt to indict the officers involved in the attack, and no one was ever charged with the assault.

The only person ever charged with a crime in the case was Kenny Conley himself. He was indicted in 1997 for perjury and obstruction of justice. The prosecutors were convinced that Conley was “testilying”—outlandishly claiming, under oath, not to have seen what was going on right before his eyes. According to this theory, just like the officers who filed reports denying any knowledge of the beating, Conley wouldn’t rat out his fellow cops. Indeed, shortly after Conley’s indictment, prominent Boston-area investigative journalist Dick Lehr wrote that “the Cox scandal shows a Boston police code of silence . . . a tight inner circle of officers protecting themselves with false stories.”

Kenny Conley stuck with his story, and his case went to trial. Smut Brown testified that Conley was the cop who arrested him. He also said that after he dropped over the fence, he looked back and saw a tall white cop standing near the beating. Another police officer also testified that Conley was there. The jurors were incredulous at the notion that Conley could have run to the fence in pursuit of Brown without noticing the beating, or even seeing Officer Cox. After the trial, one juror explained, “It was hard for me to believe that, even with all the chaos, he didn’t see something.” Juror Burgess Nichols said that another juror had told him that his father and uncle had been police officers, and officers are taught “to observe everything” because they are “trained professionals.”

Unable to reconcile their own expectations—and Conley’s—with Conley’s testimony that he didn’t see Cox, the jury convicted him. Kenny Conley was found guilty of one count each of perjury and obstruction of justice, and he was sentenced to thirty-four months in jail. In 2000, after the U.S. Supreme Court declined to hear his case, he was fired from the Boston police force. While his lawyers kept him out of jail with new appeals, Conley took up a new career as a carpenter.

Dick Lehr, the journalist who reported on the Cox case and the “blue wall of silence,” never actually met with Kenny Conley until the summer of 2001. After this interview, Lehr began to wonder whether Conley might actually be telling the truth about what he saw and experienced during his pursuit of Smut Brown. That’s when Lehr brought the former cop to visit Dan’s laboratory at Harvard.

Excerpted from “THE INVISIBLE GORILLA And Other Ways Our Intuitions Deceive Us,” by Christopher Chabris and Daniel Simons.



Indonesia’s last frontier

Indonesia is a democracy. But many Papuans do not want to be part of it

THE hotel provides free mosquito repellent and closes its pool bar before dusk to prevent guests from contracting malaria. The former Sheraton still offers the best accommodation in Indonesia’s little-visited province of Papua, catering mainly to employees of its owner, Freeport-McMoRan, an American mining giant. Freeport protects its staff from more than malaria. Since July 2009 a spate of mysterious shootings along the road linking the hotel in Timika to the huge Grasberg mine up in the mountains has killed one employee, a security guard and a policeman and wounded scores of others. Workers are now shuttled from Timika to the mine by helicopter.

Before the pool bar closes, a jolly crowd of Freeport employees have their beers stored in a cool box. They take it to one of the—mostly dry—seafood restaurants in town. As in the rest of Papua, all formal businesses are run by Indonesian migrants who are predominantly Muslim. The mainly Christian Papuans sit on the pavements outside selling betel nuts and fruit.

“We are not given licences to run a business,” says a young Papuan independence activist who does not want to be named. He sits in a car with two bearded guerrilla fighters of the West Papua Revolutionary Army, the militant wing of the Free Papua Movement (OPM). For more than 40 years the OPM has fought a low-intensity war to break away from Indonesia. Partly because of restrictions on reporting it, this is one of the world’s least-known conflicts. It is getting harder to keep secret.

Unlike its independent neighbour, Papua New Guinea, which occupies the eastern half of the vast island, the western part used to be a Dutch colony. During the cold war the United Nations said there should be a plebiscite to let Papuans decide their future. But Indonesians, the Papuans say, forced roughly 1,000 Papuan leaders at gunpoint to vote unanimously for integration into their country. This “act of free will” has been contested ever since.

The two bearded rebels drive around town to evade security forces. “Indonesia might be a democracy, but not for us Papuans,” says one. “They gave us autonomy which is a joke. We are different from those Indonesians. Just look at our skin, our hair, our language, our culture. We have nothing in common with them. We beg President Obama to visit Hollandia when he comes to Indonesia in June to witness the oppression with his own eyes,” says the other, using the colonial name for Papua’s capital, Jayapura. (America’s president is due to visit the country on June 14th.) In the 1960s indigenous Papuans made up almost the whole population of Indonesia’s largest province; since then immigration from the rest of the country has reduced their share to about half.

The two rebels do not want to take responsibility for the shootings along the road to the Grasberg mine, but leave no doubt either about their sympathies or their intentions. “The Indonesian shopkeepers, the soldiers and the staff of Freeport are all our enemies. We want to kill them and the mine should be shut,” they say. “Grasberg makes lots of money but we Papuans get nothing. When we achieve independence, we shall kick out the immigrants and Freeport and merge our country with Papua New Guinea.”

The car draws up in front of the seafood restaurant where the Freeport staff are becoming ever more cheerful, unaware that rebels are watching them. Freeport is the biggest publicly traded copper company in the world, and the Grasberg mine remains its main asset. The complex, the world’s largest combined copper and gold mine, is enormously profitable. It provided $4 billion of Freeport’s operating profit of $6.5 billion in 2009. The mining facilities are protected by around 3,000 soldiers and police, who were supported by Freeport with $10m last year, according to the company. In December 2009 the police shot dead Kelly Kwalik, one of the OPM’s senior commanders, whom the police blamed for a series of attacks on Freeport’s operations, a charge he repeatedly denied.

Foreign journalists are restricted in their travel to Papua. Your correspondent was lucky enough to slip through the net. In the towns, it is clear that the guerrillas generally keep a low profile. But in the central highlands they are free to operate more openly. This is their heartland. Anti-Indonesian feelings run high because of the sometimes brutal suppression of the OPM by the army.

A well-hidden rebel camp in the Baliem valley—home to a Stone Age tribe discovered and disturbed by outsiders only in the 1930s—lies a few kilometres from a small army base. The guerrillas conduct military training with villagers who use spears, bows and arrows, all without metal heads. Students with mobile phones and video cameras teach the farmers revolutionary rhetoric. They have lost faith in peaceful means of protest and hope to provoke a bloody confrontation that will push Papua on to the international agenda. So far the government has refused to talk to the fractious OPM. Unless it changes its mind, it risks being unable to prevent the young radicals from kicking off a revolution.



Executive honor

Can an ‘MBA oath’ fix what’s wrong with business?

The portrait of the American business world that has emerged from the financial crisis is rife with unapologetic amorality. Mortgage bankers encouraged people to take out home loans they couldn’t afford, distinguished investment houses peddled deals to their clients that they themselves wouldn’t consider investing in, and officers at those same institutions continued to pay themselves millions of dollars even as they were bailed out by the federal government, all while insisting — even in front of Congress — that this is simply how the game is played.

Several years back, of course, it wasn’t Wall Street, but firms like WorldCom and Enron and Tyco that shocked public sensibilities, their leaders engaging in outright fraud and larceny to pump up stock prices, their own compensation, or both. Whether inside or outside the law, the last decade has given many Americans the nagging sense that the members of the business elite play by a set of rules far removed from those of the society that ultimately supports them.

The political solution tends to be fresh regulation, such as the president’s push to rein in the big banks. But recent years have also seen the emergence of an alternative from within the business world itself — more specifically, from within the graduate schools that produce a disproportionate number of business leaders. The idea is both radical and straightforward: to make executives more moral, changing business itself by changing the way its leaders think about their responsibilities. And a subtly powerful way to do that, according to a few management scholars and a growing number of business school students, is simply by creating an oath: a pledge of ethical behavior, modeled on medicine’s Hippocratic Oath, that would be administered as graduates embark on their careers.

The invention of the MBA oath is generally credited to Nitin Nohria and Rakesh Khurana, two professors at Harvard Business School who have written papers on the topic. In 2004, the Richard Ivey School of Business, a top Canadian business school, instituted a pledge that all students were required to take on graduation. In 2006, Thunderbird School of Management, a school of international business in Glendale, Ariz., instituted an oath of its own, and the next year Columbia Business School put in place an honor code meant to bind its students beyond graduation and into their working lives.

The oath that has gotten the most attention, though, is at Harvard. Last year, a group of Harvard business students organized a voluntary oath ceremony at the school’s graduation — over half the graduating class, about 550 freshly minted MBAs in all, took the pledge, promising to “create value responsibly and ethically.” A similar ceremony will be held later this month for the class of 2010. Some of those student founders, working with the United Nations and the World Economic Forum, have set out to seed an international movement: “The Oath Project” website now has pledges from 2,800 students at business schools in the United States and abroad, and two of the project’s founders have just published a book on the oath and its potential. The oath gained fresh prominence earlier this month, when Nohria was named dean of Harvard Business School.

“As managers, we’re in charge of a lot of resources, both human resources and capital resources, and we’ve seen that over the last few years, business has gone wrong, and a lot of people have lost jobs,” says Peter Escher, a 2009 Harvard Business School graduate, one of the founders of the school’s MBA Oath and coauthor of the new book. “An oath is a way to lay out the boundaries around the profession, to develop an agreed-upon body of knowledge and ethics.”

A spoken promise may seem like flimsy armor against the outsized financial incentives that come into play in corporate corner offices, but proponents point to evidence from behavioral science of the surprising power of norms and symbols in shaping our behavior. The sanctity of the Hippocratic Oath itself, they argue, along with the gravity with which everyone from military officers and lawyers to accountants and architects treat their codes of professional conduct, suggests how effective communal standards and ethical self-policing can be. A code, in other words, can be a powerful thing.

Those not won over by the business oath argue that its proponents have things backwards. A few words, no matter how earnestly intoned, are unlikely to change anyone’s long-term behavior — the Hippocratic Oath doesn’t create a doctor’s sense of duty, it merely reflects it, and that sense of professional obligation is the product of years of exposure to medicine’s almost tribal code of behavior.

To its adherents, the oath is part of an attempt to create such a value system for business — to turn management into a profession with aims beyond simply keeping share prices high. But to some of its critics, the oath is just fuzzy thinking. Businesses, they argue, are meant to make money for their owners and shareholders, and in so doing to help grow the economy. To the extent that business has a higher purpose, it is that. It’s up to a nation’s citizens and elected officials to rein in that behavior when it stops serving the public good. The oath, in other words, sits squarely in the middle of a larger debate over whether it is possible to articulate a set of higher principles for business at all.

Throughout Western history, most revolutions have had their oaths. The Tennis Court Oath helped bring on the French Revolution; the Protestation oath failed to avert the English Civil War. The Declaration of Independence was not only an announcement of rebellion but a solemn promise of solidarity, its signers mutually pledging to each other “our lives, our fortunes and our sacred honor.” (That pledge held.)

The idea of an MBA oath first occurred to Nohria in 1996. He was on sabbatical at London Business School and talking to a colleague, the management scholar Sumantra Ghoshal, about various parts of the traditional business school curriculum that hindered graduates in the real world.

In particular, Ghoshal was a vocal critic of the “maximizing shareholder value” mantra, the idea, ascendant since the early 1980s, that share price should be the dominant yardstick for measuring the performance of managers. Ghoshal and others had come to believe that that worldview allowed managers to ignore their other, equally legitimate responsibilities: everything from the welfare of their own workers to the environmental impact of their products. And with compensation linked to share price, executives were inevitably tempted to do everything they could, legal and otherwise, to goose stock prices quarter by quarter, even if that damaged the company over the long run.

As Nohria recalls, he began to wonder what sort of thing he could teach students as a counterforce.

“Out of the clear blue — my sister had just graduated from medical school — I thought, what if managers actually had something that was like that?” he says. “What if we provided something that was aspirational, some guidance as to what their responsibilities actually were?”

Although it may feel quixotic to invent an oath without the weight of tradition behind it, the Hippocratic Oath itself gained prominence in the 20th century as a response to troubling modern developments. Despite its roots in ancient Greece, only a minority of American medical students took the oath until World War II, as Max Anderson, a cofounder of Harvard’s MBA Oath, has written. But in the wake of the revelations about brutal Nazi medical experimentation, schools rushed to adopt it as a way to unequivocally set down the limits of medical ethics.

And it is not only history that gives weight to an oath, psychology suggests. For example, the behavioral economist Dan Ariely of Duke University has done a study showing that swearing on a Bible makes even atheists more honest, and that signing honor codes makes people less likely to cheat. Work by the psychologist Robert Cialdini and others has found that making a public commitment to a cause — either by speaking it aloud or writing it down — makes people firmer in their commitment, and more likely to act on it.

The effects that have been found, however, are mostly short-term ones, and there’s no evidence a pledge can shape behavior years later. For his part, Ariely (author of the forthcoming book “The Upside of Irrationality”) is skeptical that a pledge alone will change corporate decision-making. If the goal is to get executives and managers to think more broadly about societal good, a more effective measure, he argues, would be to require the reporting of, say, pollution, or the number of patents created, or the number of jobs created, in each quarterly report. The trick, Ariely says, is “to keep these things on top of people’s minds.”

And as for the Hippocratic Oath itself, its purpose may be primarily ceremonial. Asked what role the oath played in the professionalization of American medicine, Kenneth Ludmerer, a physician and historian of medicine at Washington University in St. Louis, responded, “Essentially none. It’s a nice ceremony, but its impact and role is in effect zero.”

The oath’s champions do not claim that it can transform business alone. They see i