Five Best Groundbreaking Memoirs

The Education of Henry Adams

By Henry Adams (1918)

With its narrator dolefully pointing the way toward modernism, insistently (and convincingly) writing in the third person, “The Education of Henry Adams” is a one-man kaleidoscope of American history: its politics and pretenses, its turn from a patrician, Victorian society toward the unknowable chaos of the 20th century. Adams regarded his efforts at education as a lifelong exercise in passionate failure. Though 100 copies of the book were printed privately in 1907, he withheld general publication until after his death in 1918. What he didn’t write revealed an intimate truth: Adams omitted the story of his wife’s depression and suicide in 1885. Here was a seminal memoir, required reading for every student of intellectual history, in which the Rubicon of a life had been left out! Adams lifts the veil just twice: once when describing his sister’s death, and again when he returns to America and visits the bronze statue at Rock Creek cemetery in Washington, commissioned from Augustus Saint-Gaudens in his wife’s honor.

Survival in Auschwitz

By Primo Levi (1958)

From the opening sentence—“I was captured by the Fascist Militia on 13 December 1943”—this searingly quiet account by Primo Levi, an Italian chemist, of his 10 months in Auschwitz is a monument of dignity. First published in Italy in 1947 with a title that translates as “If This Is a Man,” the book became a blueprint for every such story that followed, not only as a portrait of the camp’s atrocities but also as a testament to the moments when humanity prevailed. On a mile-long trip with a fellow prisoner to retrieve a 100-pound soup ration, Levi begins to teach his friend “The Canto of Ulysses” from Dante. Completing the lesson becomes urgent, then vital: “It is late, it is late,” Levi realizes, “we have reached the kitchen, I must finish.” No candle has ever shone more brilliantly from within the caverns of evil.

Slouching Towards Bethlehem

Joan Didion in 1981.

By Joan Didion (1968)

If Joan Didion’s first nonfiction collection now seems tethered to the 1960s, it’s partly because so many writers would try to imitate her style: The tenor and cadence were as precise as an atomic clock. She mapped a prevailing culture from the badlands of Southern California and the streets of Haight-Ashbury to the province of her own paranoia, all of it cloaked in jasmine-scented doom. As both background character and prevailing sensibility, Didion brings the reader into her lair: “You see the point. I want to tell you the truth, and already I have told you about the wide rivers.” “Slouching Towards Bethlehem” suggested that memoir was about voice as well as facts. Didion didn’t just intimate a decade of upheaval, she announced it with a starter pistol’s report.

Dispatches

By Michael Herr (1977)

Every war has its Stephen Crane, its Robert Graves—and Vietnam had Michael Herr. He spent a year in-country in 1967, then nearly a decade turning what he saw there into a surreal narrative of the war’s geography, from its napalmed landscape to the craters of a soldier’s mind. Soldiers talked to Herr—told him things they hadn’t said before or maybe even known. “I should have had ‘Born to Listen’ written on my helmet,” he told me in London in 1988. What Herr dared to write about was war’s primal allure: “the death space and the life you found inside it.” That he created this gunmetal narrative with a blend of fact and creative memory was acknowledged from the first; his netherland of “truth” mirrored the dream-like quality of the war and influenced its literature for a decade to come.

Darkness Visible

By William Styron (1990)

Certainly there have been other literary memoirs of personal anguish, but Styron’s brutal account of his cliffwalk with suicidal despair blew the door open on the subject. Depression and alcoholism in writers had too often been viewed through a lens of romantic ruin—the destiny-ridden price of creative genius. “Darkness Visible” put an end to all that. Literary lion, second lieutenant during World War II, Styron was brought to his knees in his own brooding woods. His story hauled plenty of ideas about clinical depression out of the 19th century and into the light of day, where they belonged.

Ms. Caldwell is the author of “Let’s Take the Long Way Home: A Memoir of Friendship.” The former chief book critic of the Boston Globe, she was awarded the Pulitzer Prize for Distinguished Criticism in 2001.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748703989304575503950688851086.html

What Ahmadinejad Knows

Iran’s president appeals to 9/11 Truthers.

Let’s put a few facts on the table.

• The recent floods in Pakistan are acts neither of God nor of nature. Rather, they are the result of a secret U.S. military project called HAARP, based out of Fairbanks, Alaska, which controls the weather by sending electromagnetic waves into the upper atmosphere. HAARP may also be responsible for the recent spate of tsunamis and earthquakes.

• Not only did the U.S. invade Iraq for its oil, but also to harvest the organs of dead Iraqis, in which it does a thriving trade.

• Faisal Shahzad was not the perpetrator of the May 1 Times Square bombing, notwithstanding his own guilty plea. Rather, the bombing was orchestrated by an American think tank, though its exact identity has yet to be established.

• Oh, and 9/11 was an inside job. Just ask Mahmoud Ahmadinejad.

The U.S. and its European allies were quick to walk out on the Iranian president after he mounted the podium at the U.N. last week to air his three “theories” on the attacks, each a conspiratorial shade of the other. But somebody should give him his due: He is a provocateur with a purpose. Like any expert manipulator, he knew exactly what he was doing when he pushed those most sensitive of buttons.

He knew, for instance, that the Obama administration and its allies are desperate to resume negotiations over Iran’s nuclear programs. What better way to set the diplomatic mood than to spit in their eye when, as he sees it, they are already coming to him on bended knee?

He also knew that the more outrageous his remarks, the more grateful the West would be for whatever crumbs of reasonableness Iran might scatter on the table. This is what foreign ministers are for.

Finally, he knew that the Muslim world would be paying attention to his speech. That’s a world in which his view of 9/11 isn’t on the fringe but in the mainstream. Crackpots the world over—some of whom are reading this column now—want a voice. Ahmadinejad’s speech was a bid to become theirs.

This is the ideological component of Ahmadinejad’s grand strategy: To overcome the limitations imposed on Iran by its culture, geography, religion and sect, he seeks to become the champion of radical anti-Americans everywhere. That’s why so much of his speech last week was devoted to denouncing capitalism, the hardy perennial of the anti-American playbook. But that playbook needs an update, which is where 9/11 “Truth” fits in.

Could it work? Like any politician, Ahmadinejad knows his demographic. The University of Maryland’s World Public Opinion surveys have found that just 2% of Pakistanis believe al Qaeda perpetrated the attacks, whereas 27% believe it was the U.S. government. (Most respondents say they don’t know.)

Among Egyptians, 43% say Israel is the culprit, while another 12% blame the U.S. Just 16% of Egyptians think al Qaeda did it. In Turkey, opinion is evenly split: 39% blame al Qaeda, another 39% blame the U.S. or Israel. Even in Europe, Ahmadinejad has his corner. Fifteen percent of Italians and 23% of Germans finger the U.S. for the attacks.

Deeper than the polling data are the circumstances from which they arise. There’s always the temptation to argue that the problem is lack of education, which on the margins might be true. But the conspiracy theories cited earlier are retailed throughout the Muslim world by its most literate classes, journalists in particular. Irrationalism is not solely, or even mainly, the province of the illiterate.

Nor is it especially persuasive to suggest that the Muslim world needs more abundant proofs of American goodwill: The HAARP fantasy, for example, is being peddled at precisely the moment when Pakistanis are being fed and airlifted to safety by U.S. Marine helicopters operating off the USS Peleliu.

What Ahmadinejad knows is that there will always be a political place for what Michel Foucault called “the sovereign enterprise of Unreason.” This is an enterprise whose domain encompasses the politics of identity, of religious zeal, of race or class or national resentment, of victimization, of cheek and self-assertion. It is the politics that uses conspiracy theory not just because it sells, which it surely does, or because it manipulates and controls, which it does also, but because it offends. It is politics as a revolt against empiricism, logic, utility, pragmatism. It is the proverbial rage against the machine.

Chances are you know people to whom this kind of politics appeals in some way, large or small. They are Ahmadinejad’s constituency. They may be irrational; he isn’t crazy.

Bret Stephens, Wall Street Journal

__________

Full article : http://online.wsj.com/article/SB10001424052748704654004575517632476603268.html

So wrong it’s right

The ‘eggcorn’ has its day

Over the past 10 days, language bloggers have been exchanging virtual high-fives at the news of an honor bestowed on one of their coinages. In its most recent quarterly update, the Oxford English Dictionary Online announced that its word-hoard now includes the shiny new term eggcorn.

An eggcorn, as regular readers of this column may recall, is — well, here’s the official new definition: “an alteration of a word or phrase through the mishearing or reinterpretation of one or more of its elements as a similar-sounding word.” If you write “let’s nip it in the butt” (instead of “bud”) or “to the manor born” (instead of “manner”), you’re using an eggcorn.

The term derives from “egg corn” as a substitution for “acorn,” whose earliest appearance comes in an 1844 letter from an American frontiersman: “I hope you are as harty as you ust to be and that you have plenty of egg corn bread which I can not get her and I hop to help you eat some of it soon.”

Why would eggcorn (as we now spell it) replace acorn in the writer’s lexicon? As the OED editors comment, “acorns are, after all, seeds which are somewhat egg-shaped, and in many dialects the formations acorn and eggcorn sound very similar.” (And, like corn kernels, acorns can be ground into meal or flour.) This coinage came to the attention of the linguists blogging at Language Log in 2003, and at the suggestion of Geoffrey Pullum, one of the site’s founders, it was adopted as the term for all such expressions.

Eggcorns needed their own label, the Language Loggers decided, because they were mistakes of a distinct sort — variants on the traditional phrasing, but ones that still made at least a bit of sense. “Nip it in the bud,” for instance, is a horticultural metaphor, perhaps not so widely understood as it once was; the newer “nip it in the butt” describes a different strategy for getting rid of some unwelcome visitation, but it’s not illogical. Hamlet said he was “to the manner born,” but the modern alteration, “to the manor born,” is also a useful formula.

And because they make sense, eggcorns are interesting in a way that mere disfluencies and malapropisms are not: They show our minds at work on the language, reshaping an opaque phrase into something more plausible. They’re tiny linguistic treasures, pearls of imagination created by clothing an unfamiliar usage in a more recognizable costume.

Even before the eggcorn era, most of us had heard (or experienced) pop-song versions of the phenomenon, like “’Scuse me while I kiss this guy” (for Jimi Hendrix’s “kiss the sky” line), but these have had their own label, mondegreen, for more than half a century. The word was coined in 1954 by Sylvia Wright, in commemoration of her mishearing of a Scottish ballad: “They have slain the Earl o’ Moray/ And laid him on the green,” went the lament, but Wright thought the villains had slain the earl “and Lady Mondegreen.”

Then there are malapropisms, word substitutions that sound similar but make no sense at all. They’re named for Mrs. Malaprop, a character in the 1775 play “The Rivals,” whose childrearing philosophy illustrates her vocabulary problem: “I would by no means wish a daughter of mine to be a progeny of learning….I would have her instructed in geometry, that she might know something of the contagious countries.”

And when the misconceived word or expression has spread so widely that we all use it, it’s a folk etymology — or, to most of us, just another word. Bridegroom, hangnail, Jerusalem artichoke — all started out as mistakes.

But we no longer beat ourselves up because our forebears substituted groom for the Old English guma (“man”), or modified agnail (“painful nail”) into hangnail, or reshaped girasole (“sunflower” in Italian) into the more familiar Jerusalem.

The border between these folk-etymologized words, blessed by history and usage, and the newer eggcorns is fuzzy, and there’s been some debate already at the American Dialect Society’s listserv, ADS-L, about whether the distinction is real. Probably there is no bright line; to me, “you’ve got another thing coming” and “wile away the hours” are eggcorns — recent reshapings of expressions I learned as “another think” and “while away” — but to you they may be normal.

But we face the same problem in deciding which senses are valid for everyday, non-eggcornish words. When does nonplussed for “unfazed” or enormity for “hugeness” become the standard sense? We can only wait and see; the variants may duke it out for decades, but if a change takes hold, the battle will one day be forgotten.

The little eggcorn is in the same situation: It’s struggling to overcome its mixed-up heritage and grow into the kind of respectable adulthood enjoyed by the Jerusalem artichoke. We’re not obliged to help it along, but while it’s here, we might as well enjoy its wacky poetry.

Jan Freeman’s e-mail address is mailtheword@gmail.com; she blogs about language at Throw Grammar from the Train (throwgrammarfromthetrain.blogspot.com).  

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/09/26/so_wrong_its_right

The Non-Economist’s Economist

John Kenneth Galbraith avoided technical jargon and wrote witty prose—too bad he got so much wrong

The Dow Jones Industrials spent 25 years in the wilderness after the 1929 Crash. Not until 1954 did the disgraced 30-stock average regain its Sept. 3, 1929, high. And then, its penance complete, it soared. In March 1955, the U.S. Senate Banking and Currency Committee, J. William Fulbright of Arkansas, presiding, opened hearings to determine what dangers lurked in this new bull market. Was it 1929 all over again?

John Kenneth Galbraith (1908-2006), photographed by Richard Avedon in Boston in 1993

One of the witnesses, John Kenneth Galbraith, a 46-year-old Harvard economics professor, seemed especially well-credentialed. His new history of the event that still transfixed America, “The Great Crash, 1929,” was on its way to the bookstores and to what would prove to be a commercial triumph. An alumnus of Ontario Agricultural College and the holder of a doctorate in agricultural economics from the University of California at Berkeley, Galbraith had written articles for Fortune magazine and speeches for Adlai Stevenson, the defeated 1952 Democratic presidential candidate. He was a World War II price controller and the author of “American Capitalism: The Concept of Countervailing Power.” When he stepped into a crowded elevator, strangers tried not to stare: he stood 6 feet 8 inches tall.

On the one hand, Galbraith observed, the stock market was not so speculatively charged in 1955 as it had been in 1929. On the other, he insisted, there were worrying signs of excess. Stocks were not so cheap as they had been in the slack and demoralized market of 1953 (though, at 4%, they still outyielded corporate bonds). “The relation of share prices to book value is showing some of the same tendencies as in 1929,” Galbraith went on. “And while it would be a gross exaggeration to say that there has been the same escape from reality that there was in 1929, it does seem to me that enough has happened to indicate that we haven’t yet lost our capacity for speculative self-delusion.”

__________

Reading List: If Not Galbraith, Who?

Maury Klein tells a great story in “Rainbow’s End: The Crash of 1929” (Oxford, 2001), but he also attempts to answer the great question: What went wrong? For the financial specialist in search of a tree-by-tree history of the forest of the Depression, look no further than Barrie A. Wigmore’s “The Crash and Its Aftermath: A History of the Securities Markets in the United States, 1929-33” (Greenwood Press, 1985).

In the quality of certitude, the libertarian Murray Rothbard yielded to no economist. His revisionist history, “America’s Great Depression” (available through the website of the Mises Institute), contends that it was the meddling Hoover administration that turned recession into calamity. Amity Shlaes draws up a persuasive indictment of the New Deal in her “The Forgotten Man” (HarperCollins, 2007).

“Economics and the Public Welfare” by Benjamin Anderson (Liberty Press, 1979) is in strong contention for the lamest title ever fastened by a publisher on a deserving book. Better, the subtitle: “A Financial and Economic History of the United States: 1914-1946.”

“Where Are the Customers’ Yachts? Or A Good Hard Look at Wall Street,” by Fred Schwed Jr. (Simon & Schuster, 1940) is the perfect antidote for any who imagine that the reduced salaries and status of today’s financiers are anything new. Page for page, Schwed’s unassuming survey of the financial field might be the best investment book ever written. Hands-down, it’s the funniest.

An unfunny but essential contribution to the literature of the Federal Reserve is the long-neglected “Theory and Practice of Central Banking” (Harper, 1936) by Henry Parker Willis, the first secretary of the Federal Reserve Board. Willis wrote to protest against the central bank’s reinvention of itself, quite against the intentions of its founders, as a kind of infernal economic planning machine. He should see it now.

Freeman Tilden’s “A World in Debt” (privately printed, 1983) is a quirky, elegant, long out-of-print treatise by a non-economist on an all-too-timely subject. “The world,” wrote Tilden in 1936, “has several times, and perhaps many times, squandered itself into a position where a total deflation of debt was imperative and unavoidable. We may be entering one more such receivership of civilization.”

If the Obama economic program leaves you cold, puzzled or hot under the collar, turn to Hunter Lewis’s “Where Keynes Went Wrong” (Axios Press, 2009) or “The Critics of Keynesian Economics,” edited by Henry Hazlitt (Arlington House, 1977).

—James Grant

__________

Re-reading Galbraith is like watching black-and-white footage of the 1955 World Series. The Brooklyn Dodgers are gone—and so is much of the economy over which Galbraith lavished so much of his eviscerating wit. In 1955, “globalization” was a word yet uncoined. Imports and exports each represented only about 4% of GDP, compared with 16.1% and 12.5%, respectively, today. In 1955, regulation was constricting (this feature of the Eisenhower-era economy seems to be making a reappearance) and unions were powerful. There was a lingering, Depression-era suspicion of business and, especially, of Wall Street. The sleep of corporate managements was yet undisturbed by the threat of a hostile takeover financed with junk bonds.

Half a century ago, the “conventional wisdom,” in Galbraith’s familiar phrase, was statism. In “American Capitalism,” the professor heaped scorn on the CEOs and Chamber of Commerce presidents and Republican statesmen who protested against federal regimentation. “In the United States at this time,” noted the critic Lionel Trilling in 1950, “liberalism is not only the dominant but even the sole intellectual tradition.” William F. Buckley’s upstart conservative magazine, National Review, made its debut in 1955 with the now-famous opening line that it “stands athwart history, yelling Stop.” Galbraith seemed not to have noticed that history and he were arm in arm. His was the conventional wisdom.

Concerning the emphatic Milton Friedman, someone once borrowed the Victorian-era quip, “I wish I was as sure of anything as he is of everything.” Galbraith and the author of “Capitalism and Freedom” were oil and water, but they did share certitude. To Galbraith, “free-market capitalism” was an empty Rotary slogan. It didn’t exist and, in Eisenhower-era America, couldn’t. Industrial oligopolies had rendered it obsolete.

Only in the introductory economics textbooks, he believed, did the free interplay between supply and demand determine price. Fortune 500 companies set their own prices. They chaffered with their vendors and customers, who themselves were big enough to throw their weight around in the market. As a system of decentralized decision-making, there was something to be said for capitalism, Galbraith allowed. As a network of oligopolistic fiefdoms, however, it needed federal direction. The day of Adam Smith’s “invisible hand” was over or ending. “Countervailing power,” in the Galbraith formulation, was the new idea.

Corporate bureaucrats—collectively, the “technostructure”—had pushed aside the entrepreneurs, proposed Galbraith, channeling Thorstein Veblen. While, under the robber baron model, the firm existed to make profits, the modern behemoth exists to perpetuate itself in power while incidentally earning a profit. Planning is what the technostructure does best—it seems to hate surprises. “This planning,” wrote Galbraith in “The New Industrial State,” “replaces prices that are established by the market with prices that are established by the firm. The firm, in tacit collaboration with the other firms in the industry, has wholly sufficient power to set and maintain minimum prices.” What was to be done? “The market having been abandoned in favor of planning of prices and demand,” he prescribed, “there is no hope that it will supply [the] last missing element of restraint. All that remains is the state.” It was fine with the former price controller of the Office of Price Administration.

As for the stockholder, he or she was as much a cipher as the manipulated consumer. “He (or she) is a passive and functionless figure, remarkable only in his capacity to share, without effort or even without appreciable risk, in the gains from the growth by which the technostructure measures its success,” according to Galbraith. “No grant of feudal privilege has ever equaled, for effortless return, that of the grandparents who bought and endowed his descendants with a thousand shares of General Motors or General Electric or IBM.” Galbraith was writing near the top of the bull market he had failed to anticipate in 1955. Shareholders were about to re-learn (if they had forgotten) the lessons of “risk.”

In its way, “The New Industrial State” was as mistimed as “The Great Crash.” In 1968, a year after the appearance of the first edition, the planning wheels started to turn at Leasco Data Processing Corp., Great Neck, N.Y. But Leasco’s “planning” took the distinctly un-Galbraithian turn of an unsolicited bid for control of the blue-blooded Chemical Bank of New York. Here was something new under the sun. Saul Steinberg, would-be revolutionary at the head of Leasco, ultimately surrendered before the massed opposition of the New York banking community. (“I always knew there was an Establishment,” Mr. Steinberg mused—“I just used to think I was a part of it.”) But the important thing was the example Mr. Steinberg had set by trying. The barbarians were beginning to form at the corporate gates.

The cosseted, self-perpetuating corporate bureaucracy that Galbraith described in “The New Industrial State” was in for a rude awakening. Deregulation became a Washington watchword under President Carter, capitalism got back its good name under President Reagan and trade barriers fell under President Clinton. Presently came the junk-bond revolution and the growth in an American market for corporate control. Hedge funds and private equity funds prowled for under- and mismanaged public companies to take over, resuscitate and—to be sure, all too often—to overload with debt. The collapse of communism and the rise of digital technology opened up vast new fields of competitive enterprise. Hundreds of millions of eager new hands joined the world labor force, putting downward pressure on costs, prices and profit margins. Wal-Mart delivered everyday low, and lower, prices, and MCI knocked AT&T off its monopolistic pedestal. The technostructure must have been astounded.

Galbraith in his home in Cambridge, Mass., in 1981

Here are the opening lines of “American Capitalism”: “It is told that such are the aerodynamics and wing-loading of the bumblebee that, in principle, it cannot fly. It does, and the knowledge that it defied the august authority of Isaac Newton and Orville Wright must keep the bee in constant fear of a crack-up.” You keep reading because of the promise of more in the same delightful vein. And, indeed, there is much more, including a charming annotated chronology of Galbraith’s life by his son and the editor of this volume, James K. Galbraith.

John F. Kennedy’s ambassador to India, muse to the Democratic left, two-time recipient of the Presidential Medal of Freedom, celebrity author, Galbraith in life was even larger than his towering height. His “A Theory of Price Control,” which was published in 1952 to favorable reviews but infinitesimal sales, was his one and only contribution to the purely professional economics literature. Thereafter this most acerbic critic of free markets prospered by giving the market what it wanted.

Now comes the test of whether his popular writings will endure longer than the memory of his celebrity and the pleasure of his prose. “The Great Crash” has a fighting chance, because of its very lack of analytical pretense. “History that reads like a poem,” raved Mark Van Doren in his review of the 1929 book. Or, he might have judged, that eats like whipped cream.

But the other books in this volume seem destined for only that kind of immortality conferred on amusing period pieces. When, for example, Galbraith complains in “The Affluent Society” that governments can’t borrow enough, or that the Federal Reserve is powerless to resist inflation, you wonder what country he was writing about, or even what planet he was living on.

Not that the professor refused to learn. In the first edition of “The New Industrial State,” for instance, he writes confidently: “While there may be difficulties, and interim failures or retreats are possible and indeed probable, a system of wage and price restraint is inevitable in the industrial system.” A decade or so later, in the edition selected for this volume, that sentence is gone. In its place is another not quite so confident: “The history of controls, in some form or other and by some nomenclature, is still incomplete.”

At the 1955 stock-market hearings, Galbraith was followed at the witness table by the aging speculator and “adviser to presidents” Bernard M. Baruch. The committee wanted to know what the Wall Street legend thought of the learned economist. “I know nothing about him to his detriment,” Baruch replied. “I think economists as a rule—and it is not personal to him—take for granted they know a lot of things. If they really knew so much, they would have all of the money, and we would have none.”

Mr. Grant, the editor of Grant’s Interest Rate Observer, is the author, most recently, of “Mr. Market Miscalculates” (Axios, 2009).

__________

Full article and photos: http://online.wsj.com/article/SB10001424052748703556604575501883282762648.html

Uncommon knowledge

A surprise benefit of minimum wage

The minimum wage has been politically controversial for most of the last century, even though it affects a marginal share of the labor force and evidence of significant job loss is inconclusive. Now one economist would like us to consider another effect of the minimum wage: finishing high school. By curtailing low-wage/low-skill jobs, the minimum wage motivates young people to stay in school and become skilled. This effect then generates what the author calls an “educational cascade” by setting an example for the upcoming class of students. He estimates that the average male born in 1951 gained 0.2 years — and the average male born in 1986 gained 0.7 years — of high school due to the cumulative effect of the minimum wage.

Sutch, R., “The Unexpected Long-Run Impact of the Minimum Wage: An Educational Cascade,” National Bureau of Economic Research (September 2010).

Bearing false witness

False confessions and false eyewitness testimony are never-ending challenges for the judicial process. Although coercive interrogation is blamed in many of these situations, new research illustrates just how little coercion is needed. In an experiment, people played a quiz game for money. Later, they were told that the person who had sat next to them during the game was suspected of cheating. They were shown a 15-second video clip of the person sitting next to them cheating, even though the video clip was doctored and no cheating actually happened. They were asked to sign a witness statement against the cheater, but they were explicitly told not to sign if they hadn’t directly witnessed the cheating, aside from seeing it in the video. Nevertheless, almost half of those who saw the video signed the statement. Some of those who signed the statement even volunteered additional incriminating information.

Wade, K. et al., “Can Fabricated Evidence Induce False Eyewitness Testimony?” Applied Cognitive Psychology (October 2010).

The cure for sadness: pain

For most people, pain is not fun. However, a recent study finds that, when you’re not having fun, pain can help. Several hundred people were tested to see how much pain — in the form of increasing pressure or heat applied to their hands — they could tolerate. Not surprisingly, people reported being less happy after the experiment. But less happy is not necessarily the same as more unhappy. Indeed, negative emotions were also attenuated after the experiment, especially for women and people with more sensitive emotions. In other words, physical pain helped dull emotional pain.

Bresin, K. et al., “No Pain, No Change: Reductions in Prior Negative Affect following Physical Pain,” Motivation and Emotion (September 2010).

That reminds me of…me!

In a series of experiments, researchers have transformed Descartes’s famous phrase (“I think, therefore I am”) into something like this: “I am reminded of myself, therefore I will think.” People presented with a resume or product paid more attention to it if it happened to have a name similar to their own. As a result of this increased attention, a high-quality resume or product got a boost, while a low-quality resume or product was further handicapped. However, in a strange twist, people who sat in front of a mirror while evaluating a product exhibited the opposite effect: Quality didn’t matter for a product with a similar name but did matter otherwise. The authors speculate that too much self-referential thinking overloads one’s ability to think objectively.

Howard, D. & Kerin, R., “The Effects of Name Similarity on Message Processing and Persuasion,” Journal of Experimental Social Psychology (forthcoming).

Defensive sleeping

The odds that you’ll need to fend off an attacker entering your bedroom at night are pretty small. Yet, according to a recent study, our evolutionary heritage — formed when we had to survive sleeping outdoors — instills a strong preference for bedrooms designed less by the principles of Architectural Digest than by those of “Home Alone” or “Panic Room.” When shown a floor plan for a simple rectangular bedroom and asked to arrange the furniture, most people positioned the bed so that it faced the door. They also positioned the bed on the side of the room that the opening door would conceal, and as far back from the door as possible, a position that would seem to give the occupant the most time to respond. If the floor plan included a window on the opposite side of the room from the door, people were inclined to move the bed away from the window, too.

Spörrle, M. & Stich, J., “Sleeping in Safe Places: An Experimental Investigation of Human Sleeping Place Preferences from an Evolutionary Perspective,” Evolutionary Psychology (August 2010).

Kevin Lewis is an Ideas columnist.

__________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/09/26/a_surprise_benefit_of_minimum_wage/

‘Busted’

Randy Britton e-mails: “I’ve noticed in much of the coverage of the BP oil spill that the press has taken to calling the oil well ‘busted.’ Since when is ‘busted’ the proper way to describe a broken oil well?  It seems very colloquial and not a form I would expect to see in proper journalistic forums.”

Even now that BP’s troubled oil well in the Gulf of Mexico is being permanently sealed, news reports continue to refer to the “busted well,” particularly wire services like the Associated Press and AFP. Reuters was an early adopter, reporting on efforts to contain the “busted well” on May 3. Alternatively, busted has modified oil rig, or just plain rig. A database search of coverage of the BP spill finds the first recorded use of busted came nine days into the crisis on April 29, when the MSNBC host Ed Schultz said, “The busted rig is leaking — get this — 200,000 gallons of oil a day.”

Is busted overly informal for journalists? The verb bust certainly has colloquial roots, beginning its life on the American scene as a folksy variant of burst. (The same dropping of the “r” turned curse into cuss, horse into hoss and parcel into passel.) Building on earlier use as a noun, bust busted out as a verb as early as 1806, when Meriwether Lewis, while on his famous expedition with William Clark, wrote in his journal, “Windsor busted his rifle near the muzzle.” Since then, bust has worked its way into a wide variety of American expressions.

“Bust runs the gamut from slang to standard,” explain David K. Barnhart and Allan A. Metcalf in their book “America in So Many Words.” “When it is used to mean ‘to explode or fall apart or be arrested,’ bust is generally slang. In the sense of failing (especially financially) it is informal, as busting the bank in gambling lingo, while in the specialized sense of taming a horse it is standard, the only way to say busting a bronco.”

Despite its potential slanginess, busted is “not actually forbidden” in the news media, as the Boston Globe language columnist Jan Freeman wrote in August. Indeed, reporters often latch onto the occasional colloquialism that seems particularly expressive, and in this case, Freeman surmises they were drawn to the term’s “criminal-cowboy-macho connotations.”

Regardless of the reasons for its current vogue, it’s notable that busted was rarely relied on by the press to describe stricken oil wells before the BP disaster — even in incidents that were highly similar, such as the 1979 blowout of the Ixtoc I well in the Gulf of Mexico. Most of the precursors I found come from more literary sources. It was appropriate, for instance, in some light verse by J.W. Foley published in The New York Times in 1904:

Dear friend, there’s a question I’d like to ask you,
(Your pardon I crave if it vexes)
Have you ever invested a hundred or two
In an oil well somewhere down in Texas?
Have you ridden in autos (I mean in your mind),
With the profits you honestly trusted
Would flow from your venture in oil stocks — to find
That the oil well was hopelessly busted?

I can’t find fault in reporters drawing on the rich history of bust and busted in American English to add a little extra oomph to their dispatches from the gulf. Calling the well busted does evoke a looser, wilder state of disrepair than broken, or the more technically accurate blown-out. But after many months of news coverage, the phrase “busted well” has now turned into little more than a cliché. That’s a far worse journalistic offense than a bit of well-placed slang.

Ben Zimmer will answer one reader question every other week.

__________

Full article: http://www.nytimes.com/2010/09/26/magazine/26onlanguage.html

Unpacking Imagination

In an age of childhood obesity and children tethered to electronic consoles, playgrounds have rarely been more important. In an age of constrained government budgets, playgrounds have rarely been a harder sell. Fortunately, the cost of play doesn’t have to be prohibitive. In creating the Imagination Playground in Lower Manhattan — a playground with lots of loose parts for children to create their own play spaces — we realized that many of the elements with the greatest value to children were inexpensive and portable. Although traditional playgrounds can easily cost in the millions to build, boxed imagination playgrounds can be put together for under $10,000. (Land costs not included!) The design below is one that my architecture firm has done in collaboration with the New York City Parks Department and KaBoom, a nonprofit organization. But it needn’t be the only one out there. There are a lot of ways to build a playground — and a lot of communities in need of one. Let a thousand portable playgrounds bloom.

David Rockwell, New York Times

__________

Full article and photo: http://www.nytimes.com/interactive/2010/09/25/opinion/20100925_opchart.html

New England’s hidden history

More than we like to think, the North was built on slavery.

In the year 1755, a black slave named Mark Codman plotted to kill his abusive master. A God-fearing man, Codman had resolved to use poison, reasoning that if he could kill without shedding blood, it would be no sin. Arsenic in hand, he and two female slaves poisoned the tea and porridge of John Codman repeatedly. The plan worked — but like so many stories of slave rebellion, this one ended in brutal death for the slaves as well. After a trial by jury, Mark Codman was hanged, tarred, and then suspended in a metal gibbet on the main road to town, where his body remained for more than 20 years.

It sounds like a classic account of Southern slavery. But Codman’s body didn’t hang in Savannah, Ga.; it hung in present-day Somerville, Mass. And the reason we know just how long Mark the slave was left on view is that Paul Revere passed it on his midnight ride. In a fleeting mention from Revere’s account, the horseman described galloping past “Charlestown Neck, and got nearly opposite where Mark was hung in chains.”

When it comes to slavery, the story that New England has long told itself goes like this: Slavery happened in the South, and it ended thanks to the North. Maybe we had a little slavery, early on. But it wasn’t real slavery. We never had many slaves, and the ones we did have were practically family. We let them marry, we taught them to read, and soon enough, we freed them. New England is the home of abolitionists and underground railroads. In the story of slavery — and by extension, the story of race and racism in modern-day America — we’re the heroes. Aren’t we?

As the nation prepares to mark the 150th anniversary of the American Civil War in 2011, with commemorations that reinforce the North/South divide, researchers are offering uncomfortable answers to that question, unearthing more and more of the hidden stories of New England slavery — its brutality, its staying power, and its silent presence in the very places that have become synonymous with freedom. With the markers of slavery forgotten even as they lurk beneath our feet — from graveyards to historic homes, from Lexington and Concord to the halls of Harvard University — historians say it is time to radically rewrite America’s slavery story to include its buried history in New England.

“The story of slavery in New England is like a landscape that you learn to see,” said Anne Farrow, who co-wrote “Complicity: How the North Promoted, Prolonged, and Profited From Slavery” and who is researching a new book about slavery and memory. “Once you begin to see these great seaports and these great historic houses, everywhere you look, you can follow it back to the agricultural trade of the West Indies, to the trade of bodies in Africa, to the unpaid labor of black people.”

It was the 1991 discovery of an African burial ground in New York City that first revived the study of Northern slavery. Since then, fueled by educators, preservationists, and others, momentum has been building to recognize histories hidden in plain sight. Last year, Connecticut became the first New England state to formally apologize for slavery. In classrooms across the country, popularity has soared for educational programs on New England slavery designed at Brown University. In February, Emory University will hold a major conference on the role slavery’s profits played in establishing American colleges and universities, including in New England. And in Brookline, Mass., a program called Hidden Brookline is designing a virtual walking tour to illuminate its little-known slavery history: At one time, nearly half the town’s land was held by slave owners.

“What people need to understand is that, here in the North, while there were not the large plantations of the South or the Caribbean islands, there were families who owned slaves,” said Stephen Bressler, director of Brookline’s Human Relations-Youth Resources Commission. “There were businesses actively involved in the slave trade, either directly in the importation or selling of slaves on our shores, or in the shipbuilding, insurance, manufacturing of shackles, processing of sugar into rum, and so on. Slavery was a major stimulus to the Northern economy.”

Turning over the stones to find those histories isn’t just a matter of correcting the record, he and others say. It’s crucial to our understanding of the New England we live in now.

“The absolute amnesia about slavery here on the one hand, and the gradualness of slavery ending on the other, work together to make race a very distinctive thing in New England,” said Joanne Pope Melish, who teaches history at the University of Kentucky and wrote the book “Disowning Slavery: Gradual Emancipation and ‘Race’ in New England, 1780-1860.” “If you have obliterated the historical memory of actual slavery — because we’re the free states, right? — that makes it possible to turn around and look at a population that is disproportionately poor and say, it must be their own inferiority. That is where New England’s particular brand of racism comes from.”

Dismantling the myths of slavery doesn’t mean ignoring New England’s role in ending it. In the 1830s and ’40s, an entire network of white Connecticut abolitionists emerged to house, feed, clothe, and aid in the legal defense of Africans from the slave ship Amistad, a legendary case that went all the way to the US Supreme Court and helped mobilize the fight against slavery. Perhaps nowhere were abolition leaders more diehard than in Massachusetts: Pacifist William Lloyd Garrison and writer Henry David Thoreau were engines of the antislavery movement. Thoreau famously refused to pay his taxes in protest of slavery, part of a philosophy of civil disobedience that would later influence Martin Luther King Jr. But Thoreau was tame compared to Garrison, a flame-thrower known for shocking audiences. Founder of the New England Anti-Slavery Society and the newspaper The Liberator, Garrison once burned a copy of the US Constitution at a July Fourth rally, calling it “a covenant with death.” His cry for total, immediate emancipation made him a target of death threats and kept the slavery question at a perpetual boil, fueling the moral argument that, in time, would come to frame the Civil War.

But to focus on crusaders like Garrison is to ignore ugly truths about how unwillingly New England as a whole turned the page on slavery. Across the region, scholars have found, slavery here died a painfully gradual death, with emancipation laws and judicial rulings that either were unclear, poorly enforced, or written with provisions that kept slaves and the children born to them in bondage for years.

Meanwhile, whites who had trained slaves to do skilled work refused to hire the same blacks who were now free, driving an emerging class of skilled workers back to the lowest rungs of unskilled labor. Many whites, driven by reward money and racial hatred, continued to capture and return runaway Southern slaves; some even sent free New England blacks south, knowing no questions about identity would be asked at the other end. And as surely as there was abolition, there was “bobalition” — the mocking name given to graphic, racist broadsides printed through the 1830s, ridiculing free blacks with characters like Cezar Blubberlip and Mungo Mufflechops. Plastered around Boston, the posters had a subtext that seemed to boil down to this: Who do these people think they are? Citizens?

“Is Garrison important? Yes. Is it dangerous to be an abolitionist at that time? Absolutely,” said Melish. “What is conveniently forgotten is the number of people making a living snagging free black people in a dark alley and shipping them south.”

Growing up in Lincoln, Mass., historian Elise Lemire vividly remembers learning of the horrors of a slaveocracy far, far away. “You knew, for example, that families were split up, that people were broken psychologically and kept compliant by the fear of your husband or wife being sold away, or your children being sold away,” said Lemire, author of the 2009 book “Black Walden,” who became fascinated with former slaves banished to squatter communities in Walden Woods.

As she peeled back the layers, Lemire discovered a history rarely seen by the generations of tourists and schoolchildren who have learned to see Concord as a hotbed of antislavery activism. “Slaves [here] were split up in the same way,” she said. “You didn’t have any rights over your children. Slave children were given away all the time, sometimes when they were very young.”

In Lemire’s Concord, slave owners once filled half of town government seats, and in one episode town residents rose up to chase down a runaway slave. Some women remained enslaved into the 1820s, more than 30 years after census figures recorded no existing slaves in Massachusetts. According to one account, a former slave named Brister Freeman, for whom Brister’s Hill in Walden Woods is named, was locked inside a slaughterhouse shed with an enraged bull as his white tormentors laughed outside the door. And in Concord, Lemire argues, black families were not so much liberated as they were abandoned to their freedom, released by masters increasingly fearful their slaves would side with the British enemy. With freedom, she said, came immediate poverty: Blacks were forced to squat on small plots of the town’s least arable land, and eventually pushed out of Concord altogether — a precursor to the geographic segregation that continues to divide black and white in New England.

“This may be the birthplace of a certain kind of liberty,” Lemire said, “but Concord was a slave town. That’s what it was.”

If Concord was a slave town, historians say, Connecticut was a slave state. It didn’t abolish slavery until 1848, a little more than a decade before the Civil War. (A judge’s ruling ended legal slavery in Massachusetts in 1783, though the date is still hotly debated by historians.) It’s a history Connecticut author and former Hartford Courant journalist Anne Farrow knew nothing about — until she got drawn into an assignment to find the untold story of one local slave.

Once she started pulling the thread, Farrow said, countless histories unfurled: accounts of thousand-acre slave plantations and a livestock industry that bred the horses that turned the giant turnstiles of West Indian sugar mills. Each discovery punctured another slavery myth. “A mentor of mine has said New England really democratized slavery,” said Farrow. “Where in the South a few people owned so many slaves, here in the North, many people owned a few. There was a widespread ownership of black people.”

Perhaps no New England colony or state profited more from the unpaid labor of blacks than Rhode Island: Following the Revolution, scholars estimate, slave traders in the tiny Ocean State controlled between two-thirds and 90 percent of America’s trade in enslaved Africans. On the rolling farms of Narragansett, nearly one-third of the population was black — a proportion not much different from Southern plantations. In 2003, the push to reckon with that legacy hit a turning point when Brown University, led by its first African-American president, launched a highly controversial effort to account for its ties to Rhode Island’s slave trade. Today, that ongoing effort includes the CHOICES program, an education initiative whose curriculum on New England slavery is now taught in over 2,000 classrooms.

As Brown’s decision made national headlines, Katrina Browne, a Boston filmmaker, was on a more private journey through New England slavery, tracing her bloodlines back to her Rhode Island forebears, the DeWolf family. As it turned out, the DeWolfs were the biggest slave-trading family in the nation’s biggest slave-trading state. Browne’s journey, which she chronicled in the acclaimed documentary “Traces of the Trade: A Story from the Deep North,” led her to a trove of records of the family’s business at every point in slavery’s triangle trade. Interspersed among the canceled checks and ship logs, Browne said, she caught glimpses into everyday life under slavery, like the diary entry by an overseer in Cuba that began, “I hit my first Negro today for laughing at prayers.” Today, Browne runs the Tracing Center, a nonprofit to foster education about the North’s complicity in slavery.

“I recently picked up a middle school textbook at an independent school in Philadelphia, and it had sub-chapter headings for the Colonial period that said ‘New England,’ and then ‘The South and Slavery,’ ” said Browne, who has trained park rangers to talk about Northern complicity in tours of sites like Philadelphia’s Liberty Bell. “Since learning about my family and the whole North’s role in slavery, I now consider these things to be my problem in a way that I didn’t before.”

If New England’s amnesia has been pervasive, it has also been willful, argues C.S. Manegold, author of the new book “Ten Hills Farm: The Forgotten History of Slavery in the North.” That’s because many of slavery’s markers aren’t hidden or buried. In New England, one need look no further than a symbol that graces welcome mats, door knockers, bedposts, and all manner of household decor: the pineapple. That exotic fruit, said Manegold, is as intertwined with slavery as the Confederate flag: When New England ships came to port, captains would impale pineapples on a fence post, a sign to everyone that they were home and open for business, bearing the bounty of slave labor and sometimes slaves themselves.

“It’s a symbol everyone knows the benign version of — the happy story that pineapples signify hospitality and welcome,” said Manegold, whose book centers on five generations of slaveholders tied to one Colonial era estate, the Royall House and Slave Quarters in Medford, Mass., now a museum. The house features two carved pineapples at its gateposts.

By Manegold’s account, pineapples were just the beginning at this particular Massachusetts farm: Generation after generation, history at the Royall House collides with myths of freedom in New England — starting with one of the most mythical figures of all, John Winthrop. Author of the celebrated “City Upon a Hill” sermon and first governor of the Massachusetts Bay Colony, Winthrop not only owned slaves at Ten Hills Farm, but in 1641, he helped pass one of the first laws making chattel slavery legal in North America.

When the house passed to the Royalls, Manegold said, it entered a family line whose massive fortune came from slave plantations in Antigua. Members of the Royall family would eventually give land and money that helped establish Harvard Law School. To this day, the law school bears a seal borrowed from the Royall family crest, and for years the Royall Professorship of Law remained the school’s most prestigious faculty post, almost always occupied by the law school dean. It wasn’t until 2003 that an incoming dean — now Supreme Court Justice Elena Kagan — quietly turned the title down.

Kagan didn’t publicly explain her decision. But her actions speak to something Manegold and others say could happen more broadly: not just inserting footnotes to New England heritage tours and history books, but truly recasting that heritage in all its painful complexity.

“In Concord,” Lemire said, “the Minutemen clashed with the British at the Old North Bridge within sight of a man enslaved in the local minister’s house. The fact that there was slavery in the town that helped birth American liberty doesn’t mean we shouldn’t celebrate the sacrifices made by the Minutemen. But it does mean New England has to catch up with the rest of the country, in much of which residents have already wrestled with their dual legacies of freedom and slavery.”

Francie Latour is an associate editor at Wellesley magazine and a former Globe reporter.

____________

Full article and photo: http://www.boston.com/bostonglobe/ideas/articles/2010/09/26/new_englands_hidden_history/

A short history of presidential primaries

Although a Niagara of vitriol is drenching politics, the two parties are acting sensibly and in tandem about something once considered a matter of constitutional significance — the process by which presidential nominations are won.

The 2012 process will begin 17 months from now — in February rather than January. Under rules adopted by both parties’ national committees, no delegates to the national conventions shall be selected before the first Tuesday in March — except for delegates from New Hampshire, South Carolina and Nevada. Iowa may still conduct its caucuses, which do not select delegates, in February.

It is not graven on the heart of man by the finger of God that the Entitled Four shall go first, but it might as well be. Although they have just 3.8 percent of the nation’s population, they do represent four regions. Anyway, they shall have the spotlight to themselves until the deluge of delegate selections begins — perhaps in March but preferably in April.

Any Republican delegate-selection event held before the first day of April shall be penalized: The result cannot be, as many Republicans prefer, a winner-take-all allocation of delegates. March events “shall provide for the allocation of delegates on a proportional basis.” This means only that some of the delegates must be allocated proportional to the total vote.

Because Democrats are severe democrats, they have no winner-take-all events, so they do not have this stick with which to discipline disobedient states. Instead, they brandish — they are, after all, liberals — a carrot: States will be offered bonus delegates for moving their nominating events deeper into the nominating season, and for clustering their contests with those of neighboring states.

Each party wants to maximize its chance of nominating a strong candidate and — this is sometimes an afterthought — one who would not embarrass it as president. So both parties have equal interests in lengthening the nominating process to reduce the likelihood that a cascade of early victories will settle nomination contests before they have performed their proper testing-and-winnowing function.

With states jockeying for early positions, the danger has been that the process will become compressed into something similar to an early national primary. This would heavily favor well-known and well-funded candidates and would virtually exclude everyone else.

There have been other proposals. One would divide the nation into four regions voting at monthly intervals, with the order of voting rotating every four years. Another would spread voting over 10 two-week intervals, with the largest states voting last, thereby giving lesser-known candidates a chance to build strength.

Such plans, however, require cooperation approaching altruism among the states, which should not be counted on. Instead, the two parties are in a Madisonian mood, understanding that incentives are more reliable than moral exhortations in changing behavior.

Speaking of the sainted Madison, the parties’ reforms are a small step back toward what the Constitution envisioned: settled rules for something important. The nation’s Founders considered the selection of presidential candidates so crucial that they wanted the process to be controlled by the Constitution. So they devised a system under which the nomination of presidential candidates and the election of a president occurred simultaneously:

Electors meeting in their respective states, in numbers equal to their states’ senators and representatives, would vote for two candidates for president. When Congress counted the votes, the one with the most would become president, the runner-up vice president.

This did not survive the quick emergence of parties. After the presidential election of 1800, which was settled in the House after 36 votes, the 12th Amendment was adopted, and suddenly the nation had what it has had ever since — a process of paramount importance but without settled rules. The process has been a political version of the “tragedy of the commons”: because everyone acts self-interestedly, everyone’s interests are injured.

In 1952, Sen. Estes Kefauver of Tennessee won every Democratic primary he entered except Florida’s, which was won by Sen. Richard Russell of Georgia. So the nominee was . . . Illinois Gov. Adlai Stevenson. Party bosses, a species as dead as the dinosaurs, disliked Kefauver.

Today, the parties’ modest reforms — the best kind — have somewhat reduced the risks inherent in thorough democratization of the nomination process. Certainly the democratization has not correlated with dramatic improvements in the caliber of nominees. And the current president, whose campaign was his qualification for the office, is proof that even a protracted and shrewd campaign is not an infallible predictor of skillful governance.

George F. Will, Washington Post

__________

Full article and photo: http://www.washingtonpost.com/wp-dyn/content/article/2010/09/24/AR2010092402649.html

How to Raise Boys Who Read

Hint: Not with gross-out books and video-game bribes.

When I was a young boy, America’s elite schools and universities were almost entirely reserved for males. That seems incredible now, in an era when headlines suggest that boys are largely unfit for the classroom. In particular, they can’t read.

According to a recent report from the Center on Education Policy, for example, substantially more boys than girls score below the proficiency level on the annual National Assessment of Educational Progress reading test. This disparity goes back to 1992, and in some states the percentage of boys proficient in reading is now more than ten points below that of girls. The male-female reading gap is found in every socio-economic and ethnic category, including the children of white, college-educated parents.

The good news is that influential people have noticed this problem. The bad news is that many of them have perfectly awful ideas for solving it.

Everyone agrees that if boys don’t read well, it’s because they don’t read enough. But why don’t they read? A considerable number of teachers and librarians believe that boys are simply bored by the “stuffy” literature they encounter in school. According to a revealing Associated Press story in July, these experts insist that we must “meet them where they are”—that is, pander to boys’ untutored tastes.

For elementary- and middle-school boys, that means “books that exploit [their] love of bodily functions and gross-out humor.” AP reported that one school librarian treats her pupils to “grossology” parties. “Just get ’em reading,” she counsels cheerily. “Worry about what they’re reading later.”

There certainly is no shortage of publishers ready to meet boys where they are. Scholastic has profitably catered to the gross-out market for years with its “Goosebumps” and “Captain Underpants” series. Its latest bestsellers are the “Butt Books,” a series that began with “The Day My Butt Went Psycho.”

The more venerable houses are just as willing to aim low. Penguin, which once used the slogan, “the library of every educated person,” has its own “Gross Out” line for boys, including such new classics as “Sir Fartsalot Hunts the Booger.”

Workman Publishing made its name telling women “What to Expect When You’re Expecting.” How many of them expected they’d be buying “Oh, Yuck! The Encyclopedia of Everything Nasty” a few years later from the same publisher? Even a self-published author like Raymond Bean—nom de plume of the fourth-grade teacher who wrote “Sweet Farts”—can make it big in this genre. His flatulence-themed opus hit No. 3 in children’s humor on Amazon. The sequel debuts this fall.

Education was once understood as training for freedom. Not merely the transmission of information, education entailed the formation of manners and taste. Aristotle thought we should be raised “so as both to delight in and to be pained by the things that we ought; this is the right education.”

“Plato before him,” writes C. S. Lewis, “had said the same. The little human animal will not at first have the right responses. It must be trained to feel pleasure, liking, disgust, and hatred at those things which really are pleasant, likeable, disgusting, and hateful.”

This kind of training goes against the grain, and who has time for that? How much easier to meet children where they are.

One obvious problem with the “Sweet Farts” philosophy of education is that it is more suited to producing a generation of barbarians and morons than to raising the sort of men who make good husbands, fathers and professionals. If you keep meeting a boy where he is, he doesn’t go very far.

The other problem is that pandering doesn’t address the real reason boys won’t read. My own experience with six sons is that even the squirmiest boy does not require lurid or vulgar material to sustain his interest in a book.

So why won’t boys read? The AP story drops a clue when it describes the efforts of one frustrated couple with their 13-year-old unlettered son: “They’ve tried bribing him with new video games.” Good grief.

The appearance of the boy-girl literacy gap happens to coincide with the proliferation of video games and other electronic forms of entertainment over the last decade or two. Boys spend far more time “plugged in” than girls do. Could the reading gap have more to do with competition for boys’ attention than with their supposed inability to focus on anything other than outhouse humor?

Dr. Robert Weis, a psychology professor at Denison University, confirmed this suspicion in a randomized controlled trial of the effect of video games on academic ability. Boys with video games at home, he found, spend more time playing them than reading, and their academic performance suffers substantially. Hard to believe, isn’t it, but Science has spoken.

The secret to raising boys who read, I submit, is pretty simple—keep electronic media, especially video games and recreational Internet, under control (that is to say, almost completely absent). Then fill your shelves with good books.

People who think that a book—even R.L. Stine’s grossest masterpiece—can compete with the powerful stimulation of an electronic screen are kidding themselves. But on the level playing field of a quiet den or bedroom, a good book like “Treasure Island” will hold a boy’s attention quite as well as “Zombie Butts from Uranus.” Who knows—a boy deprived of electronic stimulation might even become desperate enough to read Jane Austen.

Most importantly, a boy raised on great literature is more likely to grow up to think, to speak, and to write like a civilized man. Whom would you prefer to have shaped the boyhood imagination of your daughter’s husband—Raymond Bean or Robert Louis Stevenson?

I offer a final piece of evidence that is perhaps unanswerable: There is no literacy gap between home-schooled boys and girls. How many of these families, do you suppose, have thrown grossology parties?

Mr. Spence is president of Spence Publishing Company in Dallas.

__________

Full article and photo: http://online.wsj.com/article/SB10001424052748704271804575405511702112290.html